arXiv ID: 2309.15126

Title: From Peptides to Nanostructures: A Euclidean Transformer for Fast and Stable Machine Learned Force Fields

Abstract: Recent years have seen vast progress in the development of machine learned force fields (MLFFs) based on ab-initio reference calculations. Despite achieving low test errors, the reliability of MLFFs in molecular dynamics (MD) simulations is facing growing scrutiny due to concerns about instability over extended simulation timescales. Our findings suggest a potential connection between robustness to cumulative inaccuracies and the use of equivariant representations in MLFFs, but the computational cost associated with these representations can limit this advantage in practice. To address this, we propose a transformer architecture called SO3krates that combines sparse equivariant representations (Euclidean variables) with a self-attention mechanism that separates invariant and equivariant information, eliminating the need for expensive tensor products. SO3krates achieves a unique combination of accuracy, stability, and speed that enables insightful analysis of quantum properties of matter on extended time and system size scales. To showcase this capability, we generate stable MD trajectories for flexible peptides and supra-molecular structures with hundreds of atoms. Furthermore, we investigate the PES topology for medium-sized chainlike molecules (e.g., small peptides) by exploring thousands of minima. Remarkably, SO3krates demonstrates the ability to strike a balance between the conflicting demands of stability and the emergence of new minimum-energy conformations beyond the training data, which is crucial for realistic exploration tasks in the field of biochemistry.

Authors: J. Thorben Frank, Oliver T. Unke, Klaus-Robert Müller, Stefan Chmiela

Published: 2023-09-21T09:22:05Z

Link: http://arxiv.org/abs/2309.15126v2
# From Peptides to Nanostructures: A Euclidean Transformer for Fast and Stable Machine Learned Force Fields

###### Abstract

Recent years have seen vast progress in the development of machine learned force fields (MLFFs) based on _ab-initio_ reference calculations. Despite their low test errors, the suitability of MLFFs in molecular dynamics (MD) simulations is being increasingly scrutinized due to concerns about instability. Our findings suggest a potential connection between MD simulation stability and the presence of equivariant representations in MLFFs, but their computational cost can limit the practical advantages they would otherwise bring. To address this, we propose a transformer architecture called SO3krates that combines sparse equivariant representations (_Euclidean variables_) with a self-attention mechanism that can separate invariant and equivariant information, eliminating the need for expensive tensor products. SO3krates achieves a unique combination of accuracy, stability, and speed that enables insightful analysis of quantum properties of matter on unprecedented time and system size scales. To showcase this capability, we generate stable MD trajectories for flexible peptides and supra-molecular structures with hundreds of atoms. Furthermore, we investigate the PES topology for medium-sized chainlike molecules (e.g., small peptides) by exploring thousands of minima. Remarkably, SO3krates demonstrates the ability to strike a balance between the conflicting demands of stability and the emergence of new minimum-energy conformations beyond the training data, which is crucial for realistic exploration tasks in the field of biochemistry.

## I Introduction

Atomistic modeling relies on long-timescale molecular dynamics (MD) simulations to reveal how experimentally observed macroscopic properties of a system emerge from interactions on the microscopic scale [1]. The predictive accuracy of such simulations is determined by the accuracy of the interatomic forces that drive them. Traditionally, these forces are either obtained from exceedingly approximate mechanistic force fields (FFs) or from accurate, but computationally prohibitive, _ab initio_ electronic structure calculations. Recently, machine learning (ML) potentials have started to bridge this gap by exploiting statistical dependencies of molecular systems with so far unprecedented flexibility [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25].

The accuracy of MLFFs is traditionally determined by their test errors on a few established benchmark datasets [26, 8, 27]. While such errors provide an initial estimate of MLFF accuracy, recent research [28, 29, 30] indicates that there is only a weak correlation between MLFF test errors and their performance in long MD simulations, which is considered the true measure of predictive usefulness. Faithful representations of dynamical and thermodynamic observables can only be derived from accurate MD trajectories. From an ML perspective, this shortcoming can be attributed to poor extrapolation behavior, which becomes particularly severe for high-temperature configurations or conformationally flexible structures. In these cases, the geometries explored during MD simulations deviate significantly from the distribution of the training data. The ongoing progress in MLFF development has resulted in a wide range of increasingly sophisticated model architectures aiming to improve this extrapolation behavior.
Among these, message passing neural networks (MPNNs) [12, 31, 9] have emerged as a particularly effective class of architectures. MPNNs can be considered a generalization of convolutions to unstructured data domains, such as molecular graphs. This operation provides an effective way to extract features from the input data and is ubiquitous in many modern ML architectures. Recent advances in this area have focused on the incorporation of physically meaningful geometric priors [32, 33, 11, 23]. This has led to so-called _equivariant_ MPNNs, which have been found to reduce the obtained approximation error [34, 35, 36, 33] and to offer better data efficiency than invariant models [33]. Invariant models rely on pairwise distances to describe atomic interactions, as these do not change upon rotation [5]. However, with growing system size, flexibility or chemical heterogeneity, it becomes increasingly harder to derive the correct interaction patterns within this limited representation. Equivariant models therefore incorporate additional directional information to capture interactions that depend on the relative orientation of neighboring atoms. This allows them to discriminate interactions that can appear inseparable to simpler models [34] and to learn more transferable interaction patterns from the same training data.

A fundamental building block of most equivariant architectures is the tensor product. It is evaluated within the convolution operation \((f*g)(x)\) between pairs of functions \(f(x)\) and \(g(x)\) expanded in linear bases [37]. The result is then defined in the product space of the original basis function sets. Thus, the associated product space quickly becomes computationally intractable, as it grows exponentially in the number of convolution operations. In SO(3) equivariant architectures, convolutions are performed over the SO(3) group of rotations in the basis of the _spherical harmonics_. By doing so, the exponential growth of the associated function space can be avoided by fixing the maximum degree \(l_{\text{max}}\) of the spherical harmonics in the architecture. The largest degree has been shown to be closely connected to accuracy and data efficiency [24, 33] and to offer the potential for more reliable MD simulations. However, SO(3) convolutions scale as \(l_{\text{max}}^{6}\), which can increase the prediction time per conformation by up to two orders of magnitude compared to an invariant model [30, 38]. This has led to a situation where one has to compromise between accuracy, stability and speed, which poses significant practical problems that need to be addressed before such models can become useful for high-throughput or extensive exploration tasks.
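To make the scaling argument concrete, the following back-of-the-envelope count (illustrative only; constants, memory traffic and hardware effects are ignored, so only the ratio is meaningful) contrasts the \(l_{\text{max}}^{6}\) per-neighbor-pair cost of a full SO(3) convolution with the \(l_{\text{max}}^{2}+F\) cost of the decoupled update proposed below, using \(l_{\text{max}}=3\) and the feature dimension \(F=132\) reported in the methods section.

```python
# Illustrative operation counts per neighbor pair (not a benchmark).
l_max, F = 3, 132  # maximal degree and feature dimension used in the paper

so3_conv = (l_max + 1) ** 6 * F    # ~ full Clebsch-Gordan tensor-product convolution
so3krates = (l_max + 1) ** 2 + F   # ~ decoupled invariant-feature / EV update

print(f"SO(3) convolution ~ {so3_conv:,} ops/pair")
print(f"decoupled update  ~ {so3krates:,} ops/pair")
print(f"ratio ~ {so3_conv / so3krates:.0f}x")
```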
We take this trade-off as motivation to propose a _Euclidean self-attention_ mechanism that replaces SO(3) convolutions with a filter on the relative orientation of atomic neighborhoods, representing atomic interactions without the need for expensive tensor products.

| Architecture | Scaling | \(l_{\text{max}}\) |
| --- | --- | --- |
| SchNet [9] | \(\mathcal{O}(n\times\langle\mathcal{N}\rangle\times F)\) | 0 |
| PaiNN [34] | \(\mathcal{O}(n\times\langle\mathcal{N}\rangle\times l_{\text{max}}^{2}\times F)\) | 1 |
| SpookyNet [24] | \(\mathcal{O}(n\times\langle\mathcal{N}\rangle\times l_{\text{max}}^{2}\times F)\) | 2 |
| NequIP [33] | \(\mathcal{O}(n\times\langle\mathcal{N}\rangle\times l_{\text{max}}^{6}\times F)\) | 3 |
| SO3krates | \(\mathcal{O}(n\times\langle\mathcal{N}\rangle\times(l_{\text{max}}^{2}+F))\) | 3 |

Table 1: Scaling for different (equivariant) message passing architectures, where \(n\) is the number of atoms, \(\langle\mathcal{N}\rangle\) the average number of neighbors and \(l_{\text{max}}\) the maximal degree.

Figure 1: (a) Illustration of an invariant convolution, an SO(3) convolution and the Euclidean attention mechanism that underlies the SO3krates transformer. We decompose the representation of molecular structure into high-dimensional invariant features and equivariant Euclidean variables (EV), which interact via self-attention. (b) The proposed design paradigm can help to overcome current trade-offs between stability in MD simulations and computational efficiency experienced for other (equivariant) MPNNs. (c) The computational efficiency of SO3krates allows the calculation of velocity auto-correlation functions from converged MD simulations for supra-molecular structures. (d) SO3krates enables the exploration of thousands of minima of the potential energy surface of small chainlike molecules such as Ac-Ala3-NHMe or DHA, where SO3krates can robustly extrapolate beyond the training data.

Our solution builds on recent advances in neural network architecture design [39] and in the field of geometric deep learning [33; 34; 35; 40]. Our SO3krates method uses a sparse representation for the molecular geometry and restricts projections of all convolution responses to the most relevant invariant component of the equivariant basis functions. Due to the orthonormality of the spherical harmonics, such a projection corresponds to partial traces of the product tensor, which can be expressed in terms of linear-scaling inner products. This enables efficient scaling to high-degree equivariant representations without sacrificing computational speed or memory efficiency. Force predictions are obtained from the gradient of the resulting invariant energy model, which represents a piece-wise linearization that is naturally equivariant. Throughout, a self-attention mechanism is used to decouple invariant and equivariant basis elements within the model.

We compare the stability and speed of the proposed SO3krates model with current state-of-the-art ML potentials and find that our solution overcomes the limitations of current equivariant MLFFs without compromising on their advantages. Our proposed mathematical formulation leading to an efficient equivariant architecture enables reliably stable MD simulations with a speedup of up to a factor of \(\sim 25\) over equivariant MPNNs with comparable stability and accuracy [30].
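The partial-trace projection mentioned above admits a very compact implementation. The following NumPy sketch (a minimal illustration, not the reference implementation from the mlff repository) contracts an equivariant vector of length \((l_{\text{max}}+1)^{2}\) to one rotation-invariant scalar per degree; for real spherical harmonics the \(l\to 0\) projection reduces, up to a degree-dependent constant, to an inner product over the \(2l+1\) orders of that degree.

```python
import numpy as np

def per_degree_invariants(x, l_max):
    """Contract an equivariant vector x of length (l_max+1)**2 to
    (l_max + 1) rotation-invariant scalars, one per degree l.

    This is the "partial trace" of Fig. 2: instead of a full Clebsch-Gordan
    tensor product, only the l -> 0 component of each degree is kept,
    which costs O(l_max^2) operations.
    """
    invariants = []
    offset = 0
    for l in range(l_max + 1):
        block = x[offset:offset + 2 * l + 1]   # orders m = -l ... l of degree l
        invariants.append(np.dot(block, block))
        offset += 2 * l + 1
    return np.array(invariants)

# toy usage: a random EV difference vector for l_max = 3
l_max = 3
x_ij = np.random.randn((l_max + 1) ** 2)
print(per_degree_invariants(x_ij, l_max))  # four invariant scalars
```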
To demonstrate these capabilities in practice, we run accurate nanosecond-long MD simulations for supra-molecular structures within only a few hours, which allows us to calculate converged auto-correlation functions (vibrational spectra) for structures that range from small peptides with 42 atoms up to nanostructures with 370 atoms. We further apply our model to explore the topology of the PES of docosahexaenoic acid (DHA) and Ac-Ala3-NHMe by investigating 10k minima using a minima hopping algorithm [41]. Such an investigation requires roughly 30M FF evaluations that are queried at temperatures between a few hundred kelvin and \(\sim 1200\,\mathrm{K}\). With DFT methods, this analysis would require more than a year of computation time. Existing equivariant MLFFs with comparable prediction accuracy would need more than a month for such an analysis. In contrast, we are able to perform the simulation in only 2.5 days, opening up the possibility to explore hundreds of thousands of PES minima on practical timescales. In one of our experiments, we further show that SO3krates enables the detection of physically valid minima conformations which have not been part of the training data. The ability to extrapolate to unknown parts of the PES is essential for scaling MLFFs to large structures, since the available _ab-initio_ reference data can only cover subregions of the PES for conformationally rich structures.

Furthermore, we examine the impact of disabling the equivariance property in our network architecture to gain a deeper understanding of its influence on the characteristics of the model and its reliability in MD simulations. We find that the equivariant nature can be linked to the stability of the resulting MD simulations and to the extrapolation behavior towards higher temperatures. We are able to show that equivariance lowers the spread of the error distribution even when the test error estimate is the same on average. Thus, using directional information via equivariant representations shows analogies in spirit to classical ML theory, where mapping into higher dimensions yields richer feature spaces that are easier to parametrize [42; 43; 44].

## II Results

### From Equivariant Message Passing Neural Networks to Separating Invariant and Equivariant Structure: SO3krates

MPNNs [31] carry over many of the properties of convolutions to unstructured input domains, such as sets of atomic positions in Euclidean space. This has made them a promising approach for the description of the PES [45; 46; 47; 17; 24; 33; 12], where the potential energy is typically predicted as

\[E_{\mathrm{pot}}(\vec{r}_{1},\ldots,\vec{r}_{n})=\sum_{i=1}^{n}E_{i}. \tag{1}\]

The energy contributions \(E_{i}\in\mathbb{R}\) are calculated from high-dimensional atomic representations \(f_{i}^{[T]}\).

Figure 2: SO(3) convolutions are constructed as triple tensor products in the spherical harmonics basis, which are performed \(F\) times along the feature dimension. We replace SO(3) convolutions by a parametrized filter function on the invariants (red blocks), which effectively reduces the triple tensor product to taking the partial (per-degree) trace of a simple tensor product. Colored volumes correspond to the non-zero entries in the Clebsch-Gordan coefficients, which mask the tensor products.

The atomic representations are constructed iteratively (over \(T\) steps) by aggregating
pairwise messages \(m_{ij}\) over atomic neighborhoods \(\mathcal{N}(i)\)

\[f_{i}^{[t+1]}=\text{\sc Upd}\left(f_{i}^{[t]},\bigoplus_{j\in\mathcal{N}(i)}m_{ij}\right), \tag{2}\]

where \(\text{\sc Upd}(\cdot)\) is an update function that mixes the representations from the prior iteration and the aggregated messages. One way of incorporating the rotational invariance of the PES is to build messages that are based on invariant inputs such as distances, angles or dihedral angles. However, this incomplete list of features cannot discriminate certain interaction patterns [48]. An alternative is to use SO(3) equivariant representations [47, 35, 33, 49] within a basis that allows for systematic expansion to match the complexity of the modelled system. This requires generalizing the concept of invariant continuous convolutions [12] to the SO(3) group of rotations. A message function performing an SO(3) convolution can be written as [33, 37]

\[m_{ij}^{LM}=\sum_{l_{1}l_{2}m_{1}m_{2}}C_{l_{1}l_{2}L}^{m_{1}m_{2}M}\phi^{l_{1}l_{2}L}(r_{ij})Y_{l_{1}}^{m_{1}}(\hat{r}_{ij})f_{j}^{l_{2}m_{2}}, \tag{3}\]

where \(C_{l_{1}l_{2}L}^{m_{1}m_{2}M}\) are the _Clebsch-Gordan coefficients_, \(Y_{l}^{m}\) is a spherical harmonic of degree \(l\) and order \(m\), the function \(\phi^{l_{1}l_{2}L}:\mathbb{R}\mapsto\mathbb{R}^{F}\) modulates the radial part and \(f_{j}^{l_{2}m_{2}}\in\mathbb{R}^{F}\) is an atomic feature vector. Thus, performing a single convolution scales as \(\mathcal{O}(l_{\text{max}}^{6}\times F)\), where \(l_{\text{max}}\) is the largest degree in the network (Fig. 2).

Here we _propose_ two conceptual changes to Eq. (3) that we will denote as Euclidean self-attention: (1) we separate the message into an invariant and an equivariant part and (2) we replace the SO(3) convolution by an attention function on its invariant output. To do so, we start by initializing atomic features \(f_{i}^{[t=0]}\in\mathbb{R}^{F}\) and _Euclidean variables_ (EV) \(x_{i,LM}^{[t=0]}\in\mathbb{R}\) from the atomic types and the atomic neighborhoods, respectively. Collecting all orders and degrees of the EV in a single vector gives \((l_{\text{max}}+1)^{2}\)-dimensional representations \(\mathbf{x}_{i}\) that transform equivariantly under rotation and capture directional information up to degree \(l_{\text{max}}\) (methods section IV.1).

Figure 3: SO3krates architecture and building blocks. Taking atomic types and positions as input, they are embedded into invariant features \(F\) and equivariant EV \(X\) (methods section IV.1). They are then refined by \(T\) Euclidean transformer blocks (ecTblock) (Eq. (9)) before the final invariant features are used to predict the potential energy (Eq. (1)). After the Euclidean attention block, features and EV exchange per-atom information within the interaction block. Both blocks are wrapped in skip connections, which carry over information from prior layers. For a detailed description of the individual parts, see the methods section.

(1) The message for the invariant part is written as

\[m_{ij}=\alpha_{ij}f_{j} \tag{4}\]

and the one for the equivariant part as

\[m_{ijLM}=\alpha_{ij,L}Y_{L}^{M}(\hat{r}_{ij}), \tag{5}\]

where \(\alpha_{ij},\alpha_{ij,L}\in\mathbb{R}\) are (per-degree) _attention coefficients_. Features and EV are updated with the aggregated messages, which reads

\[f_{i}^{[t+1]}=f_{i}^{[t]}+\sum_{j\in\mathcal{N}(i)}m_{ij} \tag{6}\]

for the features and

\[x_{iLM}^{[t+1]}=x_{iLM}^{[t]}+\sum_{j\in\mathcal{N}(i)}m_{ijLM} \tag{7}\]

for the EV.
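A minimal NumPy sketch of the split update in Eqs. (4)-(7) is given below for a single atom and \(l_{\text{max}}=1\). The attention coefficients are placeholders (constants), since their construction via the Euclidean attention function is only introduced in the methods section, and the real spherical harmonics are hard-coded for degrees 0 and 1.

```python
import numpy as np

def real_sph_harm_l01(r_hat):
    """Real spherical harmonics for degrees l = 0, 1 of a unit vector."""
    x, y, z = r_hat
    return np.array([0.28209479,        # Y_0,0
                     0.48860251 * y,    # Y_1,-1
                     0.48860251 * z,    # Y_1,0
                     0.48860251 * x])   # Y_1,1

def update_atom(f_i, x_i, f_neighbors, r_ij_vectors, alpha, alpha_l):
    """One feature/EV update for atom i, following Eqs. (4)-(7)."""
    f_new, x_new = f_i.copy(), x_i.copy()
    for f_j, r_ij, a, a_l in zip(f_neighbors, r_ij_vectors, alpha, alpha_l):
        r_hat = r_ij / np.linalg.norm(r_ij)
        f_new += a * f_j                          # invariant message, Eq. (4)
        sph = real_sph_harm_l01(r_hat)            # (l_max+1)**2 = 4 entries
        x_new += np.repeat(a_l, [1, 3]) * sph     # per-degree message, Eq. (5)
    return f_new, x_new

# toy data: atom i with two neighbors, feature dimension F = 8
F = 8
f_i, x_i = np.zeros(F), np.zeros(4)
f_nb = [np.random.randn(F), np.random.randn(F)]
r_nb = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.2, 0.3])]
alpha = [0.5, 0.5]                     # one invariant coefficient per neighbor
alpha_l = [np.array([0.5, 0.5])] * 2   # one coefficient per degree (l = 0, 1)
f_i, x_i = update_atom(f_i, x_i, f_nb, r_nb, alpha, alpha_l)
print(f_i.shape, x_i.shape)            # (8,) and (4,)
```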
Due to the separation, the overall message calculation scales as \(\mathcal{O}(l_{\text{max}}^{2}+F)\), replacing the multiplication of feature dimension and \(l_{\text{max}}\) from other equivariant architectures by addition (Tab. 1). (2) Instead of performing full SO(3) convolutions, we move the learning of complex interaction patterns into an attention function \[\alpha_{ij}=\alpha\left(f_{i},f_{j},r_{ij},\oplus_{l=0}^{l_{\text{max}}} \mathbf{x}_{ij,l\sim 0}\right), \tag{8}\] where \(\oplus_{l=0}^{l_{\text{max}}}\mathbf{x}_{ij,l\sim 0}\) is the invariant output of the SO(3) convolution over the EV signals located on atom \(i\) and \(j\) (methods section IV.2). Thus, Eq. (8) non-linearly incorporates information about the relative orientation of atomic neighborhoods. Since the Clebsch-Gordan coefficients are diagonal matrices along the \(l=0\) axis (Fig. 2), calculating the invariant projections requires to take per-degree traces of length (\(2l+1\)) and can be computed efficiently in \(\mathcal{O}(l_{\text{max}}^{2})\). Within SO3krates atomic representations are refined iteratively as \[[\mathbf{f}_{i}^{[t+1]},\mathbf{x}_{i}^{[t+1]}]=\text{ecTblock}\big{[}\{\mathbf{f }_{j}^{[t]},\mathbf{x}_{j}^{[t]},\vec{r}_{ij}\}_{j\in\mathcal{N}(i)}\big{]}, \tag{9}\] where each Euclidean transformer block (ecTblock) consists of a self-attention block and an interaction block. The self-attention block, implements the Euclidean self-attention mechanism described in the former section. The interaction block gives additional freedom for parametrization by exchanging information between features and EV located at the same atom. After \(T\) MP steps, per-atom energies \(E_{i}\) are calculated from the final features \(f_{i}^{[T]}\) using a two-layered neural network and are summed to the total potential energy (Eq. (1)). Atomic forces are obtained using automatic differentiation, which ensures energy conservation. A detailed outline of the architectural components and the proposed Euclidean self-attention framework is given in the methods section. ### Overcoming Accuracy-Stability-Speed Trade-Offs We show in the following experiment, that SO3krates can overcome the trade-offs between MD stability, accuracy and computational efficiency (Fig. 4). A recent study compared the stability of different state-of-the-art MLFFs in short MD simulations and found that only the SO(3) convolution based architecture NequIP [33] gave reliably stable results [30]. The excellent stability of such models, however, comes at the price of extensive computational cost (Fig. 4 (a)) which stems from equivariant features and SO(3) convolutions. This leads to a trade-off between the stability and the computational efficiency of MP based MLFFs, but SO3krates can overcome this stability-speed trade-off. It allows to predict up to one order of magnitude more frames per second (FPS) without sacrificing reliability in MD simulations. Although the test accuracy does not necessarily correlate with the stability (compare e.g. GemNet and SphereNet in Fig. 4 (a) and (b)), only accurate _and_ stable models are of ultimate interest. We find, that SO3krates yields accurate force predictions, thus overcoming the complementary trade-off between accuracy and speed (Fig. 4 (b)). For the radial distribution functions (RDFs), we find consistent results across five simulations (SI Fig. 12) for all of the four investigated structures, which are in agreement with the RDFs from DFT calculations. 
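The radial distribution functions discussed above can be estimated directly from a stored trajectory. The sketch below (assuming an in-memory array of Cartesian frames for an isolated molecule, without periodic boundaries or bulk-density normalization; the file name in the usage comment is hypothetical) illustrates the basic histogram construction used for such comparisons.

```python
import numpy as np

def radial_distribution(frames, r_max=6.0, n_bins=120):
    """Shell-volume-normalized histogram of all pair distances.

    frames: array of shape (n_frames, n_atoms, 3) in Angstrom (no PBC).
    Returns bin centers and an (unnormalized) g(r), sufficient for
    comparing MLFF and DFT trajectories of the same system.
    """
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for pos in frames:
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        iu = np.triu_indices(len(pos), k=1)        # each pair counted once
        hist, _ = np.histogram(d[iu], bins=edges)
        counts += hist
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, counts / (shell * len(frames))

# usage sketch: r, g = radial_distribution(np.load("trajectory.npy"))
```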
Interestingly, it has been found that other approaches with a larger number of FPS can give inaccurate RDFs, with MAEs between 0.35 (salicylic acid) and 1.02 (naphthalene) [30]. In comparison, the accuracies achieved with SO3krates show that the seemingly contradictory requirements of high computational speed and accurate observables from MD trajectories can be reconciled. A recent work proposed a strictly local equivariant architecture called Allegro [53]. This allows for parallelization without additional communication, whereas parallelization of MPNNs with \(T\) layers requires \(T-1\) additional communication calls between computational nodes. Using the Li\({}_{3}\)PO\({}_{4}\) solid electrolyte as an example, we compare accuracy and speed to the Allegro model for a unit cell with 192 atoms (Tab. 2). Remarkably, SO3krates achieves energy and force accuracies more than 50% better than the ones reported in [53], even with only one tenth of the training data. At the same time, the timings in MD simulations are on par. To validate the physical plausibility of the obtained MD trajectory, we compare the RDFs at 600 K to the ones obtained from DFT in the quenched phase of Li\({}_{3}\)PO\({}_{4}\) (SI Fig. 15).

### Data Efficiency, Stability and Extrapolation

Data efficiency and MD stability play an important role for the applicability of MLFFs. High data efficiency allows accurate PES approximations to be obtained even when only little data is available, which is a common setting due to the computational complexity of quantum mechanical _ab-initio_ methods. Even when high accuracies can be achieved, without MD stability the calculation of physical observables from the trajectories becomes impossible. Here, we show that the data efficiency of SO3krates can be successively increased by increasing the largest degree \(l_{\text{max}}\) in the network (SI Fig. 13). We further find that the stability and the extrapolation to higher temperatures of the MLFF can be linked to the presence of equivariant representations, independent of the test error estimate (Fig. 5). To understand the benefits of directional information, we use an equivariant (\(l_{\text{max}}=3\)) and an invariant model (\(l_{\text{max}}=0\)) within our analysis. Due to the use of multi-head attention, the change in the number of network parameters is negligible when going from \(l_{\text{max}}=0\) to \(l_{\text{max}}=3\) (methods section IV.8). All models were trained on 11k randomly sampled geometries, from which 1k are used for validation. This number of training samples was necessary to attain force errors close to 1 kcal mol\({}^{-1}\) Å\({}^{-1}\) for the invariant model. Since equivariant representations increase the data efficiency of ML potentials [33, 24], we expect the equivariant model to have a smaller test error estimate given the same number of training samples. We confirm this expectation for the DHA molecule, where we compare the data efficiency for different degrees \(l_{\text{max}}\) (SI Fig. 13). To make the comparison of the invariant and equivariant models as fair as possible, we train the invariant model until the validation loss converges. Afterwards, we train the equivariant model towards the same validation error, which leads to identical errors on the unseen test set (Fig. 5 and SI Tab. IV).
| | \(n_{\text{train}}\) | \(E_{\text{MAE}}\) [meV/atom] | \(F_{\text{MAE}}\) [meV/Å] | \(\mu\)s/(step·atom) |
| --- | --- | --- | --- | --- |
| Allegro [53] | 10k | 1.7 | 73.4 | \(27.785^{*}\) |
| SO3krates | 10k | 0.2 | 28.2 | \(23.593^{*}\) |
| SO3krates | 1k | 0.3 | 31.8 | \(23.593^{*}\) |

Table 2: Speed in MD simulation and accuracy comparison to the strictly local Allegro model for Li\({}_{3}\)PO\({}_{4}\) (192 atoms) on a single V100 GPU as reported in [53].

Figure 4: (a) Number of frames per second (FPS) vs. the averaged stability coefficient (Eq. (28)) in MD simulations run with different state-of-the-art MPNN architectures [33, 24, 38, 45, 50, 51, 52, 34] and (b) FPS vs. the averaged force MAE for four small organic molecules from the MD17 data set as reported in [30]. SO3krates yields reliable MD simulations and high accuracies without sacrificing computational performance. (c) Stability and speed of SO3krates enable nanosecond-long MD simulations for supra-molecular structures within a few hours. For the buckyball catcher, the ball stays in the catcher over the full simulation time of 20 ns, illustrating that the model successfully picks up on weak, non-covalent bonding.

Since the equivariant model makes more efficient use of the training data, it requires only \(\sim\nicefrac{1}{5}\) of the number of training steps of the invariant model to reach the same validation error (SI Fig. 14 (a)). After training, we compare the test error distributions, since identical mean statistics do not imply a similar distribution. We calculate per-atom force errors as \(\epsilon_{i}=||\vec{F}_{i}-\vec{F}_{i}^{\text{GT}}||_{2}\) and compare the resulting distributions of the invariant and the equivariant model. The observed distributions are identical in nature and only differ slightly in height and spread, without the presence
Thus, directional information has effects on the learned energy manifold that go beyond accuracy and data efficiency. A subtle case is highlighted by the adenine-thymine complex (AT-AT). The MD simulations show one instability (in a total of six runs) for the equivariant model at \(500\,\mathrm{K}\), which illustrates that the stability improvement of an equivariant model should be considered as a reduction of the chance of failure rather than a guarantee for stability. We remark that unexpected behaviors can not be ruled out for any empirical model. We further observed dissociation of substructures (either A, T or AT) from the AT-AT complex during MD simulations (Fig. 6 (a.ii)). Such a behavior corresponds to the breaking of hydrogen bonds or \(\pi\)-\(\pi\)-interactions, which highlights weak interactions as a challenge for MLFFs. Interestingly, for other supra-molecular structures the non-covalent interactions are described correctly (section II.4 and Fig. 1 (b)). The training data for AT-AT has been sampled from a \(20\,\mathrm{ps}\) long _ab-initio_ MD trajectory which only covers a small subset of all possible conformations and makes it likely to leave the data manifold. As a consequence, we observe an increase in the rate of dissociation when increasing the simulation temperature, since it effectively extends the space of accessible conformations per unit simulation time. ### From Peptides to Nanostructures Velocity auto-correlation functions are an important tool to relate MD simulations to real world experimental data. Here, we calculate velocity auto-correlation functions for systems ranging from small peptides up to host-guest systems and nanostructures. To achieve a correct description for such systems, the model must describe non-covalent bonding correctly and be stable for nanoseconds of simulation time. For the largest structure with 370 atoms, 5M MD steps with SO3krates takes 20h simulation time (\(\sim$15\,\mathrm{ms}$\) per step). We train an individual model for each structure in the MD22 data set and compare it to the sGDML model (Tab. 3). To that end, we decided to train the model on two different sets of training data sizes: (A) On structure depended sizes (600 to 8k) as reported in [55], and (B) on structure independent sizes of 1k training points per structure. Since some settings might require accurate predictions when trained on a smaller number of training data points, we chose to include setting (B) into our analysis. The approximation accuracies achievable with SO3krates compare favourably to the ones that have been observed with the sGDML model [55, 32] (Tab. 3). Even for setting (B) the force errors on the test set are below \(1\,\mathrm{kcal}\,\mathrm{mol}^{-1}\,\mathrm{\SIUnitSymbolAngstrom}^{-1}\). We use the SO3krates FFs to run \(1\,\mathrm{ns}\) long MD simulations, which enables the calculation of converged velocity auto-correlation functions and a comparison to experimental data from IR spectroscopy. We start by analysing two supra-molecular structures in form of a host-guest system and a small nanomaterial. The former play an important role for a wide range of systems in chemistry and biology [56, 25], whereas the latter offer promises for the design of materials with so Figure 5: (a) Per-structure error distributions for an invariant and an equivariant SO3krates model with the same mean error on the test set. Spread and mean of the error distributions are given in SI Tab. 4. 
(b) The MD stability observed at temperatures \(300\,\mathrm{K}\) and \(500\,\mathrm{K}\). The transition to higher temperatures results in a drop of stability for the invariant model, hinting towards less robustness and weaker extrapolation behavior. Flexible molecules such as DHA pose a challenge for the invariant model at \(300\,\mathrm{K}\) already. far unprecedented properties [57]. Here, we investigate the applicability of the SO3krates FF to such structures on the example of the buckyball catcher and the double walled nanotube (Fig. 6 (b)). For both systems under investigation, one finds notable peaks for C-C vibrations (\(500\,\mathrm{cm}^{-1}\) and \(1500\,\mathrm{cm}^{-1}\)), C-H bending (\(\sim 900\,\mathrm{cm}^{-1}\)) and for high frequency C-H stretching (\(\sim 3000\,\mathrm{cm}^{-1}\)). Both systems exhibit covalent and non-covalent interactions [56, 58], where e.g. van-der-Waals interactions hold the inner tube within the outer one. Although small in magnitude, we find the MLFF to yield a correct description for both interaction classes, such that the largest degree of freedom for the double walled nanotube corresponds to the rotation of the tubes w. r. t. each other, in line with the findings from [55]. For DHA, we further analyze the evolution of the velocity auto-correlation function with temperature and find non-trivial shifts in the spectrum hinting towards the capability of the model to learn non-harmonic contributions of the PES. As pointed out in [59], FFs that only rely on (learn) harmonic bond and angle approximations fail to predict changing population or temperature shifts in the middle to high frequency regime. Similar results are obtained for Ac-Ala3-NIMMe (SI Fig. 11). ### Potential Energy Surface Topology The accurate description of conformational changes remains one of the hardest challenges in molecular biophysics. Every conformation is associated with a local minimum on the PES, and the count of these minima Figure 6: (a) Dissociation of the AT-AT complex over time, due to the breaking of \(\pi\)-\(\pi\) interactions. (b) Velocity auto-correlation function for the buckyball catcher (upper) and the double-walled nanotube (lower). For the nanotube, the structure is shown from the side (i) and from the front (ii). (c) Temperature dependency of the velocity auto-correlation function, investigated along the DHA molecule for three different temperatures. All auto-correlation functions have been obtained from MD simulations over 1 ns. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Ac-Ala3-NHMe} & DHA & Stachyose & AT-AT & AT-AT-CG-CG & Buckyball catcher & Double walled nanotube \\ \hline \# training points & 6k & 8k & 8k & 3k & 2k & 600 & 800 \\ \hline \multirow{2}{*}{sGDML} & _Energy_ & 0.39 & 1.29 & 4.00 & 0.72 & 1.42 & 1.17 & 4.00 \\ & _Forces_ & 0.79 & 0.75 & 0.68 & 0.69 & 0.70 & 0.68 & 0.52 \\ \multirow{2}{*}{SO3krates} & _Energy_ & 0.337 & 0.379 & 0.442 & 0.178 & 0.345 & 0.381 & 0.993 \\ & _Forces_ & 0.244 & 0.242 & 0.435 & 0.216 & 0.332 & 0.237 & 0.727 \\ \hline \multirow{2}{*}{\# training points} & 1k & 1k & 1k & 1k & 1k & 1k & 1k \\ \hline \multirow{2}{*}{SO3krates} & _Energy_ & 0.270 & 0.338 & 0.571 & 0.237 & 0.387 & 0.343 & 1.171 \\ & _Forces_ & 0.417 & 0.363 & 0.623 & 0.310 & 0.404 & 0.224 & 0.761 \\ \hline \hline \end{tabular} \end{table} Table 3: We report MAEs for the recently introduced MD22 benchmark and compare it to the sGDML results. 
Additionally, we report results for a constant number of 1k training points. Units for energy and forces are \(\mathrm{kcal\,mol}^{-1}\) and \(1\,\mathrm{kcal\,mol}^{-1}\,\mathrm{\AA}^{-1}\). increases exponentially with system size. This limits the applicability of _ab-initio_ methods or computationally expensive MLFFs, since even the sampling of sub-regions of the PES involves the calculation of thousands to millions of equilibrium structures. Here, we explore 10k minima for two small bio-molecules, which requires \(\sim 30\)M FF evaluations per simulation. This analysis would require more than a year with DFT and more than a month with previous equivariant architectures, whereas we are able to perform it in \(\sim 2\) days. We employ the minima hopping algorithm [41], which explores the PES based on short MD simulations (escapes) that are followed by structure relaxations. The MD temperature is determined dynamically, based on the history of minima already found. In that way low energy regions are explored and high energy (temperature) barriers can be crossed as soon as no new minima are found. This necessitates a fast MLFF, since each escape and structure relaxation process consists of up to a few thousands of steps. At the same time, the adaptive nature of the MD temperature, can result in temperatures larger than the training temperature (SI Fig. 16 (a)) which requires stability towards out-of-distribution geometries. We start by exploring the PES of DHA and analyse the minima that are visited during the optimization (Fig. 7 (a)). We find many minima close in energy which are associated with different foldings of the carbon chain due to van-der-Waals interactions. This is in contrast to the minimum energies found for other chain-like molecules such as Ac-Ala3-NHMe, where less local minima are found per energy unit (SI Fig. 18 (a)). The largest observed energy difference corresponds to \(0.57\,\mathrm{eV}\), where the minima with the largest potential energy (top) and the lowest potential energy (bottom) as well as an example structure from the intermediate energy regime (middle) are shown in Fig. 7 (b). We find the observed geometries to be in line with the expectation that higher energy configurations promote an unfolding of the carbon chain. Funnels are sets of local minima separated to other sets of local minima by large energy barriers. The detection of folding funnels plays an important role in protein folding and finding native states, which determine biological functioning and properties of proteins. The combinatorial explosion of the number of minima configurations makes funnel detection unfeasible with _ab-initio_ methods or computationally expensive MLFFs. We use the visited minima and the transition state energies that are estimated from the MD between successive minima to create a so-called _disconnectivity graph_[60]. It allows detect multiple funnels in the PES of DHA, which are separated by energy barriers up to \(3\,\mathrm{eV}\). Ac-Ala-NHMe is a popular example system for bio-molecular simulations, as its conformational changes are primarily determined by Ramachandran dihedral angles. These dihedral angles also play a crucial role in represent Figure 7: (a) Results of a minima search for DHA. We ran the simulation until 10k minima have been visited, which corresponds to 20M MD steps for the escape trials and to \(\sim 10\)M PES evaluations for the structure relaxations, afterwards. 
(b) Minima with the largest energy (top), the lowest energy (bottom) and an example minimum with an intermediate energy value (middle) are depicted. (c) Disconnectivity graph for all unique minima in the first 2k visited minima. Disconnectivity graphs show groups of minima at different energy levels. (d) Ramachandran density plots for the training conformations (upper, blue) and of the visited minima during minima hopping (lower, green) for two of the six backbone angles in Ac-Ala3-NHMe. Red dots correspond to the actually visited minima. Parts of the visited minima have not been in the training data, hinting towards the capability of the model to find minima beyond the training data. (e) Ac-Ala3-NHMe structure with backbone angles as inset. (f) Relative energies for four minima, which have been selected from the regions in \(\psi-\phi\) space visited most frequently during minima hopping (1 - 4 in (d)). SO3krates energies are compared to a DFT single point calculation and to the conformation obtained from a full DFT relaxation starting from the minima obtained from SO3krates. (g) Location in the Ramachandran plot of the minima obtained with SO3krates and the relaxed DFT minima. ing important degrees of freedom in significantly larger peptides or proteins [61]. Here, we go beyond this simple example and use the minima hopping algorithm to explore 10k minima of Ac-Ala3-NHMe and visualize their locations in a Ramachandran plot (green in Fig. 7 (d)) for two selected backbone angles \(\phi\) and \(\psi\) (Fig. 7 (e)). By investigating high density minima regions and comparing them to the training data (blue in (Fig. 7 (d))), we can show that SO3krates finds minima in PES regions, which highlights the capability of the model to extrapolate beyond known conformations. Extrapolation to unknown parts of the PES is inevitable for the application of MLFFs in bio-molecular simulations, since the computational cost of DFT only allows to sample sub-regions of the PES for increasingly large structures. To confirm the physical validity of the found minima, we select one equilibrium geometry from each of the four highest density regions in the Ramachandran plots (1 - 4 in Fig. 7 (d)). A comparison of the corresponding energies predicted by SO3krates with DFT single point calculations (Fig. 7 (f)) shows excellent agreement with a mean deviation of 3.45 meV for this set of four points. Remarkably, the minimum in the unsampled region of the PES (red box in Fig. 7 (d)) only deviates by mere 0.7 meV in energy. We further compare the SO3krates relaxed structure to structures obtained from a DFT relaxation, initiated from the same starting points. For minima 1 and 2, we again find excellent agreement with an energy error of 2.38 meV and 3.57 meV, respectively. The extrapolated minima 4 shows a slightly increased deviation (41.84 meV), which aligns with our expectation that the model performs optimally within the training data regime. Further, minima 1, 2 and 4 show good agreement with the backbone angles obtained from DFT relaxations (Fig. 7 (g)). For minimum 3, we find the largest energy deviation w. r. t. both, DFT single point calculation and DFT relaxation. When comparing the relaxed structures, we observe that one methyl group is rotated by 180\({}^{\circ}\), the addition of a hydrogen bond and a stronger steric strain in the SO3krates prediction. These deviations coincide with a relatively large distance in the \(\phi\)-\(\psi\) plane (Fig. 7 (g)). 
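The backbone angles \(\phi\) and \(\psi\) that define the Ramachandran plots above are ordinary torsion angles. A small helper using the standard four-point formula is sketched below; the atom indices in the usage comment are placeholders that depend on the atom ordering of the structure file.

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle in degrees defined by four atomic positions."""
    b0 = p0 - p1
    b1 = (p2 - p1) / np.linalg.norm(p2 - p1)
    b2 = p3 - p2
    # project b0 and b2 onto the plane perpendicular to the central bond b1
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    return np.degrees(np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w)))

# phi/psi from a geometry array `pos` of shape (n_atoms, 3); the index
# quadruples (C-N-CA-C for phi, N-CA-C-N for psi) are placeholders whose
# values depend on the atom ordering of the structure file.
# phi = dihedral(pos[i_Cprev], pos[i_N], pos[i_CA], pos[i_C])
# psi = dihedral(pos[i_N], pos[i_CA], pos[i_C], pos[i_Nnext])
```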
To investigate the extend of minimum 3, we have generated random perturbations of the equilibrium geometry from which additional relaxation runs have been initiated. All optimizations returned into the original minimum (SI Fig. 18 (b)), confirming that it is not an artifact due to a non-smooth or noisy PES representation. ## III Discussion Long time-scale MD simulations are essential to reveal converged dynamic and thermodynamic observables of molecular systems [62, 63, 64, 65]. Despite achieving low test errors, many state-of-the-art MLFFs exhibit unpredictable behavior caused by the accumulation of unphysical contributions to the output, making it extremely difficult or even impossible to reach extended timescales [30]. This prevents the extraction of physically faithful observables at scale. Ongoing research aims at improving stability by incorporating physically meaningful inductive biases via various kinds of symmetry constraints [17, 33, 34, 66, 67, 8, 11], but the large computational cost of current solutions mitigates many practical advantages. We overcome the challenging trade-off between stability and computational cost by combining two novel concepts - a Euclidean self-attention mechanism and the EV as efficient representation for molecular geometry - within the equivariant transformer architecture SO3krates. The exceptional performance of our approach is due to the decoupling of invariant and equivariant information, which enables a substantial reduction in computational complexity compared to other equivariant models. Our architecture strategically emphasises the importance of the more significant invariant features over equivariant ones, resulting in a more efficient allocation of computational resources. While equivariant features carry important directional information, the core of ML inference lies in the invariant features. Only invariant features can be subjected to powerful non-linear transformations within the architecture, while equivariant features essentially have to be passed-through to the output in order to be preserved. In our implementation, the computationally cheap invariant parts (\(l=0\)) of the model are allowed to use significantly more parameters than the costly equivariant ones (\(l>0\)). Despite this heavy parameter reduction of the equivariant components, desirable properties associated with equivariant models, such as high data efficiency, reliable MD stability, and temperature extrapolation could still be preserved. In the context of MD simulations, we found that the equivariant network (SO3krates with \(l_{\text{max}}>0\)) gives smaller force error distributions than its invariant counterpart (SO3krates with \(l_{\text{max}}=0\)). This effect, however, is only visible when the force error is investigated on a per-structure and not on the per-atom level. This observation indicates that the invariant network over-fits to certain structures. We also found the equivariant model to remain stable across a large range of temperatures, whereas the stability of the invariant model quickly decreases with increasing temperature. Since higher temperatures increase the probability of out-of-distribution geometries, this may hint towards a better extrapolation behavior of the equivariant model. 
Applying the SO3krates architecture to different structures from the MD22 benchmark, including peptides (Ac-Ala3-NHMe, DHA) and supra-molecular structures (AT-AT, buckyball catcher, double walled nanotube), yields stable molecular dynamics (MD) simulations with impressive time scales of tens of nanoseconds per day. This enables the computation of converged velocity auto-correlation functions, allowing comparison to experimental measurements. We have also shown, that SO3krates reliably reveals conformational changes in small bio-molecules on the example of DHA and Ac-Ala3-NHMe. To that end, SO3krates is able to predict physically valid minima conformations which have not been part of the training data. The representative nature of Ac-Ala3-NHMe holds the potential that a similar behavior can be obtained for much larger peptides and proteins. The limited availability of _ab-initio_ data for structures at this scale, makes extrapolation to unknown parts of the PES a crucial ingredient on the way to large scale bio-molecular modeling. While our development makes stable extended simulation timescales accessible using modern MLFF modeling paradigms in an unprecedented manner, future work remains to be done in order to bring the applicability of MLFFs even closer to that of conventional classical FFs. Various encouraging avenues in that direction are currently emerging: In the current design, the EV are only defined in terms of two-body interactions. Recent results suggest that accuracy can be further improved by incorporating atomic cluster expansions into the MP step [68, 69, 47]. At the same time, this may help reducing the number of MP steps which in turn decreases the computational complexity of the model. Another, yet open discussion is the appropriate treatment of global effects. Promising steps have been taken by using low-rank approximations [70, 24], trainable Ewald summation [71] or by adding long-range corrections from continuum solvent theory [72]. Further, a recent work showed that adding long-range interactions can improve the accuracy on the MD22 benchmark [73]. Future work will therefore focus on the seamless incorporation of many-body expansions, global effects, and long-range interactions into the EV formalism and aim to further increase computational efficiency to ultimately bridge MD time-scales at high accuracy. ## IV Methods ### Features and Euclidean Variables (EV) Per-atom feature representations are initialized based on the atomic number \(z_{i}\) using an embedding function \[f_{i}=f_{\text{emb}}(z_{i}), \tag{10}\] which maps the atomic number into the \(F\) dimensional feature space \(f_{\text{emb}}:\mathbb{N}_{+}\mapsto\mathbb{R}^{F}\). For a given degree \(l\) and order \(m\), the EV are defined as \[x_{ilm}=\frac{1}{\langle\mathcal{N}\rangle}\sum_{j\in\mathcal{N}(i)}\phi_{r_{ \text{cut}}}(r_{ij})\cdot Y_{m}^{l}(\hat{r}_{ij}), \tag{11}\] where the output of \(Y_{m}^{l}(\hat{r}_{ij})\) is modulated with a distance dependent cutoff function which ensures a smooth PES when atoms leave or enter the cutoff sphere. Alternatively, the EV can be initialized with all zeros, such that they are "initialized" in the first attention update 16. The aggregation is re-scaled by the average number of neighbors over the whole training data set \(\langle\mathcal{N}\rangle\), which helps stabilizing network training. 
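A minimal sketch of the EV initialization in Eq. (11) is given below. It assumes a callable returning the real spherical harmonics of a unit vector (for example the degree-0/1 helper sketched earlier), uses the smooth cosine cutoff of the paper's Eq. (17), and takes the sign convention \(\vec{r}_{ij}=\vec{r}_{j}-\vec{r}_{i}\) as an assumption.

```python
import numpy as np

def init_euclidean_variables(positions, neighbors, r_cut, n_avg, sph_fn):
    """Initialize the EV of Eq. (11) for every atom.

    positions: (n_atoms, 3) array; neighbors: list of index lists N(i);
    sph_fn: callable mapping a unit vector to (l_max+1)**2 real spherical
    harmonics; n_avg: average neighbor count used for rescaling.
    Assumes every atom has at least one neighbor within r_cut.
    """
    def cutoff(r):  # cosine cutoff, smooth decay to zero at r = r_cut
        return 0.5 * (np.cos(np.pi * r / r_cut) + 1.0) * (r < r_cut)

    ev = []
    for i, nbrs in enumerate(neighbors):
        x_i = 0.0
        for j in nbrs:
            r_vec = positions[j] - positions[i]
            r = np.linalg.norm(r_vec)
            x_i = x_i + cutoff(r) * sph_fn(r_vec / r)
        ev.append(np.asarray(x_i) / n_avg)
    return np.stack(ev)   # shape (n_atoms, (l_max+1)**2)
```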
By collecting all degrees and orders up to \(l_{\text{max}}\) within one vector \[\mathbf{x}_{i}=\big{[}x_{i00},x_{i1-1},\ldots,x_{il_{\text{max}}l_{\text{max}}} \big{]}, \tag{12}\] one obtains an equivariant per-atom representation of dimension \((l_{\text{max}}+1)^{2}\) which transforms according to the corresponding Wigner-D matrices. ### SO(3) Convolution Invariants The convolution output for degree \(L\) and order \(M\) on the difference vector \(\mathbf{x}_{ij}=\mathbf{x}_{j}-\mathbf{x}_{i}\) can be written as \[x_{ij}^{LM}=\sum_{l_{1}l_{2}m_{1}m_{2}}C_{l_{1}l_{2}L}^{m_{1}m_{2}M}x_{ij}^{l1 m1}x_{ij}^{l2m2} \tag{13}\] where \(C_{l_{1}l_{2}L}^{m_{1}m_{2}M}\) are the Clebsch-Gordan coefficients. Considering the projection on the zeroth degree \(L=M=0\) \[x_{ij}^{00}=\sum_{l_{1}}\underbrace{\sum_{m_{1}}C_{l_{1}l_{0}0}^{m_{1}-m_{1} 0}x_{ij}^{l1m1}x_{ij}^{l1-m1}}_{\equiv\bigoplus_{l=0}^{l_{\text{max}}}\mathbf{x}_{ i,j,l-0}}, \tag{14}\] one can make use of the fact that \(C_{l_{1}l_{2}L}^{m_{1}m_{2}M}\) is valid for \(|l_{1}-l_{2}|\leq L\leq l_{1}+l_{2}\) and \(M=m_{1}+m_{2}\), which corresponds to having nonzero values along the diagonal only (Fig. 2). Thus, evaluating \(\bigoplus_{l=0}^{l_{\text{max}}}\mathbf{x}_{ij,l\to 0}\) requires to take per-degree traces of length \((2l+1)\) and can be computed efficiently in \(\mathcal{O}(l_{\text{max}}^{2})\). ### Euclidean Transformer Block (ecTblock) and Euclidean Self-Attention Given input features, EV and pairwise distance vectors the Euclidean attention block returns _attended_ features and EV as \[f_{i}^{\text{\tiny ATT}}=f_{i}+\sum_{j\in\mathcal{N}(i)}\phi_{r_{\text{cut}}} (r_{ij})\cdot\alpha_{ij}\cdot f_{j}\,, \tag{15}\] and \[x_{ilm}^{\text{\tiny ATT}}=x_{ilm}+\sum_{j\in\mathcal{N}(i)}\phi_{r_{\text{ cut}}}(r_{ij})\cdot\alpha_{ijl}\cdot Y_{l}^{m}(\hat{r}_{ij})\,, \tag{16}\] with a cosine cutoff function \[\phi_{r_{\text{cut}}}(r_{ij})=\frac{1}{2}\left(\cos\left(\frac{\pi r_{ij}}{r_ {\text{cut}}}\right)+1\right), \tag{17}\] which guarantees that pairwise interactions (attention coefficients) smoothly decay to zero when atoms enter or leave the cutoff radius \(r_{\text{cut}}\). Eqs. (15) and Eq. (16) from above involve attention coefficients which are constructed from an equivariant attention operation (next paragraphs). Attention coefficients are calculated as \[\alpha_{ij}=\alpha\Big{(}f_{i},f_{j},g_{1,\dots K}(r_{ij}),\oplus_{l=0}^{l_{ \text{max}}}\mathbf{x}_{ij,l\to 0}\Big{)}, \tag{18}\] where \(\mathbf{x}_{ij}\equiv\mathbf{x}_{j}-\mathbf{x}_{i}\in\mathbb{R}^{(l_{\text{max}}+1)^{2}}\) is a relative, higher order geometric shift between neighborhoods. The function \(\bigoplus_{l=0}^{l_{\text{max}}}\mathbf{x}_{ij,l\to 0}\) contracts each degree in \(\mathbf{x}_{ij}\) to the zeroth degree which results in \(l_{\text{max}}+1\) invariant scalars (Eq. (14)). The function \(g\) expands interatomic distances in \(K\) radial basis functions (RBFs) \[g_{k}(r_{ij})=\exp\big{(}-\gamma(\exp{(-r_{ij})}-\mu_{k})^{2}\big{)}, \tag{19}\] where \(\mu_{k}\) is the center of the \(k\)-th basis function and \(\gamma\) is a function of \(K\) and \(r_{\text{cut}}\)[17]. Based on the output of the contraction function and the RBFs we construct an \(F\)-dimensional filter vector as \[w=\textsc{MLP}_{[F/4,F]}(u)+\textsc{MLP}_{[F,F]}(g), \tag{20}\] where \(\textsc{MLP}_{[F_{1},\dots,F_{L}]}\) denotes a multi layer perceptron network with \(L\) layers, layer dimension \(F_{i}\) and silu non-linearity. 
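Putting Eqs. (18)-(20) together with the dot-product form given next, a toy NumPy sketch of a single attention coefficient could look as follows. The weight matrices are random placeholders, biases are omitted, and the value of \(\gamma\) and the placement of the RBF centers are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def silu(x):
    return x / (1.0 + np.exp(-x))

def rbf(r, r_cut, K=32, gamma=4.0):
    """Radial basis expansion of Eq. (19); gamma and centers are placeholders."""
    mu = np.linspace(np.exp(-r_cut), 1.0, K)
    return np.exp(-gamma * (np.exp(-r) - mu) ** 2)

def attention_coefficient(f_i, f_j, r_ij, u_ij, params):
    """Sketch of Eqs. (18)-(21) for one pair (i, j).

    u_ij: the (l_max + 1) per-degree invariants of the EV difference.
    """
    g = rbf(r_ij, r_cut=5.0)
    w = silu(u_ij @ params["W_u1"]) @ params["W_u2"] \
        + silu(g @ params["W_g1"]) @ params["W_g2"]       # filter, Eq. (20)
    q, k = params["Q"] @ f_i, params["K"] @ f_j           # query and key
    return (q @ (w * k)) / np.sqrt(len(f_i))              # dot-product attention

F, l_max = 8, 3
rng = np.random.default_rng(0)
params = {"W_u1": rng.normal(size=(l_max + 1, F // 4)),
          "W_u2": rng.normal(size=(F // 4, F)),
          "W_g1": rng.normal(size=(32, F)), "W_g2": rng.normal(size=(F, F)),
          "Q": rng.normal(size=(F, F)), "K": rng.normal(size=(F, F))}
print(attention_coefficient(rng.normal(size=F), rng.normal(size=F),
                            2.1, rng.normal(size=l_max + 1), params))
```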
The first MLP acting on \(u\) has a reduced dimension in the first hidden layer (since the dimension of \(u\) itself is only \(l_{\text{max}}+1\)). Attention coefficients are then calculated using dot-product attention as

\[\alpha_{ij}=\frac{1}{\sqrt{F}}\,q_{i}^{T}(w_{ij}\odot k_{j}), \tag{21}\]

where \(\odot\) denotes the entry-wise product and \(q_{i}=Qf_{i}\) and \(k_{j}=Kf_{j}\) with \(Q\in\mathbb{R}^{F\times F}\) and \(K\in\mathbb{R}^{F\times F}\) are trainable query and key matrices. The attention update of the features (Eq. (15)) is performed for \(h\) heads in parallel. The features \(f_{i}\) of dimension \(F\) are split into \(h\) feature heads \(f_{i}^{h}\) of dimension \(F/h\). From each feature head, one attention coefficient \(\alpha_{ij}^{h}\) is calculated following Eq. (18), where \(q_{i},k_{j}\) and \(w_{ij}\) are all of dimension \(F/h\). For each head, the attended features are then calculated from Eq. (15) with \(f_{i}\) replaced by the corresponding head \(f_{i}^{h}\). Afterwards, the heads are stacked to form again a feature vector of dimension \(F\). Multi-head attention allows the model to focus on different sub-spaces of the feature representation, e.g. information about distances, angles or atomic types [39].

### Interaction Block

An interaction block (Iblock) aims to interchange per-atom information between the invariant and the geometric variables. Refinements for invariant features and equivariant EV are calculated as

\[\mathrm{d}f_{i},\mathrm{d}\mathbf{x}_{i}=\textsc{Iblock}\big(f_{i}^{\textsc{start}},\oplus_{l=0}^{l_{\text{max}}}\mathbf{x}_{i,l\to 0}^{\text{att}}\big). \tag{22}\]

More specifically, the refinements are calculated as

\[\mathrm{d}f_{i}=a \tag{23}\]

and

\[\mathrm{d}x_{ilm}=b_{l}x_{ilm}^{\textsc{start}}, \tag{24}\]

where \(a\in\mathbb{R}^{F}\) and one \(b_{l}\in\mathbb{R}\) for each degree \(l\). They are calculated from a single-layer MLP as

\[a,b=\textsc{MLP}_{[F+l_{\text{max}}+1]}(f_{i}^{\textsc{start}},\oplus_{l=0}^{l_{\text{max}}}\mathbf{x}_{i,l\to 0}^{\text{att}}) \tag{25}\]

such that \(a\) and \(b=[b_{0},\dots,b_{l_{\text{max}}}]\) contain mixed information about both \(f_{i}\) and \(\mathbf{x}_{i}\). Updates are then calculated as

\[f_{i}^{[t+1]} =f_{i}^{\textsc{start}}+\mathrm{d}f_{i}, \tag{26}\]
\[\mathbf{x}_{i}^{[t+1]} =\mathbf{x}_{i}^{\textsc{start}}+\mathrm{d}\mathbf{x}_{i}, \tag{27}\]

which builds the relation to the initially stated update equations of the ecTblock and concludes the architecture description.

### MD Stability

We define an MD simulation to be stable when (A) there is no uncontrolled dissociation of the system, and (B) each bond length follows a reasonable distribution over time. We refer to failure mode (A), in which (at least one of) the force predictions of the MLFF diverges during the MD simulation, as an explosion of the MD simulation (Fig. 8 (a)). A decomposition of (parts of) the molecule can be detected by a strong peak in MD temperature, which is usually a few orders of magnitude larger than the target temperature. We assume a bond length to be distributed reasonably when it does not differ by more than \(50\,\%\) from the equilibrium bond distance at any point of the simulation. Criterion (A) has, e.g., been used in [30] to determine the MD stability of different MLFFs. However, in certain cases analysing the MD temperature can be an insufficient condition to detect unstable behavior, e.g.
when single bonds dissociate slowly over time or take on non-physical values over a temporally limited interval (Fig. 8 (b)). Such behavior, however, is easily identified using criterion (B).

Figure 8: Potential instabilities that can occur in an MD simulation using an MLFF. (a) Illustration of an "explosion" during an MD simulation. (b) Illustration of a temporally limited instability (here the breaking of a covalent bond).

A stability coefficient \(c_{s}\in[0,1]\) is then calculated as

\[c_{s}=\frac{n_{s}}{n_{\mathrm{tot}}}, \tag{28}\]

where \(n_{s}\) is the number of MD steps until an instability occurs and \(n_{\mathrm{tot}}\) is the maximal number of MD steps. When no instability is observed in the simulations, we set \(n_{s}=n_{\mathrm{tot}}\).

### MD Simulations

For the MD simulation of Li\({}_{3}\)PO\({}_{4}\) we chose the first conformation in the quenched state as the starting point. We then run the simulation for 50 ps with a time step of 2 fs using a Nosé-Hoover thermostat at 600 K. For MD simulations with molecules from the MD22 data set we first chose a structure which has not been part of the training data. It is then relaxed using the LBFGS optimizer until the maximal force norm per atom is smaller than \(10^{-4}\,\mathrm{eV\AA}^{-1}\). The relaxed structure serves as starting point for the MD simulation. For the comparison of the invariant and equivariant models, we run three MD simulations per molecule from three different initial conformations with a time step of 0.5 fs and a total time of 300 ps using the Velocity Verlet algorithm without thermostat. For the calculation of the velocity auto-correlation functions, we ran MD simulations with a time step of 0.2 fs following [55] and a total time of 1 ns. Temperatures vary between molecules and are reported in the main body of the text. Again, only the Velocity Verlet algorithm without thermostat is used. We show in SI A.1 that the performed simulations are energy conserving and reach temperature equilibrium. When using the Velocity Verlet algorithm, initial velocities are drawn from a Maxwell-Boltzmann distribution with a temperature twice as large as the MD target temperature. For the MD stability experiments on the MD17 molecules, we follow [30] and run simulations with a Nosé-Hoover thermostat at 500 K and a time step of 0.5 fs for 300 ps.

### Minima Hopping Algorithm

For the minima hopping experiments we use the models that have been trained on the MD22 data set with 1k training samples. Each escape run corresponds to a 1 ps MD simulation with a time step of 0.5 fs using the Velocity Verlet algorithm. The subsequent structure relaxation is performed using the LBFGS optimizer until the maximal norm per atomic force vector is smaller than \(10^{-4}\,\mathrm{eV\AA}^{-1}\), which took around 1k optimizer steps on average. The initial velocities are drawn from the Maxwell-Boltzmann distribution at temperature \(T_{0}\) and are re-scaled afterwards such that the system's temperature matches \(T_{0}\) exactly. Since the structure is in a (local) minimum at initialization, the equipartition principle will result in an MD which has temperature \(T_{0}/2\) on average. After the MD escape run, the newly proposed minimum is compared to the current minimum as well as to all the minima that have been visited before (history). Minima are compared based on their RMSD. To remove translations, we compare the coordinates relative to the center of mass.
Also, since structures might differ by a global rotation only, we minimize the RMSD over SO(3), following the algorithm described in section 7.1.9 _Rotations_ (p. 246-250) in [74]. If the RMSD between two minima is \(\leq 10^{-1}\), they are considered to be identical. If the newly proposed minimum is not the current minimum (i.e. it is either completely new or in the history), the new minimum is accepted if the energy difference is below a certain threshold \(E_{\mathrm{diff}}\). For the initial temperature we chose \(T_{0}=1000\) K and \(E_{\mathrm{diff}}=2\) eV. Both quantities are dynamically adjusted during runtime, where we stick to the default parameters [41]. The development of \(T_{0}\) along the number of performed escape runs shows initial temperatures ranging from \(\sim 300\) K up to \(\sim 1300\) K (SI Fig. 16 (b)). To estimate the transition states for the connectivity graph, the largest potential energy observed between two connected minima is taken as its energy (SI Fig. 16 (b)). ### Network and Training All SO3krates models use a feature dimension of \(F=132\), \(h=4\) heads in the invariant MP update and \(r_{\mathrm{cut}}=5\) Å. The number of MP updates and the degrees in the EV vary between experiments. For the comparison of the invariant and the equivariant model, we use degrees \(l=\{0\}\) and \(l=\{0,1,2,3\}\), \(T=3\) and EV initialization following Eq. (11). The invariant degree is explicitly included in order to exclude the possibility that stability issues might come from the inclusion of degree \(l=0\). The number of network parameters is 386k for the invariant model and 311k for the equivariant model, such that the better stability is not related to a larger parameter capacity but truly to the degree of geometric information. Due to the use of as many heads as degrees in the MP update for the EV, increasing the number of degrees results in a slightly smaller parameter number for the equivariant model. Per molecule, 10,500 conformations are drawn, of which 500 are used for validation. For both the invariant and the equivariant model, two models are trained on training data sets drawn with different random seeds. The model for Li\({}_{3}\)PO\({}_{4}\) uses \(T=2\), \(l=\{1,2,3\}\) and initializes the EV to all zeros. For training, 11k samples are drawn randomly from the full data set, of which 1k are used for validation, following [53]. All other models use degrees \(l=\{1,2,3\}\) in the EV, \(T=3\) and initialize the EV according to Eq. (11). For the MD17 stability experiments, 10,000 conformations are randomly selected, of which 9,500 are used for training and 500 for validation. For the MD22 benchmark, a varying number of training samples plus 500 validation samples or 1000 training samples plus 500 validation samples are drawn randomly. The models trained on 1000 samples are used for the calculation of the velocity auto-correlation functions and for the minima hopping experiments. All models are trained on a combined loss of energy and forces \[\mathcal{L}=(1-\beta)\cdot(E-\tilde{E})^{2}+\frac{\beta}{3N}\sum_{k=1}^{N}\sum_{i\in(x,y,z)}(F_{k}^{i}-\tilde{F}_{k}^{i})^{2}, \tag{29}\] where \(\tilde{E}\) and \(\tilde{F}\) are the ground truth and \(E\) and \(F\) are the predictions of the model. We use the ADAM [75] optimizer with an initial learning rate (LR) of \(\eta=10^{-3}\) and a trade-off parameter of \(\beta=0.99\). The LR is decreased by a factor of 0.7 every 100k training steps using exponential LR decay. Training is stopped after 1M steps. 
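As a reference for how the trade-off parameter \(\beta\) weighs the two terms, the sketch below evaluates the combined loss of Eq. (29) for a single structure; it is a minimal NumPy illustration with made-up data, not the actual training code of the mlff package.

```python
import numpy as np

def energy_force_loss(E_pred, E_ref, F_pred, F_ref, beta=0.99):
    """Combined energy/force loss of Eq. (29) for a single structure.

    E_pred, E_ref : predicted and reference total energies (scalars)
    F_pred, F_ref : predicted and reference forces, arrays of shape (N, 3)
    beta          : trade-off parameter; beta = 0.99 puts most weight on forces
    """
    N = F_ref.shape[0]
    energy_term = (1.0 - beta) * (E_pred - E_ref) ** 2
    force_term = beta / (3.0 * N) * np.sum((F_pred - F_ref) ** 2)
    return energy_term + force_term

# toy example with random reference forces for a 42-atom structure
rng = np.random.default_rng(0)
F_ref = rng.normal(size=(42, 3))
F_pred = F_ref + 0.01 * rng.normal(size=(42, 3))
print(energy_force_loss(E_pred=-17.3, E_ref=-17.1, F_pred=F_pred, F_ref=F_ref))
```

With \(\beta=0.99\), the force term dominates the loss, reflecting that the forces provide \(3N\) labels per structure while the energy provides only one.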
The batch size \(B_{s}\) used for training depends on the number of training points \(n_{\text{train}}\): we use \(B_{s}=1\) if \(n_{\text{train}}\leq 1000\) and \(B_{s}=10\) if \(n_{\text{train}}\geq 1000\). All presented models can be trained in less than 12 h on a single NVIDIA A100 GPU. ## V Code and data availability The code for SO3krates is available at [https://github.com/thorben-frank/mlff](https://github.com/thorben-frank/mlff), which contains interfaces for model training and running MD simulations on GPU. The MD17 data for the stability experiments and the MD22 data are freely available from [http://sgdml.org/#datasets](http://sgdml.org/#datasets). The Li\({}_{3}\)PO\({}_{4}\) data can be downloaded from [https://archive.materialscloud.org/record/2022.128](https://archive.materialscloud.org/record/2022.128). ## VI Acknowledgements JTF, KRM, and SC acknowledge support by the Federal Ministry of Education and Research (BMBF) for BIFOLD (01IS18037A). KRM was partly supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No. 2019-0-00079, Artificial Intelligence Graduate School Program, Korea University and No. 2022-0-00984, Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation), and was partly supported by the German Ministry for Education and Research (BMBF) under Grants 01IS14013A-E, AIMM, 01GQ1115, 01GQ0850, 01IS18025A and 01IS18037A, and by the German Research Foundation (DFG). The authors would like to thank Niklas Schmitz and Mihail Bogojeski for helpful discussions. Correspondence to KRM and SC.
2309.14238
Explaining PTA Data with Inflationary GWs in a PBH-Dominated Universe
We show that an ultralight primordial black hole (PBH) dominated phase makes blue-tilted inflationary gravitational waves (BGW) compatible with the recent detection of an nHz stochastic GW background by pulsar-timing arrays (PTAs), for high reheating temperatures. This PBH-dominated phase suppresses the BGW spectrum via entropy dilution and generates a new GW spectrum from PBH density fluctuations. This combined spectrum is detectable at ongoing and planned near-future GW detectors and exhibits a unique shape with a low-frequency peak explaining PTA data, a mid-range dip, and a sharp peak followed by a third peak at high-frequency. This distinctive shape sets it apart from spectra generated by other matter dominations or exotic physics. Therefore, while important for studying GWs in the nHz range, the recent PTA result also sets the stage for testing and constraining various well-studied mechanisms following a PBH domination, using low-frequency measurements and correlated observations of unique high-frequency GW spectral features.
Satyabrata Datta
2023-09-25T15:54:40Z
http://arxiv.org/abs/2309.14238v1
# Explaining PTA Data with Inflationary GWs in a PBH-Dominated Universe ###### Abstract We show that an ultralight primordial black hole (PBH) dominated phase makes blue-tilted inflationary gravitational waves (BGW) compatible with the recent detection of an nHz stochastic GW background by pulsar-timing arrays (PTAs), for high reheating temperatures. This PBH-dominated phase suppresses the BGW spectrum via entropy dilution and generates a new GW spectrum from PBH density fluctuations. This combined spectrum is detectable at ongoing and planned near-future GW detectors and exhibits a unique shape with a low-frequency peak explaining PTA data, a mid-range dip, and a sharp peak followed by a third peak at high-frequency. This distinctive shape sets it apart from spectra generated by other matter dominations or exotic physics. Therefore, while important for studying GWs in the nHz range, the recent PTA result also sets the stage for testing and constraining various well-studied mechanisms following a PBH domination, using low-frequency measurements and correlated observations of unique high-frequency GW spectral features. Introduction Recently, collaborations of pulsar-timing arrays (PTAs) such as NANOGrav, EPTA, and PPTA, along with InPTA and CPTA, have published their latest data. This data presents substantial evidence for a stochastic gravitational wave background (SGWB) at nHz frequencies [1; 2; 3; 4]. A similar discovery, although with less statistical significance, has been present for the past two years, generating considerable interest within the scientific community [5; 6; 7]. Intriguingly, this time, the signal displays the distinct angular correlations of pulsars, referred to as the quadrupolar Hellings-Downs curve [8], a feature that is specific to an SGWB. While the origins of such GWs are still uncertain, the favored power-law \(\Omega_{\rm GW}\propto f^{1.8\pm 0.6}\), for instance, in the new NANOGrav data, does not rule out the straightforward GW-driven models of supermassive black hole binaries (SMBHBs) at \(3\sigma\). Yet another intriguing prospect is to explore the GWs that originate from cosmological sources. In a companion theory paper [9], the NANOGrav collaboration (we shall specifically focus on NANOGrav 15 yrs data [1]; the results of other PTAs align well) has compiled a comprehensive list discussing numerous cosmological sources that are consistent with the data 1. Following this, various articles have explored these sources either within the framework of different cosmological models or by re-analyzing the fit to the new data, incorporating the results from other PTAs [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63]. Inflationary GWs with a significant tensor blue-tilt, also known as Blue Tilted Gravitational Waves (BGWs), are a notable aspect that aligns well with both past and recent data [1; 2; 64; 65; 66; 67; 68]; although, It's important to mention that such BGWs can be generated in models that may not necessarily align with the conventional slow-roll inflation paradigm, see, e.g., [69; 70; 71; 72; 73; 74; 75; 76]. However, the range of parameters that allow for such a fit is, in fact, limited. 
This is primarily because GWs with large blue-tilt saturate big bang nucleosynthesis (BBN) bound on the effective number of neutrino species, disfavoring any post-inflationary cosmology founded on high reheating temperature (\(T_{\rm RH}\gtrsim 10\) GeV) after inflation [9; 68]. Nonetheless, if a non-standard matter epoch leads to entropy production between the reheating after inflation and the most recent radiation domination before the BBN [77], BGWs get suppressed and provide a good fit to PTA data for high reheating temperatures. Now, contrary to the standard case, such a scenario allows the overall GW spectrum to span decades of frequencies with characteristic spectral features testable by high-frequency detectors, e.g., by the future LIGO run [78; 79; 80]. Footnote 1: Contrary to the 12.5 years of NANOGrav data, which was well-matched by stable cosmic strings [10; 11; 12; 13; 14; 15], the latest data seems to challenge this fit, suggesting that stable cosmic strings may not be the best explanation [1]. Building on the methodology outlined in the ref.[81; 82], we carried out a tomographic analysis of early matter domination (EMD) resulting from PBHs and their imprint on BGWs. While previous research has explored the impact of EMD induced by PBHs on the GW spectrum generated by cosmic strings [13; 14; 15; 83; 84], no studies have specifically focused on BGWs and GWs originating from PBHs. Such ultralight PBH domination has been extensively studied within the context of Beyond Standard Model (BSM) physics scenarios, such as superheavy dark matter (DM) [14] and leptogenesis [13; 15]. Motivated by these, we have undertaken a systematic investigation into the impact of a PBH-dominated epoch on BGWs, with a particular focus on explaining the results of the NANOGrav at low frequencies. We have demonstrated that the characteristic features of PBHs, specifically their mass and initial energy fraction, leave a mark on BGWs. This necessitates a larger initial fraction, which in turn leads to a higher entropy injection. This higher entropy injection suppresses the BGWs at high frequencies to ensure compliance with the constraints imposed by the LIGO and BBN. Among the various mechanisms through which PBHs can generate GWs, we focus on the one induced by PBH density fluctuations [85; 86; 87; 88; 89; 90]. While a requirement for a higher initial energy fraction of PBHs appears to conflict directly with GWs from density fluctuations saturating the constraints imposed by the LIGO and BBN, we have successfully identified a complementary region. This region not only addresses PTA data at low frequency and a peak at high frequency, but it also presents a unique, sharply peaked GW spectrum at mid-frequencies due to PBH density fluctuations. Thus, the overall spectrum introduces a rich phenomenology with characteristic features that can be explored in a complementary manner by future gravitational wave detectors. The structure of the paper is as follows. In Section II, we provide a brief discussion on GWs from inflation with tensor blue tilt. Section III is dedicated to the dynamics of PBHs and the necessary components for studying their effect on BGWs. In Section IV, we conduct a numerical analysis, prioritizing the satisfaction of NANOGrav data and examining PBH archaeology with BGWs. Section V discusses a unique possibility of detecting a complementary signal from BGWs and GWs resulting from PBH density fluctuations. Finally, in Section VI, we conclude our findings. 
## II Blue-tilted GWs from inflation One of the most plausible explanations for the origin of primordial GWs is cosmic inflation [91; 92]. In this section, we will briefly discuss how GWs are produced during inflation and how they travel through different cosmic eras until they reach the present day. GWs are described as a perturbation of the FLRW line element: \[ds^{2}=a^{2}(\tau)\left[-d\tau^{2}+(\delta_{ij}+h_{ij})dx^{i}dx^{j}\right],\] (II.1) with \(\tau\) being the conformal time and \(a(\tau)\) the scale factor. The GWs are described by the transverse and traceless (\(\partial_{i}h^{ij}=0\), \(\delta^{ij}h_{ij}=0\)) part of the \(3\times 3\) symmetric matrix \(h_{ij}\). Since the GWs are very weak, \(|h_{ij}|\ll 1\), the linearized evolution equation \[\partial_{\mu}(\sqrt{-g}\partial^{\mu}h_{ij})=16\pi a^{2}(\tau)\pi_{ij}\] (II.2) is sufficient for studying their propagation. The tensor part of the anisotropic stress, \(\pi_{ij}\), coupled to \(h_{ij}\), acts as an external source. It is useful to express \(h_{ij}\) in Fourier space: \[h_{ij}(\tau,\vec{x})=\sum_{\lambda}\int\frac{d^{3}\vec{k}}{(2\pi)^{3/2}}e^{i\vec{k}.\vec{x}}\epsilon^{\lambda}_{ij}(\vec{k})h^{\lambda}_{\vec{k}}(\tau),\] (II.3) where the index \(\lambda=+/-\) labels the two polarisation states of the GWs. The polarization tensors, in addition to being transverse and traceless, also fulfill the conditions: \[\begin{split}&\text{(i)}\,\epsilon^{(\lambda)ij}(\vec{k})\epsilon^{(\lambda^{\prime})}_{ij}(\vec{k})=2\delta_{\lambda\lambda^{\prime}}\\ &\text{(ii)}\,\epsilon^{(\lambda)}_{ij}(-\vec{k})=\epsilon^{(\lambda)}_{ij}(\vec{k}).\end{split}\] (II.4) Assuming that each polarization state evolves identically and isotropically, we can simplify the notation by writing \(h^{\lambda}_{\vec{k}}(\tau)\) as \(h_{k}(\tau)\), where \(k=|\vec{k}|=2\pi f\) with \(f\) being the frequency of the GWs today at \(a_{0}=1\). Taking into account the sub-dominant contribution from \(\pi_{ij}\), the equation for the propagation of GWs in Fourier space can be written as \[\ddot{h}_{k}+2\frac{\dot{a}}{a}\dot{h}_{k}+k^{2}h_{k}=0,\] (II.5) where the dot represents a derivative with respect to conformal time. By utilizing Eq.(II.3) and Eq.(II.5), we can compute the energy density of the GWs as [93] \[\rho_{\text{GW}}=\frac{1}{32\pi G}\int\frac{dk}{k}\left(\frac{k}{a}\right)^{2}T_{T}^{2}(\tau,k)P_{T}(k),\] (II.6) where \(T_{T}^{2}(\tau,k)=|h_{k}(\tau)|^{2}/|h_{k}(\tau_{i})|^{2}\) is a transfer function that is derived from Eq.(II.5), with \(\tau_{i}\) being an initial conformal time. The primordial power spectrum, \(P_{T}(k)=\frac{k^{3}}{\pi^{2}}|h_{k}(\tau_{i})|^{2}\), is linked to specific inflation models and is parametrised as a power law given by \[P_{T}(k)=rA_{s}(k_{*})\left(\frac{k}{k_{*}}\right)^{n_{T}},\] (II.7) where \(r\lesssim 0.06\)[94] is the tensor-to-scalar ratio, \(A_{s}\simeq 2\times 10^{-9}\) represents the scalar perturbation amplitude at the pivot scale \(k_{*}=0.01\) Mpc\({}^{-1}\), and the tensor spectral index is denoted by \(n_{T}\). Interestingly, the standard slow-roll inflation models satisfy the consistency relation \(n_{T}=-r/8\)[95], which results in slightly red-tilted GWs with \(n_{T}\lesssim 0\). However, in this work, we have considered GWs with a significant blue tilt, where \(n_{T}>0\), and we have assumed it to be constant throughout. 
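To make the role of the blue tilt concrete, the short sketch below evaluates the power-law spectrum of Eq. (II.7) for a blue-tilted benchmark and for the slow-roll consistency relation; the numerical constants are the ones quoted in the text and the snippet is purely illustrative.

```python
import numpy as np

A_S = 2e-9        # scalar perturbation amplitude at the pivot scale (from the text)
K_STAR = 0.01     # pivot scale in Mpc^-1 (from the text)

def primordial_tensor_spectrum(k, r, n_T):
    """Power-law primordial tensor spectrum of Eq. (II.7)."""
    return r * A_S * (k / K_STAR) ** n_T

k = np.logspace(-2, 16, 4)   # comoving wave numbers in Mpc^-1
# blue-tilted benchmark used later in the paper vs. the red-tilted slow-roll case
print(primordial_tensor_spectrum(k, r=3e-6, n_T=1.4))
print(primordial_tensor_spectrum(k, r=0.06, n_T=-0.06 / 8))
```

For \(n_{T}>0\) the spectrum grows towards large \(k\), which is why the high-frequency constraints from LIGO and BBN become relevant for BGWs.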
The GW energy density, which is crucial for detection purposes, can be expressed as \[\Omega_{\rm GW}(k)=\frac{k}{\rho_{c}}\frac{d\rho_{\rm GW}}{dk},\] (II.8) where the quantity \(\rho_{c}=3H_{0}^{2}/8\pi G\) with \(H_{0}\simeq 2.2\times 10^{-4}\) Mpc\({}^{-1}\) being the present-day Hubble constant. From Eq.(II.6), the quantity \(\Omega_{\rm GW}(k)\) is computed as \[\Omega_{\rm GW}(k)=\frac{1}{12H_{0}^{2}}\left(\frac{k}{a_{0}} \right)^{2}T_{T}^{2}(\tau_{0},k)P_{T}(k),\ {\rm with}\ \ \tau_{0}=1.4\times 10^{4}\ {\rm Mpc}.\] (II.9) There have been various attempts to compute the transfer function analytically [96; 97; 98; 99], and one of the commonly used one for standard reheating is given by[100; 101] \[T_{T}^{2}(\tau_{0},k)=F(k)T_{1}^{2}(\zeta_{\rm eq})T_{2}^{2}( \zeta_{R}),\] (II.10) where \(F(k)\) reads \[F(k)=\Omega_{m}^{2}\left(\frac{g_{*}(T_{k,{\rm in}})}{g_{*0}} \right)\left(\frac{g_{*s0}}{g_{*s}(T_{k,{\rm in}})}\right)^{4/3}\left(\frac{3 j_{1}(k\tau_{0})}{k\tau_{0}}\right)^{2}.\] (II.11) Here \(j_{1}(k\tau_{0})\) is the spherical Bessel function, \(\Omega_{m}=0.31\), \(g_{*0}=3.36\), and \(g_{*0s}=3.91\). The scale-dependent \(g_{*0(s)}(T_{k,{\rm in}})\), used in Eq.(II.11) can be approximated analytically as [101; 102; 103] \[g_{*0(s)}(T_{k,{\rm in}})=g_{*0}\left(\frac{A+\tanh\ {\rm k}_{1}}{A+1} \right)\left(\frac{B+\tanh\ {\rm k}_{2}}{B+1}\right),\] (II.12) where \[A=\frac{-1-10.75/g_{*0(s)}}{-1+10.75/g_{*0(s)}},\ \ B=\frac{-1-g_{max}/10.75}{-1+g_{ max}/10.75},\] (II.13) and \[k_{1}=-2.5\,\log_{10}\left(\frac{k/2\pi}{2.5\times 10^{-12}{ \rm Hz}}\right),\] (II.14) \[k_{2}=-2.0\,\log_{10}\left(\frac{k/2\pi}{6.0\times 10^{-9}{\rm Hz}} \right).\] (II.15) The transfer functions are given by \[T_{1}^{2}(\zeta)=1+1.57\zeta+3.42\zeta^{2},\] (II.16) \[T_{2}^{2}(\zeta)=\left(1-0.22\zeta^{1.5}+0.65\zeta^{2}\right)^{-1},\] (II.17) where \(\zeta_{i}\equiv k/k_{i}\), with \(k_{i}\)s being the wave number of the modes entering the horizon at different epochs and are derived as \[k_{\rm eq} = 7.1\times 10^{-2}\Omega_{m}h^{2}{\rm Mpc}^{-1}.\] (II.18) In this paper, we shall use \(k_{*}=0.01\) Mpc\({}^{-1}\) and \(h=0.7\). It's essential to highlight that a significant limitation to consider regarding the potential for BGWs is the \(\Delta N_{\rm eff}\) bound from BBN, as well as the absence of any SGWB detection by LIGO [79; 80]. In the following section, we will demonstrate that if any late-time entropy production occurs through the evaporation of ultralight PBHs after significant PBH domination before reheating, it can considerably alter the transfer function during the PBH-dominated era and suppress the GW spectrum for modes that entered the horizon during the PBH-dominated phase. This could potentially ameliorate the constraints from BBN and LIGO. ## III Diluting Bgws through entropy injection in PBH domination The dynamical evolution of energy densities of the black holes (\(\rho_{\rm BH}\)), and radiation (\(\rho_{\rm R}\)) is governed by the following Friedmann equations [104; 13]: \[\frac{d\rho_{R}}{dz}+\frac{4}{z}\rho_{R} =0,\] (III.1) \[\frac{d\rho_{\rm BH}}{dz}+\frac{3}{z}\frac{H}{\tilde{H}}\rho_{ \rm BH}-\frac{\dot{M}_{\rm BH}}{M_{\rm BH}}\frac{1}{z\tilde{H}}\rho_{\rm BH} =0,\] (III.2) where \(H\) is the Hubble parameter and \(z=T_{\rm Bf}/T\). 
The scale factor \(a\) and the quantity \(\tilde{H}\) evolve as \[\tilde{H}=(H+\mathcal{K})\,,\ \frac{da}{dz}=\left(1-\frac{\mathcal{K}}{ \tilde{H}}\right)\frac{a}{z},\] (III.3) where \(\mathcal{K}=\frac{\dot{M}_{\rm BH}}{M_{\rm BH}}\frac{\rho_{\rm BH}}{4\rho_{ \rm R}}\). In the derivation of Eq.(III.1)-Eq.(III.3), we have made the assumption that entropy (\(g_{*s}\)) and the energy (\(g_{*\rho}\)) degrees of freedom are equal and constant. If the initial energy fraction of PBHs, denoted as \(\beta\equiv\frac{\rho_{\rm BH}(T_{\rm Bf})}{\rho_{\rm R}(T_{\rm Bf})}\), surpasses a critical value of \(\beta_{c}\equiv\gamma^{-1/2}\sqrt{(\mathcal{G}g_{*B}(T_{\rm BH})/10240\pi)} \frac{M_{\rm Pl}}{M_{\rm BH}}\), it can result in early matter domination. For a given \(\beta\), the above equations can be solved numerically to determine the temperatures of PBH domination and evaporation, as well as the entropy production \(\Delta_{\rm PBH}=\tilde{S}_{2}/\tilde{S}_{1}\), where \(\tilde{S}_{1,\,(2)}\propto a_{1,\,(2)}^{3}/z_{1,\,(2)}^{3}\) is the total entropy before (after) the PBH evaporation. We will ultimately utilize analytical expressions that are in close agreement with numerical results and can be approximated accordingly [15] \[\Delta_{\rm PBH}\simeq 233\beta\left(\frac{M_{\rm BH}}{M_{\rm Pl}}\right) \left(\frac{\gamma}{g_{*B}(T_{\rm BH})\mathcal{G}}\right)^{1/2},\] (III.4) where \(\gamma\simeq 0.2\) is the formation efficiency of PBHs, \(g_{*B}\simeq 100\) is the no of relativistic d.o.f. below the Hawking temperature \(T_{\rm BH}\), and \(\mathcal{G}\simeq 3.8\) is the greybody factor. In the present scenario where late-time entropy production takes place after reheating through PBH evaporation, the background evolution follows the sequence of MD (inflation-dominated) \(\rightarrow\) RD \(\rightarrow\) MD (PBH-dominated) \(\rightarrow\) RD. 
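For orientation, the critical fraction \(\beta_{c}\) and the dilution factor of Eq. (III.4) can be evaluated directly. The sketch below is a rough illustration; the value of the Planck mass in grams and the example choice of \(\beta\) and \(M_{\rm BH}\) are assumptions of the snippet, while \(\gamma\), \(g_{*B}\) and \(\mathcal{G}\) follow the text.

```python
import numpy as np

M_PL_GRAM = 2.176e-5   # Planck mass in grams (assumed standard value)
GAMMA = 0.2            # PBH formation efficiency (from the text)
G_STAR_B = 100.0       # relativistic d.o.f. below the Hawking temperature (from the text)
GREYBODY = 3.8         # greybody factor (from the text)

def beta_critical(M_BH_gram):
    """Critical initial fraction beta_c above which PBHs dominate before evaporating."""
    return (GAMMA ** -0.5 * np.sqrt(GREYBODY * G_STAR_B / (10240.0 * np.pi))
            * M_PL_GRAM / M_BH_gram)

def entropy_dilution(beta, M_BH_gram):
    """Entropy injection factor Delta_PBH of Eq. (III.4)."""
    return 233.0 * beta * (M_BH_gram / M_PL_GRAM) * np.sqrt(GAMMA / (G_STAR_B * GREYBODY))

# example: beta = 1e-6 and M_BH = 1e7 g, one of the benchmarks discussed below
print(beta_critical(1e7), entropy_dilution(1e-6, 1e7))
```

Heavier PBHs and larger initial fractions both increase \(\Delta_{\rm PBH}\) and hence the suppression of the modes that enter the horizon during the PBH-dominated phase.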
In such a case, the transfer function is defined by \[T_{T}^{2}(\tau_{0},k)=F(k)T_{1}^{2}(\zeta_{\rm eq})T_{2}^{2}(\zeta_{\rm ev})T_ {3}^{2}(\zeta_{\rm dom})T_{2}^{2}(\zeta_{R}),\] (III.5) where \(T_{3}^{2}(\zeta)\) is the transfer function corresponding to the transition from the first RD phase to the PBH-dominated phase and is given by \[T_{3}^{2}(\zeta)=1+0.59\zeta+0.65\zeta^{2},\] (III.6) and the modes \(k_{i}\)'s given by \[k_{\rm ev} = 1.7\times 10^{14}\left(\frac{g_{*s}(T_{\rm ev})}{106.75}\right)^{1/ 6}\left(\frac{T_{\rm ev}}{10^{7}{\rm GeV}}\right){\rm Mpc}^{-1},\] (III.7) \[k_{\rm dom} = 1.7\times 10^{14}\left(\frac{g_{*s}(T_{\rm dom})}{106.75}\right)^{ 1/6}\left(\frac{T_{\rm dom}}{10^{7}{\rm GeV}}\right){\rm Mpc}^{-1},\] (III.8) and \[k_{\rm R}=1.7\times 10^{14}\Delta_{\rm PBH}^{-1/3}\left(\frac{g_{*s}(T_{\rm R })}{106.75}\right)^{1/6}\left(\frac{T_{\rm R}}{10^{7}{\rm GeV}}\right){\rm Mpc }^{-1}\] (III.9) which reenters the horizon at PBH evaporation temperature (\(T_{\rm ev}\)), PBH domination temperature (\(T_{\rm dom}\)), and inflationary reheating temperature (\(T_{\rm R}\)), respectively where [13] \[T_{\rm ev}=\left(\frac{45M_{\rm Pl}^{2}}{16\pi^{3}g_{*}(T_{\rm ev })\tau^{2}}\right)^{1/4},\] (III.10) and \[T_{\rm dom}=\beta T_{\rm Bf},\] (III.11) where \(\tau\), and \(T_{\rm Bf}\) correspond to the BH lifetime and BH formation temperature respectively, and are given by \[\tau=\frac{10240\pi M_{\rm BH}^{3}}{{\cal G}g_{*B}(T_{\rm BH})M_{ \rm Pl}^{4}},\] (III.12) and \[T_{\rm Bf}=\left(\frac{45\gamma^{2}}{16\pi^{3}g_{*}(T_{\rm Bf}) }\right)^{1/4}\left(\frac{M_{\rm Pl}}{M_{\rm BH}}\right)^{1/2}M_{\rm Pl}.\] (III.13) Note also that, by construction, the formation of BHs occurs after the initial inflationary reheating, i.e. \(T_{\rm Bf}\lesssim T_{\rm R}\). For simplicity, we shall use \(T_{\rm R}=T_{\rm Bf}\) throughout the rest of our discussion. ## IV A fit to the nanograv data in a PBH-dominated scenario and PBH Archaeology with Bgws Taking into account the aforementioned equations and utilizing \(\Delta_{\rm PBH}\) from Eq.(III.4), we evaluate Eq.(II.9) to get the GW spectrum and to fit the NANOGrav data, while considering two additional constraints. **I)** The LIGO O3 constraint on SGWB, which can be approximately expressed as \(\Omega_{\rm GW}(25\,{\rm Hz})h^{2}\leq 2.2\times 10^{-9}\)[78], and **II)** BBN bound on the effective number of neutrino species: \(\int_{f_{\rm low}}^{f_{\rm high}}f^{-1}df\Omega_{\rm GW}(f)h^{2}\lesssim 5.6 \times 10^{-6}\Delta N_{\rm eff}\), where \(\Delta N_{\rm eff}\lesssim 0.2\)[105]. The lower limit of the integration is associated with the frequency that entered the horizon during the BBN epoch. Conversely, the Hubble rate at the end of inflation sets the upper limit: \(f_{\rm high}=a_{\rm end}H_{\rm end}/2\pi\) For numerical calculations, \(f_{\rm high}\simeq 10^{5}\) Hz is sufficient as the spectrum falls and the integration saturates. We follow the NANOGrav parametrization for the GW energy density to conduct a power-law fit to the new data within the frequency range \(f\in\left[10^{-9}\ {\rm Hz},f_{\rm yr}\right]\). The parametrization reads \[\Omega_{\rm GW}(f)=\Omega_{\rm yr}\left(\frac{f}{f_{\rm yr}}\right)^{(5-\gamma)}\] (IV.1) with \(\Omega_{\rm yr}=\frac{2\pi^{2}}{3H_{0}^{2}}A^{2}f_{\rm yr}^{2}\) and \(f_{\rm yr}=1{\rm yr}^{-1}\simeq 32\) nHz. 
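A minimal sketch of this parametrization is given below; the Hubble-rate conversion to \({\rm s}^{-1}\) and the example amplitude/index pair are assumptions of the illustration rather than fit results.

```python
import numpy as np

H0 = 2.27e-18                        # Hubble rate in s^-1 for h = 0.7 (assumed conversion)
F_YR = 1.0 / (365.25 * 24 * 3600.0)  # 1/yr in Hz, roughly 32 nHz

def omega_gw_powerlaw(f, A, gamma):
    """NANOGrav-style power law of Eq. (IV.1)."""
    omega_yr = (2.0 * np.pi ** 2 / (3.0 * H0 ** 2)) * A ** 2 * F_YR ** 2
    return omega_yr * (f / F_YR) ** (5.0 - gamma)

# evaluate over the PTA band for an illustrative amplitude/index pair
f = np.logspace(-9, np.log10(F_YR), 5)
print(omega_gw_powerlaw(f, A=2e-15, gamma=3.2))
```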
To fit the data, one must compare Eq.(II.9) and Eq.(IV.1), then extract the values of the amplitude \(A\) and the spectral index \(\gamma\) that fall within the 1, 2, 3\(\sigma\) contours as reported by NANOGrav [1]. In Fig.1, we present a particular benchmark where \(n_{T}=1.4\) and \(r=3\times 10^{-6}\) for which the BGW spectrum intersects with the \(1\sigma\) range of NANOGrav. Figure 1: Top: The red \(\bullet\) represents the amplitude \(A\) and the spectral index \(\gamma\) corresponding to a benchmark (\(n_{T}\), \(r\))\(\equiv(1.4\), \(3\times 10^{-6}\)). NANOGrav \(1,2,3\sigma\) regions are shown by the cyan shades. For this specific benchmark point, the parameters that are vital to PBH dynamics, namely \(\beta\) and \(M_{\rm BH}\), have been adjusted freely. Due to this flexibility, for a wide range of \(n_{T}\) and \(r\), the BGW spectrum, which was previously ruled out by stringent constraints from LIGO and BBN at higher frequencies, can now be permitted by choosing appropriate values for \(\beta\) and \(M_{\rm BH}\). To provide a quantitative understanding of the impact of PBH domination, we consider the same benchmark point \(n_{T}=1.4\) and \(r=3\times 10^{-6}\) as a reference. In the top/bottom left plot of Fig.2, we varied \(\beta\) for three distinct values: \(10^{-6}\) (depicted in red), \(10^{-5}\) (depicted in brown), and \(10^{-4}\) (depicted in purple), while keeping \(M_{\rm BH}=10^{7}\) grams fixed. Conversely, in the top/bottom right plot of Fig.2, we varied \(M_{\rm BH}\) for three different values: \(10^{5.5}\) grams (shown in brown), \(10^{6}\) grams (shown in red), and \(10^{7}\) grams (shown in purple) with \(\beta=3\times 10^{-5}\) held constant. Interestingly, it is possible to find the analytical expressions of the three turning point frequencies as a function of \(\beta\) and \(M_{\rm BH}\) (c.f. Eq.(III.7), (III.8), (III.9)) and these are shown as follows: \[f_{\rm ev}=4.88\times 10^{-10}M_{\rm Pl}\left(\frac{g_{*s}(T_{\rm ev})}{106.75}\right)^{-1/12}\left(\frac{g_{*B}}{100}\right)^{1/2}\left(\frac{M_{\rm Pl}}{M_{\rm BH}}\right)^{3/2},\] (IV.2) \[f_{\rm dom}=2.04\times 10^{-17}M_{\rm Pl}\left(\frac{\beta}{10^{-8}}\right)\left(\frac{g_{*B}}{100}\right)^{-1/4}\left(\frac{g_{*s}(T_{\rm dom})}{106.75}\right)^{1/6}\left(\frac{M_{\rm Pl}}{M_{\rm BH}}\right)^{1/2},\] (IV.3) \[f_{\rm R}=5.42\times 10^{-7}M_{\rm Pl}\left(\frac{\beta}{10^{-8}}\right)^{-1/3}\left(\frac{g_{*B}}{100}\right)^{-1/12}\left(\frac{g_{*s}(T_{\rm R})}{106.75}\right)^{1/6}\left(\frac{M_{\rm Pl}}{M_{\rm BH}}\right)^{5/6}.\] (IV.4) From the inspection of the subplots in Fig.2 and Eqs.(II.9), (III.5), (IV.2), (IV.3), (IV.4), we can extract the following pieces of information. **I)** The variation of \(\beta\) has a significant impact on the second peak and the mid-frequency dip, i.e., Eq.(IV.4) and (IV.3), respectively. For larger values of \(\beta\), the high-frequency peak shifts towards lower frequency according to Eq.(IV.4). Additionally, the dip shifts towards higher frequency as per Eq.(IV.3). Furthermore, with increasing \(\beta\), the BGW amplitude at high frequency gets suppressed while the amplitude at low frequency remains unaffected. The rationale for this can be readily comprehended by examining the top-left plot of Fig.2. As \(\beta\) increases, the strength of matter domination will be much stronger, leading to a larger entropy injection that dilutes BGWs at high frequency. 
The evaporation temperature (see Eq.(III.10)), which is not influenced by \(\beta\), indicates that the BGW amplitude at the low-frequency peak (c.f. Eq.(III.5)), which depends on the evaporation temperature, remains unchanged. **II)** Altering \(M_{\rm BH}\) impacts the entire BGW spectrum and all the turning point frequencies. As can be observed from the top-right plot of Fig.2, an increase in \(M_{\rm BH}\) extends the lifetime of PBHs (see Eq.(III.12)). Consequently, this leads to a larger entropy injection, resulting in a more diluted BGW spectrum. Figure 2: Top-left: Evolution of the radiation energy density (solid), PBH energy density (dashed), and the total entropy (dot-dashed) with the inverse temperature. The benchmark points are represented in different colors: red for \(\beta=10^{-6}\), brown for \(\beta=10^{-5}\), and purple for \(\beta=10^{-4}\). In all these cases, the PBH mass \(M_{\rm BH}\) is fixed at \(10^{7}\) grams. Top-right: Evolution of the same quantities, but for different PBH masses and fixed initial energy fraction. Bottom-left: The BGW spectra corresponding to the quantities that determine the nature of the plots on the top. We end this section by highlighting a potential challenge: in the future, if GW detectors identify such inflationary double-peak GWs, we may not be able to distinguish between ultralight PBH domination and other intermediate matter domination scenarios. However, there is an intriguing solution to this issue. For certain PBH model parameters, as we will demonstrate next, the PBHs can generate their own GWs from density fluctuations. ## V Detection prospects for the unique GW spectrum (BGW + PBH density fluctuations) and realistic BSM physics scenarios It is noteworthy that ultralight PBHs play multiple roles in the generation of GWs. For instance, the initial curvature perturbations that lead to the formation of PBHs also give rise to GWs (see, e.g., [106; 107; 108]); PBHs emit gravitons, which constitute high-frequency GWs [109]; PBHs also merge, releasing GWs [110]; lastly, the inhomogeneous distribution of PBHs, which results in density fluctuations, triggers the production of GWs [87; 88; 89]. In this section, we will concentrate on the last point mentioned above. As recently highlighted in ref.[87] and further elaborated in refs. [88; 89], it is observed that PBHs, immediately after their formation, are distributed randomly in space following a Poisson distribution. Hence, even though the PBH gas, on average, behaves like pressure-less dust, the uneven spatial distribution results in density fluctuations, which are inherently isocurvature. When PBHs start to dominate the energy density of the Universe, the isocurvature component transitions into curvature perturbations, which subsequently give rise to secondary GWs. Due to the significant density fluctuations at smaller scales (equivalent to the average separation of PBHs at \(T_{\rm Bf}\)), substantial GWs are generated. These GWs are further amplified due to the nearly instantaneous evaporation of PBHs. The amplitude of such induced GWs in the present day is given by2 Footnote 2: Note that the amplitude of the induced GWs is inherently independent of the history of PBH formation and, by design, does not rely on any non-standard inflation dynamics [88; 89]. 
\[\Omega_{\rm{GW}}^{\rm PBH}(t_{0},f)\simeq\Omega_{\rm{GW}}^{p}\left(\frac{f}{f_ {p}}\right)^{11/3}\Theta(f_{p}-f),\] (V.1) where the peak amplitude is given by \[\Omega_{\rm{GW}}^{p}\simeq 2\times 10^{-6}\left(\frac{\beta}{10^{-8}}\right)^{ 16/3}\left(\frac{M_{\rm{BH}}}{10^{7}g}\right)^{34/9},\] (V.2) and the peak frequency is given by \[f_{p}\simeq 1.7\times 10^{3}\,{\rm{Hz}}\left(\frac{M_{\rm{BH}}}{10^{4}g} \right)^{-5/6}.\] (V.3) The ultraviolet cutoff represented by the \(\Theta\) function, which is on par with the frequency that corresponds to the comoving scale, signifies the average separation of PBHs at their time of formation. As previously demonstrated, PBHs with \(\beta>\beta_{c}\) have the potential to cause spectral distortion in the BGWs from inflation due to the domination and subsequent evaporation of PBHs. Concurrently, this also gives rise to a new source of GWs originating from density fluctuations leading to a complex GW spectral shape characterized by three distinct peak features. The left plot in Fig.3 shows a red-shaded region that is excluded due to the condition \(\Omega_{\rm BGW}(f^{p})>\Omega_{\rm GW}^{\rm PBH}(f^{p})\). This implies that the characteristic peak from PBH density fluctuations will always be suppressed below the BGW spectrum, rendering the PBH scenario indistinguishable from any other intermediate matter domination. We present an allowed parameter space on the \(\beta\)-\(M_{\rm BH}\) plane for a benchmark point \(n_{T}=0.9\) and \(r=0.06\). Although this does not provide the best fit to the NANOGrav data, it fits the data at \(2\sigma\). The green region is allowed, satisfying both the required frequency and amplitude by NANOGrav and the bound on SGWB from LIGO and BBN. Interestingly, throughout the entire parameter space, the condition \(\Omega_{\rm BGW}(f^{p})<\Omega_{\rm GW}^{\rm PBH}(f^{p})\) holds true. This allows for the possibility of a characteristic GW signal from PBHs that exhibits three peaks. For this specific benchmark point, we have also ensured that the inequality \(f_{\rm dom}<f^{p}<f_{\rm R}\) holds true. An important point to note is that while the parameters \(n_{T}\) and \(r\) can be varied to identify new allowed regions that fit the NANOGrav data more exhaustively, there is no qualitative difference. The allowed values of \(n_{T}\) and \(r\) are close to the presented benchmark. This is because, as seen in the bottom left plot of Fig.2, higher \(\beta\) values are needed to suppress the BGW spectrum at higher frequencies to satisfy the BBN and LIGO constraints. However, from Eq.(V.2), it can be seen that the GW peak amplitude from PBH density fluctuations increases with \(\beta\). These two features, in conjunction with the need for consistency with the NANOGrav data, strongly constrain the parameter space. Considering the broad overview provided earlier, we can highlight how such a spectrum could serve as an indicator for many BSM models in particle physics. Firstly, while such light PBHs may ultimately evaporate, they could generate relics that are either stable or unstable. A stable relic could potentially represent dark matter, while an unstable particle, such as right-handed neutrinos in the seesaw mechanism, that emerges from PBH could potentially trigger baryogenesis via leptogenesis (see, e.g., refs.[111, 112, 113, 114]). Moreover, based on the mass and initial energy fraction, a universe dominated by PBHs can modify the standard parameter space of numerous particle physics models, such as dark matter models [115]. 
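Returning to Eqs. (V.1)-(V.3), the location and height of the density-fluctuation peak can be evaluated directly for any \((\beta, M_{\rm BH})\) pair; the sketch below does so for a point close to the Fig. 3 benchmark and is meant only as an illustration with assumed variable names.

```python
import numpy as np

def pbh_density_fluctuation_gw(f, beta, M_BH_gram):
    """Induced GW spectrum from PBH density fluctuations, Eqs. (V.1)-(V.3)."""
    omega_peak = 2e-6 * (beta / 1e-8) ** (16 / 3) * (M_BH_gram / 1e7) ** (34 / 9)
    f_peak = 1.7e3 * (M_BH_gram / 1e4) ** (-5 / 6)          # peak frequency in Hz
    f = np.asarray(f, dtype=float)
    spectrum = omega_peak * (f / f_peak) ** (11 / 3)
    return np.where(f <= f_peak, spectrum, 0.0), omega_peak, f_peak

# benchmark close to the one used for Fig. 3 (M_BH = 7.08e6 g, beta = 1e-8)
f = np.logspace(-1, 2, 4)
print(pbh_density_fluctuation_gw(f, beta=1e-8, M_BH_gram=7.08e6))
```

The steep \(f^{11/3}\) rise below \(f_{p}\) and the cutoff above it are what produce the sharp mid-frequency feature that distinguishes the PBH scenario from a generic intermediate matter domination.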
Therefore, PBHs (their mass and initial energy fraction) serve as a gateway linking GWs to the parameters of high-energy particle physics models. This further demands a unified investigation of these models using both GWs and conventional particle physics experiments, which is beyond the scope of this work. Figure 3: Left: Allowed region of combined GW spectra from BGW and PBH density fluctuations for \(n_{T}=0.9\) and \(r=0.06\). Right: Combined GW spectrum for the benchmark point (\(M_{\rm BH}=7.08\times 10^{6}\) grams, \(\beta=10^{-8}\)). To conclude, we would like to emphasize that one of the main motivations of this paper is to show that the effects of an EMD caused by standard long-lived fields or some additional new physics [116, 81, 117] and of an EMD caused by PBHs could be very different. Specifically, distorting BGWs with an EMD leads to a low-frequency peak followed by a dip and a high-frequency peak in both scenarios. However, in the case of an EMD from PBHs, depending on the initial energy fraction of the PBHs, a third distinct sharp peak might appear at mid-frequency. This sets the PBH scenarios apart. Furthermore, the impacts of ultra-light PBHs, which are highly intriguing to investigate, for instance, in the context of dark matter production, baryogenesis, and axions, could make these spectral features in the GWs a unique indicator of these models, regardless of the strength of the particle physics coupling. ## VI Conclusion We have discussed a unique framework that uses NANOGrav PTA data, interpreted as a stochastic gravitational wave background (SGWB) from inflation, to investigate an epoch dominated by primordial black holes (PBH). With an appropriate choice of PBH parameters, a PBH-dominated epoch can accommodate almost any value of the inflationary parameters, fitting the GW spectrum at the NANOGrav frequency while also satisfying the stringent high-frequency constraints from LIGO and BBN. The presence of any significant intermediate matter domination (here PBH domination) after inflation results in a double-peaked GW spectrum with a dip in between, while the introduction of ultralight PBHs contributes an additional GW spectrum from density fluctuations. The combined GW spectrum exhibits a unique shape, characterized by a low-frequency peak that explains the PTA data, a dip in the middle, and a sharp tilted peak, followed by a third high-frequency peak. Such a combined spectrum, along with the PTA fit and constraints from LIGO and BBN, predicts a highly constrained PBH parameter space that remains distinguishable from any other intermediate matter domination scenario. In addition to the distinct features that can be verified in GW detectors, the framework discussed in our work can also lead to a plethora of beyond the Standard Model phenomenological implications, such as the production of dark matter from PBH evaporation and high-scale leptogenesis, among others. ## Acknowledgement The author would like to thank Rome Samanta for useful discussions and careful reading of the manuscript.
2309.12891
EarnHFT: Efficient Hierarchical Reinforcement Learning for High Frequency Trading
High-frequency trading (HFT) uses computer algorithms to make trading decisions in short time scales (e.g., second-level), which is widely used in the Cryptocurrency (Crypto) market (e.g., Bitcoin). Reinforcement learning (RL) in financial research has shown stellar performance on many quantitative trading tasks. However, most methods focus on low-frequency trading, e.g., day-level, which cannot be directly applied to HFT because of two challenges. First, RL for HFT involves dealing with extremely long trajectories (e.g., 2.4 million steps per month), which is hard to optimize and evaluate. Second, the dramatic price fluctuations and market trend changes of Crypto make existing algorithms fail to maintain satisfactory performance. To tackle these challenges, we propose an Efficient hieArchical Reinforcement learNing method for High Frequency Trading (EarnHFT), a novel three-stage hierarchical RL framework for HFT. In stage I, we compute a Q-teacher, i.e., the optimal action value based on dynamic programming, for enhancing the performance and training efficiency of second-level RL agents. In stage II, we construct a pool of diverse RL agents for different market trends, distinguished by return rates, where hundreds of RL agents are trained with different preferences of return rates and only a tiny fraction of them will be selected into the pool based on their profitability. In stage III, we train a minute-level router which dynamically picks a second-level agent from the pool to achieve stable performance across different markets. Through extensive experiments in various market trends on Crypto markets in a high-fidelity simulation trading environment, we demonstrate that EarnHFT significantly outperforms 6 state-of-art baselines in 6 popular financial criteria, exceeding the runner-up by 30% in profitability.
Molei Qin, Shuo Sun, Wentao Zhang, Haochong Xia, Xinrun Wang, Bo An
2023-09-22T14:25:03Z
http://arxiv.org/abs/2309.12891v1
# EarnHFT: Efficient Hierarchical Reinforcement Learning for High Frequency Trading ###### Abstract High-frequency trading (HFT) uses computer algorithms to make trading decisions in short time scales (e.g., second-level), which is widely used in the Cryptocurrency (Crypto) market (e.g., Bitcoin). Reinforcement learning (RL) in financial research has shown stellar performance on many quantitative trading tasks. However, most methods focus on low-frequency trading, e.g., day-level, which cannot be directly applied to HFT because of two challenges. First, RL for HFT involves dealing with extremely long trajectories (e.g., 2.4 million steps per month), which is hard to optimize and evaluate. Second, the dramatic price fluctuations and market trend changes of Crypto make existing algorithms fail to maintain satisfactory performance. To tackle these challenges, we propose an **E**fficient hier**A**rchical **R**einforcement lear**N**ing method for **H**igh **F**requency **T**rading (EarnHFT), a novel three-stage hierarchical RL framework for HFT. In stage I, we compute a Q-teacher, i.e., the optimal action value based on dynamic programming, for enhancing the performance and training efficiency of second-level RL agents. In stage II, we construct a pool of diverse RL agents for different market trends, distinguished by return rates, where hundreds of RL agents are trained with different preferences of return rates and only a tiny fraction of them will be selected into the pool based on their profitability. In stage III, we train a minute-level router which dynamically picks a second-level agent from the pool to achieve stable performance across different markets. Through extensive experiments in various market trends on Crypto markets in a high-fidelity simulation trading environment, we demonstrate that EarnHFT significantly outperforms 6 state-of-the-art baselines in 6 popular financial criteria, exceeding the runner-up by 30% in profitability. ## 1 Introduction High-frequency trading (HFT), taking up more than 73% of the volume in the financial market, refers to leveraging complicated computer algorithms or mathematical models to place or cancel orders at incredibly short time scales [1]. A good HFT strategy enables investors to make more profit than a low-frequency strategy and is therefore pursued by many radical traders. It has been widely used in the Cryptocurrency (Crypto) market due to Crypto's 24/7 non-stop trading time, which protects Crypto holders from overnight risk, and its dramatic price fluctuations, which provide more profitable trading opportunities for HFT. Although reinforcement learning (RL) algorithms [2, 14, 15] have achieved outstanding results in low-frequency trading in traditional financial markets like stocks or futures, few maintain robust performance under the setting of HFT due to two challenges: * An extremely large time horizon induces low data efficiency for RL training. Compared with Atari games, where the time horizon is 6000 [12], the time horizon of HFT is around 1 million, because second-level agents need to be evaluated over dozens of days.1 Footnote 1: \(60\,\mathrm{s/min}\times 60\,\mathrm{min/h}\times 24\,\mathrm{h/d}\times 12\,\mathrm{d}=1{,}036{,}800\,\mathrm{s}\) * The dramatic market changes cause agents trained on historical data to fail in maintaining performance over long periods. In a traditional RL setting, the training and testing environments remain consistent. 
However, Crypto market trend changes cause a significant difference between the training and testing environments. An agent trained on one market trend tends to cause tremendous losses once the trend changes dramatically in the market. To tackle the challenges, we propose an **E**fficient **i**he**A**rchical **R**e**fonfcement **i**ear**N**ing method for **H**igh **F**requency **T**rading (EarnHFT) as shown in Figure 1. In stage I, we build a Q-teacher indicating the optimal action value based on dynamic programming and future price information, which is used as a regularizer to train RL agents delivering a target position every second for better performance and faster training speed. In stage II, we first train hundreds of second-level RL agents following the stage I process under different market trend preferences, where buy and hold [16] return rates are used as the preference indicators. We further label each market based on DTW [10] as different categories and use the profitability performance under each market category to select a tiny fraction of trained second-level RL agents to construct a strategy pool. In stage III, we train a router which dynamically picks a second-level agent from the pool per minute to achieve stable performance across different markets. Through extensive experiments in various market trends in the Crypto market under a high-fidelity simulation trading environment, we demonstrate that EarnHFT significantly outperforms 6 state-of-art baselines in terms of 6 popular financial criteria, exceeds the runner-up by at least 30% in terms of profitability. ## 2 Related Works In this section, we introduce the related works on traditional finance methods used in HFT and RL for quantitative trading. More discussion can be found in Appendix A. ### High-Frequency Trading in Crypto HFT, aiming to profit from slight price fluctuation in a short period of time in the market, has been widely used in companies [13]. Crypto traders invest differently from those in stock markets because of the high volatility [1]. In the Crypto market, there are many high-frequency technical indicators [10], such as imbalance volume (IV) [11] and moving average convergence divergence (MACD) [12], to capture buying and selling pressures among different time scales. However, these technical indicators also have limitations. In the volatile Crypto market, technical indicators may produce false signals. The result is sensitive to hyper-parameter settings, e.g. the take profit point and the stop loss. ### RL for Quantitative Trading Many deep reinforcement learning methods for quantitative trading have been proposed. DeepScalper [23] uses a hindsight bonus and auxiliary task to improve the model's generalization ability in intraday trading. DRA [1] uses LSTM and PPO. CDQNRP [13] uses a random perturbation to increase the stability of training a convolution DQN. However, these algorithms focus mainly on designing only one powerful RL agent to conduct profitable trading in short-term scenarios, neglecting its failure of maintaining performance over long periods. Hierarchical Reinforcement Learning (HRL), which decomposes a long-horizon task into a hierarchy of subproblems, has been studied for decades. There are some hierarchical RL frameworks for quantitative trading. HRPM [22] utilizes a hierarchical framework to simulate portfolio management and order execution. MetaTrader [14] proposes a router to pick the most suitable strategy for the current market situation. 
However, these hierarchical frameworks are all utilized in portfolio management. Its application remains unexplored in HFT where only one asset is traded. ## 3 Problem Formulation In this section, we present some basic finance concepts used in simulating the trading process and propose a hierarchical Markov decision process (MDP) framework for HFT2. Footnote 2: More detailed discussions are described in Appendix B. ### Financial Foundations for HFT We first introduce some basic financial concepts used to describe state, reward, and action in the following hierarchical Markov Decision Process (MDP) framework and present the objective of HFT. **Limit Order Book** (LOB) records unfilled orders. It is widely used to describe the market micro-structure (Madhavan 2000) in finance. We denote an m-level LOB at time \(t\) as \(b_{t}=(p_{t}^{b_{1}},p_{t}^{a_{1}},q_{t}^{b_{1}},q_{t}^{a_{1}},...,p_{t}^{b_{m }},p_{t}^{a_{m}},q_{t}^{b_{m}},q_{t}^{a_{m}})\), where \(p_{t}^{b_{i}}\) (\(p_{t}^{a_{i}}\)) is the level \(i\) bid (ask) price, \(q_{t}^{b_{i}}\) (\(q_{t}^{a_{i}}\)) is the quantity. **OHLC** is aggregated information of executed orders. OHLC vector at time \(t\) is denoted as \(x_{t}=(p_{t}^{a},p_{t}^{b},p_{t}^{b},p_{t}^{c})\), where \(p_{t}^{a},p_{t}^{b},p_{t}^{d},p_{t}^{c}\) indicate the open, high, low and close price. **Technical Indicators** indicate features calculated by a formulaic combination of the original OHLC or LOB to uncover the underlying pattern of the financial market. We denote the technical indicator vector at time \(t\)\(y_{t}=\phi(b_{t},x_{t},...,b_{t-h},x_{t-h})\), where \(\phi\) is the function that maps OHLC and LOB to technical indicators. **Market Order** is a trade to buy or sell a financial asset instantly. The executed price is calculated as Equation 1. \[E_{t}(M)=\sum_{i}(p_{t}^{i}\times\min(q_{t}^{i},R_{i-1}))(1+\sigma) \tag{1}\] where \(E\) is the execution price, \(R_{i-1}\) is the remaining quantity after level \(i\) in LOB, \(\sigma\) is the commission fee rate, and \(q_{t}^{i},p_{t}^{i}\) are the level \(i\) price and quantity in LOB respectively. **Position** is the amount of a financial asset traders hold. Position at time \(t\) is denoted as \(P_{t}\) and \(P_{t}\geq 0\), indicating only a long position is permitted in this formulation. **Net Value**\(V_{t}\) is the sum of cash and value of the position, calculated as \(V_{t}=V_{ct}+P_{t}\times p_{t}^{b1}\), where \(V_{ct}\) is the cash. We aim to maximize the net value by conducting market orders on a single asset based on market information (e.g., LOB and OHLC) at a second-level time scale. ### Hierarchical MDP Framework In this subsection, we formulate HFT as a hierarchical MDP. An MDP is defined by the tuple: \((S,A,P,r,\gamma,T)\), where \(S\) is the state space and \(A\) is the action space. \(P:S\times A\times S\rightarrow[0,1]\) is the transition function, \(r:S\times A\times S\to R\) is the reward function, \(\gamma\in(0,1]\) is the discount factor and \(T\) is the time horizon. In an MDP, the agent receives the current state \(s_{t}\in S\) from the environment, performs an action \(a_{t}\in A\), and gets the next state \(s_{t+1}\in S\) and a reward \(r_{t}\). An agent's policy is defined by \(\pi_{\theta}:S\times A\rightarrow[0,1]\), which is parameterized by \(\theta\). The objective of the agent is to learn an optimal policy \(\pi^{*}=\arg\max_{\theta}E_{\pi_{\theta}}[\sum_{t=0}^{T}\gamma^{t}r_{t}|S_{0}]\) where \(s_{0}\) is the initial state of the MDP. 
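As a concrete illustration of the market-order execution in Eq. (1), the following sketch walks the ask side of a toy LOB until the requested size is filled and then applies the commission; function and variable names are illustrative and the fee value is an assumption of the example.

```python
def market_buy_cost(lob_asks, size, fee_rate=0.0002):
    """Execution cost of a market buy order that walks the ask side of the LOB,
    in the spirit of Eq. (1): fill as much as possible at each level until the
    requested size is exhausted, then apply the commission fee rate sigma."""
    remaining = size
    cost = 0.0
    for price, quantity in lob_asks:      # best ask first
        filled = min(quantity, remaining)
        cost += price * filled
        remaining -= filled
        if remaining <= 0:
            break
    return cost * (1.0 + fee_rate)

# toy 3-level ask book: buying 1.5 units crosses the first two levels
print(market_buy_cost([(100.0, 1.0), (100.5, 2.0), (101.0, 5.0)], size=1.5))
```

Selling works analogously by walking the bid side of the book.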
In RL for HFT, data drifting of the micro-level market information prevents a single agent from maintaining its performance over long periods and it is difficult to train a profitable agent under all trends because of the conflict in effective strategies under different market conditions. Macro level information, as an aggregation of micro-level market information, provides insight into the dynamics of the micro-level market. Therefore, we formulate HFT as a hierarchical MDP, where the low-level MDP operating on a second-level time scale formulates the process of micro-level market dynamics and trading execution and the high-level MDP operating on a minute-level time scale formulates the process of the macro-level market trends and strategy adjustment. It is defined by (MDP\({}_{h}\), MDP\({}_{l}\)), where MDP \[{}_{h}=(S_{h},A_{h},P_{h},R_{h},\gamma_{h},T_{h})\] MDP \[{}_{l}=(S_{l},A_{l},P_{l},R_{l},\gamma_{l},T_{l})\] **Low-level State \(S_{lt}\)** at time \(t\) consists of two parts: the latent representation \(X_{lt}\), which is the micro-level market technical indicators and private state \(P_{t}\). \(X_{lt}\) consists of 54 features and is calculated as \(X_{lt}=\phi_{l}(C_{lt})\) where \(C_{lt}\) is a rolling window of second-level OHLC and the snapshot of LOB with length 60. \(P_{t}\) indicates the current position of the agent. **Low-level Action \(a_{lt}\)** at time \(t\) is the target position. It is chosen from a predefined position pool \(A_{l}\) with finite elements defined as \(\{0,\frac{H}{|A|-1},...,H\}\) where \(|A|\) represents the number of the action choices and \(H\) is the maximum position. If \(a_{lt}\neq P_{t}\), then we instantly take a market order \(E_{t}(a_{lt}-P_{t})\), making the current position to \(a_{lt}\). **Low-level Reward \(r_{lt}\)** at time \(t\) is the net value differential in the second-level time scale, referring to money made through one second. It is calculated as \(r_{lt}=P_{t+1}\times p_{t+1}^{b1}-(P_{t}\times p_{t}^{b1}+E_{t}(P_{t+1}-P_{t})\), where we use the best bid price to calculate the value of the current position. **High-level State \(S_{ht}\)** at time \(t\) also consists of two parts: the latent representation \(X_{ht}\), which is the macro-level market technical indicators and private state \(P_{t}\). \(X_{ht}\) consists of 19 features and is calculated as \(X_{ht}=\phi_{h}(C_{ht})\) where \(C_{ht}\) is a rolling window of minute-level OHLC length 60. \(P_{t}\) indicates the current position of the agent. **High-level Action \(a_{ht}\)** is the selected agent at time \(t\). It is chosen from a pre-trained agent pool \(A_{h}\), each of which is trained under a low-level MDP. **High-level Reward \(r_{lt}\)** at time \(t\) is the net value differential in the minute-level time scale, referring to money made through one minute. It is also the return of the selected low-level agent makes under low-level MDP in one minute and is calculated as \(r_{ht}=\sum_{t=T}^{T+\tau}r_{lt}\). In this bilevel hierarchical MDP framework, for every minute, our high-level agent picks a low-level agent, which will adjust its position every second to make profit. We aim to find a set of low-level agents (traders) and a high-level agent (router) to maximize our total profit. ## 4 EarnHFT In this section, we demonstrate three stages of EarnHFT as shown in Figure 1. In stage I, we present RL with Q-teacher, which improves the training efficiency, to train low-level agents. 
In stage II, agents are trained and evaluated in different market trends, forming a diverse pool for hierarchical construction. In stage III, we train a router to pick a proper agent to maintain profitability in the non-stationary market. Figure 1: The overview of EarnHFT. First, we compute a Q-teacher for enhancing the performance and training efficiency of second-level RL agents. Then, we efficiently train diverse RL agents under various market trends and select a small fraction of them to form an agent pool based on profitability. Finally, we train a minute-level router which dynamically picks a second-level agent from the pool to achieve stable performance across different markets. ### Stage I: Efficient RL with Q-Teacher A long trajectory causes extra computational cost in traditional RL settings. However, in our low-level MDP, the price information is not influenced by our policy. By using future price information and dynamic programming, we can easily construct the optimal action value [22] to help train RL agents more efficiently. Here, we use an optimal value supervisor and an optimal actor to aid training. **Optimal Value Supervisor.** Although using RL to conduct HFT suffers from drawbacks such as overfitting, as stated in [16], we can compute the optimal action value for any state, unlike traditional RL where even expert trajectories are hard to acquire. Since our position choice is finite, we can calculate the optimal action value backwards in time. By calculating the market order cost and the value of position fluctuations, we can get the reward and calculate the action value for the previous state, as shown in Algorithm 1. Adding the optimal action value as a supervision signal can help the agent explore faster and get positive rewards more quickly. During the training of the DDQN [20] agent, we can add a supervision term which is the Kullback-Leibler (KL) divergence between the agent's action values and the optimal action values for the same state. Let \(Q_{t}(\chi,p,a)\) denote the action value from the evaluation network for latent representation \(\chi\), position \(p\) and action \(a\) at time \(t\), and let \(Q^{*}(\chi,p,a)\) denote the optimal action-value function, which we have calculated by Algorithm 1. The loss function can be described as follows: \[L(\theta_{i})=L_{td}+\alpha KL(Q_{t}(\chi,p,\cdot;\theta_{i})||Q^{*}(\chi,p,\cdot)) \tag{2}\] where \[L_{td}=(r+\gamma\max Q_{t}(\chi^{\prime},a,\cdot;\theta^{\prime}_{i})-Q_{t}(\chi,p,a;\theta_{i}))^{2} \tag{3}\] represents the TD error in DDQN and \(\alpha\) is a coefficient that decays over time. The second term in Equation 2 enables low-level agents to acquire the advantage function of other actions under the same state without exploration, enhancing the efficiency of RL training. It can be proven that with this supervisor, the action value still converges to the optimal action value, as shown in Appendix C.2. **Optimal Actor.** Although we have improved the training efficiency using the optimal value supervisor, it is still very hard for the agent to learn the optimal policy. The reason is that our optimal action value is based on the current position. It is often the case that once our RL agents deviate from the optimal policy, the supervision term also changes, which leads to a more significant deviation. 
**Optimal Actor.** Although we have improved the training efficiency using the optimal value supervisor, it is still very hard for the agent to learn the optimal policy. The reason is that our optimal action value depends on the current position: once our RL agent deviates from the optimal policy, the supervision term also changes, which can lead to an even larger deviation. Therefore, instead of training only on the transitions the agent explores with an \(\epsilon\)-greedy policy, we additionally train the agent on the transitions generated by the optimal policy, in which the action with the highest optimal action value is chosen. These optimal transitions provide extra experience and prevent the agent from falling into local traps.

**Input**: Multivariate Time Series \(\mathcal{D}\) with Length \(N\), Commission Fee Rate \(\delta\), Action Space \(A\)
**Output**: Network Parameter \(\theta\)
```
1: Initialize experience replay R, network Q_theta and target network Q_theta'; construct the optimal action value Q* using Algorithm 1 and build the trading environment Env.
2: Reset the trading environment Env.
3: for t = 1 to N-1 do
4:   Choose action a_e using the epsilon-greedy policy.
5:   Store transition (s, a_e, r, s', Q*) in R.
6: end for
7: Reset the trading environment Env.
8: for t = 1 to N-1 do
9:   Choose action a_o = argmax_a Q*[t, p, a].
10:  Store transition (s, a_o, r, s', Q*) in R.
11: end for
12: Sample transitions (s_j, a_j, r_j, s'_j, Q*_j) from R.
13: Calculate L following Equation 2, perform gradient descent on theta and update theta' = tau * theta + (1 - tau) * theta'.
14: return Q_theta
```
**Algorithm 2** Efficient RL with Q-Teacher

In Algorithm 2, the agent first collects experience from both the \(\epsilon\)-greedy policy and the optimal actor, and then updates the network using both the TD error and the KL divergence.

### Stage II: Construction of Agent Pool

The micro-level market information in the Crypto market changes rapidly, so a single model fails to maintain its performance over a long period. Based on our preliminary experiments, in which training across different market trends proved incompatible, we decompose the whole market into different trends and develop a suitable trading strategy for each trend.

**Input**: A Time Series \(\mathcal{D}\) with Length \(N\), Risk Threshold \(\theta\), Label Number \(M\)
**Output**: A label indicating the trend of every point in the time series \(\mathcal{D}\)
```
1: D' <- D with high-frequency noise removed.
2: Divide D' according to its extrema into segments S.
3: Merge adjacent segments in S if their DTW [19] distance and slope difference are small enough, until S is stable.
4: Calculate thresholds H = Q_{1-theta/2}(R) and L = Q_{theta/2}(R).
5: Calculate the upper and lower bounds of the slopes for each label based on the quantiles and the thresholds.
6: Label each segment based on the bounds.
7: Return the label corresponding to each segment.
```
**Algorithm 3** Market Segmentation & Labelling
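A simplified version of Algorithm 3 is sketched below; it keeps the smoothing, extrema-based segmentation and quantile-based labelling steps, while the DTW-based merging of adjacent segments is omitted for brevity. All names and default values are illustrative assumptions.

```python
import numpy as np

def label_market_trends(prices, window=60, risk_theta=0.1, n_labels=5):
    """Simplified sketch of Algorithm 3: split a price series at the local extrema of a
    smoothed curve, then label each segment by the quantiles of the segment slopes."""
    # 1. Remove high-frequency noise with a moving average.
    kernel = np.ones(window) / window
    smooth = np.convolve(prices, kernel, mode="same")
    # 2. Split at local extrema of the smoothed series.
    diff_sign = np.sign(np.diff(smooth))
    extrema = np.where(np.diff(diff_sign) != 0)[0] + 1
    bounds = np.concatenate(([0], extrema, [len(prices) - 1]))
    # 3. Slope (return rate per step) of each segment.
    slopes = np.array([(prices[b] - prices[a]) / max(b - a, 1) / prices[a]
                       for a, b in zip(bounds[:-1], bounds[1:])])
    # 4. Quantile-based thresholds: clip the most extreme slopes, then bin into labels.
    low, high = np.quantile(slopes, [risk_theta / 2, 1 - risk_theta / 2])
    edges = np.linspace(low, high, n_labels - 1)
    seg_labels = np.digitize(np.clip(slopes, low, high), edges)
    # 5. Broadcast segment labels back to every time step.
    labels = np.empty(len(prices), dtype=int)
    for lab, (a, b) in zip(seg_labels, zip(bounds[:-1], bounds[1:])):
        labels[a:b + 1] = lab
    return labels
```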
**Generating Diverse Agents.** Previous works on generating diverse agents mainly focus on different random seed initializations of the neural network or on hyperparameter search for RL training [20], which is largely unstructured and can be seen as a byproduct of algorithmic stochasticity rather than an intentional design. Here we propose to train diverse agents following Algorithm 2 with different preferences over the time series \(\mathcal{D}\), i.e., over market trends. We first separate the training dataset (a multivariate time series with a length of over 3 million) into data chunks of length \(L\), where each data chunk represents a continuous market trend, to reduce the time horizon for training. The preference is defined by \(\beta\): each data chunk with buy-and-hold return rate \(r\) is assigned a sampling priority, proportional to its probability of being sampled, calculated as in Equation 4.

\[f(r)=\begin{cases}\frac{e^{\beta r}}{pdf(r)}&\text{if }Q_{\frac{a}{2}}(R)\leq r\leq Q_{1-\frac{a}{2}}(R)\\ e^{\beta r}&\text{if }r>Q_{1-\frac{a}{2}}(R)\lor r<Q_{\frac{a}{2}}(R)\end{cases} \tag{4}\]

In Equation 4, \(Q_{\frac{a}{2}}(R)\) denotes the \(\frac{a}{2}\)-quantile of the chunks' return rates \(R\), and \(pdf\) is the probability density function estimated by kernel density estimation, calculated as in Equation 5:

\[pdf(x)=\frac{1}{nh}\sum_{r\in R}K\left(\frac{x-r}{h}\right) \tag{5}\]

where \(h\) is obtained by searching around Silverman's bandwidth [12] and \(K\) is the kernel function, for which we use the standard normal distribution \(N(0,1)\). The kernel density term removes the influence of the distribution of the training dataset and therefore yields a more robust sampling outcome. We sample data chunks according to this priority to construct our low-level MDP and train the agents following the process of Stage I. Different agents are trained with different preference parameters \(\beta\). This sampling method ensures that each agent can access all of the data chunks while being trained with a preference over the market trends, and it prevents the agent from being trapped in extreme conditions, which could cause its performance on all other trends to plummet.
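The preference-based priority of Equation 4 and the kernel density estimate of Equation 5 can be sketched as follows; here scipy's Gaussian KDE stands in for the bandwidth search around Silverman's rule, and the function name and defaults are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def chunk_sampling_priorities(chunk_returns, beta, alpha=0.1):
    """Sketch of Equation 4: `chunk_returns` holds the buy-and-hold return rate of every
    training chunk; `beta` encodes the trend preference of the agent being trained."""
    r = np.asarray(chunk_returns, dtype=float)
    lo, hi = np.quantile(r, [alpha / 2, 1 - alpha / 2])
    density = gaussian_kde(r)(r)                 # pdf(r) via kernel density estimation
    priority = np.exp(beta * r)
    inside = (r >= lo) & (r <= hi)
    priority[inside] = priority[inside] / density[inside]   # flatten the data distribution
    return priority / priority.sum()             # normalised sampling probabilities

# Example: sample a chunk index for an agent with a bullish preference (beta > 0).
# probs = chunk_sampling_priorities(returns, beta=30)
# idx = np.random.choice(len(returns), p=probs)
```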
**Agent Selection.** Although we have generated diverse agents, it is inefficient to put all of them into the agent pool, because doing so would vastly increase the action space of the router. Therefore, we only select a small fraction of the generated agents to form the pool, based on their profitability under various market trends. First, we label each point in the validation dataset using Algorithm 3, which, unlike previous algorithms [20], can label different datasets without tuning its hyperparameters. A more detailed version of the algorithm is described in Appendix C.1. We then evaluate the agents under different market trends and initial positions and select, for each label and each initial position, the agent with the best profitability (average return over the corresponding market segments), yielding a two-dimensional agent pool \((m,n)\), where \(m\) is the number of market trends and \(n\) is the number of initial positions.

### Stage III: Dynamic Routing Optimization

We apply DDQN [20] to train the router for the high-level MDP. However, the number of agents in the pool is still large. Even though the trajectory length is significantly reduced (by 98.33%3) because the router selects an agent on a minute-level time scale, it is still computationally burdensome for the high-level agent to explore all the low-level agents. Therefore, we use prior knowledge about the agent pool to refine the options during trading. More specifically, before choosing a low-level agent, we restrict the candidates to the models whose initial position matches the current position, which reduces the number of possible low-level agents to \(m\). Here, we choose not to compute a Q-teacher to aid the learning process, for two reasons: i) the time horizon is largely reduced, so the computational burden of RL self-exploration is also reduced; ii) a high-level action is a low-level agent, i.e., a trading strategy rather than a target position, which requires extra computation for the position at the end of the selected agent's trading session and for the reward accumulated during that session. The decreasing computation for RL and the increasing computation for the optimal action value make pure DDQN more efficient.

Footnote 3: \(1-\frac{1}{60}=\frac{59}{60}\approx 0.9833\)

## 5 Experiment Setup

### Datasets

To comprehensively evaluate the algorithm, testing is conducted on four Crypto currency pairs, encompassing both mainstream and niche options, over a period exceeding a week and covering both bull and bear market conditions. We summarize the statistics of the 4 datasets in Table 1 and elaborate on them in Appendix D.1. For the dataset split, on all 4 datasets we use the data of the last 9 days for testing, the penultimate 9 days for validation, and the remainder for training. We first train multiple low-level agents on the training dataset, and segment and label the validation dataset for model selection. We then train the router on the training dataset and evaluate it on the whole validation dataset to pick the best router, which is finally evaluated on the test dataset. The experimental results in Table 2 show the strong performance of EarnHFT under different market conditions, despite the differences between the validation and test datasets shown in Appendix D.2.

\begin{table} \begin{tabular}{|l|c c c c|} \hline **Dataset** & **Dynamics** & **Seconds** & **From** & **To** \\ \hline BTC/TUSD & Sideways & 4057140 & 23/03/30 & 23/05/15 \\ BTC/USDT & Sideways & 3884400 & 22/09/01 & 22/10/15 \\ ETH/USDT & Bear & 3970800 & 22/05/01 & 22/06/15 \\ GALA/USDT & Bull & 3970740 & 22/07/01 & 22/08/15 \\ \hline \end{tabular} \end{table} Table 1: Dataset statistics detailing the currency pair, market dynamics, number of seconds and chronological period.

### Evaluation Metrics

We evaluate EarnHFT on 6 financial metrics, including one profit criterion, two risk criteria and three risk-adjusted profit criteria, listed below.

* **Total Return (TR)** is the overall return rate over the whole trading period. It is defined as \(TR=\frac{V_{t}-V_{1}}{V_{1}}\), where \(V_{t}\) is the final net value and \(V_{1}\) is the initial net value.
* **Annual Volatility (AVOL)** is the annualized standard deviation of returns, defined as \(\sigma[\mathbf{ret}]\times\sqrt{m}\), and measures the risk level of a trading strategy, where \(\mathbf{ret}=(ret_{1},ret_{2},...,ret_{t})\) is the vector of secondly returns, \(\sigma[\cdot]\) denotes the standard deviation and \(m\) is the number of seconds in a year.
* **Maximum Drawdown (MDD)** measures the largest loss from any peak, capturing the worst case.
* **Annual Sharpe Ratio (ASR)** considers the amount of extra return that a trader receives per unit increase in risk. It is defined as \(ASR=E[\mathbf{ret}]/\sigma[\mathbf{ret}]\times\sqrt{m}\), where \(E[\cdot]\) is the expected value.
* **Annual Calmar Ratio (ACR)** is defined as \(ACR=\frac{E[\mathbf{ret}]}{MDD}\times m\), i.e., the expected annual return divided by the maximum drawdown.
* **Annual Sortino Ratio (ASoR)** uses the downside deviation as the risk measure. It is defined as \(ASoR=\frac{E[\mathbf{ret}]\times\sqrt{m}}{DD}\), where the downside deviation \(DD\) is the standard deviation of the negative returns.
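For reference, the sketch below computes the six metrics from a per-second net-value curve, following the definitions above; the annualisation constant and the simple-return convention are assumptions rather than the authors' exact implementation.

```python
import numpy as np

def evaluation_metrics(net_values, seconds_per_year=365 * 24 * 3600):
    """Compute TR, AVOL, MDD, ASR, ACR and ASoR from per-second net values (sketch)."""
    v = np.asarray(net_values, dtype=float)
    ret = v[1:] / v[:-1] - 1.0                       # secondly simple returns
    m = seconds_per_year
    tr = (v[-1] - v[0]) / v[0]                       # Total Return
    avol = ret.std() * np.sqrt(m)                    # Annual Volatility
    peak = np.maximum.accumulate(v)
    mdd = ((peak - v) / peak).max()                  # Maximum Drawdown
    asr = ret.mean() / ret.std() * np.sqrt(m)        # Annual Sharpe Ratio
    acr = ret.mean() * m / mdd                       # Annual Calmar Ratio
    downside = ret[ret < 0].std()                    # downside deviation
    asor = ret.mean() * np.sqrt(m) / downside        # Annual Sortino Ratio
    return dict(TR=tr, AVOL=avol, MDD=mdd, ASR=asr, ACR=acr, ASoR=asor)
```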
### Training Setup

We conduct all experiments on a 4090 GPU. For the trading setting, the commission fee rate is 0 for BTCT and 0.02% for the remaining datasets, following the policy of Binance. For the training setting, we choose \(\beta\) in Equation 4 from the list \([-90,-10,30,100]\) and run each \(\beta\) for 50 epochs, generating a total of 200 agents. Adam is used as the optimizer for DDQN. For the other baselines there are two cases: i) if there is an official implementation by the authors or an open-source library [2], we apply the same hyperparameters for a fair comparison5; ii) if there is no publicly available implementation6, we reimplement the algorithm and stay as consistent as possible with the original paper. It takes about 10 hours to run all experiments on the 4 datasets. Descriptions of the other parameter settings (e.g., the trading setting) are given in Appendix D.3.

Footnote 5: PPO and DQN.
Footnote 6: DRA and CDQNRP.

### Baselines

To provide a comprehensive comparison with EarnHFT, we select 6 baselines, including 4 state-of-the-art RL algorithms and 2 widely used rule-based methods.

* **PPO [10]** applies importance sampling to enhance sample efficiency.
* **DRA [1]** uses an LSTM [1] network to enhance the state representation and obtain better results with PPO.
* **DQN [11]** applies experience replay and multi-layer perceptrons to Q-learning.
* **CDQNRP [12]** uses a randomly perturbed target frequency to enhance stability during training.
* **MACD [13]** is an upgraded method based on the traditional moving average. It not only shows whether the current price is rising or falling, but also indicates the speed of the rise or fall.
* **IV [1]** is a micro-market indicator widely used in HFT.

## 6 Results and Analysis

### Comparison with Baselines

According to Table 2, our method achieves the highest profit on all 4 datasets and the highest risk-adjusted profit on 3 datasets. Value-based methods (e.g., CDQNRP and DQN) perform well when the gap between the validation and test datasets is small and the market trend is stable. Policy-based methods (e.g., PPO and DRA) easily converge to a dummy policy, in which the agent simply keeps the target position equal to its current position because of the commission fee, even when the learning rate is set to \(1e^{-7}\), and they therefore perform poorly in the bear market. Rule-based methods are extremely sensitive to the take-profit and stop-loss points and only achieve moderate profit in volatile markets.
Although EarnHFT performs well on the profit-related metrics, it is a very aggressive trader: the optimal value supervisor and the optimal actor deliver only profit-related experience and neglect risk-related information, so the method performs only moderately on some datasets in terms of risk.

Table 2: Performance comparison on 4 Crypto markets with 6 baselines, including 2 policy-based RL algorithms, 2 value-based RL algorithms and 2 rule-based methods. Results in pink, green and blue show the best, second-best and third-best results.

As shown in Figure 2, EarnHFT opens and closes a position within 30 seconds, profiting from a market movement that, on a minute-level time scale, would be viewed as a pullback. More results can be found in Appendix D.4.

### The Effectiveness of Hierarchical Framework

We examine the effectiveness of the hierarchical framework by analyzing the router's behavior on the different datasets and by comparing EarnHFT with each agent from its pool. From Figure 3 we can see that the bull and rally agents tend to buy and hold and therefore perform well in the bull market (e.g., GALA). The sideways agent tends to trade less and to hold its position. The pullback and bear agents tend to close their positions and perform well in the bear market (e.g., ETH).
The router combines the advantages of all the agents and achieves the best profit on all 4 datasets. Figure 4 shows the router's selection distribution on the 4 datasets. For datasets with high volatility (e.g., ETH and GALA), the market dynamics change more frequently, and the routing therefore shows a more balanced distribution across the 5 market trends. For datasets with lower volatility (e.g., BTCT and BTCU), the router's selection is more concentrated on two market dynamics.

### The Effectiveness of Optimal Action Value

To demonstrate the effectiveness of the optimal value supervisor (OS) and the optimal actor (OA), we conduct an ablation study on two datasets, ETH and GALA. We evaluate training efficiency by the number of steps needed to converge (CS) and the converged reward sum (RS). We further investigate their influence on the agent's trading behavior via the average holding length (AHL). As shown in Table 3, for GALA the variant with OS takes only 15% of the steps of the original DDQN to converge and achieves a higher return. OA further improves the return in exchange for more steps to converge. For ETH, since the market is bull, the CS is not reduced by OS; however, the return is largely increased. The reason why OS is more effective is that it provides more information to the agent and its guidance adapts as the agent's policy changes, whereas OA only provides demonstrations.

## 7 Conclusion

In this paper, we propose EarnHFT, a novel three-stage hierarchical RL framework for HFT that addresses training inefficiency and data drift. First, we compute the optimal action value to improve the performance and training efficiency of second-level RL agents. Then we train a diverse pool of agents excelling under various market trends. Finally, we train a router that regularly picks an agent from the pool to conduct trading, so as to deal with the dynamic market. Extensive experiments on Crypto markets demonstrate that EarnHFT significantly outperforms many strong baselines, and ablation studies show the effectiveness of the proposed components.

Figure 2: Trading process of EarnHFT in ETH
Figure 3: Comparison of the router and agent pool
Figure 4: Router selection distribution

\begin{table} \begin{tabular}{c|c|c c c|c c c} \hline \hline \multirow{2}{*}{OA} & \multirow{2}{*}{OS} & \multicolumn{3}{c|}{GALA} & \multicolumn{3}{c}{ETH} \\ & & CS & RS & AHL & CS & RS & AHL \\ \hline ✓ & ✓ & 78848 & 4.43 & 448 & 102400 & 12.32 & 81.3 \\ ✓ & & 102400 & 0.24 & 38.7 & 102400 & -1.40 & 4.15 \\ & ✓ & 4608 & 2.89 & 147 & 30720 & 4.87 & 35.8 \\ & & 30720 & -0.01 & 284 & 30720 & -29.6 & 39.1 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study of OS and OA.
2306.17639
Point-Based Value Iteration for POMDPs with Neural Perception Mechanisms
The increasing trend to integrate neural networks and conventional software components in safety-critical settings calls for methodologies for their formal modelling, verification and correct-by-construction policy synthesis. We introduce neuro-symbolic partially observable Markov decision processes (NS-POMDPs), a variant of continuous-state POMDPs with discrete observations and actions, in which the agent perceives a continuous-state environment using a neural {\revise perception mechanism} and makes decisions symbolically. The perception mechanism classifies inputs such as images and sensor values into symbolic percepts, which are used in decision making. We study the problem of optimising discounted cumulative rewards for NS-POMDPs. Working directly with the continuous state space, we exploit the underlying structure of the model and the neural perception mechanism to propose a novel piecewise linear and convex representation (P-PWLC) in terms of polyhedra covering the state space and value vectors, and extend Bellman backups to this representation. We prove the convexity and continuity of value functions and present two value iteration algorithms that ensure finite representability. The first is a classical (exact) value iteration algorithm extending the $\alpha$-functions of Porta {\em et al} (2006) to the P-PWLC representation for continuous-state spaces. The second is a point-based (approximate) method called NS-HSVI, which uses the P-PWLC representation and belief-value induced functions to approximate value functions from below and above for two types of beliefs, particle-based and region-based. Using a prototype implementation, we show the practical applicability of our approach on two case studies that employ (trained) ReLU neural networks as perception functions, by synthesising (approximately) optimal strategies.
Rui Yan, Gabriel Santos, Gethin Norman, David Parker, Marta Kwiatkowska
2023-06-30T13:26:08Z
http://arxiv.org/abs/2306.17639v2
# Point-based Value Iteration for ###### Abstract Neuro-symbolic artificial intelligence is an emerging area that combines traditional symbolic techniques with neural networks. In this paper, we consider its application to sequential decision making under uncertainty. We introduce neuro-symbolic partially observable Markov decision processes (NS-POMDPs), which model an agent that perceives a continuous-state environment using a neural network and makes decisions symbolically, and study the problem of optimising discounted cumulative rewards. This requires functions over continuous-state beliefs, for which we propose a novel piecewise linear and convex representation (P-PWLC) in terms of polyhedra covering the continuous-state space and value vectors, and extend Bellman backups to this representation. We prove the convexity and continuity of value functions and present two value iteration algorithms that ensure finite representability by exploiting the underlying structure of the continuous-state model and the neural perception mechanism. The first is a classical (exact) value iteration algorithm extending \(\alpha\)-functions of Porta _et al_(2006) to the P-PWLC representation for continuous-state spaces. The second is a point-based (approximate) method called NS-HSVI, which uses the P-PWLC representation and belief-value induced functions to approximate value functions from below and above for two types of beliefs, particle-based and region-based. Using a prototype implementation, we show the practical applicability of our ap proach on two case studies that employ (trained) ReLU neural networks as perception functions, dynamic car parking and an aircraft collision avoidance system, by synthesising (approximately) optimal strategies. An experimental comparison with the finite-state POMDP solver SARSOP demonstrates that NS-HSVI is more robust to particle disturbances. keywords: Neuro-symbolic systems, continuous-state POMDPs, point-based value iteration, heuristic search value iteration + Footnote †: journal: Journal of Computational and Graphical Analysis ## 1 Introduction An emerging trend in artificial intelligence is to integrate traditional symbolic techniques with data-driven components in sequential decision making and optimal control. Application domains include mobile robotics [1], visual reasoning [2], autonomous driving [3] and aircraft control [4]. In real-world autonomous navigation systems, agents rely on unreliable sensors to perceive the environment, typically represented using continuous-state spaces, and planning and control must deal with environmental uncertainty. Neural networks (NNs) have proven effective in these complex settings at providing fast data-driven perception mechanisms capable of performing tasks such as object detection or localisation, and are increasingly often deployed in conjunction with conventional controllers. Because of the potential applicability in safety-critical domains, there is growing interest in methodologies for automated optimal policy synthesis for such _neuro-symbolic systems_, which are currently lacking. _Partially observable Markov decision processes_ (POMDPs) provide a convenient mathematical framework to plan under uncertainty. 
Solving POMDPs in a scalable and efficient manner is already challenging for finite-state models [5; 6], but significant progress has been made, e.g., through point-based methods [7], which extend the classic value iteration algorithm for MDPs by applying it to a selected set of _belief states_ of the POMDP. Typically, a belief state is a distribution over the states of the model representing an agent's knowledge about the current state. Since the resulting belief MDP is infinite-state, conventional value iteration cannot be directly applied and instead point-based methods rely on a so-called _\(\alpha\)-vector_ parameterisation, a linear function characterised by its values in the vertices of the belief simplex, which is finitely representable since the value function is piecewise linear and convex. Compared to finite-state POMDPs, solving continuous-state POMDPs suffers from additional challenges due to the uncountably infinite underlying state space. The common approach to discretise or approximate the continuous components with a grid and use methods for finite-state POMDPs may compromise accuracy and lead to an exponential growth in the number of states. Refinement of the discretization to improve accuracy further increases computational complexity. Therefore, an important research direction is to instead consider POMDP solution techniques that operate directly in continuous domains. Additionally, belief spaces for continuous-state POMDPs have infinitely many dimensions, which further complicates the problem. Since functions over continuous spaces can have arbitrary forms not amenable to computation, a key challenge is finding an efficient representation of the value function that allows closed-form belief updates and Bellman backups for the underlying (parameterisable) transition and reward functions. This problem was addressed by Porta _et al_ in [8], where it was proved that continuous-state POMDPs with discrete observations and actions have a piecewise linear and convex value function and admit a finite representation in terms of so-called \(\alpha\)_-functions_, which generalise \(\alpha\)-vectors by replacing weighted summation with integration. Working with a representation in terms of linear combinations of Gaussian mixtures, they derive point-based value iteration and implement it by randomly sampling belief points to approximate the value function. In this paper, we address the problem of optimal policy synthesis for discounted cumulative rewards on a subclass of continuous-state POMDPs with discrete observations and actions, called _neuro-symbolic POMDPs_ (NSPOMDPs), whose transition and reward functions are symbolic while observation functions are synthesised in a data-driven fashion, e.g., by means of NNs. The strengths of NNs include trainability from data and fast inference for complex scenarios (e.g., object detection and recognition), while symbolic approaches can provide high interpretability, provable correctness guarantees and ease of inserting human expert knowledge into the underlying systems [9]. Our model is expressive enough for realistic perception functions, while being sufficiently tractable to solve. 
Working directly with continuous state spaces rather than a discretisation, we propose novel finite representations of the value function inspired by the \(\alpha\)-functions of [8], prove convergence and continuity of the value func tion, and present two algorithms for this representation, the classical value iteration (VI), and a variant of the HSVI (Heuristic Search Value Iteration) algorithm [10]. Our first main contribution is demonstrating that, by exploiting the structure of NS-POMDPs, one can indeed find an \(\alpha\)-function representation, namely _piecewise linear and convex representation under piecewise constant \(\alpha\)-functions (P-PWLC)_, that has a simple parameterisation and is closed with respect to belief updates and the Bellman operator. More specifically, we show that value functions can be represented using pointwise maxima of piecewise constant \(\alpha\)-functions (a finite set of polyhedra and a value vector), which can be obtained directly as the preimage of the (NN) perception function, in conjunction with mild assumptions that ensure closure with respect to the transition and reward functions of NS-POMDPs. In contrast to [8], where Gaussian mixtures are used to represent \(\alpha\)-, transitions and reward functions, thus possibly requiring approximation of NS-POMDPs, our representation closely matches the structure of NS-POMDPs, even with NN perception functions. Since \(\alpha\)-functions for VI increase exponentially in the number of observations, our second main contribution is a continuous-state space variant of HSVI, called NS-HSVI, for scalable computation of approximate value function from below and above. Starting with the polyhedral preimage of the model's NN perception function, NS-HSVI works by progressively subdividing the continuous state space during value backups to compute lower bounds, and is able to track the evolution of the system without requiring a priori knowledge about how to discretise the state space. We use a lower \(K\)-Lipschitz envelope of a convex hull to approximate an upper bound. We formulate two representations of the belief space, which have closed forms for the quantities of interest: _particle-based_, which relies on sampling of individual points, and _region-based_, which places a (uniform) distribution over a region of continuous space. We develop a prototype implementation of the techniques and provide experimental results for strategy (policy) synthesis for particle- and region-based beliefs on two case studies: dynamic car parking and an aircraft collision avoidance system. We find that region-based values are more robust to disturbance than particle-based. We also compare our particle-based NS-HSVI to a finite-state POMDP approximation of an NS-POMDP model using SARSOP, and observe that our method consistently yields tighter lower bound values, at a higher computational cost due to expensive polyhedra computations, because the accuracy of SARSOP's lower bound depends on the length of the horizon considered when building the model. **Contributions.** In summary, this paper makes the following contributions. 1. We propose neuro-symbolic POMDPs (NS-POMDPs), a subclass of continuous-state POMDPs with discrete observations and actions, whose observation functions are synthesised in a data-driven fashion. 2. 
We propose a novel piecewise constant \(\alpha\)-function representation of the value function (as a pointwise maximum function over a set of piecewise constant \(\alpha\)-functions defined over the continuous-state space). We show that this representation admits a finite polyhedral representation and is closed with respect to the Bellman operator. 3. We prove continuity and convexity of the value function for discounted cumulative rewards and derive a value iteration (VI) algorithm. 4. We present a new point-based method called NS-HSVI for approximating values of NS-POMDPs, proving that piecewise constant \(\alpha\)-functions are a suitable representation for lower bound approximations of values. We develop two variants of the algorithm, one based on the popular particle-based beliefs and the other on novel region-based beliefs, and show they have closed forms for computing the quantities of interest. 5. We provide experimental results to demonstrate the applicability of NS-HSVI in practice for neural networks whose preimage (or that of their approximation) is in polyhedral form. **Related work.** Various approaches have been proposed to solve continuous-state POMDPs, including point-based value iteration [8; 11; 12], simulation-based policy iteration [13], discrete space approximation [14], locally-valid approximation [15] and tree search planning [16]. However, these approaches focus on traditional symbolic systems and, while extended to continuous transitions via sampling [8], they are not adapted to data-driven perception functions. HSVI is a point-based value iteration for finite-state POMDPs [10; 17], which was recently extended to stochastic games [18] and works in the continuous belief space, but, to the best our knowledge, has not been applied to continuous-state POMDPs. Approaches based on discretisation suffer from loss of accuracy and exponential growth in the number of states and the finite horizon. The point-based methods of [8; 11; 12] use \(\alpha\)-functions, which is similar to our approach, but they represent value functions as Gaussian mixtures or dynamic Bayes nets, which may result in looser approximation for NNs than our polyhedral representation. This is because our P-PWLC representation exploits the underlying piecewise constant structure of the continuous-state model and the neural perception mechanism (for which the value function may not be piecewise constant). While our VI and NS-HSVI algorithms work directly in the continuous state space of the POMDP, most existing approaches rely on constructing a finite-state POMDP to approximate the continuous-state POMDP and then solving the finite-state model. PBVI [19] was the first point-based algorithm to demonstrate good performance on large POMDPs. HSVI [10; 17] uses effective heuristics to guide the forward exploration towards beliefs that significantly reduce the gap between the upper and lower bounds on the optimal value function. FSVI [20], also a point-based value iteration method, explores the belief space by maintaining the true states, using the optimal value function of the underlying MDP to decide which action to take and then sampling the next states and observations. 
SARSOP [21], one of the fastest existing point-based algorithms, first approximates the optimally reachable belief space in each iteration by sampling a belief according to its stored lower and upper bound functions, then performs backups at selected nodes in the belief tree and finally prunes the \(\alpha\)-vectors that are dominated by others over a neighbourhood of the belief tree. Formal verification approaches for neuro-symbolic systems have been developed for the non-stochastic case [4] and for stochastic multi-agent systems [22; 23; 24] but under full observability. When the controller is data-driven which is a counterpart to the neural perception, the risk of the closed-loop stochastic systems is verified in [25]. Verified NN-based POMDP policies are synthesised in [26], though only for the finite-state setting. Our focus in this paper is on _optimal_ policy synthesis for neuro-symbolic systems, motivated by the need for such guarantees in safety-critical domains. To the best of our knowledge, our approach is the first value computation method for partially observable continuous-state neuro-symbolic systems. **Structure of the paper.** The remainder of the paper is structured as follows. Section 2 overviews the preliminaries of the POMDP framework. Section 3 proposes our model of neuro-symbolic POMDPs, together with its belief MDP, and gives an illustrative example. Section 4 introduces piecewise constant representations for functions in NS-POMDPs, and shows that they have a finite representation (P-PWLC) that ensures closure under the Bellman operator. A new value iteration (VI) algorithm is also proposed, and we prove the convexity and continuity of the value function. Section 5 presents a new HSVI algorithm for NS-POMDPs, which uses P-PWLC functions and belief-value induced functions to approximate the value function from below and above, and considers two belief representations for the implementation. Section 6 presents a prototype implementation and experimental evaluation of our algorithm for solving and optimal strategy synthesis for NS-POMDPs on two case studies. Section 7 concludes the paper. To ease presentation, proofs of the theorems and lemmas have been placed in the Appendix. ## 2 Background This section introduces the notation and preliminaries concerning Markov decision processes (MDPs) and their partially observable variant (POMDPs), execution paths and strategies (also called policies), and the construction of the (fully observable) belief MDP of a POMDP. **Notation.** The space of probability measures on a Borel space \(X\) is denoted \(\mathbb{P}(X)\), the space of bounded real-valued functions on \(X\) is denoted \(\mathbb{F}(X)\) and the subset of piecewise constant (PWC) functions of \(\mathbb{F}(X)\) is \(\mathbb{F}_{C}(X)\). **MDPs.** We focus on (Borel measurable) continuous-state MDPs, which model a single agent executing in a continuous environment by transitioning probabilistically between states. Formally, an MDP is given as a tuple \(\mathsf{M}=(S,Act,\Delta,\delta)\), where \(S\) is a Borel measurable set of states, \(Act\) a finite set of actions, \(\Delta:S\to 2^{Act}\) an available action function and \(\delta:(S\times Act)\rightarrow\mathbb{P}(S)\) a probabilistic transition function. When in state \(s\) of MDP \(\mathsf{M}\), the agent has a choice between available actions \(\Delta(s)\) and, if \(a\in\Delta(s)\) is chosen, then the probability of moving to state \(s^{\prime}\) is \(\delta(s,a)(s^{\prime})\). 
A path of \(\mathsf{M}\) is a sequence \(\pi=s_{0}\xrightarrow{a_{0}}s_{1}\xrightarrow{a_{1}}\cdots\) such that \(s_{i}\in S\), \(a_{i}\in\Delta(s_{i})\) and \(\delta(s_{i},a_{i})(s_{i+1})>0\) for all \(i\). We let \(\pi(i)=s_{i}\) and \(\pi[i]=a_{i}\) for all \(i\). \(F\mathit{Path}_{\mathsf{M}}\) is the set of finite paths of \(\mathsf{M}\) and \(\mathit{last}(\pi)\) is the last state of \(\pi\) for any \(\pi\in F\mathit{Path}_{\mathsf{M}}\). A _strategy (policy)_ of \(\mathsf{M}\) resolves the choices in each state based on the execution so far. Formally, a strategy \(\sigma\) is a Borel measurable mapping \(\sigma:\mathit{FPath}_{\mathsf{M}}\rightarrow\mathbb{P}(\mathit{Act})\) such that, if \(\sigma(\pi)(a)>0\), then \(a\in\Delta(\mathit{last}(\pi))\). We denote by \(\Sigma_{\mathsf{M}}\) the set of strategies of \(\mathsf{M}\). A strategy is memoryless if the choice depends only on the last state of each path and deterministic if it always selects an action with probability 1. Fixing a strategy \(\sigma\), the behaviour of \(\mathsf{M}\) from an initial state \(s\) is represented by a probability measure \(\mathbb{P}_{s}^{\sigma}\) over infinite paths starting in \(s\). **POMDPs.** POMDPs are an extension of MDPs, in which the agent cannot perceive the underlying state but instead must infer it based on observations. Formally, a POMDP is a tuple \(\mathsf{P}=(S,\mathit{Act},\Delta,\delta,\mathcal{O},\mathit{obs})\), where \((S,\mathit{Act},\Delta,\delta)\) is an MDP, \(\mathcal{O}\) is a finite set of observations and \(\mathit{obs}:S\rightarrow\mathcal{O}\) is a labelling of states with observations such that, for any \(s,s^{\prime}\in S\), if \(\mathit{obs}(s)=\mathit{obs}(s^{\prime})\) then \(\Delta(s)=\Delta(s^{\prime})\). Note that the underlying state space of the POMDP is uncountably infinite with a continuous-state structure. When in a state \(s\) of a POMDP \(\mathsf{P}\), a strategy cannot determine this state \(s\), but only the observation \(\mathit{obs}(s)\). The definitions of paths and strategies for \(\mathsf{P}\) carry over from MDPs. However, the set of strategies \(\Sigma_{\mathsf{P}}\) of \(\mathsf{P}\) includes only _observation-based strategies_. Formally, a strategy \(\sigma\) is observation-based if, for paths \(\pi=s_{0}\xrightarrow{a_{0}}\cdots\xrightarrow{a_{n-1}}s_{n}\) and \(\pi^{\prime}=s_{0}^{\prime}\xrightarrow{a_{0}}\cdots\xrightarrow{a_{n-1}}s_{ n}^{\prime}\) such that \(\mathit{obs}(s_{i})=\mathit{obs}(s_{i})\) for \(0\leq i\leq n\), then we have \(\sigma(\pi)=\sigma(\pi^{\prime})\). **Objectives, values and optimal strategies.** We focus on the _discounted accumulated reward_ objectives, since they balance the importance of immediate rewards compared to future rewards, and allow optimizing the behaviour over an infinite horizon. We note that the problem of undiscounted reward objectives is undecidable already for finite-state POMDPs. For reward structure \(r=(r_{A},r_{S})\), where \(r_{A}:(S\times\mathit{Act})\rightarrow\mathbb{R}\) and \(r_{S}:S\rightarrow\mathbb{R}\) are action and state bounded reward functions, the discounted accumulated reward for a path \(\pi\) of a POMDP \(\mathsf{P}\) is given by: \[Y(\pi)=\sum_{k=0}^{\infty}\beta^{k}\big{(}r_{A}(\pi(k),\pi[k])+r_{S}(\pi(k)) \big{)}\] where \(\beta\in(0,1)\) is the discount factor. Given a state \(s\) and strategy \(\sigma\) of \(\mathsf{P}\), \(\mathbb{E}_{s}^{\sigma}[Y]\) denotes the expected value of \(Y\) when starting from \(s\) under \(\sigma\). 
Solving \(\mathsf{P}\) means finding an _optimal strategy_\(\sigma^{\star}\in\Sigma_{\mathsf{P}}\) that maximises the objective function and the (optimal) _value function_\(V^{\star}:S\rightarrow\mathbb{R}\) is defined as \(V^{\star}(s)=\mathbb{E}_{s}^{\sigma^{\star}}[Y]\) for \(s\in S\). **Belief MDP.** A strategy of POMDP \(\mathsf{P}\) can infer the current state from the observations and actions performed. The usual way of representing this knowledge is as a _belief_\(b\in\mathbb{P}(S)\). In general, observation-based strategies represent more informative than belief-based strategies. However, since we focus on accumulated discounted rewards, under the Markov assumption belief-based strategies carry sufficient information to plan optimally [27], and therefore, for a given objective \(Y\), there exists an _optimal (observation-based) strategy_\(\sigma\) of \(\mathsf{P}\), which can be represented as \(\sigma:\mathbb{P}(S)\to\mathit{Act}\). The strategy updates its belief \(b\) to \(b^{a,o}\) via Bayesian inference based on the executed action \(a\) and observation \(o\), i.e. for \(s^{\prime}\in S\): \[b^{a,o}(s^{\prime})=(P(o\mid s^{\prime})/P(o\mid b,a))\int_{s\in S}\delta(s,a)( s^{\prime})b(s)\mathrm{d}s\,.\] Using this update we can define the corresponding (fully observable) _belief MDP_ in a standard way [28], from which an optimal strategy can be derived. We remark that belief spaces for continuous-state POMDPs are continuous and have infinitely many dimensions. ## 3 Neuro-Symbolic POMDPs In this section we introduce our model of neuro-symbolic POMDPs, aimed at scenarios where the agent perceives its environment using a data-driven perception function. While the model admits a wide class of perception functions, including those synthesised using machine learning methods such as regression or random forests, our main focus is on demonstrating practical applicability for models using neural network perception, which are being increasingly used in real-world applications [29] to partition continuous environments. This trend necessitates an integrated and automated approach to model and verify such systems. After defining the model, we give an illustrative example and then describe how a (fully observable) belief MDP can be obtained for an NS-POMDP. **NS-POMDPs.** The model of _neuro-symbolic POMDPs_ comprises a neuro-symbolic _agent_ acting in a continuous-state environment. This model is a single-agent partially observable variant of the fully-observable neuro-symbolic concurrent stochastic game model of [23; 24] and shares its syntax. The agent has finitely many local states and actions, and is endowed with a perception mechanism through which it can observe the state of the environment, recording such observations as (a discrete set of finitely many) _percepts_. Before discussing the special case of NN perception functions, we consider the general case, defined formally as follows. 
**Definition 1** (Syntax of NS-POMDPs).: _An NS-POMDP \(\mathsf{P}\) comprises an agent \(\mathsf{Ag}=(S_{A},\mathit{Act},\Delta_{A},\mathit{obs}_{A},\delta_{A})\) and environment \(E=(S_{E},\delta_{E})\) where:_ * \(S_{A}=\mathit{Loc}\times\mathit{Per}\) _is a set of states for_ \(\mathsf{Ag}\)_, where_ \(\mathit{Loc}\subseteq\mathbb{R}^{b}\) _and_ \(\mathit{Per}\subseteq\mathbb{R}^{d}\) _are finite sets of local states and percepts, respectively;_ * \(S_{E}\subseteq\mathbb{R}^{e}\) _is a closed set of continuous environment states;_ * \(\mathit{Act}\) _is a nonempty finite set of actions for_ \(\mathsf{Ag}\)_;_ * \(\Delta_{A}:S_{A}\to 2^{\mathit{Act}}\) _is an available action function for_ \(\mathsf{Ag}\)_;_ * \(\mathit{obs}_{A}:(\mathit{Loc}\times S_{E})\rightarrow\mathit{Per}\) _is_ \(\mathsf{Ag}\)_'s perception function;_ * \(\delta_{A}:(S_{A}\times\mathit{Act})\rightarrow\mathbb{P}(\mathit{Loc})\) _is_ \(\mathsf{Ag}\)_'s probabilistic transition function;_ * \(\delta_{E}:(S_{E}\times\mathit{Act})\rightarrow\mathbb{P}(S_{E})\) _is a finitely-branching probabilistic transition function for the environment._ NS-POMDPs are a subclass of continuous-state POMDPs with discrete observations (i.e., agent states \(S_{A}\), which are pairs consisting of a local state and percept) and actions. This model captures a number of key properties of POMDP models that we target. The environment is continuous, as many real-world systems such as robot navigation are naturally modelled by continuous states, and probabilities are used to account for uncertainties. At the same time, the agent's state space is finite to ensure tractability. The system executes as follows. A (global) state for an NS-POMDP \(\mathsf{P}\) comprises an agent state \(s_{A}=(\mathit{loc},\mathit{per})\), where \(\mathit{loc}\) is its local state and \(\mathit{per}\) is the percept, and environment state \(s_{E}\). In state \(s=(s_{A},s_{E})\), the agent \(\mathsf{Ag}\) chooses an action \(a\) available in \(s_{A}\), then updates its local state to \(\mathit{loc}^{\prime}\) according to the distribution \(\delta_{A}(s_{A},a)\) and, at the same time, the environment updates its state to \(s^{\prime}_{E}\) according to \(\delta_{E}(s_{E},a)\). Finally, the agent, based on \(\mathit{loc}^{\prime}\) (since it may require different information regarding the environment depending on its local state), observes \(s^{\prime}_{E}\) to generate a new percept \(\mathit{per}^{\prime}=\mathit{obs}_{A}(\mathit{loc}^{\prime},s^{\prime}_{E})\) and \(\mathsf{P}\) reaches the state \(s^{\prime}=((\mathit{loc}^{\prime},\mathit{per}^{\prime}),s^{\prime}_{E})\). While the NS-POMDP model admits any (deterministic) function \(\mathit{obs}_{A}\) from the continuous environment to percepts, in this work we focus on perception functions implemented via (trained) neural networks \(f:\mathbb{R}^{b+e}\rightarrow\mathbb{P}(\mathit{Per})\), yielding normalised scores over different percept values. A rule is then applied that selects the percept value with the maximum score. While restricting perception to deterministic functions with discrete outputs is limiting, it is well aligned with NN classifiers in applications such as object detection and localisation that we target. A polyhedral decomposition of the continuous state space can then be obtained by computing the preimage of the perception function [30]. 
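As an illustration of Definition 1, the following sketch encodes the components of an NS-POMDP and one execution step as described above. It is a minimal skeleton under our own naming (`NSPOMDP`, `percept`, `step`); the neural perception mechanism is abstracted as a callable that returns normalised scores over percepts.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple
import numpy as np

AgentState = Tuple[int, int]            # (loc, per): local state and percept indices

@dataclass
class NSPOMDP:
    locs: List[int]                                            # finite local states Loc
    percepts: List[int]                                        # finite percepts Per
    actions: List[str]                                         # finite actions Act
    available: Callable[[AgentState], List[str]]               # Delta_A
    scorer: Callable[[int, np.ndarray], np.ndarray]            # f(loc, s_E) -> scores over Per
    delta_A: Callable[[AgentState, str], Dict[int, float]]     # distribution over next loc
    delta_E: Callable[[np.ndarray, str], Dict[tuple, float]]   # finite-branching env. transition

    def percept(self, loc: int, s_E: np.ndarray) -> int:
        # obs_A(loc, s_E): the percept with the maximum score is selected.
        return int(np.argmax(self.scorer(loc, s_E)))

    def step(self, s_A: AgentState, s_E: np.ndarray, a: str, rng: np.random.Generator):
        """One execution step: sample loc' and s_E', then recompute the percept so that
        the reached state is percept compatible."""
        loc_dist = self.delta_A(s_A, a)
        loc_next = int(rng.choice(list(loc_dist), p=list(loc_dist.values())))
        env_dist = self.delta_E(s_E, a)
        keys = list(env_dist)
        s_E_next = np.asarray(keys[rng.choice(len(keys), p=list(env_dist.values()))])
        per_next = self.percept(loc_next, s_E_next)
        return (loc_next, per_next), s_E_next
```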
To motivate our definition of NS-POMDPs, we consider a dynamic vehicle parking example, in which an autonomous vehicle uses an NN for localisation while looking for a parking spot. We are interested in automated synthesis of an optimal strategy to reach the spot. **Example 1**.: We consider the problem of an agent Ag (vehicle) looking for the green parking spot (Fig. 1, left). The vehicle uses an NN as a perception mechanism (Fig. 1, middle) that subdivides the continuous environment \(\mathcal{R}=\{(x,y)\in\mathbb{R}^{2}\mid 0\leq x,y\leq 4\}\) into 16 cells, resulting in a grid-like abstraction of the environment. We trained an NN with one hidden ReLU layer on randomly generated data to take the coordinates of the vehicle as input and output one of the 16 abstract grid points (percepts). The parking spot region is \(\mathcal{R}_{P}=\{(x,y)\in\mathbb{R}^{2}\mid 2\leq x\leq 3\wedge 3\leq y \leq 4\}\). We assume the agent can start from any position and has constant speed. The environment's state space \(\mathcal{R}\) corresponds to the continuous coordinates of the vehicle, with the percept value stored locally by the agent. The agent's actions are to move _up_, _down_, _left_ or _right_, or _park_, and a _suggested_ subset of these actions is associated with each percept (see Table 1). Since the NN is trained from data, the percepts do not perfectly align with the abstract grid (Fig. 1, right), the agent additionally records a trust value to reflect whether actions recommended by the perception function but disallowed (see Table 1), e.g., to _park_ before physically reaching the parking spot, are actually taken by the agent. At each step, the agent updates the trust level in the recommended actions and receives a percept of the environment to keep track of its position Figure 1: Car parking example, perception NN and perception FCP of its preimage consisting of 62 polygons and 16 classes. in the abstract grid. Then, the agent takes an action based on the path of previous trust-percept pairs. Next, the agent increases the trust level if the percept is compliant with the executed action and decreases it probabilistically otherwise. The environment's transition function corresponds to the vehicle moving in the direction specified by the agent for a fixed time step. Finally, the agent updates its percept of the updated environment state using its NN observation function. Formally, the car parking example can be modeled as an NS-POMDP with the following components. * \(S_{A}=\mathit{Loc}\times\mathit{Per}\) where \(\mathit{Loc}=\{1,\ldots,5\}\) (local states) are the 5 trust levels and \(\mathit{Per}=\{1,\ldots,16\}\) (percepts) are the 16 abstract grid points which are ordered according to Table 1. * \(S_{E}=\mathcal{R}=\{(x,y)\in\mathbb{R}^{2}\mid 0\leq x,y\leq 4\}\). * \(Act=\{\mathit{up},\mathit{down},\mathit{left},\mathit{right},\mathit{park}\}\). * For \((\mathit{tr},\mathit{per})\in S_{A}\) we have \(\Delta_{A}(\mathit{tr},\mathit{per})=\mathit{Act}\) if \(\mathit{per}=15\) and otherwise \(\Delta_{A}(\mathit{tr},\mathit{per})=\{\mathit{up},\mathit{down},\mathit{ left},\mathit{right}\}\). * For \(\mathit{tr}\in\mathit{Loc}\) and \((x,y)\in S_{E}\) we have \(\mathit{obs}_{A}(\mathit{tr},(x,y))=\mathrm{argmax}(f(x,y))\), where \(f\) is implemented via a feed-forward NN with one hidden ReLU layer and 14 neurons, takes the coordinate vector of the vehicle as input and then outputs one of the 16 abstract grid points (Fig. 1, middle). 
The boundary coordinate is resolved by assigning the grid point with the smallest label. * For \(s_{A}=(\mathit{tr},\mathit{per})\in S_{A}\), \(\mathit{tr}^{\prime}\in\mathit{Loc}\) and \(a\in\mathit{Act}\), if \(a\) is compliant with \(\mathit{per}\), see Table 1, then: \[\delta_{A}(s_{A},a)(\mathit{tr}^{\prime})=\left\{\begin{array}{rl}1&\text{if }( \mathit{tr}\leq 4)\land(\mathit{tr}^{\prime}=\mathit{tr}+1)\\ 1&\text{if }(\mathit{tr}=5)\land(\mathit{tr}^{\prime}=\mathit{tr})\\ 0&\text{otherwise}\end{array}\right.\] on the other hand, if \(a\) is not compliant with \(\mathit{per}\), then: \[\delta_{A}(s_{A},a)(\mathit{tr}^{\prime})=\left\{\begin{array}{rl}0.5&\text{if }( \mathit{tr}\geq 2)\land(\mathit{tr}^{\prime}=\mathit{tr}-1)\\ 0.5&\text{if }(\mathit{tr}\geq 2)\land(\mathit{tr}^{\prime}=\mathit{tr})\\ 1&\text{if }(\mathit{tr}=1)\land(\mathit{tr}^{\prime}=\mathit{tr})\\ 0&\text{otherwise.}\end{array}\right.\] * For \((x,y),(x^{\prime},y^{\prime})\in S_{E}\) and \(a\in Act\) if \(x^{\prime\prime}=x+\Delta td_{ax}\) and \(y^{\prime\prime}=y+\Delta td_{ay}\), then \[\delta_{E}((x,y),a)(x^{\prime},y^{\prime})=\left\{\begin{array}{ll}1&\mbox{ if }(x^{\prime\prime},y^{\prime\prime})\in\mathcal{R}\mbox{ and }(x^{\prime},y^{\prime})=(x^{\prime\prime},y^{ \prime\prime})\\ 1&\mbox{if }(x^{\prime\prime},y^{\prime\prime})\not\in\mathcal{R}\mbox{ and }(x^{\prime},y^{\prime})=(x,y)\\ 0&\mbox{otherwise}\end{array}\right.\] where \(\Delta t=1.0\) is the time step and \(d_{a}=(d_{ax},d_{ay})\) is the direction of movement of the action \(a\), e.g., \(d_{up}=(0,1)\) and \(d_{\mathit{left}}=(-1,0)\). \(\blacksquare\) **NS-POMDP semantics.** The semantics of an NS-POMDP \(\mathsf{P}\) is a POMDP over the product of the (discrete) states of the agent and the (continuous) states of the environment, except that we restrict those to states that are percept compatible. A state \(s=((\mathit{loc},\mathit{per}),s_{E})\) is _percept compatible_ if \(\mathit{per}=\mathit{obs}_{A}(\mathit{loc},s_{E})\). The semantics of an NS-POMDP is closed with respect to percept compatible states. Definition 2 (Semantics of NS-POMDPs): Given an NS-POMDP \(\mathsf{P}\), the semantics of \(\mathsf{P}\) is the POMDP \(\llbracket\mathsf{P}\rrbracket=(S,Act,\Delta,\delta,S_{A},\mathit{obs})\) where: * \(S\subseteq S_{A}\times S_{E}\) is the set of percept compatible states, which contain both discrete and continuous elements; * \(\Delta(s_{A},s_{E})=\Delta_{A}(s_{A})\) for \((s_{A},s_{E})\in S\); * \(\mathit{obs}(s_{A},s_{E})=s_{A}\) for \((s_{A},s_{E})\in S\); * for \(s=(s_{A},s_{E}),s^{\prime}=(s^{\prime}_{A},s^{\prime}_{E})\in S\) and \(a\in\Delta(s)\), if \(s^{\prime}_{A}=(\mathit{loc}^{\prime},\mathit{per}^{\prime})\) is percept compatible, then \(\delta(s,a)(s^{\prime})=\delta_{A}(s_{A},a)(\mathit{loc}^{\prime})\delta_{E}(s _{E},a)(s^{\prime}_{E})\) and \(\delta(s,a)(s^{\prime})=0\) otherwise. 
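A minimal sketch of the two transition functions of Example 1 is given below; the direction assigned to _park_ and the dictionary-based representation of distributions are our assumptions for illustration.

```python
import numpy as np

def delta_A_trust(trust, per, action, compliant_actions):
    """Trust update of Example 1: returns a dict {trust': prob}.  `compliant_actions`
    maps a percept to its suggested actions (Table 1)."""
    if action in compliant_actions[per]:
        return {min(trust + 1, 5): 1.0}          # compliant: increase trust, capped at 5
    if trust == 1:
        return {1: 1.0}                          # non-compliant at the lowest level: stay
    return {trust - 1: 0.5, trust: 0.5}          # non-compliant: decrease probabilistically

def delta_E_move(xy, action, dt=1.0, bounds=(0.0, 4.0)):
    """Deterministic vehicle movement of Example 1: move one step in the chosen
    direction, staying put if the move would leave the region R."""
    directions = {"up": (0, 1), "down": (0, -1), "left": (-1, 0),
                  "right": (1, 0), "park": (0, 0)}   # 'park' assumed to leave the position unchanged
    dx, dy = directions[action]
    nxt = (xy[0] + dt * dx, xy[1] + dt * dy)
    lo, hi = bounds
    if lo <= nxt[0] <= hi and lo <= nxt[1] <= hi:
        return {nxt: 1.0}
    return {tuple(xy): 1.0}
```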
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Cell label & \multirow{2}{*}{Abstract grid point} & Suggested & Cell label & \multirow{2}{*}{Abstract grid point} & Suggested \\ (\(\mathit{per}\)) & & actions & (\(\mathit{per}\)) & actions \\ \hline \hline 1 & \((1,1)\) & \(\mathit{up},\mathit{right}\) & 9 & \((1,3)\) & \(\mathit{up},\mathit{right}\) \\ 2 & \((2,1)\) & \(\mathit{up},\mathit{right}\) & 10 & \((2,3)\) & \(\mathit{up},\mathit{right}\) \\ 3 & \((3,1)\) & \(\mathit{up}\) & 11 & \((3,3)\) & \(\mathit{up}\) \\ 4 & \((4,1)\) & \(\mathit{up},\mathit{left}\) & 12 & \((4,3)\) & \(\mathit{up},\mathit{left}\) \\ 5 & \((1,2)\) & \(\mathit{up},\mathit{right}\) & 13 & \((1,4)\) & \(\mathit{right}\) \\ 6 & \((2,2)\) & \(\mathit{up},\mathit{right}\) & 14 & \((2,4)\) & \(\mathit{right}\) \\ 7 & \((3,2)\) & \(\mathit{up}\) & 15 & \((3,4)\) & \(\mathit{park}\) \\ 8 & \((4,2)\) & \(\mathit{up},\mathit{left}\) & 16 & \((4,4)\) & \(\mathit{left}\) \\ \hline \end{tabular} \end{table} Table 1: Suggested actions for each percept of car parking \((4\times 4)\), where the abstract grid point \((i,j)\) is the \(i\)th and \(j\)th cell along the positive \(x\)-axis and \(y\)-axis, respectively. Since \(\delta_{E}\) has finite branching and \(S_{A}\) is finite, the branching set \(\Theta^{a}_{s_{E}}=\{s^{\prime}_{E}\mid\delta_{E}(s_{E},a)(s^{\prime}_{E})>0\}\) is finite for all \(s_{E}\in S_{E}\) and \(a\in Act\), and the branching set \(\Theta^{a}_{s}=\{s^{\prime}\mid\delta(s,a)(s^{\prime})>0\}\) is finite for all \(s\in S\) and \(a\in\Delta(s)\). Note that, while NS-POMDPs are finite branching, they are not discrete. **NS-POMDP strategies.** As \(\llbracket\text{P}\rrbracket\) is a POMDP, we consider _observation-based strategies_, which can be represented by memoryless strategies over its belief MDP \(\llbracket\text{P}\rrbracket_{B}\). Given agent state \(s_{A}=(\mathit{loc},\mathit{per})\), we let \(S_{E}^{s_{A}}=\{s_{E}\in S_{E}\mid\mathit{obs}_{A}(\mathit{loc},s_{E})= \mathit{per}\}\), i.e., the environment states generating percept _per_ given _loc_. Since agent states are observable and states of \(\llbracket\text{P}\rrbracket\) are percept compatible, beliefs can be represented as pairs \((s_{A},b_{E})\), where \(s_{A}\in S_{A}\) is an agent state, \(b_{E}\in\mathbb{P}(S_{E})\) is a belief over environment, and \(b_{E}(s_{E})=0\) for all \(s_{E}\in S_{E}\setminus S_{E}^{s_{A}}\), i.e., those states that are not percept compatible. Before giving the definition of \(\llbracket\text{P}\rrbracket_{B}\), we consider how beliefs are updated in this setting. Therefore, suppose \(s_{A}\) is the current agent state, i.e., what is observable, and \(b_{E}\) is the current belief about the environment. Then if action \(a\) is executed and \(s^{\prime}_{A}\) is observed, the updated belief is such that for any \(s^{\prime}_{E}\in S_{E}\): \[b_{E}^{s_{A},a,s^{\prime}_{A}}(s^{\prime}_{E})=\frac{P((s^{\prime}_{A},s^{ \prime}_{E})\mid(s_{A},b_{E}),a)}{P(s^{\prime}_{A}\mid(s_{A},b_{E}),a)}\text{ if }s^{\prime}_{E}\in S_{E}^{s^{\prime}_{A}}\text{ and equals 0 otherwise.} \tag{1}\] **Belief MDP and belief updates.** We can now derive the belief MDP of an NS-POMDP, which follows through a standard construction [28] while relying on Borel measurability of the underlying uncountable state space of the NS-POMDP. 
**Definition 3** (Belief MDP).: _The belief MDP of an NS-POMDP \(\text{P}\) is the MDP \(\llbracket\text{P}\rrbracket_{B}=(S_{B},Act,\Delta_{B},\delta_{B})\), where:_ * \(S_{B}\subseteq S_{A}\times\mathbb{P}(S_{E})\) _is the set of percept compatible beliefs;_ * \(\Delta_{B}(s_{A},b_{E})=\Delta_{A}(s_{A})\) _for_ \((s_{A},b_{E})\in S_{B}\)_;_ * _for_ \((s_{A},b_{E}),(s^{\prime}_{A},b^{\prime}_{E})\in S_{B}\)_, and_ \(a\in\Delta_{B}(s_{A},b_{E})\)_:_ \[\delta_{B}((s_{A},b_{E}),a)(s^{\prime}_{A},b^{\prime}_{E})=\left\{\begin{array} []{cl}P(s^{\prime}_{A}\mid(s_{A},b_{E}),a)&\text{if }b^{\prime}_{E}=b_{E}^{s_{A},a,s^{ \prime}_{A}}\\ 0&\text{otherwise.}\end{array}\right.\] Finally, in this section we discuss how the beliefs and probabilities of Definition 3 can be computed. For any \((s_{A},b_{E}),(s^{\prime}_{A},b^{\prime}_{E})\in S_{B}\) and \(s^{\prime}_{A}=(loc^{\prime},per^{\prime})\), we have that \(P(s^{\prime}_{A}\mid(s_{A},b_{E}),a)\) equals: \[\delta_{A}(s_{A},a)(loc^{\prime})\left(\int_{s_{E}\in S_{E}}b_{E}(s_{E}){\sum }_{s^{\prime}_{E}\in S^{s^{\prime}_{A}}_{E}}\delta_{E}(s_{E},a)(s^{\prime}_{E })\mathrm{d}s_{E}\right). \tag{2}\] Furthermore, \(P((s^{\prime}_{A},s^{\prime}_{E})\mid(s_{A},b_{E}),a)\) equals: \[\delta_{A}(s_{A},a)(loc^{\prime})\left(\int_{s_{E}\in S_{E}}b_{E}(s_{E})\delta _{E}(s_{E},a)(s^{\prime}_{E})\mathrm{d}s_{E}\right) \tag{3}\] if \(s^{\prime}_{E}\in S^{s^{\prime}_{A}}_{E}\) and 0 otherwise. Thus, using (1) we have that \(b^{s_{A},a,s^{\prime}_{A}}_{E}(s^{\prime}_{E})\) equals: \[\frac{\int_{s_{E}\in S_{E}}b_{E}(s_{E})\delta_{E}(s_{E},a)(s^{\prime}_{E}) \mathrm{d}s_{E}}{\int_{s_{E}\in S_{E}}b_{E}(s_{E}){\sum}_{s^{\prime \prime}_{E}\in S^{s^{\prime}_{A}}_{E}}\delta_{E}(s_{E},a)(s^{\prime\prime}_{E })\mathrm{d}s_{E}}\text{ if }s^{\prime}_{E}\in S^{s^{\prime}_{A}}_{E}\text{ and }0\text{ otherwise.} \tag{4}\] We note that the belief MDP \(\llbracket\mathsf{P}\rrbracket_{B}\) is continuous and infinite-dimensional, with finite branching. Thus, solving it exactly is intractable as closed-form operations and parametric forms for continuous functions are required. For efficient computation, beliefs also need to be in closed form. ## 4 Value Iteration A common approach to solving continuous-state POMDPs is to discretise or approximate the continuous components with a grid and use methods for finite-state POMDPs. As this may compromise accuracy and leads to an exponential growth in the number of states, we instead aim to operate directly in the continuous domain. Since functions over continuous spaces can have arbitrary forms not amenable to computation, we will extend \(\alpha\)-functions to the setting of NS-POMDPs, aided by the theoretical formulation of [8], where it was proved that continuous-state POMDPs with discrete observations and actions have a piecewise linear and convex value function. Rather than work with Gaussian mixtures as in [8], which would require approximations, we will directly exploit the structure of the model to induce a finite (polyhedral) representation of the value function. More specifically, in this section we show that _piecewise constant_ representations for the perception, reward and transition functions are sufficient for NS-POMDPs under mild assumptions, in the sense that they offer a finite representation and are closed with respect to belief update and the Bellman operator. 
We next propose a value iteration (VI) algorithm that utilises piecewise constant \(\alpha\)-functions, which does not scale but serves as a basis for designing a practical point-based algorithm in Section 5. We conclude this section by investigating the convexity and continuity of the value function. **Value functions.** We work with the belief MDP \(\llbracket\mathsf{P}\rrbracket_{B}=(S_{B},\mathit{Act},\Delta_{B},\delta_{B})\) of an NS-POMDP \(\mathsf{P}\) and consider discounted accumulated reward objectives \(Y\). The _value function_ is given by \(V^{\star}:S_{B}\to\mathbb{R}\), where \(V^{\star}(s_{A},b_{E})=\mathbb{E}_{(s_{A},b_{E})}^{\sigma^{\star}}[Y]\) for all \((s_{A},b_{E})\in S_{B}\). We require the following notation to evaluate beliefs through a function over the state space \(S\). Given \(f:S\to\mathbb{R}\) and belief \((s_{A},b_{E})\), let: \[\langle f,(s_{A},b_{E})\rangle=\int_{s_{E}\in S_{E}}f(s_{A},s_{E})b_{E}(s_{E}) \mathrm{d}s_{E} \tag{5}\] for which an integral over \(S_{E}\) is required. Recall that \(\mathbb{F}(S_{B})\) denotes the space of functions over the beliefs. **Definition 4** (Bellman operator).: _Given \(V\in\mathbb{F}(S_{B})\), the operator \(T:\mathbb{F}(S_{B})\to\mathbb{F}(S_{B})\) is defined as follows: \([TV](s_{A},b_{E})\) equals_ \[\max_{a\in\Delta_{A}(s_{A})}\Big{\{}\langle R_{a},(s_{A},b_{E})\rangle+\beta \mathrm{\sum}_{s^{\prime}_{A}\in S_{A}}P(s^{\prime}_{A}\mid(s_{A},b_{E}),a)V( s^{\prime}_{A},b_{E}^{s_{A},a,s^{\prime}_{A}})\Big{\}} \tag{6}\] _for \((s_{A},b_{E})\in S_{B}\), where \(R_{a}(s)=r_{A}(s,a)+r_{S}(s)\) for \(s\in S\)._ Since \(\llbracket\mathsf{P}\rrbracket_{B}\), the semantics of NS-POMDP \(\mathsf{P}\), is a continuous-state POMDP with discrete observations and actions, according to [8] the value function \(V^{\star}\) is the unique fixed point of the operator \(T\), and thus, theoretically, value iteration can be used to compute \(V^{\star}\). However, as the functions involved are defined over probability density functions from \(\mathbb{P}(S_{E})\) and \(S_{E}\) is a continuous space, to ensure feasible computation we require a finite parameterisable representation for the value function. To this end, we will extend the class of \(\alpha\)-functions with special structure introduced for continuous-state POMDPs in [8], which generalise \(\alpha\)-vector representations for finite-state POMDPs [31]. We first observe that perception functions are piecewise constant (PWC), and can therefore be used to induce a finite partition of the continuous state space consisting of connected and observationally-equivalent _regions_ by computing the preimage of the perception function. We then impose mild assumptions on the NS-POMDP structure (Assumption 1) to ensure that the agent and environment transition functions preserve the PWC properties of this partition, and on the reward function to ensure region-based reward accumulation (Assumption 2). ### PWC Representations A _finite connected partition (FCP)_ of \(S\), denoted \(\Phi\), is a finite collection of disjoint connected subsets (regions) that cover \(S\). **Definition 5** (PWC function): _A function \(f:S\rightarrow\mathbb{R}\) is piecewise constant (PWC) if there exists an FCP \(\Phi\) of \(S\) such that \(f:\phi\rightarrow\mathbb{R}\) is constant for all \(\phi\in\Phi\). 
Such an FCP \(\Phi\) is called a constant-FCP of \(S\) for \(f\)._

Since the perception function is PWC, we can show that the continuous-state space of an NS-POMDP can be decomposed into a finite set of regions such that the states in each region have the same observation.

**Lemma 1** (Perception FCP): _There exists a smallest FCP of \(S\), called the perception FCP, denoted \(\Phi_{P}\), such that all states in any \(\phi\in\Phi_{P}\) are observationally equivalent, i.e., if \((s_{A},s_{E}),(s^{\prime}_{A},s^{\prime}_{E})\in\phi\), then \(s_{A}=s^{\prime}_{A}\) and we let \(s^{\phi}_{A}=s_{A}\)._

The perception FCP \(\Phi_{P}\) can be used to find the set \(S^{s^{\prime}_{A}}_{E}\) for any agent state \(s^{\prime}_{A}\in S_{A}\) over which we integrate beliefs in closed form, see, e.g., (2) and (4). If the perception function \(\mathit{obs}_{A}\) is specified as an NN, the corresponding FCP \(\Phi_{P}\) can be extracted, or approximated, by analyzing its pre-image [30], which can be computed offline. Implementing the transition functions \(\delta\) and \(\delta_{E}\) over continuous-state spaces is intractable. Since the perception function induces a decomposition into a finite set of regions, we further assume that such a decomposition is preserved under the transition function, so that states in a given FCP region reach the same regions of some other FCP under \(\delta\) (and likewise the same rewards). We assume that \(\delta_{E}\) is represented by a probabilistic choice over \(N_{e}\in\mathbb{N}\) (deterministic) continuous transition functions and that the reward structure is bounded PWC.

**Assumption 1** (Transitions): _For \(a\in\mathit{Act}\) and FCP \(\Phi\) of \(S\), there exists an FCP \(\Phi^{\prime}\) of \(S\), called the pre-image FCP of \(\Phi\) for \(a\), where for \(\phi\in\Phi\) and \(\phi^{\prime}\in\Phi^{\prime}\) either \(\Theta^{a}_{s}\cap\phi=\varnothing\) for all \(s\in\phi^{\prime}\), or \(\Theta^{a}_{s}\cap\phi\neq\varnothing\) for all \(s\in\phi^{\prime}\) and if \(s,\tilde{s}\in\phi^{\prime}\), then \(\sum_{s^{\prime}\in\Theta^{a}_{s}\cap\phi}\delta(s,a)(s^{\prime})=\sum_{\tilde{s}^{\prime}\in\Theta^{a}_{\tilde{s}}\cap\phi}\delta(\tilde{s},a)(\tilde{s}^{\prime})\). Furthermore, \(\delta_{E}=\sum_{i=1}^{N_{e}}\mu_{i}\delta^{i}_{E}\) where \(\delta^{i}_{E}:(S_{E}\times Act)\to S_{E}\) is piecewise continuous, \(\mu_{i}\geq 0\) and \(\sum_{i=1}^{N_{e}}\mu_{i}=1\)._

**Assumption 2** (Rewards).: _The reward functions \(r_{A}(\cdot,a),r_{S}:S\to\mathbb{R}\) are bounded PWC for all \(a\in Act\). Therefore, for each action \(a\in Act\), there exists a smallest FCP of \(S\), called the reward FCP under action \(a\) and denoted \(\Phi_{R}^{a}\), such that all states in any \(\phi\in\Phi_{R}^{a}\) have the same rewards, i.e., if \(s,s^{\prime}\in\phi\), then \(r_{A}(s,a)=r_{A}(s^{\prime},a)\) and \(r_{S}(s)=r_{S}(s^{\prime})\)._

**Example 2**.: Fig. 1 (right) shows an FCP representation for the pre-image of the perception function of Example 1. The FCP was constructed via the exact computation method from [30], and is composed of 62 polygons. Each colour indicates one of the grid cells as perceived by the agent. In the reward structure, all action rewards are zero and the state reward function is such that for any \((s_{A},(x,y))\in S\): \(r_{S}(s_{A},(x,y))=1000\) if \((x,y)\in\mathcal{R}_{P}\) and \(0\) otherwise, i.e., there is a positive reward if the parking spot is found.
\(\blacksquare\) We emphasize that, although the states in any region of the perception FCP are observationally equivalent, by Assumption 1 the transitions have finite representations, and by Assumption 2 the states in any region of the reward FCP have the same reward, such states can still have different values as taking the same actions can yield paths that need not be observationally equivalent. Therefore, the value function \(V^{\star}\) may not be piecewise constant. Our results demonstrate that analysing NS-POMDPs under these PWC restrictions remains challenging, since any discretisation would imply that all states contained in an abstract region have the same sequences of transitions and rewards given a sequence of actions, and thus have the same value. Thus, it is not possible to construct, a priori, a partition of the state space that reduces the problem to finding the values of a finite-state POMDP. It would be possible to find, from some initial belief, all reachable states up to some finite depth and then compute an approximate value for the initial belief. However, this approach can yield an exponential blow up in the number of beliefs, or even infinitely many reachable states, for instance, when the initial belief has positive probabilities over a region of the continuous-state space. Instead of unrolling, our algorithm progressively subdivides the continuous state space during value backups. Additionally, we remark that finite branching of the environment transition function does not make the NS-POMDP discrete because, unlike in finite-state POMDPs, these transitions, represented by a finite number of piecewise continuous transition functions, cannot be characterized via a finite set of state-to-state transitions. Besides, if the current belief has positive probabilities over an infinite number of states, then the updated belief can also have an infinite number of states with positive probabilities. ### PWC \(\alpha\)-Function Value Iteration We can now show, utilising the results for continuous-state POMDPs [8], that \(V^{\star}\) is the limit of a sequence of \(\alpha\)-functions, called _piecewise linear and convex under PWC \(\alpha\)-functions (P-PWLC)_, where each such function can be represented by a (finite) set of PWC functions (concretely, as a finite set of FCP regions and a value vector). **Definition 6** (P-PWLC function): _A function \(V:S_{B}\to\mathbb{R}\) is piecewise linear and convex under PWC \(\alpha\)-functions (P-PWLC) if there exists a finite set \(\Gamma\subseteq\mathbb{F}_{C}(S)\) such that \(V(s_{A},b_{E})=\max_{\alpha\in\Gamma}\langle\alpha,(s_{A},b_{E})\rangle\) for all \((s_{A},b_{E})\in S_{B}\) where the functions in \(\Gamma\) are called PWC \(\alpha\)-functions._ Definition 6 implies that, if \(V\in\mathbb{F}(S_{B})\) is P-PWLC, then it can be represented by a set \(\Gamma\) of PWC continuous functions over \(S\). For NS-POMDPs, we demonstrate that, under Assumptions 1 and 2, a P-PWLC representation of value functions is closed under the Bellman operator and the value iteration converges. **Theorem 1** (P-PWLC closure and convergence): _If \(V\in\mathbb{F}(S_{B})\) and P-PWLC, then so is \([TV]\). If \(V^{0}\in\mathbb{F}(S_{B})\) and P-PWLC, then the sequence \((V^{t})_{t=0}^{\infty}\), such that \(V^{t+1}=[TV^{t}]\) are P-PWLC and converges to \(V^{\star}\)._ We remark that an implementation of this exact value iteration is feasible, since each \(\alpha\)-function involved is PWC and thus allows for a finite representation. 
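To illustrate how such a finite representation can look in code, the following Python sketch stores a PWC \(\alpha\)-function (Definition 5) as a finite list of (region, value) pairs per agent state and evaluates a P-PWLC function (Definition 6) on a belief with finite support (a mixture of point masses, cf. the particle-based representation in Section 5.4), for which the integral \(\langle\alpha,(s_{A},b_{E})\rangle\) of (5) reduces to a weighted sum; the box-shaped regions and all numbers are hypothetical.

```python
# Sketch of PWC alpha-functions (Definition 5) and P-PWLC evaluation
# (Definition 6) for beliefs with finite support; regions and values are
# hypothetical, and regions are axis-aligned boxes for simplicity.

def in_box(point, box):
    """box = ((xlo, xhi), (ylo, yhi)); closed axis-aligned region."""
    (xlo, xhi), (ylo, yhi) = box
    x, y = point
    return xlo <= x <= xhi and ylo <= y <= yhi

class PWCAlpha:
    """alpha(s_A, s_E): for each agent state, a list of (box, value)
    pairs forming a constant-FCP, plus a default value elsewhere."""
    def __init__(self, pieces, default=0.0):
        self.pieces = pieces      # {s_A: [(box, value), ...]}
        self.default = default

    def __call__(self, s_A, s_E):
        for box, value in self.pieces.get(s_A, []):
            if in_box(s_E, box):
                return value
        return self.default

def expectation(alpha, s_A, b_E):
    """<alpha, (s_A, b_E)> of (5) for a finite-support belief
    b_E = {s_E: weight}; the integral becomes a weighted sum."""
    return sum(w * alpha(s_A, s_E) for s_E, w in b_E.items())

def p_pwlc_value(Gamma, s_A, b_E):
    """V(s_A, b_E) = max_{alpha in Gamma} <alpha, (s_A, b_E)>."""
    return max(expectation(alpha, s_A, b_E) for alpha in Gamma)

# Hypothetical example: one alpha that is 1000 on a 'parking' box and 0
# elsewhere (cf. Example 2), and one constant alpha of 50.
s_A = ("loc", "per")
alpha1 = PWCAlpha({s_A: [(((3.0, 4.0), (3.0, 4.0)), 1000.0)]})
alpha2 = PWCAlpha({s_A: []}, default=50.0)
belief = {(3.5, 3.5): 0.5, (1.0, 1.0): 0.5}
print(p_pwlc_value([alpha1, alpha2], s_A, belief))  # max(500.0, 50.0) = 500.0
```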
However, as the number of \(\alpha\)-functions grows exponentially in the number of agent states, it is not scalable.

### Convexity and Continuity of the Value Function

In Section 5, we will derive a variant of HSVI for lower and upper bounding of the value function, which is more scalable. To this end, the following properties will be required. Using Theorem 1, the value function can be represented as a pointwise maximum \(V^{\star}(s_{A},b_{E})=\sup_{\alpha\in\Gamma}\langle\alpha,(s_{A},b_{E})\rangle\) for \((s_{A},b_{E})\in S_{B}\), where \(\Gamma\subseteq\mathbb{F}_{C}(S)\) may be infinite. We now show that \(V^{\star}\) is convex and continuous for any fixed \(s_{A}\in S_{A}\). Since we assume bounded reward functions, the value function \(V^{\star}\) has lower and upper bounds: \[L=\min_{s\in S,a\in Act}R_{a}(s)/(1-\beta)\quad\text{and}\quad U=\max_{s\in S,a\in Act}R_{a}(s)/(1-\beta)\,. \tag{7}\]

**Theorem 2** (Convexity and continuity): _For any \(s_{A}\in S_{A}\), the value function \(V^{\star}(s_{A},\cdot):\mathbb{P}(S_{E})\rightarrow\mathbb{R}\) is convex and for any \(b_{E},b^{\prime}_{E}\in\mathbb{P}(S_{E})\):_ \[|V^{\star}(s_{A},b_{E})-V^{\star}(s_{A},b^{\prime}_{E})|\leq K(b_{E},b^{\prime}_{E}) \tag{8}\] _where \(K(b_{E},b^{\prime}_{E})=(U-L)\int_{s_{E}\in S_{E}^{b_{E}>b^{\prime}_{E}}}(b_{E}(s_{E})-b^{\prime}_{E}(s_{E}))\mathrm{d}s_{E}\) and \(S_{E}^{b_{E}>b^{\prime}_{E}}=\{s_{E}\in S_{E}^{s_{A}}\mid b_{E}(s_{E})-b^{\prime}_{E}(s_{E})>0\}\)._

## 5 Heuristic Search Value Iteration

Value iteration with point-based updates has been proposed for finite-state POMDPs [8; 10; 7; 17; 32], relying on the fact that performing many fast approximate updates often results in a more useful value function than performing a few exact updates. HSVI [10] approximates the value function \(V^{\star}\) at a given initial belief via lower and upper bound functions, which are updated through heuristically generated beliefs. SARSOP [21] improves efficiency, but sacrifices convergence guarantees due to aggressive pruning. These methods are designed for finite-state POMDPs and are not directly applicable to continuous-state NS-POMDPs, as applying them would require discretisation or approximation. We now present a new HSVI algorithm for NS-POMDPs, which uses P-PWLC functions and belief-value induced functions to approximate \(V^{\star}\) from below and above. This HSVI algorithm progressively subdivides the continuous state space during value backups, to obtain a piecewise constant lower bound and a lower \(K\)-Lipschitz envelope of a convex hull upper bound on \(V^{\star}\), which itself may not be piecewise constant. We first introduce the representations of the lower and upper bound functions to the value function, then present point-based updates followed by our HSVI algorithm, and finally consider two belief representations for the implementation, both with closed forms for the quantities of interest, one based on particles (individually sampled points) and the other on regions (polyhedra) of the continuous space.

### Lower and Upper Bound Representations

**Lower bound function.** Selecting an appropriate representation for \(\alpha\)-functions requires closure properties with respect to the Bellman operator, which involves both the transition function and the reward function.
Rather than relying on Gaussian mixtures [8], which require both the transition and reward functions to be in this form, we represent the lower bound \(V_{\mathit{LB}}^{\Gamma}\in\mathbb{F}(S_{B})\) as a P-PWLC function for the finite set \(\Gamma\subseteq\mathbb{F}_{C}(S)\) of PWC \(\alpha\)-functions (see Definition 6), for which closure is guaranteed by Theorem 1. This is finitely representable as each \(\alpha\)-function is PWC. In contrast to Gaussian mixtures, our P-PWLC representation is designed to match the NS-POMDP perfectly, with the necessary closure properties ensured by exploiting the structure of the NS-POMDP. **Upper bound function.** The upper bound \(V_{\mathit{UB}}^{\Upsilon}\in\mathbb{F}(S_{B})\) is represented by a finite set of belief-value points \(\Upsilon=\{((s_{A}^{i},b_{E}^{i}),y_{i})\mid i\in I\}\), where \(y_{i}\) is an upper bound of \(V^{\star}(s_{A}^{i},b_{E}^{i})\). Since \(V^{\star}(s_{A},\cdot)\) is convex by Theorem 2, letting \(I_{s_{A}}=\{i\in I\mid s_{A}^{i}=s_{A}\}\), for any \(\lambda_{i}\geq 0\) such that \(\sum_{i\in I_{s_{A}}}\lambda_{i}=1\), we have: \[V^{\star}(s_{A},\sum_{i\in I_{s_{A}}}\lambda_{i}b_{E}^{i})\leq\sum_{i\in I_{s _{A}}}\lambda_{i}V^{\star}(s_{A}^{i},b_{E}^{i})\leq\sum_{i\in I_{s_{A}}} \lambda_{i}y_{i}\,.\] This fact is used in HSVI for finite-state POMDPs [10], as any new belief is a convex combination of the beliefs in \(\Upsilon\), and therefore the convexity of \(V^{\star}(s_{A},\cdot)\) yields an upper bound. However, there is no such convex combination guarantee in NS-POMDPs since, as \(\Upsilon\) is finite and beliefs are over a continuous-state space, any convex combinations of beliefs in \(\Upsilon\) cannot cover the belief space. Therefore, the upper bound \(V_{\mathit{UB}}^{\Upsilon}\) is instead defined as the lower envelope of the lower convex hull of the points in \(\Upsilon\) satisfying the following problem: \[V_{\mathit{UB}}^{\Upsilon}(s_{A},b_{E})=\text{minimize }\sum_{i\in I_{s_{A}}} \lambda_{i}y_{i}+K_{\mathit{UB}}(b_{E},\sum_{i\in I_{s_{A}}}\lambda_{i}b_{E} ^{i})\] \[\text{subject to: }\lambda_{i}\geq 0,\sum_{i\in I_{s_{A}}} \lambda_{i}=1\text{ for all }(s_{A},b_{E})\in S_{B} \tag{9}\] where \(K_{\mathit{UB}}:\mathbb{P}(S_{E})\times\mathbb{P}(S_{E})\to\mathbb{R}\) measures the difference between two beliefs such that, if \(K\) is from Theorem 2 showing the continuity of the value function, then for any \(b_{E},b_{E}^{\prime}\in\mathbb{P}(S_{E})\): \[K_{\mathit{UB}}(b_{E},b_{E}^{\prime})\geq K(b_{E},b_{E}^{\prime})\quad\text{ and}\quad K_{\mathit{UB}}(b_{E},b_{E})=0\,. \tag{10}\] It can be seen that (9) is close to the classical upper bound function used in regular HSVI for finite-state spaces, except for the function \(K_{\mathit{UB}}\) that measures the difference between two beliefs (two functions). We require that \(K_{\mathit{UB}}\) satisfies (10) to ensure that (9) is an upper bound after a value backup, as stated in Lemma 4 below. **Bound initialization.** The lower bound \(V^{\Gamma}_{LB}\) is initialized using the lower bound of the blind strategies of the form "always choose action \(a\in Act\)", which is given by \(\sum_{k=0}^{\infty}\beta^{k}\inf_{s\in S}R_{a}(s)\). 
Therefore, a lower bound for \(V^{\Gamma}_{LB}\) is given by: \[R_{LB}=\max_{a\in Act}\left(\sum_{k=0}^{\infty}\beta^{k}\inf_{s\in S}R_{a}(s) \right)=1/(1-\beta)\max_{a\in Act}\inf_{s\in S}R_{a}(s)\,.\] The PWC \(\alpha\)-function set \(\Gamma\) for the initial \(V^{\Gamma}_{LB}\) contains a single PWC function \(\alpha\), where \(\alpha(s)=R_{LB}\) for all \(s\in S\) and the associated FCP is the perception FCP \(\Phi_{P}\). We initialize the upper bound \(V^{\Upsilon}_{UB}\) by sampling a set of initial beliefs \(\{(s^{i}_{A},b^{i}_{E})\}_{i\in I}\) and letting \(y_{i}=U\) for all \((s^{i}_{A},b^{i}_{E})\). ### Point-Based Updates **Lower bound updates.** For the lower bound \(V^{\Gamma}_{LB}\), in each iteration we add a new PWC \(\alpha\)-function \(\alpha^{\star}\) to \(\Gamma\) leading to \(\Gamma^{\prime}\) at a belief \((s_{A},b_{E})\in S_{B}\) such that: \[\langle\alpha^{\star},(s_{A},b_{E})\rangle=[TV^{\Gamma}_{LB}](s_{A},b_{E})\,. \tag{11}\] To that end, let \(\bar{a}\) be an action maximizing the Bellman backup (6) at \((s_{A},b_{E})\), i.e., \(\bar{a}\) is a maximizer when computing \([TV^{\Gamma}_{LB}](s_{A},b_{E})\). If action \(\bar{a}\) is taken, then \(\bar{S}_{A}=\{s^{\prime}_{A}\in S_{A}\mid P(s^{\prime}_{A}\mid(s_{A},b_{E}), \bar{a})>0\}\) are agent states that can be observed. If \(s^{\prime}_{A}\) is observed, then the backup value at belief \((s_{A},b_{E})\) from an \(\alpha\)-function \(\alpha\in\Gamma\) equals \(\int_{s_{E}\in S_{E}}\mathit{bval}((s_{A},s_{E}),\bar{a},s^{\prime}_{A},\alpha )b_{E}(s_{E})\mathrm{d}s_{E}\), where for any \(s_{E}\in S_{E}\): \[\mathit{bval}((s_{A},s_{E}),\bar{a},s^{\prime}_{A},\alpha)=\beta\delta_{A}(s_ {A},\bar{a})(\mathit{loc}^{\prime}){\sum}_{s^{\prime}_{E}\in\Theta^{\bar{a}}_ {s_{E}}\cap S^{s^{\prime}_{A}}_{E}}\delta_{E}(s_{E},\bar{a})(s^{\prime}_{E}) \alpha(s^{\prime}_{A},s^{\prime}_{E})\,.\] For \(s^{\prime}_{A}\in\bar{S}_{A}\), let \(\alpha^{s^{\prime}_{A}}\in\Gamma\) be an \(\alpha\)-function maximizing the backup value, i.e., \(\alpha^{s^{\prime}_{A}}\in\mathrm{argmax}_{\alpha\in\Gamma}\int_{s_{E}\in S_{ E}}\mathit{bval}((s_{A},s_{E}),\bar{a},s^{\prime}_{A},\alpha)b_{E}(s_{E}) \mathrm{d}s_{E}\). Using \(\bar{a}\), \(\alpha^{s^{\prime}_{A}}\) for \(s^{\prime}_{A}\in\bar{S}_{A}\) and the perception FCP \(\Phi_{P}\), Algorithm 1 computes a new \(\alpha\)-function \(\alpha^{\star}\) at belief \((s_{A},b_{E})\). To guarantee (11) and improve the efficiency, we only compute the backup values for regions \(\phi\in\Phi_{P}\) over which \((s_{A},b_{E})\) has positive probabilities, i.e. \(s^{\phi}_{A}=s_{A}\) (recall \(s^{\phi}_{A}\) is the unique agent state appearing in \(\phi\)) and \(\int_{(s_{A},s_{E})\in\phi}b_{E}(s_{E})\mathrm{d}s_{E}>0\) and assign the trivial lower bound \(L\) otherwise. More precisely, for each such region \(\phi\) and \((\hat{s}_{A},\hat{s}_{E})\in\phi\): \[\alpha^{\star}(\hat{s}_{A},\hat{s}_{E})=R_{\bar{a}}(\hat{s}_{A},\hat{s}_{E})+{ \sum}_{s^{\prime}_{A}\in S_{A}}\mathit{bval}((\hat{s}_{A},\hat{s}_{E}),\bar{a}, s^{\prime}_{A},\alpha^{s^{\prime}_{A}}) \tag{12}\] where if \(s^{\prime}_{A}\notin\bar{S}_{A}\), then \(\alpha^{s^{\prime}_{A}}\) can be any \(\alpha\)-function in \(\Gamma\). Computing the backup values (12) state by state is computationally intractable, as region \(\phi\) contains an infinite number of states. However, the following lemma shows that \(\alpha^{\star}\) is PWC, thus resulting in a tractable region-by-region backup. 
The lemma also shows that the lower bound function increases uniformly, remains valid after each update, and performs no worse than the Bellman backup at the current belief.

**Lemma 2** (Lower bound): _At belief \((s_{A},b_{E})\in S_{B}\), the function \(\alpha^{\star}\) generated by Algorithm 1 is a PWC \(\alpha\)-function satisfying (11), \(V^{\Gamma}_{LB}\leq V^{\Gamma^{\prime}}_{LB}\leq V^{\star}\) and \(V^{\Gamma^{\prime}}_{LB}(s_{A},b_{E})\geq[TV^{\Gamma}_{LB}](s_{A},b_{E})\)._

Since \(\alpha^{\star}\) is PWC, we next present a new backup for (12) through finite region-by-region backups. Recall from Assumption 1 that \(\delta_{E}\) can be represented as \(\sum_{i=1}^{N_{e}}\mu_{i}\delta_{E}^{i}\). Algorithm 2 presents an Image-Split-Preimage-Product (ISPP) backup method to compute (12) region by region. This method, inspired by Lemma 2, divides a region \(\phi\) into subregions on which \(\alpha^{\star}\) is constant, as illustrated in Fig. 2. Given any reachable local state \(\mathit{loc}^{\prime}\) under \(\bar{a}\) and continuous transition function \(\delta_{E}^{i}\), the _image_ of \(\phi\) under \(\bar{a}\) and \(\delta_{E}^{i}\) to \(\mathit{loc}^{\prime}\) is divided into _image_ regions \(\Phi_{\mathrm{image}}\) such that the states in each region have a unique agent state. Each image region \(\phi_{\mathrm{image}}\) is then split into subregions by a constant-FCP of the PWC function \(\alpha^{s_{A}^{\phi_{\mathrm{image}}}}\) by pairwise intersections, and thus \(\Phi_{\mathrm{image}}\) is _split_ into a set of refined image regions \(\Phi_{\mathrm{split}}\). An FCP over \(\phi\), denoted by \(\Phi_{\mathrm{pre}}\), is constructed by computing the _preimage_ of each \(\phi_{\mathrm{image}}\in\Phi_{\mathrm{split}}\) to \(\phi\). Finally, the _product_ of these FCPs \(\Phi_{\mathrm{pre}}\) for all reachable local states and environment functions and the reward FCP \(\Phi_{R}^{\bar{a}}\), denoted \(\Phi_{\text{product}}\), is computed. The following lemma demonstrates that \(\alpha^{\star}\) is constant in each region of \(\Phi_{\text{product}}\), and therefore (12) can be computed by finite backups.

**Lemma 3** (ISPP backup).: _The FCP \(\Phi_{\text{product}}\) returned by Algorithm 2 is a constant-FCP of \(\phi\) for \(\alpha^{\star}\) and the region-by-region backup for \(\alpha^{\star}\) satisfies (12)._

Computing \(\mathit{bval}((\hat{s}_{A},\hat{s}_{E}),\bar{a},s^{\prime}_{A},\alpha^{s^{\prime}_{A}})\) in the value backup requires \(\alpha^{s^{\prime}_{A}}(s^{\prime}_{A},s^{\prime}_{E})\). To obtain this value, we need to find the region in the constant-FCP for \(\alpha^{s^{\prime}_{A}}\) containing \((s^{\prime}_{A},s^{\prime}_{E})\). Instead of searching, we record the region connections during ISPP, and can thus locate the region containing \((s^{\prime}_{A},s^{\prime}_{E})\) directly, improving the efficiency.

**Upper bound updates.** For the upper bound \(V_{\mathit{UB}}^{\Upsilon}\), working with the representation given in (9), at a belief \((s_{A},b_{E})\in S_{B}\) in each iteration we add a new belief-value point \(((s_{A},b_{E}),p^{\star})\) to \(\Upsilon\) such that \(p^{\star}=[TV^{\Upsilon}_{\text{UB}}](s_{A},b_{E})\). The following lemma shows that \(p^{\star}\geq V^{\star}(s_{A},b_{E})\), as required by (9), and that the upper bound function decreases uniformly, remains valid after each update, and performs no worse than the Bellman backup at the current belief.
**Lemma 4** (Upper bound): _Given belief \((s_{A},b_{E})\in S_{B}\), if \(p^{\star}=[TV^{\Upsilon}_{\text{UB}}](s_{A},b_{E})\), then \(p^{\star}\) is an upper bound of \(V^{\star}\) at \((s_{A},b_{E})\), i.e., \(p^{\star}\geq V^{\star}(s_{A},b_{E})\), and if \(\Upsilon^{\prime}=\Upsilon\cup\{((s_{A},b_{E}),p^{\star})\}\), then \(V^{\Upsilon}_{\text{UB}}\geq V^{\Upsilon^{\prime}}_{\text{UB}}\geq V^{\star}\) and \(V^{\Upsilon^{\prime}}_{\text{UB}}(s_{A},b_{E})\leq[TV^{\Upsilon}_{\text{UB}}](s_{A},b_{E})\)._

Figure 2: Illustration of the steps taken by the ISPP algorithm.

### NS-HSVI Algorithm

Algorithm 3 presents the NS-HSVI algorithm for NS-POMDPs. Similarly to the heuristic search in HSVI [10], the algorithm (lines 5-7) selects an action \(\hat{a}\) greedily according to the upper bound at belief \((s_{A},b_{E})\in S_{B}\), i.e., \(\hat{a}\) is a maximizer when computing \([TV^{\Upsilon}_{\mathit{UB}}](s_{A},b_{E})\). Furthermore, given \(\varepsilon>0\), it selects an agent state \(\hat{s}_{A}\) (observation) with the highest weighted excess approximation gap (line 9), denoted \(\mathit{excess}_{t+1}(s^{\prime}_{A},b^{s_{A},\hat{a},s^{\prime}_{A}}_{E})\), which equals: \[P(s^{\prime}_{A}\mid(s_{A},b_{E}),\hat{a})\big{(}V^{\Upsilon}_{\mathit{UB}}(s^{\prime}_{A},b^{s_{A},\hat{a},s^{\prime}_{A}}_{E})-V^{\Gamma}_{\mathit{LB}}(s^{\prime}_{A},b^{s_{A},\hat{a},s^{\prime}_{A}}_{E})-\varepsilon\beta^{t+1}\big{)}\] where \(t\) is the depth of \((s_{A},b_{E})\) from the initial belief \(s^{\mathit{init}}_{B}=(s^{\mathit{init}}_{A},b^{\mathit{init}}_{E})\in S_{B}\). NS-HSVI has the following convergence guarantees.

**Theorem 3** (NS-HSVI).: _Algorithm 3 will terminate and upon termination:_ 1. \(V^{\Upsilon}_{\mathit{UB}}(s^{\mathit{init}}_{B})-V^{\Gamma}_{\mathit{LB}}(s^{\mathit{init}}_{B})\leq\varepsilon\)_;_ 2. \(V^{\Gamma}_{\mathit{LB}}(s^{\mathit{init}}_{B})\leq V^{\star}(s^{\mathit{init}}_{B})\leq V^{\Upsilon}_{\mathit{UB}}(s^{\mathit{init}}_{B})\)_;_ 3. \(V^{\star}(s^{\mathit{init}}_{B})-\mathbb{E}^{\hat{\sigma}}_{s^{\mathit{init}}_{B}}[Y]\leq\varepsilon\) _where_ \(\hat{\sigma}\) _is the one-step lookahead strategy from_ \(V^{\Gamma}_{\mathit{LB}}\)_._

Proof.: Given belief \((s_{A},b_{E})\in S_{B}\), through Lemma 2 after updating a lower bound \(V^{\Gamma}_{\mathit{LB}}\) we have: \[V^{\Gamma}_{\mathit{LB}}\leq V^{\Gamma^{\prime}}_{\mathit{LB}}\leq V^{\star}\quad\text{and}\quad V^{\Gamma^{\prime}}_{\mathit{LB}}(s_{A},b_{E})\geq[TV^{\Gamma}_{\mathit{LB}}](s_{A},b_{E}) \tag{13}\] and through Lemma 4 after updating an upper bound \(V^{\Upsilon}_{\mathit{UB}}\), we have: \[V^{\Upsilon}_{\mathit{UB}}\geq V^{\Upsilon^{\prime}}_{\mathit{UB}}\geq V^{\star}\quad\text{and}\quad V^{\Upsilon^{\prime}}_{\mathit{UB}}(s_{A},b_{E})\leq[TV^{\Upsilon}_{\mathit{UB}}](s_{A},b_{E})\,. \tag{14}\] Now, since \(V^{\Gamma}_{\mathit{LB}}\) and \(V^{\Upsilon}_{\mathit{UB}}\) are initially bounded and from Lemmas 2 and 4 are uniformly improvable, \(\delta\) has finite branching and \(\beta\in(0,1)\), using [33, Theorem 6.8] we have that Algorithm 3 terminates after finitely many steps. Next, combining (13) and (14), and using [33, Section 6.5], both 1 and 2 follow directly. Finally, concerning 3, by (B.1), we have \[\langle\alpha^{\star},(\hat{s}_{A},\hat{b}_{E})\rangle\leq[TV^{\Gamma}_{\mathit{LB}}](\hat{s}_{A},\hat{b}_{E}) \tag{15}\] for all \((\hat{s}_{A},\hat{b}_{E})\in S_{B}\).
If \(V^{\Gamma}_{\mathit{LB}}\leq TV^{\Gamma}_{\mathit{LB}}\), we have \(V^{\Gamma^{\prime}}_{\mathit{LB}}\leq TV^{\Gamma}_{\mathit{LB}}\) using (15). Then, since Algorithm 3 terminates, according to [33, Theorem 3.18]: \[V^{\star}(s^{\mathit{init}}_{A},b^{\mathit{init}}_{E})-\mathbb{E}^{\hat{\sigma}}_{(s^{\mathit{init}}_{A},b^{\mathit{init}}_{E})}[Y]\leq V^{\Upsilon}_{\mathit{UB}}(s^{\mathit{init}}_{A},b^{\mathit{init}}_{E})-V^{\Gamma}_{\mathit{LB}}(s^{\mathit{init}}_{A},b^{\mathit{init}}_{E})\leq\varepsilon\] which completes the proof.

**Pruning.** We apply the following pruning to speed up Algorithm 3. First, a new \(\alpha\)-function \(\alpha^{\star}\) is added to \(\Gamma\) at belief \((s_{A},b_{E})\) in each update only if \(\alpha^{\star}\) strictly improves the value at \((s_{A},b_{E})\), i.e., \(\langle\alpha^{\star},(s_{A},b_{E})\rangle>V_{LB}^{\Gamma}(s_{A},b_{E})\). This leads to fewer \(\alpha\)-functions in \(\Gamma\) without changing convergence, and thus faster lower bound computation. Second, for the heuristic search, since the action \(\hat{a}\) (line 6) maximizing the upper bound backup may not be unique and different \(\hat{a}\) could result in different maximum gaps (line 8), we keep all maximizers and select the pair \((\hat{a},\hat{s}_{A})\) with the largest gap. We find this new excess heuristic to be empirically superior, as it tends to reduce the uncertainty the most.

**Convergence.** Each belief update of Algorithm 3 is focused on a single belief, and therefore the number of iterations can be higher than for value iteration; on the other hand, each iteration is cheaper to perform. In the finite-state case, an upper bound on the number of HSVI iterations required can be calculated [33, Theorem 6.8]. However, such an analysis would be difficult in our setting, as the number of points to update depends on the initial beliefs and on which beliefs are updated at each iteration, and varies as the algorithm progresses.

### Two Belief Representations

An implementation of the NS-HSVI algorithm crucially depends on the representations of beliefs, as a closed form is needed when computing the belief \(b_{E}^{s_{A},a,s_{A}^{\prime}}\), expected values \(\langle\alpha,(s_{A},b_{E})\rangle\) and \(\langle R_{a},(s_{A},b_{E})\rangle\), probability \(P(s_{A}^{\prime}\mid(s_{A},b_{E}),a)\) and upper bound \(V_{UB}^{\Upsilon}(s_{A},b_{E})\). We first consider the popular particle-based belief representation and then propose a region-based belief representation to overcome the problem of requiring many particles to converge in the particle-based representation [34].

**Particle-based beliefs.** Particle-based representations have been widely used in applications from computer vision [35] and robotics [36; 8] to machine learning [37]. They can approximate arbitrary beliefs (given sufficient particles), handle nonlinear and non-Gaussian systems, and allow efficient computations.

**Definition 7** (Particle-based belief).: _A belief \((s_{A},b_{E})\in S_{B}\) is represented by a weighted particle set \(\{(s_{E}^{i},w_{i})\}_{i=1}^{N_{b}}\) with normalized weights if_ \[b_{E}(s_{E})=\sum_{i=1}^{N_{b}}w_{i}D(s_{E}-s_{E}^{i})\] _where \(w_{i}>0\), \(s_{E}^{i}\in S_{E}\) for all \(1\leq i\leq N_{b}\) and \(D(s_{E}-s_{E}^{i})\) is a Dirac delta function centered at \(0\).
Let \(B(s_{E})\) be a small neighborhood of \(s_{E}\), and \(P(s_{E};b_{E})=\int_{s_{E}^{\prime}\in B(s_{E})}b_{E}(s_{E}^{\prime})\mathrm{d}s_{E}^{\prime}\) be the probability of particle \(s_{E}\) under \(b_{E}\)._

Given an initial particle-based belief \((s_{A}^{\text{init}},b_{E}^{\text{init}})\), the number of states reachable in any finite number of steps is finite, and therefore standard methods for finite-state POMDPs can be used to solve the resulting finite-state game tree, similarly to [22] under fully-observable strategies. However, the size of the game tree can increase exponentially as the number of steps increases, particularly given that the reachable states are likely to be distinct due to the continuous-state space. To implement NS-HSVI given in Algorithm 3 using particle-based beliefs, we must demonstrate that \(V_{\mathit{LB}}^{\Gamma}\) and \(V_{\mathit{UB}}^{\Upsilon}\) are eligible representations [8] for particle-based beliefs, that is, there are closed forms for the quantities of interest. For a particle-based belief \((s_{A},b_{E})\) with weighted particle set \(\{(s_{E}^{i},w_{i})\}_{i=1}^{N_{b}}\), it follows from (4) that for belief \(b_{E}^{s_{A},a,s_{A}^{\prime}}\) we have, for any \(s_{E}^{\prime}\in S_{E}\), \(b_{E}^{s_{A},a,s_{A}^{\prime}}(s_{E}^{\prime})\) equals: \[\frac{\sum_{i=1}^{N_{b}}w_{i}\delta_{E}(s_{E}^{i},a)(s_{E}^{\prime})}{\sum_{i=1}^{N_{b}}w_{i}\sum_{s_{E}^{\prime\prime}\in S_{E}^{s_{A}^{\prime}}}\delta_{E}(s_{E}^{i},a)(s_{E}^{\prime\prime})}\text{ if }s_{E}^{\prime}\in S_{E}^{s_{A}^{\prime}}\text{ and equals }0\text{ otherwise.} \tag{16}\] Similarly, we can compute \(\langle\alpha,(s_{A},b_{E})\rangle\), \(\langle R_{a},(s_{A},b_{E})\rangle\) and \(P(s_{A}^{\prime}\mid(s_{A},b_{E}),a)\) as simple summations. It remains to compute \(V_{\mathit{UB}}^{\Upsilon}\) in (9), which we achieve by designing a function \(K_{\mathit{UB}}\) that measures belief differences and satisfies (10). However, (10) is hard to check as, for beliefs \(b_{E}\) and \(b_{E}^{\prime}\), calculating \(K(b_{E},b_{E}^{\prime})\) involves the integral over the region \(S_{E}^{b_{E}>b_{E}^{\prime}}\). For particle-based beliefs, we propose the function \(K_{\mathit{UB}}\) where: \[K_{\mathit{UB}}(b_{E},b_{E}^{\prime})=(U-L)N_{b}\max_{s_{E}\in S_{E}\wedge b_{E}(s_{E})>0}|P(s_{E};b_{E})-P(s_{E};b_{E}^{\prime})| \tag{17}\] where \(N_{b}\) is the number of particles in \(b_{E}\). This function is shown to satisfy (10) and, given \(\Upsilon=\{((s_{A}^{i},b_{E}^{i}),y_{i})\mid i\in I\}\), the upper bound can be computed by solving a linear program (LP), as demonstrated by the following lemma.

**Lemma 5** (LP for upper bound).: _The function \(K_{\mathit{UB}}\) from (17) satisfies (10), and for particle-based belief \((s_{A},b_{E})\) represented by \(\{(s_{E}^{i},w_{i})\}_{i=1}^{N_{b}}\), we have that \(V_{\mathit{UB}}^{\Upsilon}(s_{A},b_{E})\) is the optimal value of the LP:_ \begin{tabular}{l l} minimize: & \(\sum_{k\in I_{s_{A}}}\lambda_{k}y_{k}+(U-L)N_{b}c\) \\ subject to: & \(c\geq|w_{i}-\sum_{k\in I_{s_{A}}}\lambda_{k}P(s_{E}^{i};b_{E}^{k})|\) for \(1\leq i\leq N_{b}\) \\ & \(\lambda_{k}\geq 0\text{ for }k\in I_{s_{A}}\text{ and }\sum_{k\in I_{s_{A}}}\lambda_{k}=1\,.\) \\ \end{tabular}

Since all quantities of interest in Algorithm 3 are computed exactly, the convergence guarantee in Theorem 3 holds for any initial particle-based belief.
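The quantities above all admit direct implementations. The following sketch, which is illustrative rather than the paper's prototype, implements the particle update (16) and the LP of Lemma 5, using `scipy.optimize.linprog` in place of the Gurobi solver used in the implementation; the toy transition model, belief points and bounds are assumptions, and all stored points are assumed to share the agent state of the queried belief (the index set \(I_{s_{A}}\)).

```python
# Sketch of the particle-based belief update (16) and the LP of Lemma 5;
# uses scipy's linprog instead of Gurobi. Numbers are illustrative only.
import numpy as np
from scipy.optimize import linprog

def update_particles(particles, act, delta_E, S_E_obs):
    """Belief update (16): particles = [(s_E, w)], delta_E(s_E, a) ->
    {s_E': prob} (finite branching), S_E_obs(s_E') -> True iff s_E' is
    consistent with the observed agent state s'_A."""
    new = {}
    for s_E, w in particles:
        for s_next, p in delta_E(s_E, act).items():
            if p > 0 and S_E_obs(s_next):
                new[s_next] = new.get(s_next, 0.0) + w * p
    total = sum(new.values())
    return [(s, w / total) for s, w in new.items()] if total > 0 else []

def upper_bound_lp(particles, points, U, L):
    """LP of Lemma 5: particles = [(s_E, w)] for the queried belief,
    points = [(prob_fn, y)] where prob_fn(s_E) = P(s_E; b_E^k) and y is
    the stored upper-bound value of that belief point (same agent state)."""
    N_b, K = len(particles), len(points)
    w = np.array([wi for _, wi in particles])
    P = np.array([[pf(s_E) for pf, _ in points] for s_E, _ in particles])
    y = np.array([yk for _, yk in points])
    cost = np.concatenate([y, [(U - L) * N_b]])       # vars: lambdas, c
    # |w_i - sum_k lambda_k P_ik| <= c  ->  two inequalities per particle
    A_ub = np.vstack([np.hstack([-P, -np.ones((N_b, 1))]),
                      np.hstack([P, -np.ones((N_b, 1))])])
    b_ub = np.concatenate([-w, w])
    A_eq = np.array([[1.0] * K + [0.0]])              # sum of lambdas = 1
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (K + 1))
    return res.fun

# Toy usage: two stored belief points over three particle locations.
pts = [(lambda s: {0: 0.6, 1: 0.4}.get(s, 0.0), 80.0),
       (lambda s: {1: 0.5, 2: 0.5}.get(s, 0.0), 60.0)]
belief = [(0, 0.3), (1, 0.5), (2, 0.2)]
print(upper_bound_lp(belief, pts, U=100.0, L=0.0))
```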
**Region-based beliefs.** Particle filter approaches [35] are required to approximate the updated belief of particle-based representations if the current belief has zero weight at the true state due to partial observations and random perturbations. However, for NS-POMDPs the usual sampling importance re-sampling (SIR) approach [38] requires many particles, which can be computationally expensive. Therefore, we propose a new belief representation using _regions_ of the continuous state space and show that it performs well empirically in handling the uncertainties. For any connected subset (region) \(\phi_{E}\subseteq S_{E}\), let \(\operatorname{vol}(\phi_{E})=\int_{s_{E}\in\phi_{E}}\mathrm{d}s_{E}\). **Definition 8** (Region-based belief).: _A belief \((s_{A},b_{E})\in S_{B}\) is represented by a weighted region set \(\{(\phi_{E}^{i},w_{i})\}_{i=1}^{N_{b}}\) if \(b_{E}(s_{E})=\sum_{i=1}^{N_{b}}\chi_{\phi_{E}^{i}}(s_{E})w_{i}\) where \(w_{i}>0\), \(\phi_{E}^{i}\) is a region of \(S_{E}^{s_{A}}\) and \(\chi_{\phi_{E}^{i}}:S_{E}\to\mathbb{R}\) is such that \(\chi_{\phi_{E}^{i}}(s_{E})=1\) if \(s_{E}\in\phi_{E}^{i}\) and \(0\) otherwise for \(1\leq i\leq N_{b}\), and \(\sum_{i=1}^{N_{b}}w_{i}\mathrm{vol}(\phi_{E}^{i})=1\)._ In the case of region-based beliefs, finite-state POMDPs are not applicable even when approximating by finding all reachable states up to some finite depth, as from an initial (region-based) belief this would yield infinitely many reachable states. Region-based beliefs assume a uniform distribution over each region and allow the regions to overlap. Ensuring that belief updates of region-based beliefs result in region-based beliefs is difficult [39], as even simple transitions of variables with simple distributions can lead to complex distributions. Assumption 1 only ensures a finite partitioning of the state space for the transitions, but not that the updated belief places a uniform distribution over each region. We now provide conditions on the deterministic continuous components \(\delta_{E}^{i}\), see Assumption 1, of the environment transition function, under which region-based beliefs are closed. **Lemma 6** (Region-based belief closure).: _If \(\delta_{E}^{i}(\cdot,a):S_{E}\to\delta_{E}^{i}(S_{E},a)\) is piecewise differentiable and invertible and the Jacobian determinant of the inverse function is PWC for any \(a\in\text{Act}\) and \(1\leq i\leq N_{e}\), then region-based beliefs are closed under belief updates._ We next present an implementation of NS-HSVI using region-based beliefs for environment transition functions satisfying Lemma 6. For a region-based belief \((s_{A},b_{E})\), Algorithm 4 computes the belief update as the image of each region, dividing the images by perception functions into regions of \(S_{E}\), updating weights and selecting the regions with desired observations. The region-based belief update and expected values are summarised in Lemma 7. **Lemma 7** (Region-based belief update): _For region-based belief \((s_{A},b_{E})\) represented by \(\{(\phi_{E}^{i},w_{i})\}_{i=1}^{N_{b}}\), action \(a\) and observation \(s^{\prime}_{A}\): \((s^{\prime}_{A},b^{\prime}_{E})\) returned by Algorithm 4 is region-based and \(b^{\prime}_{E}=b_{E}^{s_{A},a,s^{\prime}_{A}}\). 
Furthermore, if \(h:S\rightarrow\mathbb{R}\) is PWC and \(\Phi_{E}\) is a constant-FCP of \(S_{E}\) for \(h\) at \(s_{A}\), then \(\langle h,(s_{A},b_{E})\rangle=\sum_{i=1}^{N_{b}}\sum_{\phi_{E}\in\Phi_{E}}h(s _{A},s_{E})w_{i}\mathrm{vol}(\phi_{E}^{i}\cap\phi_{E})\) where \(s_{E}\in\phi_{E}\)._ For the upper bound \(V_{UB}^{\Upsilon}\), the function \(K_{\mathit{UB}}\) has to compare beliefs over regions. We let \(K_{\mathit{UB}}=K\), and thus (10) holds. Instead of a computationally expensive exact bound, which involves a large number of region intersections, Algorithm 5 is approximate, based on maximum densities, and involves solving an LP. **Lemma 8** (Region-based upper bound): _For region-based belief \((s_{A},b_{E})\) represented by \(\{(\phi_{E}^{i},w_{i})\}_{i=1}^{N_{b}}\) and \(\Upsilon=\{((s_{A}^{k},b_{E}^{k}),y_{k})\mid k\in I\}\), if \(K_{\mathit{UB}}=K\), \((\phi_{E}^{\max},p)\) is returned by Algorithm 5, \(b^{\prime}_{E}=\sum_{k\in I_{s_{A}}}\lambda_{k}^{\star}b_{E}^{k}\) and \(\phi_{E}^{\max}\subseteq S_{E}^{b_{E}>b^{\prime}_{E}}\) where \(\lambda_{k}^{\star}\) is a solution to the LP of Algorithm 5, then \(p\) is an upper bound of \(V_{\mathit{UB}}^{\Upsilon}\) at \((s_{A},b_{E})\). Furthermore, if \(N_{b}=1\), then \(p=V_{\mathit{UB}}^{\Upsilon}(s_{A},b_{E})\)._ ``` 0:\((s_{A},b_{E})\) represented by \(\{(\phi_{E}^{i},w_{i})\}_{i=1}^{N_{b}}\), \(\Upsilon=\{((s_{A}^{k},b_{E}^{k}),y_{k})\mid k\in I\}\) 1:\(I_{b}\leftarrow\operatorname*{argmax}_{I_{b}\subseteq\{1,\ldots,N_{b}\}}\sum_{i \in I_{b}}w_{i}\) subject to: \(\cap_{i\in I_{b}}\phi_{E}^{i}\neq\varnothing\) 2:\(\phi_{E}^{\max}\leftarrow\cap_{i\in I_{b}}\phi_{E}^{i}\)\(\triangleright\) Maximum density 3:\(p=\operatorname*{minimize}\sum_{k\in I_{s_{A}}}\lambda_{k}y_{k}+(U-L)c\) 4: subject to: \(c\geq 1-\sum_{k\in I_{s_{A}}}\sum_{j=1}^{N_{b}^{k}}\lambda_{k}w_{kj} \mathrm{vol}(\phi_{E}^{kj}\cap\phi_{E}^{\max})\), \(\lambda_{k}\geq 0,\ \sum_{k\in I_{s_{A}}}\lambda_{k}=1\) 5:return:\((\phi_{E}^{\max},p)\) ``` **Algorithm 5** Approximate region-based upper bound via maximum density ## 6 Implementation and Experimental Evaluation In this section, we present a prototype implementation and experimental evaluation of our NS-HSVI algorithm for solution and optimal strategy synthesis on NS-POMDPs. We first summarise the details of the experimental setup, then discuss the results of two case studies, and conclude the section by discussing performance comparison. ### Implementation Overview We have developed a prototype Python implementation using the Parma Polyhedra Library [40] to build and operate over perception FCP representations of preimages of NNs, \(\alpha\)-functions and reward structures. We recall that both \(\alpha\)-functions and reward functions are piecewise constant over the continuous environment. They can thus be represented by subdividing the entire environment into _regions_, namely polyhedra over the continuous variables to which we associate a value. We remark that, since our method crucially depends on the states in a given region, and those in the subregions arising from subsequent refinements, being mapped to the same percept, arbitrary discretisation is not applicable. We use \(h\)-representations, which describe polyhedra through linear constraints for intersecting finite half-spaces. Upper bound computation is performed by solving LPs with Gurobi [41]. To sample points with polyhedra, we use the SMT solver Z3 [42]. 
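As a toy illustration of the region-based machinery (Definition 8 and Lemmas 6-7) behind this implementation, the sketch below restricts regions to axis-aligned boxes and the environment map to a pure translation, whose inverse is differentiable with Jacobian determinant 1, so the closure condition of Lemma 6 holds trivially; the actual implementation instead manipulates general polyhedra in h-representation via the Parma Polyhedra Library, and all boxes and values here are hypothetical.

```python
# Toy sketch of a region-based belief (Definition 8) with axis-aligned
# boxes as regions and a translation as the environment map; densities
# are preserved under translation (inverse has Jacobian determinant 1).

def volume(box):
    (xlo, xhi), (ylo, yhi) = box
    return max(0.0, xhi - xlo) * max(0.0, yhi - ylo)

def intersect(box_a, box_b):
    (axl, axh), (ayl, ayh) = box_a
    (bxl, bxh), (byl, byh) = box_b
    box = ((max(axl, bxl), min(axh, bxh)), (max(ayl, byl), min(ayh, byh)))
    return box if volume(box) > 0 else None

def translate(box, dx, dy):
    (xlo, xhi), (ylo, yhi) = box
    return ((xlo + dx, xhi + dx), (ylo + dy, yhi + dy))

def update_region_belief(regions, dx, dy, percept_regions):
    """regions = [(box, w)] with sum_i w_i * vol(box_i) = 1: apply the
    translation, keep the parts consistent with the observed percept
    (given as boxes), then renormalise the densities."""
    pieces = []
    for box, w in regions:
        image = translate(box, dx, dy)
        for obs_box in percept_regions:
            part = intersect(image, obs_box)
            if part is not None:
                pieces.append((part, w))   # density preserved (Jacobian 1)
    mass = sum(w * volume(box) for box, w in pieces)
    return [(box, w / mass) for box, w in pieces]

def expectation_pwc(regions, pwc_pieces):
    """<h,(s_A,b_E)> of Lemma 7 for a PWC h given as [(box, value)]."""
    return sum(w * val * volume(intersect(box, hbox) or ((0, 0), (0, 0)))
               for box, w in regions for hbox, val in pwc_pieces)

# Hypothetical usage: a uniform belief over [0,1]x[0,1] moved one cell up,
# observed percept covering [0,2]x[1,2], evaluated against a PWC reward.
b0 = [(((0.0, 1.0), (0.0, 1.0)), 1.0)]
b1 = update_region_belief(b0, 0.0, 1.0, [((0.0, 2.0), (1.0, 2.0))])
reward = [(((0.0, 2.0), (1.5, 2.0)), 10.0)]
print(expectation_pwc(b1, reward))   # half the mass lies in the reward box -> 5.0
```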
We use the method of [30] to compute the (exact) preimage of piecewise linear NNs, which iterates backwards through the layers. This method is only applicable when the NN has piecewise linear decision boundaries, for which the basic building blocks are polytopes. This includes NNs with ReLU or linear layers, but can also be applied to approximations of NNs obtained via, for example, linear relaxation. With this pre-image, we then construct a polyhedral representation of the environment space corresponding to the perception FCP. Regarding boundary points, we order regions and then assign boundary points to the region with the highest order, resolving ties via a measurable rule.

### Car Parking Case Study

The first case study is the dynamic vehicle parking problem from Example 1, which we extend both with obstacles and to a larger environment. We were able to compute optimal strategies that lead the vehicle to the parking spot while avoiding obstacles (if present).

\(4{\times}4\) **environment.** To extend this example to the case when there is an obstacle region \(\mathcal{R}_{O}=\{(x,y)\in\mathbb{R}^{2}\ |\ 1\leq x,y\leq 2\}\), see Fig. 3 (left), the state reward function changes such that for any \((s_{A},(x,y))\in S\): \(r_{S}(s_{A},(x,y))=1000\) if \((x,y)\in\mathcal{R}_{P}\), \(-1000\) if \((x,y)\in\mathcal{R}_{O}\) and \(0\) otherwise, i.e., there is a negative reward if the vehicle hits the obstacle. The accuracy \(\varepsilon\) is \(10^{-3}\).

Figure 3: Car parking with obstacles.

**Strategy synthesis (\(4{\times}4\)).** Fig. 4 presents paths (\(\pi_{1}\), \(\pi_{2}\) and \(\pi_{3}\)) for synthesised strategies starting from three particles in a given initial belief in two different scenarios, as well as the corresponding lower bound values for different regions of the environment. It also shows (on the right) the lower and upper bound values computed for the initial belief at each iteration. In both cases, there is an obstacle highlighted with a black border. We consider strategies for when the reward associated with a collision is defined as in the reward structure in the model's description, i.e., \(r_{S}(s_{A},(x,y))=-1000\) if \((x,y)\in\mathcal{R}_{O}\) (Fig. 4 top), and when that penalty is increased to \(-5000\) (Fig. 4 bottom). We assume a uniform distribution over the points in the initial belief. We see that, when the negative reward of a collision with the obstacle is increased, Fig. 4 (bottom), all the generated paths avoid the cell with the obstacle. We also see that, in the first step, the action chosen is to move _left_; while this is possible for path \(\pi_{1}\) (red), taking that action from the other two initial belief points would take the agent out of the environment, in which case the agent would not move. For the scenario with the original reward structure, Fig. 4 (top), since the negative reward associated with a collision with the obstacle is lower, we see that such a reward can be compensated for by the agent afterwards, i.e., it can choose to move upwards from all points in the initial belief, resulting in a possibly unsafe strategy where a collision could happen. Similarly, Fig. 5 shows values and strategies computed for the same scenario when considering a region-based belief. The regions reached from the initial position until arriving at the parking spot are indicated in orange, with the current state labelled by x.
The lower and upper bound values at each iteration are shown on the right-hand side, and the convergence demonstrates that the approximate upper bound for the region-based beliefs is tight if the belief has a unique region (see Lemma 8). We notice that the synthesised strategy avoids the obstacle while also reaching the parking spot with the least number of possible steps, maximising the agent's reward.

Figure 4: Paths and values for car parking (obstacle indicated with black border, \(\beta=0.8\), collision rewards equal to \(-1000\) (top) and \(-5000\) (bottom)).

Figure 5: Region-based paths and values for car parking with the obstacle, \(\beta=0.8\).

Fig. 6 illustrates how computation progresses for Algorithm 3. Initially, we have an \(\alpha\)-function for each local state whose underlying structure is the same as the perception FCP (see Fig. 1 right), with all regions initialised with the lower bound as described in Section 5.1. With each iteration, we refine the representation for the regions containing visited points and update their values. The figure shows the initial representation (left) and the maximum (over all local states) of the first 5, 25, and finally all the generated \(\alpha\)-functions, coinciding then with the values presented in Fig. 4 (bottom). We can see how the values for the regions progressively increase as the computation proceeds (top row, left to right), as well as how the subsequent representations are refinements of the initial FCP (bottom row).

8\(\times\)8 **environment.** We consider a larger 8\(\times\)8 environment \(\mathcal{R}=\{(x,y)\in\mathbb{R}^{2}\mid 0\leq x,y\leq 8\}\) with 4 obstacles \(\mathcal{R}_{O}\) (Fig. 3, right). In this model the parking spot is given by \(\mathcal{R}_{P}=\{(x,y)\in\mathbb{R}^{2}\mid 6\leq x\leq 8\wedge 7\leq y\leq 8\}\), and the same reward structure is considered. To extend the NS-POMDP from Example 1 to this setting, the following changes to the components \(S_{A}\), \(S_{E}\), \(\Delta_{A}\) and \(\mathit{obs}_{A}\) need to be made:

* \(S_{A}=\mathit{Loc}\times\mathit{Per}\) with 5 trust levels \(\mathit{Loc}=\{1,\ldots,5\}\) and 64 abstract grid points \(\mathit{Per}=\{1,\ldots,64\}\) (percepts), which are ordered in the same way as in Table 1;
* \(S_{E}=\mathcal{R}=\{(x,y)\in\mathbb{R}^{2}\mid 0\leq x,y\leq 8\}\);
* \(\Delta_{A}(\mathit{tr},\mathit{per})=\mathit{Act}\) if \(\mathit{per}\in\{63,64\}\) and \(\Delta_{A}(\mathit{tr},\mathit{per})=\{\mathit{up},\mathit{down},\mathit{left},\mathit{right}\}\) otherwise, for all \(\mathit{tr}\in\mathit{Loc}\) and \(\mathit{per}\in\mathit{Per}\);
* \(\mathit{obs}_{A}(\mathit{tr},(x,y))=\mathrm{argmax}(f(x,y))\), where \(f\) is implemented via a feed-forward NN with one hidden ReLU layer of 15 neurons that takes the coordinate vector of the vehicle as input and outputs scores over the 64 abstract grid points (see the sketch below).
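For illustration, a perception function of the shape just described (argmax over the outputs of a one-hidden-layer ReLU network mapping the vehicle's coordinates to percept scores) can be sketched as follows; the weights are random placeholders rather than the trained network used in the case study.

```python
# Sketch of a percept function obs_A(tr, (x, y)) = argmax(f(x, y)) with a
# one-hidden-layer ReLU network; the weights below are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, N_PERCEPTS = 15, 64          # 15 hidden units, 64 abstract grid points
W1 = rng.normal(size=(HIDDEN, 2))
b1 = rng.normal(size=HIDDEN)
W2 = rng.normal(size=(N_PERCEPTS, HIDDEN))
b2 = rng.normal(size=N_PERCEPTS)

def f(x, y):
    """Scores for the 64 percepts; piecewise linear in (x, y)."""
    h = np.maximum(0.0, W1 @ np.array([x, y]) + b1)   # ReLU layer
    return W2 @ h + b2

def obs_A(tr, x, y):
    """Percept = argmax of the scores; argmax returns the smallest index
    on ties, matching the smallest-label convention for boundary points."""
    return int(np.argmax(f(x, y))) + 1                # percepts labelled 1..64

print(obs_A(1, 2.5, 6.3))
```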
**Strategy synthesis (\(8\times 8\)).** Fig. 8 (left) shows the perception FCP for the \(8\times 8\) environment. For this extended model, Fig. 7 (left) presents the paths from the three particles in the initial belief for the synthesised strategy, as well as lower bound values for the regions of the environment. As the figure demonstrates, the vehicle is able to reach the parking spot while avoiding the obstacles. As the full set of \(\alpha\)-functions is large (see Table 4), to reduce computational effort we show approximate values obtained by maximizing over a set of sampled \(\alpha\)-functions. Fig. 7 (right) shows how the lower and upper bound values for the initial belief change as the number of iterations of the NS-HSVI algorithm increases.

Figure 6: Values (top) and region outlines (bottom) for the initial and the maximum (over all local states) of the first 5, 25 and all the generated \(\alpha\)-functions (respectively from left to right) for the \(4\times 4\) car parking example with obstacle, \(\beta=0.8\).

Figure 7: Paths and values for car parking (8\(\times\)8, \(\beta=0.8\), partially reconstructed).

Figure 8: Perception FCP for car parking (8\(\times\)8), and a slice of the perception FCP for the COC advisory of the VCAS (\(h\) scaled 10:1).

### VCAS Case Study

In this case study there are two commercial aircraft: an ownship aircraft equipped with an NN-controlled vertical collision avoidance system (VCAS) and an intruder aircraft. Each second, the avoidance system gives a vertical climb-acceleration advisory _ad_ to the pilot of the ownship to avoid near mid-air collisions (NMACs), which occur when the aircraft are separated by less than 100 ft vertically and 500 ft horizontally. The avoidance system extends the classical VCAS [29], both by adding trust to measure uncertainty and by allowing for deviations from the advisories arising from the additional belief information. Unlike in the VCAS model of [29], we allow the intruder to have a non-zero constant climb-rate. We were able to compute optimal strategies that safely guide the ownship by avoiding the collision zone.

**VCAS as an NS-POMDP.** The input to VCAS is a tuple \((h,\dot{h}_{A},t)\), where \(h\) is the relative altitude of the two aircraft, \(\dot{h}_{A}\) the climb rate of the ownship, and \(t\) the time until the loss of horizontal separation between the aircraft. VCAS is implemented via nine feed-forward NNs, each of which outputs the scores of nine possible advisories, see Table 2. Each advisory provides a set of acceleration values and the ownship then either accelerates at one of these values or does not accelerate. Each NN of VCAS has one hidden ReLU layer with 16 neurons, and therefore the regions in its pre-image are polytopes. If we had instead considered HorizontalCAS [43], its nonlinear environment transition function would map polytopes to non-polytopes and destroy our finite representations. We model VCAS as an NS-POMDP in which the agent \(\mathsf{Ag}\) is the ownship. The agent has four trust levels \(\{1,\ldots,4\}\), which represent the trust it has in the previous advisory. These levels increase if the executed action is compliant with the current advisory, and decrease with probability 0.5 otherwise. A local state of the agent is of the form \((\mathit{ad}_{\mathit{pre}},\mathit{tr})\), consisting of the previous advisory and the trust level, and the percept of the agent is the current VCAS advisory. An environment state is a tuple \((h,\dot{h}_{A},t)\) corresponding to the input of VCAS.
Formally, we have:

* \(S_{A}=\mathit{Loc}\times\mathit{Per}\) with \(\mathit{Loc}=\{1,\ldots,9\}\times\{1,\ldots,4\}\) and \(\mathit{Per}=\{1,\ldots,9\}\);
* \(S_{E}=[-2000,2000]\times[-50,50]\times[0,20]\);
* \(\mathit{Act}=\{0,\pm 3.0,\pm 7.33,\pm 8.33,\pm 9.33,\pm 9.7,\pm 10.7,\pm 11.7\}\);
* \(\Delta_{A}(\mathit{loc},\mathit{per})=\mathit{Act}\) for all \(\mathit{loc}\in\mathit{Loc}\) and \(\mathit{per}\in\mathit{Per}\);
* \(\mathit{obs}_{A}((\mathit{ad}_{\mathit{pre}},\mathit{tr}),s_{E})=\mathrm{argmax}(f_{\mathit{ad}_{\mathit{pre}}}(s_{E}))\), where \(f_{\mathit{ad}_{\mathit{pre}}}\) is the NN associated with the previous advisory \(\mathit{ad}_{\mathit{pre}}\) and a boundary point is resolved by assigning the advisory with the smallest label in Table 2;
* for \(s_{A}=((\mathit{ad}_{\mathit{pre}},\mathit{tr}),\mathit{ad})\in S_{A}\), \((\mathit{ad}^{\prime},\mathit{tr}^{\prime})\in\mathit{Loc}\) and \(a\in\mathit{Act}\), if \(a\) is compliant with \(\mathit{ad}\) (see Table 2), then: \[\delta_{A}(s_{A},a)((\mathit{ad}^{\prime},\mathit{tr}^{\prime}))=\left\{\begin{array}{rl}1&\text{if }(\mathit{tr}\leq 3)\land(\mathit{tr}^{\prime}=\mathit{tr}+1)\land(\mathit{ad}^{\prime}=\mathit{ad})\\ 1&\text{if }(\mathit{tr}=4)\land(\mathit{tr}^{\prime}=\mathit{tr})\land(\mathit{ad}^{\prime}=\mathit{ad})\\ 0&\text{otherwise}\end{array}\right.\] and if \(a\) is not compliant with \(\mathit{ad}\), then: \[\delta_{A}(s_{A},a)((\mathit{ad}^{\prime},\mathit{tr}^{\prime}))=\left\{\begin{array}{rl}0.5&\text{if }(\mathit{tr}\geq 2)\land(\mathit{tr}^{\prime}=\mathit{tr}-1)\land(\mathit{ad}^{\prime}=\mathit{ad})\\ 0.5&\text{if }(\mathit{tr}\geq 2)\land(\mathit{tr}^{\prime}=\mathit{tr})\land(\mathit{ad}^{\prime}=\mathit{ad})\\ 1&\text{if }(\mathit{tr}=1)\land(\mathit{tr}^{\prime}=\mathit{tr})\land(\mathit{ad}^{\prime}=\mathit{ad})\\ 0&\text{otherwise;}\end{array}\right.\]
* for \(s_{E}=(h,\dot{h}_{A},t),s^{\prime}_{E}=(h^{\prime},\dot{h}^{\prime}_{A},t^{\prime})\in S_{E}\) and \(a\in\mathit{Act}\), if \[h^{\prime\prime}=h-\Delta t(\dot{h}_{A}-\dot{h}_{\rm int})-0.5\Delta t^{2}\ddot{h}_{A},\qquad\dot{h}^{\prime\prime}_{A}=\dot{h}_{A}+\ddot{h}_{A}\Delta t,\qquad t^{\prime\prime}=t-\Delta t\] then \[\delta_{E}(s_{E},a)(s^{\prime}_{E})=\left\{\begin{array}{ll}1&\mbox{if }(h^{\prime\prime},\dot{h}^{\prime\prime}_{A},t^{\prime\prime})\in S_{E}\mbox{ and }s^{\prime}_{E}=(h^{\prime\prime},\dot{h}^{\prime\prime}_{A},t^{\prime\prime})\\ 1&\mbox{if }(h^{\prime\prime},\dot{h}^{\prime\prime}_{A},t^{\prime\prime})\not\in S_{E}\mbox{ and }s^{\prime}_{E}=s_{E}\\ 0&\mbox{otherwise}\end{array}\right.\] where \(\Delta t=1.0\) is the time step, \(\ddot{h}_{A}=a\) is the selected climb-acceleration, and the intruder is assumed to have a constant climb-rate \(\dot{h}_{\rm int}=30\).

In the reward structure we consider, all action rewards are zero and the state reward function is such that for any \(s\in S\): \(r_{S}(s)=-1000\) if \(t\in[0,1]\wedge h\in[-100,100]\) and \(0\) otherwise, i.e., there is a negative reward if the altitudes of the aircraft are within 100 ft of each other at time 0 or 1. The accuracy \(\varepsilon\) is \(10^{-1}\).

**Strategy synthesis.** To compute the perception FCP \(\Phi_{P}\), i.e., the preimages of the NNs for this case study, we first trained these NNs. This involved computing an MDP table policy using local approximate value iteration, reformatting this into training data and training the NNs [44]. To generate the pre-images, we adapted the method of [30], which was used to compute exact pre-images for the NNs of HorizontalCAS [43]. For example, the pre-image for the COC (Clear of Conflict) advisory is shown in Fig. 8 (right): for the environment states in the green region, given the small values of \(h\) and \(t\), VCAS next issues the advisory DES1500 (Descend at least 1500 ft/min) to avoid an NMAC.
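A minimal sketch of one step of the environment dynamics \(\delta_{E}\) defined above is given below; the bounds of \(S_{E}\), \(\Delta t=1\) and \(\dot{h}_{\rm int}=30\) are taken from the model description, and the acceleration argument is assumed to be whichever value the pilot selects from the advised set.

```python
# Sketch of one step of the VCAS environment dynamics delta_E described
# above: state (h, hdot_A, t), ownship acceleration hddot_A chosen from the
# advised set, constant intruder climb-rate; if the successor leaves S_E,
# the state is left unchanged.

DT = 1.0          # time step
HDOT_INT = 30.0   # intruder climb-rate
H_RANGE = (-2000.0, 2000.0)
HDOT_RANGE = (-50.0, 50.0)
T_RANGE = (0.0, 20.0)

def in_S_E(h, hdot_A, t):
    return (H_RANGE[0] <= h <= H_RANGE[1]
            and HDOT_RANGE[0] <= hdot_A <= HDOT_RANGE[1]
            and T_RANGE[0] <= t <= T_RANGE[1])

def delta_E(h, hdot_A, t, hddot_A):
    """Deterministic successor of (h, hdot_A, t) under acceleration hddot_A."""
    h_next = h - DT * (hdot_A - HDOT_INT) - 0.5 * DT ** 2 * hddot_A
    hdot_next = hdot_A + hddot_A * DT
    t_next = t - DT
    if in_S_E(h_next, hdot_next, t_next):
        return (h_next, hdot_next, t_next)
    return (h, hdot_A, t)

# Example: relative altitude 400 ft, ownship climbing at 10 ft/s, 12 s to
# loss of horizontal separation, acceleration -7.33 ft/s^2 (DES1500 set).
print(delta_E(400.0, 10.0, 12.0, -7.33))
```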
For example, the pre-image for the COC (Clear of Conflict) advisory is shown in Fig. 8 (right), which shows VCAS next issuing the advisory DES1500 (Descend at least 1500 ft/min) for the environment states in the green region to avoid an NMAC given the small values of \(h\) and \(t\).

\begin{table} \begin{tabular}{|c|l|l|c|} \hline Label & Advisory & Description & Actions \\ \((ad_{i})\) & & & ft/s\({}^{2}\) \\ \hline \hline 1 & COC & Clear of Conflict & \(-3\), \(0\), \(+3\) \\ 2 & DNC & Do Not Climb & \(-9.33\), \(-8.33\), \(-7.33\) \\ 3 & DND & Do Not Descend & \(+7.33\), \(+8.33\), \(+9.33\) \\ 4 & DES1500 & Descend at least 1500 ft/min & \(-9.33\), \(-8.33\), \(-7.33\) \\ 5 & CL1500 & Climb at least 1500 ft/min & \(+7.33\), \(+8.33\), \(+9.33\) \\ 6 & SDES1500 & Strengthen Descend to at least 1500 ft/min & \(-11.7\), \(-10.7\), \(-9.7\) \\ 7 & SCL1500 & Strengthen Climb to at least 1500 ft/min & \(+9.7\), \(+10.7\), \(+11.7\) \\ 8 & SDES2500 & Strengthen Descend to at least 2500 ft/min & \(-11.7\), \(-10.7\), \(-9.7\) \\ 9 & SCL2500 & Strengthen Climb to at least 2500 ft/min & \(+9.7\), \(+10.7\), \(+11.7\) \\ \hline \end{tabular} \end{table} Table 2: Available/suggested actions for each advisory of VCAS [4].

Fig. 9 shows the paths from the four particles in the initial belief of a synthesized strategy for the VCAS case study. For the particles that would reach the collision zone at time 0 or 1 (coloured green in Fig. 9), there is a course correction that enables the ownship to narrowly escape a collision.

Figure 9: Paths from synthesised safe strategies for VCAS (\(h\) scaled 5:1).

### Performance Analysis

To conclude the experimental analysis, we first discuss the performance of the implementation based on the statistics for two case studies, and then compare the performance of particle-based and region-based beliefs, and against SARSOP.

**Experimental results.** The experimental results reported in this section were generated on a 2.10GHz Intel Xeon Gold. Our NS-HSVI implementation is able to compute values and strategies for particle-based and region-based instances of the models we considered in less than 1 hour (Table 3). In the table, we report the model we consider, the belief type, the number of initial points or regions, the discount factor (\(\beta\)), the number of updated points or the volume of the updated regions (depending on the belief type), and the overall number of iterations of Algorithm 3 as well as the time taken until convergence. We found that the branching factor of the environment transition function, the number of agent states and actions, and the number of polyhedra in the perception FCP \(\Phi_{P}\) can all have a significant impact on the computation time. Table 3 shows that computation for region-based beliefs normally takes longer because the number of regions of the perception FCP \(\Phi_{P}\) over which the algorithm puts positive probabilities is usually larger, and thus it requires more ISPP backups. Moreover, while the update for particle-based beliefs only involves simple operations, updating region-based beliefs is far more complex due to the need for polyhedra image computations, intersections and volume calculations. Another crucial aspect is the choice of the discount factor (\(\beta\)). Fig. 10 shows how verification times vary for the different case studies as a function of that parameter. As expected, the algorithm takes longer to converge as the value of \(\beta\) increases.
The small drop in the curve for the \(8\times 8\) version of the car parking example for the lower values of \(\beta\) can be explained by the inherent nondeterminism of HSVI exploration, especially in the early stages of the computation when many regions may have the same lower and upper bounds. This may lead to the algorithm being indifferent with respect to the actions it takes, and thus constructing paths that have lower impact on the values of the initial belief. Finally, another element that impacts the running time is the choice of the initial belief and the model's dynamics. This can be especially noticed when comparing the two instances of VCAS. The beliefs for the version with 15 actions have lower values for \(t\) and are thus much closer to the boundaries of the environment, which considerably limits the number of reachable states and makes it possible for the algorithm to converge more quickly despite the higher number of actions.

Figure 10: Solution times for different discount factors (for particle-based beliefs).

Table 4 shows, for a number of instances of both case studies and for each belief type, particle-based (PB) and region-based (RB): the total number of polyhedra that make up the \(\alpha\)-functions computed, the lower and upper bounds on values for the initial belief and the time required for strategy synthesis, i.e., reading \(\alpha\)-functions, finding maximum actions and updating beliefs. We also show the compliance ratio with respect to the suggested actions as well as average trust values over 20 paths generated from the synthesised strategies. For the car parking case study (recall the accuracy is \(10^{-3}\)), in general, the more iterations that are needed for convergence, the higher the number of \(\alpha\)-functions generated and consequently the total number of regions. Strategy synthesis for region-based beliefs tends to be comparatively slower due to the complexity of the mathematical operations involved. The following ratio and average trust values are both high for this case study as the suggested actions in Table 1 are close to the optimal strategies. Regarding VCAS, the statistics in Table 4 are for the accuracy of \(10^{-1}\). The \(\alpha\)-functions generally have a large number of regions, as the perception FCP for each of the 9 NNs of VCAS has many regions, and hence many intersections. In addition, we note that, for this model, the following ratio and average trust values are low, and in fact have been omitted for the model with 3 actions. This is because (see Table 2) the number of suggested actions associated to each advisory is only a fraction of the 15 actions we considered and, for a given belief, there are many strategies that can lead to the optimal value. Recall also that it is assumed that the intruder aircraft is always climbing and the beliefs we considered were all reasonably close to the collision zone. We analysed the synthesised strategies and found that, in many cases, the agent chose actions that would at first lead to a faster descent than those suggested in Table 2, but then compensated by descending less, or not at all, at later stages. While the values of the actions differed, all strategies we observed led to the ownship lowering its altitude, which would lead to an increase of the overall height difference so as to escape a potential collision. Thus, the low following ratios do not reflect an inadequacy of the advisories.

\begin{table} \begin{tabular}{|c|c||c|c|c||c|c|c|} \hline Model & Belief type & Total regions & Lower & Upper & Strat. & Following & Avg. \\ & \#initial & (\(\alpha\)-functions) & bound & bound & time (s) & ratio & trust \\ \hline \hline \multirow{3}{*}{Car parking (no obstacles, 4\(\times\)4)} & PB, 3 & 80,494 & 2389.3309 & 2389.3333 & 19.3 & 88\% & 3.6 \\ \cline{2-9} & PB, 5 & 42,224 & 2047.9989 & 2048.0000 & 14.0 & 100\% & 3.9 \\ \cline{2-9} & RB, 1 & 36,467 & 2047.9992 & 2048.0000 & 50.0 & 100\% & 3.9 \\ \hline \multirow{3}{*}{Car parking (w/ obstacle, 4\(\times\)4)} & PB, 3 & 99,513 & 2218.6653 & 2218.6666 & 24.5 & 78\% & 3.3 \\ \cline{2-9} & PB, 5 & 47,719 & 2047.9990 & 2048.0000 & 14.2 & 100\% & 3.9 \\ \cline{2-9} & RB, 1 & 35,751 & 2047.9988 & 2048.0000 & 39.4 & 100\% & 3.9 \\ \hline \multirow{3}{*}{Car parking (w/ obstacles, 8\(\times\)8)} & PB, 3 & 1,410,799 & 343.5969 & 343.5974 & 338.9 & 85\% & 4.3 \\ \cline{2-9} & PB, 5 & 547,753 & 343.5970 & 343.5974 & 158.4 & 97\% & 4.4 \\ \cline{2-9} & RB, 1 & 550,685 & 343.5964 & 343.5974 & 473.8 & 80\% & 4.3 \\ \hline \hline \multirow{3}{*}{VCAS (3 actions)} & PB, 4 & 154,009 & -1.2281 & 0.0 & 75.3 & - & - \\ \cline{2-9} & PB, 5 & 278,447 & -1.2398 & 0.0 & 127.5 & - & - \\ \cline{2-9} & PB, 6 & 868,257 & -0.2498 & 0.0 & 400.8 & - & - \\ \cline{2-9} & RB, 1 & 22,919 & -0.0715 & 0.0 & 65.5 & - & - \\ \hline \multirow{3}{*}{VCAS (15 actions)} & PB, 4 & 32,387 & -0.6718 & 0.0 & 18.7 & 33\% & 1.3 \\ \cline{2-9} & PB, 5 & 30,003 & -0.9874 & 0.0 & 21.7 & 0\% & 1.0 \\ \cline{1-1} \cline{2-9} & PB, 6 & 19,218 & -1.0789 & 0.0 & 13.0 & 33\% & 1.3 \\ \cline{1-1} \cline{2-9} & RB, 1 & 21,102 & -0.6133 & 0.0 & 49.9 & 0\% & 1.0 \\ \hline \end{tabular} \end{table} Table 4: Further statistics for a set of NS-POMDP solution instances.

**Performance comparison.** Finally, we compare values obtained for particle-based and region-based initial beliefs where the initial region covers the particles, after they have been disturbed by shifting their position along a sampled direction. This models a realistic scenario, in which the actual initial belief differs from the initial belief used to compute offline lower and upper bound functions, for example due to measurement imprecision. For a range of disturbance sizes (the distances by which the particles are shifted), the lower bound values for the average of 100 sampled points are presented in Fig. 11. The results show that, in all cases, the region-based belief values are greater than or equal to the particle-based values, and therefore the region-based approach is more robust to disturbance (i.e., generates lower bound values closer to the optimum).
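For reference, the lower bound used in these comparisons is evaluated at a particle-based belief by taking, over the stored PWC \(\alpha\)-functions, the maximum of the weighted sum of their values at the particles. The sketch below assumes each \(\alpha\)-function is stored, for a given local state, as a list of (region, value) pairs, and that `region.contains` is a hypothetical membership test for a polyhedron; both are illustrative assumptions rather than the paper's data structures.

```python
def lower_bound_value(alpha_functions, s_A, particles):
    """Evaluate the PWC lower bound at a particle-based belief {(s_E^i, w_i)}."""
    def evaluate(alpha, s_E):
        for region, value in alpha[s_A]:
            if region.contains(s_E):
                return value
        return 0.0  # outside every stored region

    return max(
        sum(w * evaluate(alpha, s_E) for s_E, w in particles)
        for alpha in alpha_functions
    )
```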
Figure 11: Comparison between particle-based and region-based values.

As the number of reachable states for a given number of transitions from an initial particle-based belief is finite, we also compare the robustness of values obtained with our particle-based NS-HSVI and the finite-state POMDP solver SARSOP, for the \(4\times 4\) dynamic vehicle parking without obstacles in Fig. 12. For an initial particle-based belief, we build two finite-state POMDPs by unrolling the model's execution when considering 4 and 6 transitions, respectively. Note that no new distinct states can be reached for paths whose length exceeds 6 in this example, as any cell in the grid can be reached from any other cell in 6 steps. Then, we compute the value function, represented as a set of \(\alpha\)-vectors, for each finite-state POMDP with SARSOP. Using the value function, we approximate the values of beliefs disturbed by shifting as above, in which each shifted particle takes the value of the closest point in the finite-state space of the unrolled POMDP. The optimal value of each shifted belief is computed by unrolling from the shifted belief for a maximum of 6 transitions and solving the resulting finite-state POMDP with SARSOP. SARSOP performs better with respect to the computational time taken, which is understandable as SARSOP takes as input a discretised version of the model and does not operate over a continuous abstraction, as NS-HSVI does, requiring expensive operations over polyhedra. Nevertheless, the results shown in Fig. 12 demonstrate that the values achieved by strategies generated using SARSOP highly depend on how much of the model's execution we are able to construct beforehand, as the impact of missing reward-critical states with a shorter horizon can be considerable. It also shows that particle-based NS-HSVI obtains greater or equal lower bound values compared to SARSOP within a small disturbance range. This is due to the fact that, when performing the ISPP backup, we update not only the values for the visited points but also for the regions that contain them. The optimal values of the shifted beliefs indicate that the values of the particle-based NS-HSVI and SARSOP are both valid lower bounds.

Figure 12: Comparison between particle-based and SARSOP values.

## 7 Conclusions

We have introduced NS-POMDPs, the first partially observable neuro-symbolic model for an agent operating in continuous state space and perceiving the environment using NNs. Motivated by the need for safety guarantees for such systems, we focus on _optimal_ policy synthesis with discounting. By placing mild assumptions on the structure of NS-POMDPs, we are able to exploit their structure to approximate the value function from below and above using a representation of PWC \(\alpha\)-functions and belief-value induced functions. Using NS-HSVI, a variant of the classical HSVI algorithm, we synthesised optimal strategies for an agent parking a car and safe strategies for an agent using an aircraft collision avoidance system, employing the popular particle-based and novel region-based beliefs. Our main achievement is demonstrating the practicality of the methodology for small systems with realistic neural network components. To make progress in this challenging problem domain, similarly to other POMDP approaches, we initially focus on discounted objectives, and aim to later extend to the more complex undiscounted case (which is already undecidable for finite-state POMDPs).
However, as the case studies demonstrate, we can use our approach to synthesise strategies that can then be shown to be safe in terms of provably avoiding "unsafe" parts of the state space. Further work includes efficiency improvement by incorporating sampling, adapting NS-HSVI to more general perception NNs and extending the approach to multi-agent systems. **Acknowledgements.** This project was funded by the ERC under the European Union's Horizon 2020 research and innovation programme (FUN2MODEL, grant agreement No.834115). ## Appendix A Proofs from Section 4 Before we give the proofs of Section 4 we require the following definition. **Definition 9**: _For FCPs \(\Phi_{1}\) and \(\Phi_{2}\) of S, we denote by \(\Phi_{1}+\Phi_{2}\) the smallest FCP of \(S\) such that \(\Phi_{1}+\Phi_{2}\) is a refinement of both \(\Phi_{1}\) and \(\Phi_{2}\), which can be computed by all combinations of intersections between regions in \(\Phi_{1}\) and \(\Phi_{2}\)._ **Lemma 1** (Perception FCP).: _There exists a smallest FCP of \(S\), called the perception FCP, denoted \(\Phi_{P}\), such that all states in any \(\phi\in\Phi_{P}\) are observationally equivalent, i.e., if \((s_{A},s_{E}),(s^{\prime}_{A},s^{\prime}_{E})\in\phi\), then \(s_{A}=s^{\prime}_{A}\) and we let \(s^{\phi}_{A}=s_{A}\)._ Proof.: Since \(\mathit{obs}_{A}\) is PWC and \(S_{A}\) is finite, using Definition 1 we have that for any \(s_{A}=(\mathit{loc},\mathit{per})\in S_{A}\) the set \(S^{s_{A}}_{E}=\{s_{E}\in S_{E}\mid\mathit{obs}_{A}(\mathit{loc},s_{E})= \mathit{per}\}\) can be expressed as a number of disjoint regions of \(S_{E}\) and we let \(\Phi^{s_{A}}_{E}\) be such a representation that minimises the number of such regions. It then follows that \(\{\{(s_{A},s_{E})\mid s_{E}\in\phi_{E}\}\mid\phi_{E}\in\Phi^{s_{A}}_{E}\wedge s _{A}\in S_{A}\}\) is a smallest FCP of \(S\) such that all states in any region are observationally equivalent. **Theorem 1** (P-PWLC closure and convergence).: _If \(V\in\mathbb{F}(S_{B})\) and P-PWLC, then so is \([TV]\). 
If \(V^{0}\in\mathbb{F}(S_{B})\) and P-PWLC, then the sequence \((V^{t})^{\infty}_{t=0}\), such that \(V^{t+1}=[TV^{t}]\) are P-PWLC and converges to \(V^{\star}\)._

Proof.: Consider any \(V\in\mathbb{F}(S_{B})\) that is P-PWLC, by Definition 6 there exists a finite set \(\Gamma\subseteq\mathbb{F}_{C}(S)\) such that: \[V(s_{A},b_{E})=\max_{\alpha\in\Gamma}\langle\alpha,(s_{A},b_{E})\rangle\text{ for all }(s_{A},b_{E})\in S_{B}.\] (A.1) Now consider any \((s_{A},b_{E}),(s^{\prime}_{A},b^{\prime}_{E})\in S_{B}\) where \(s^{\prime}_{A}=(\mathit{loc}^{\prime},\mathit{per}^{\prime})\) and action \(a\in\Delta_{A}(s_{A})\), and letting \(P_{1}\coloneqq P(s^{\prime}_{A}\mid(s_{A},b_{E}),a)\), by (A.1) we have: \[V(s^{\prime}_{A},b^{s_{A},a,s^{\prime}_{A}}_{E})\ =\ \max_{\alpha\in\Gamma}\langle\alpha,(s^{\prime}_{A},b^{s_{A},a,s^{\prime}_{A}}_{E})\rangle\] \[=\max_{\alpha\in\Gamma}\int_{s^{\prime}_{E}\in S_{E}}\alpha(s^{\prime}_{A},s^{\prime}_{E})b^{s_{A},a,s^{\prime}_{A}}_{E}(s^{\prime}_{E})\mathrm{d}s^{\prime}_{E}\] by (5) \[=\max_{\alpha\in\Gamma}\int_{s^{\prime}_{E}\in S_{E}}\alpha(s^{\prime}_{A},s^{\prime}_{E})\frac{P((s^{\prime}_{A},s^{\prime}_{E})\mid(s_{A},b_{E}),a)}{P(s^{\prime}_{A}\mid(s_{A},b_{E}),a)}\mathrm{d}s^{\prime}_{E}\] by (1) \[=\max_{\alpha\in\Gamma}\int_{s^{\prime}_{E}\in S_{E}}\alpha(s^{\prime}_{A},s^{\prime}_{E})\frac{P((s^{\prime}_{A},s^{\prime}_{E})\mid(s_{A},b_{E}),a)}{P_{1}}\mathrm{d}s^{\prime}_{E}\] by definition of \(P_{1}\) \[=\frac{1}{P_{1}}\max_{\alpha\in\Gamma}\int_{s^{\prime}_{E}\in S_{E}}\alpha(s^{\prime}_{A},s^{\prime}_{E})P((s^{\prime}_{A},s^{\prime}_{E})\mid(s_{A},b_{E}),a)\mathrm{d}s^{\prime}_{E}\] rearranging
\[=\frac{1}{P_{1}}\max_{\alpha\in\Gamma}\int_{s_{E}\in S_{E}}\left(\delta_{A}(s_{A},a)(\mathit{loc}^{\prime})\int_{s^{\prime}_{E}\in S^{s^{\prime}_{A}}_{E}}\alpha(s^{\prime}_{A},s^{\prime}_{E})\delta_{E}(s_{E},a)(s^{\prime}_{E})\mathrm{d}s^{\prime}_{E}\right)b_{E}(s_{E})\mathrm{d}s_{E}\]

Now substituting (A.4) into Definition 4 it follows that \([TV](s_{A},b_{E})\) equals:

\[\max_{a\in\Delta_{A}(s_{A})}\left\{\langle R_{a},(s_{A},b_{E})\rangle+\beta\sum_{s^{\prime}_{A}\in S_{A}}\max_{\alpha\in\Gamma}\int_{s_{E}\in S_{E}}\alpha^{a,s^{\prime}_{A}}(s_{A},s_{E})b_{E}(s_{E})\mathrm{d}s_{E}\right\}\] \[=\ \max_{a\in\Delta_{A}(s_{A})}\left\{\langle R_{a},(s_{A},b_{E})\rangle+\beta\sum_{s^{\prime}_{A}\in S_{A}}\max_{\alpha\in\Gamma}\langle\alpha^{a,s^{\prime}_{A}},(s_{A},b_{E})\rangle\right\}\quad\text{by (5)}\]
For any \((s_{A},b_{E}),(s_{A},b^{\prime}_{E})\in S_{B}\), without loss of generality, we can assume that \(V_{\alpha,s_{A}}(b_{E})\geq V_{\alpha,s_{A}}(b^{\prime}_{E})\), and therefore: \[|V_{\alpha,s_{A}}(b_{E})-V_{\alpha,s_{A}}(b^{\prime}_{E})|=V_{ \alpha,s_{A}}(b_{E})-V_{\alpha,s_{A}}(b^{\prime}_{E})\] \[=\langle\alpha,(s_{A},b_{E})\rangle-\langle\alpha(s_{A},b^{ \prime}_{E})\rangle\text{by definition of }V_{\alpha,s_{A}}\] \[=\int_{s_{E}\in S_{E}^{s_{A}}}\alpha(s_{A},s_{E})b_{E}(s_{E}) \mathrm{d}s_{E}-\int_{s_{E}\in S_{E}^{s_{A}}}\alpha(s_{A},s_{E})b^{\prime}_{E }(s_{E})\mathrm{d}s_{E}\text{ by (\ref{eq:V^{\star}})}\] \[=\int_{s_{E}\in S_{E}^{s_{A}}}\alpha(s_{A},s_{E})(b_{E}(s_{E})-b^{ \prime}_{E}(s_{E}))\mathrm{d}s_{E}\text{rearranging}.\] Since \(b_{E},b^{\prime}_{E}\in\mathbb{P}(S_{E})\) and \((s_{A},b_{E}),(s_{A},b^{\prime}_{E})\in S_{B}\), we have: \[\int_{s_{E}\in S_{E}^{s_{A}}}b_{E}(s_{E})\mathrm{d}s_{E}=\int_{s_{E}\in S_{E}^ {s_{A}}}b^{\prime}_{E}(s_{E})\mathrm{d}s_{E}=1\,.\] (A.9) Now, letting \(S_{E}^{+}=\{s_{E}\in S_{E}^{s_{A}}\mid b_{E}(s_{E})-b_{E}^{\prime}(s_{E})>0\}\) and \(S_{E}^{-}=\{s_{E}\in S_{E}^{s_{A}}\mid b_{E}(s_{E})-b_{E}^{\prime}(s_{E})\leq 0\}\), rearranging (A.9) and using the fact that \(S_{E}^{+}\cup S_{E}^{-}=S_{E}^{s_{A}}\) it follows that: \[\int_{s_{E}\in S_{E}^{-}}(b_{E}(s_{E})-b_{E}^{\prime}(s_{E}))\mathrm{d}s_{E}=- \int_{s_{E}\in S_{E}^{+}}(b_{E}(s_{E})-b_{E}^{\prime}(s_{E}))\mathrm{d}s_{E}.\] (A.10) Next, using (A.8), the definition of \(V_{\alpha,s_{A}}\) and (5), it follows that \(|V_{\alpha,s_{A}}(b_{E})-V_{\alpha,s_{A}}(b_{E}^{\prime})|\) equals: \[\int_{s_{E}\in S_{E}^{+}} \alpha(s_{A},s_{E})(b_{E}(s_{E})-b_{E}^{\prime}(s_{E}))\mathrm{d}s _{E}+\int_{s_{E}\in S_{E}^{-}} \alpha(s_{A},s_{E})(b_{E}(s_{E})-b_{E}^{\prime}(s_{E}))\mathrm{d}s_{E}\] \[\leq\int_{s_{E}\in S_{E}^{+}}U(b_{E}(s_{E})-b_{E}^{\prime}(s_{E}) )\mathrm{d}s_{E}+\int_{s_{E}\in S_{E}^{-}}L(b_{E}(s_{E})-b_{E}^{\prime}(s_{E}) )\mathrm{d}s_{E}\] by definition of \[S_{E}^{+}\ \[=V^{\star}(s_{A},b^{\prime}_{E})+k\int_{s_{E}\in S^{+}_{E}}\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! backup for the other states in \(\phi\), and if the backup at line 7 is executed, \(\alpha^{\star}\) is assigned the lower bound \(L\) in \(\phi\). 
Therefore we have for any \((\hat{s}_{A},\hat{b}_{E})\in S_{B}\): \[\langle\alpha^{\star},(\hat{s}_{A},\hat{b}_{E})\rangle \leq[TV^{\Gamma}_{LB}](\hat{s}_{A},\hat{b}_{E})\] \[\leq[TV^{\star}](\hat{s}_{A},\hat{b}_{E}) \text{since }V^{\Gamma}_{LB}\leq V^{\star}\] \[=V^{\star}(\hat{s}_{A},\hat{b}_{E}) \text{by Theorem 1}.\] (B.1) Combining this inequality with \(V^{\Gamma}_{LB}\leq V^{\star}\), we have \(V^{\Gamma^{\prime}}_{LB}\leq V^{\star}\) as required. \(\square\) **Lemma 3** (ISPP backup).: _The FCP \(\Phi_{\mathrm{product}}\) returned by Algorithm 2 is a constant-FCP of \(\phi\) for \(\alpha^{\star}\) and the region-by-region backup for \(\alpha^{\star}\) satisfies (12)._ Proof.: For the PWC \(\alpha\)-functions in the input of Algorithm 2, let \(\Phi=\sum_{s^{\prime}_{A}\in\bar{S}_{A}}\Phi_{s^{\prime}_{A}}\), where \(\Phi_{s^{\prime}_{A}}\) is an FCP of \(S\) for \(\alpha^{s^{\prime}_{A}}\). According to Assumption 1, there exists a preimage-FCP of \(\Phi\) for action \(\bar{a}\). Through the image, split, preimage and product operations of Algorithm 2, all the states in any region \(\phi^{\prime}\in\Phi_{\mathrm{product}}\) have the same reward and reach the same regions of \(\Phi\). Since each \(\alpha\)-function \(\alpha^{s^{\prime}_{A}}\) is constant over each region in \(\Phi\), all states in \(\phi^{\prime}\) have the same backup value from \(\alpha^{s^{\prime}_{A}}\) for \(s^{\prime}_{A}\in\bar{S}_{A}\). This implies that \(\Phi_{\mathrm{product}}\) is a preimage FCP of \(\Phi\) for action \(\bar{a}\). Since the value backup (12) is used for each region in \(\Phi_{\mathrm{product}}\) and the image is from the region \(\phi\), then \(\Phi_{\mathrm{product}}\) is a constant-FCP of \(\phi\) for \(\alpha^{\star}\), and thus the value backup (12) for \(\alpha^{\star}\) is achieved by considering the regions of \(\Phi_{\mathrm{product}}\). \(\square\) **Lemma 4** (Upper bound).: _Given belief \((s_{A},b_{E})\in S_{B}\), if \(p^{\star}=[TV^{\Upsilon}_{UB}](s_{A},b_{E})\), then \(p^{\star}\) is an upper bound of \(V^{\star}\) at \((s_{A},b_{E})\), i.e., \(p^{\star}\geq V^{\star}(s_{A},b_{E})\), and if \(\Upsilon^{\prime}=\Upsilon\cup\{((s_{A},b_{E}),p^{\star})\}\), then \(V^{\Upsilon}_{UB}\geq V^{\Upsilon^{\prime}}_{UB}\geq V^{\star}\) and \(V^{\Upsilon^{\prime}}_{UB}(s_{A},b_{E})\leq[TV^{\Upsilon}_{UB}](s_{A},b_{E})\)._ Proof.: Consider an upper bound \(V^{\Upsilon}_{UB}\) such that \(V^{\Upsilon}_{UB}\geq V^{\star}\). By construction, each pair \(((s^{i}_{A},b^{i}_{E}),y_{i})\) in \(\Upsilon\) satisfies \(V^{\star}(s^{i}_{A},b^{i}_{E})\leq y_{i}\). Now suppose for belief \((s_{A},b_{E})\in S_{B}\) we let \(p^{\star}=[TV^{\Upsilon}_{UB}](s_{A},b_{E})\) and \(\Upsilon^{\prime}=\Upsilon\cup\{((s_{A},b_{E}),p^{\star})\}\). The new upper bound \(V^{\Upsilon^{\prime}}_{UB}\) after updating \(V^{\Upsilon}_{UB}\) at \((s_{A},b_{E})\) through Algorithm 1, satisfies \(V^{\Upsilon}_{UB}\geq V^{\Upsilon^{\prime}}_{UB}\) by (9). By construction of \(p^{\star}\) we have: \[p^{\star}=[TV^{\Upsilon}_{UB}](s_{A},b_{E})\] \[\geq[TV^{\star}](s_{A},b_{E}) \text{since }V^{\Upsilon}_{\text{\it UB}}\geq V^{\star}\] \[=V^{\star}(s_{A},b_{E}) \text{by Theorem 1}.\] Next we have: \[V^{\Upsilon^{\prime}}_{\text{\it UB}}(s_{A},b_{E}) \leq p^{\star} \text{since }((s_{A},b_{E}),p^{\star})\in\Upsilon^{\prime}\text{ and }\eqref{eq:p_eq}\] \[=[TV^{\Upsilon}_{\text{\it UB}}](s_{A},b_{E}) \text{by construction of }p^{\star}.\] It therefore remains to prove the last part, i.e. 
that \(V^{\Upsilon^{\prime}}_{\text{\it UB}}\geq V^{\star}\). Now for any \((s^{\prime}_{A},b^{\prime}_{E})\in S_{B}\), if \(s^{\prime}_{A}\neq s_{A}\), then using the fact that \(\Upsilon^{\prime}=\Upsilon\cup\{((s_{A},b_{E}),p^{\star})\}\) and (9) we have: \[V^{\Upsilon^{\prime}}_{\text{\it UB}}(s^{\prime}_{A},b^{\prime}_ {E}) =V^{\Upsilon}_{\text{\it UB}}(s^{\prime}_{A},b^{\prime}_{E})\] \[\geq V^{\star}(s^{\prime}_{A},b^{\prime}_{E}) \text{since }V^{\Upsilon}_{\text{\it UB}}\geq V^{\star}.\] On the other hand, if \(s^{\prime}_{A}=s_{A}\), then using (9) there exists \(\langle\hat{\lambda}_{i}\rangle_{i\in I_{s_{A}}}\) with \(\hat{\lambda}_{i}\geq 0\) and \(\sum_{i\in I_{s_{A}}}\hat{\lambda}_{i}=1\) such that: \[V^{\Upsilon^{\prime}}_{\text{\it UB}}(s^{\prime}_{A},b^{\prime}_{E})=\sum_{i \in I_{s_{A}}}\hat{\lambda}_{i}y_{i}+K_{\text{\it UB}}\left(b^{\prime}_{E}, \sum_{i\in I_{s_{A}}}\hat{\lambda}_{i}b^{i}_{E}\right)\,.\] (B.2) Now using Theorem 2 we have: \[V^{\star}(s^{\prime}_{A},b^{\prime}_{E})\leq V^{\star}(s_{A},\sum _{i\in I_{s_{A}}}\hat{\lambda}_{i}b^{i}_{E})+K\left(b^{\prime}_{E},\sum_{i\in I _{s_{A}}}\hat{\lambda}_{i}b^{i}_{E}\right)\] \[\leq\sum_{i\in I_{s_{A}}}\hat{\lambda}_{i}V^{\star}(s_{A},b^{i}_ {E})+K\left(b^{\prime}_{E},\sum_{i\in I_{s_{A}}}\hat{\lambda}_{i}b^{i}_{E} \right) \text{since }V^{\star}\text{ is convex in }S_{B}\] \[\leq\sum_{i\in I_{s_{A}}}\hat{\lambda}_{i}V^{\star}(s_{A},b^{i}_ {E})+K_{\text{\it UB}}\left(b^{\prime}_{E},\sum_{i\in I_{s_{A}}}\hat{\lambda} _{i}b^{i}_{E}\right) \text{by }\eqref{eq:p_eq}\] \[\leq\sum_{i\in I_{s_{A}}}\hat{\lambda}_{i}y_{i}+K_{\text{\it UB}} \left(b^{\prime}_{E},\sum_{i\in I_{s_{A}}}\hat{\lambda}_{i}b^{i}_{E}\right) \text{since if }i\in I_{s_{A}}\text{, then }((s_{A},b^{i}_{E}),y_{i})\in\Upsilon\] \[=V^{\Upsilon^{\prime}}_{\text{\it UB}}(s^{\prime}_{A},b^{\prime}_ {E}) \text{by }\eqref{eq:p_eq}.\] Therefore since these are the only cases to consider for \((s^{\prime}_{A},b^{\prime}_{E})\in S_{B}\) we have \(V^{\Upsilon^{\prime}}_{\text{\it UB}}\geq V^{\star}\) as required. \(\square\) **Lemma 5** (LP for upper bound).: _The function \(K_{\text{\it UB}}\) from (17) satisfies (10), and for particle-based belief \((s_{A},b_{E})\) represented by \(\{(s^{i}_{E},w_{i})\}_{i=1}^{N_{b}}\), we have that \(V^{\Upsilon}_{\text{\it UB}}(s_{A},b_{E})\) is the optimal value of the LP:_ \[\begin{array}{ll}\text{minimize:}&\sum_{k\in I_{s_{A}}}\lambda_{k}y_{k}+(U- L)N_{b}c\\ \text{subject to:}&c\geq|w_{i}-\sum_{k\in I_{s_{A}}}\lambda_{k}P(s^{i}_{E};b^{k}_{E})| \text{ for }1\leq i\leq N_{b}\\ &\lambda_{k}\geq 0\text{ for }k\in I_{s_{A}}\text{ and }\sum_{k\in I_{s_{A}}} \lambda_{k}=1\,.\end{array}\] Proof.: Consider any particle-based beliefs \((s_{A},b_{E})\) and \((s_{A},b^{\prime}_{E})\) where \((s_{A},b_{E})\) is represented by the weighted particle set \(\{(s^{i}_{E},w_{i})\}_{i=1}^{N_{b}}\). 
Recall that \(S_{E}^{b_{E}>b^{\prime}_{E}}=\{s_{E}\in S_{E}^{s_{A}}\mid b_{E}(s_{E})-b^{\prime}_{E}(s_{E})>0\}\), now by definition of \(K(b_{E},b^{\prime}_{E})\), see Theorem 2, we have: \[K(b_{E},b^{\prime}_{E})=(U-L)\int_{s_{E}\in S_{E}^{b_{E}>b^{\prime}_{E}}}(b_{E}(s_{E})-b^{\prime}_{E}(s_{E}))\mathrm{d}s_{E}\] \[=(U-L)\int_{s_{E}\in S_{E}^{b_{E}>b^{\prime}_{E}}}|b_{E}(s_{E})-b^{\prime}_{E}(s_{E})|\mathrm{d}s_{E}\qquad\quad\text{by definition of }S_{E}^{b_{E}>b^{\prime}_{E}}\] \[\leq(U-L)\sum_{i=1}^{N_{b}}\left|P(s^{i}_{E};b_{E})-P(s^{i}_{E};b^{\prime}_{E})\right|\qquad\qquad\text{by the definition of particle-based beliefs}\]

**Lemma 6** (Region-based belief closure).: _If \(\delta^{i}_{E}(\cdot,a):S_{E}\to\delta^{i}_{E}(S_{E},a)\) is piecewise differentiable and invertible from \(S_{E}\) to \(T\subseteq S_{E}\), and the Jacobian determinant of the inverse function, i.e., for any \(s^{\prime}_{E}\in T\):_ \[\mathrm{Jac}(s^{\prime}_{E})\coloneqq\det\left(\frac{\mathrm{d}\delta^{i,-1}_{E}(s^{\prime}_{E},a)}{\mathrm{d}s^{\prime}_{E}}\right)\] _is PWC for \(a\in\text{Act}\) and \(1\leq i\leq N_{e}\), then region-based beliefs are closed under belief updates._

Proof.: Since \(\delta^{i}_{E}(\cdot,a)\) is piecewise differentiable and piecewise invertible, let \(\phi_{E}\subseteq S_{E}\) be a region over which \(\delta^{i}_{E}(\cdot,a)\) is differentiable and invertible. Suppose that \(X_{E}\) is a random variable taking values in \(\phi_{E}\), and that \(X_{E}\) has a continuous uniform distribution with probability density function \(b_{E}\). Due to the differentiability and thus continuity of \(\delta^{i}_{E}(\cdot,a)\), the image \(\phi^{\prime}_{E}=\{s^{\prime}_{E}\mid s^{\prime}_{E}=\delta^{i}_{E}(s_{E},a)\wedge s_{E}\in\phi_{E}\}\) is a region in \(S_{E}\). Furthermore, suppose \(X^{\prime}_{E}=\delta^{i}_{E}(X_{E},a)\) is a new random variable taking values in \(\phi^{\prime}_{E}\) and let \(b^{\prime}_{E}\) be the probability density function for \(X^{\prime}_{E}\) over \(\phi^{\prime}_{E}\). We next prove that \(b^{\prime}_{E}\) is a PWC uniform distribution under the given conditions. Let \(\delta^{i,-1}_{E}(\cdot,a)\) be the inverse function of \(\delta^{i}_{E}(\cdot,a)\) in \(\phi_{E}\).
If \(\phi^{\prime}_{1}\subseteq\phi^{\prime}_{E}\), letting \(\phi_{1}\) be the preimage of \(\phi^{\prime}_{1}\), then \[P(X^{\prime}_{E}\in\phi^{\prime}_{1}) =P(\delta^{i}_{E}(X_{E},a)\in\phi^{\prime}_{1})\qquad\text{ since }X^{\prime}_{E}=\delta^{i}_{E}(X_{E},a)\] \[=\int_{s_{E}\in\phi_{1}}b_{E}(s_{E})\mathrm{d}s_{E}\qquad\qquad \text{ by definition of }b_{E}.\] (B.3) Using the change of variables \(s_{E}=\delta^{i,-1}_{E}(s^{\prime}_{E},a)\) we have that: \[\mathrm{d}s_{E} =\det\left(\frac{\mathrm{d}\delta^{i,-1}_{E}(s^{\prime}_{E},a)}{ \mathrm{d}s^{\prime}_{E}}\right)\mathrm{d}s^{\prime}_{E}\] \[=\mathrm{Jac}(s^{\prime}_{E})\mathrm{d}s^{\prime}_{E}\qquad \qquad\qquad\text{ by definition of the Jacobian determinant}\] and substituting this into (B.3) we have: \[P(X^{\prime}_{E}\in\phi^{\prime}_{1})=\int_{s^{\prime}_{E}\in\phi^{\prime}_{1} }b_{E}(\delta^{i,-1}_{E}(s^{\prime}_{E},a))\mathrm{Jac}(s^{\prime}_{E}) \mathrm{d}s^{\prime}_{E}\,.\] Therefore we have that for any \(s^{\prime}_{E}\in\phi^{\prime}_{1}\): \[b^{\prime}_{E}(s^{\prime}_{E})=b_{E}(\delta^{i,-1}_{E}(s^{\prime}_{E},a)) \mathrm{Jac}(s^{\prime}_{E})\] and since \(b_{E}(\delta_{E}^{i,-1}(s^{\prime}_{E},a))=b_{E}(s_{E})\) for \(s_{E}\in\phi_{E}\) is constant and by construction \(\mathrm{Jac}(s^{\prime}_{E})\) is PWC, we have that \(b^{\prime}_{E}\) is PWC over \(\phi^{\prime}_{E}\) as required. We conclude that \(\delta_{E}^{i}(\cdot,a)\) transforms a random variable which has a continuous uniform distribution in a region into a new random variable which has a continuous uniform distribution over finitely many regions. Therefore, region-based belief are closed under \(\delta_{E}^{i}(\cdot,a)\). **Lemma 7** (Region-based belief update).: _For region-based belief \((s_{A},b_{E})\) represented by \(\{(\phi_{E}^{i},w_{i})\}_{i=1}^{N_{b}}\), action \(a\) and observation \(s^{\prime}_{A}\): \((s^{\prime}_{A},b^{\prime}_{E})\) returned by Algorithm 4 is region-based and \(b^{\prime}_{E}=b_{E}^{s_{A},a,s^{\prime}_{A}}\). Furthermore, if \(h:S\rightarrow\mathbb{R}\) is PWC and \(\Phi_{E}\) is a constant-FCP of \(S_{E}\) for \(h\) at \(s_{A}\), then \(\langle h,(s_{A},b_{E})\rangle=\sum_{i=1}^{N_{b}}\sum_{\phi_{E}\in\Phi_{E}}h(s _{A},s_{E})w_{i}\mathrm{vol}(\phi_{E}^{i}\cap\phi_{E})\) where \(s_{E}\in\phi_{E}\)._ Proof.: Consider a region-based belief \((s_{A},b_{E})\) represented by \(\{(\phi_{E}^{i},w_{i})\}_{i=1}^{N_{b}}\), action \(a\) and observation \(s^{\prime}_{A}\) and suppose that the belief \((s^{\prime}_{A},b^{\prime}_{E})\) is returned by Algorithm 4. Since \(\delta_{E}^{i}(\cdot,a)\) is piecewise continuous by Lemma 6, then for any region \(\phi_{E}\subseteq\Phi_{E}\), the image \(\{\delta_{E}^{i}(s_{E},a)\mid s_{E}\in\phi_{E}\}\) can be represented as a union of regions. Furthermore, due to the invertibility of \(\delta_{E}^{i}(\cdot,a)\), these regions are disjoint and the image is uniformly reached. Letting \(\phi_{ij}=\{\delta_{E}^{j}(s_{E},a)\mid s_{E}\in\phi_{E}^{i}\}\), according to the belief update (4) and the belief expression in Definition 8, we have: \[\int_{s_{E}\in S_{E}}\!\!\!\!b_{E}(s_{E})\delta_{E}(s_{E},a)(s^{ \prime}_{E})\mathrm{d}s_{E}=\int_{s_{E}\in S_{E}}\left(\sum_{i=1}^{N_{b}}\! 
\chi_{\phi_{E}^{i}}(s_{E})w_{i}\right)\delta_{E}(s_{E},a)(s^{\prime}_{E})\mathrm{d}s_{E}\] \[=\sum_{i=1}^{N_{b}}\left(\int_{s_{E}\in S_{E}}\chi_{\phi_{E}^{i}}(s_{E})w_{i}\delta_{E}(s_{E},a)(s^{\prime}_{E})\mathrm{d}s_{E}\right) \text{rearranging}\] \[=\sum_{i=1}^{N_{b}}\left(\int_{s_{E}\in\phi_{E}^{i}}w_{i}\delta_{E}(s_{E},a)(s^{\prime}_{E})\mathrm{d}s_{E}\right) \text{by definition of }\chi_{\phi_{E}^{i}}\] \[=\sum_{i=1}^{N_{b}}\left(\int_{s_{E}\in\phi_{E}^{i}}w_{i}\left(\sum_{j=1}^{N_{e}}\chi_{\phi_{E}^{ij}}(s^{\prime}_{E})\frac{\mu_{j}}{\mathrm{vol}(\phi_{E}^{ij})}\mathrm{d}s_{E}\right)\right)\] \[\text{by definition of }\phi_{ij}\text{ and since it is uniformly reached by Lemma 6}\] \[=\sum_{i=1}^{N_{b}}\sum_{j=1}^{N_{e}}\left(\int_{s_{E}\in\phi_{E}^{i}}w_{i}\chi_{\phi_{E}^{ij}}(s^{\prime}_{E})\frac{\mu_{j}}{\mathrm{vol}(\phi_{E}^{ij})}\mathrm{d}s_{E}\right)\] \[=\sum_{i=1}^{N_{b}}\sum_{j=1}^{N_{e}}w_{i}\chi_{\phi_{E}^{ij}}(s^{\prime}_{E})\frac{\mu_{j}}{\mathrm{vol}(\phi_{E}^{ij})}\left(\int_{s_{E}\in\phi_{E}^{i}}\mathrm{d}s_{E}\right) \text{rearranging}\] \[=\sum_{i=1}^{N_{b}}\sum_{j=1}^{N_{e}}\chi_{\phi_{E}^{ij}}(s^{\prime}_{E})\frac{w_{i}\mu_{j}\mathrm{vol}(\phi_{E}^{i})}{\mathrm{vol}(\phi_{E}^{ij})}\] by definition of vol

\[=(U-L)\left(\int_{s_{E}\in S_{E}^{b_{E}>b_{E}^{\prime}}}b_{E}(s_{E})-\int_{s_{E}\in S_{E}^{b_{E}>b_{E}^{\prime}}}b_{E}^{\prime}(s_{E})\mathrm{d}s_{E}\right)\qquad\text{rearranging}\] \[\leq(U-L)\left(\int_{s_{E}\in S_{E}}b_{E}(s_{E})-\int_{s_{E}\in S_{E}^{b_{E}>b_{E}^{\prime}}}b_{E}^{\prime}(s_{E})\mathrm{d}s_{E}\right)\qquad\text{rearranging}\] \[=(U-L)\left(1-\int_{s_{E}\in S_{E}^{b_{E}>b_{E}^{\prime}}}b_{E}^{\prime}(s_{E})\mathrm{d}s_{E}\right)\qquad\qquad\qquad\text{since }b_{E}\in\mathbb{P}(S_{E}).\] (B.4) Now since \(\phi_{E}^{\max}\subseteq S_{E}^{b_{E}>b_{E}^{\prime}}\) we have: \[\int_{s_{E}\in S_{E}^{b_{E}>b_{E}^{\prime}}}b_{E}^{\prime}(s_{E})\mathrm{d}s_{E}\ \geq\ \int_{s_{E}\in\phi_{E}^{\max}}b^{\prime}(s_{E})\mathrm{d}s_{E}\] \[=\int_{s_{E}\in\phi_{E}^{\max}}\left(\sum_{k\in I_{s_{A}}}\lambda_{k}^{\star}b_{E}^{k}(s_{E})\mathrm{d}s_{E}\right)\qquad\qquad\qquad\qquad\text{by definition of }b_{E}^{\prime}\] \[=\sum_{k\in I_{s_{A}}}\left(\int_{s_{E}\in\phi_{E}^{\max}}\lambda_{k}^{\star}b_{E}^{k}(s_{E})\mathrm{d}s_{E}\right)\qquad\qquad\qquad\qquad\qquad\text{rearranging}\] \[=\sum_{k\in I_{s_{A}}}\left(\int_{s_{E}\in\phi_{E}^{\max}}\lambda_{k}^{\star}\sum_{j=1}^{N_{k}^{k}}\chi_{\phi_{E}^{kj}}(s_{E})w_{kj}\mathrm{d}s_{E}\right)\qquad\text{since }\left\{(\phi_{E}^{kj},w_{kj})\right\}_{j=1}^{N_{k}^{k}}\text{ represents }b_{E}^{k}\] \[=\sum_{k\in I_{s_{A}}}\sum_{j=1}^{N_{k}^{k}}\lambda_{k}^{\star}\left(\int_{s_{E}\in\phi_{E}^{\max}}\chi_{\phi_{E}^{kj}}(s_{E})w_{kj}\mathrm{d}s_{E}\right)\qquad\qquad\qquad\qquad\text{rearranging}\] \[=\sum_{k\in I_{s_{A}}}\sum_{j=1}^{N_{k}^{k}}\lambda_{k}^{\star}w_{kj}\mathrm{vol}(\phi_{E}^{kj}\cap\phi_{E}^{\max})\qquad\qquad\qquad\qquad\qquad\text{by
definition of vol.}\] (B.5) Thus, substituting (B.5) into (B.4) we have: \[K_{\mathit{UB}}(b_{E},b_{E}^{\prime})\leq(U-L)\left(1-\sum_{k\in I_{s_{A}}}{ \sum_{j=1}^{N_{k}^{k}}}\lambda_{k}^{\star}w_{kj}\mathrm{vol}(\phi_{E}^{kj}\cap \phi_{E}^{\max})\right)\] and using (9), it follows that the optimal value \(p\) to the LP of Algorithm 5 is an upper bound of \(V_{\mathit{UB}}^{\Upsilon}\) at \((s_{A},b_{E})\). Finally, suppose that \(N_{b}=1\). Therefore \(\phi_{E}^{\max}=\phi_{E}^{1}\) and since \(\phi_{E}^{1}\) is the unique region with positive probabilities for \(b_{E}\), by definition of \(S_{E}^{b_{E}>b_{E}^{\prime}}\) it follows that \(S_{E}^{b_{E}>b_{E}^{\prime}}\subseteq\phi_{E}^{1}\). Combining these with \(\phi_{E}^{\max}\subseteq S_{E}^{b_{E}>b_{E}^{\prime}}\), we have that \(S_{E}^{b_{E}>b_{E}^{\prime}}=\phi_{E}^{\max}=\phi_{E}^{1}\). Therefore, all the inequalities above become equalities, and therefore \(p=V_{\mathit{UB}}^{\Upsilon}(s_{A},b_{E})\).
2307.00078
On the Limits of Single Anchor Localization: Near-Field vs Far-Field
It is well known that a single anchor can be used to determine the position and orientation of an agent communicating with it. However, it is not clear what information about the anchor or the agent is necessary to perform this localization, especially when the agent is in the near-field of the anchor. Hence, in this paper, to investigate the limits of localizing an agent with some uncertainty in the anchor location, we consider a wireless link consisting of source and destination nodes. More specifically, we present a Fisher information theoretical investigation of the possibility of estimating different combinations of the source and destination's position and orientation from the signal received at the destination. To present a comprehensive study, we perform this Fisher information theoretic investigation under both the near and far field propagation models. One of the key insights is that while the source or destination's $3$D orientation can be jointly estimated with the source or destination's $3$D position in the near-field propagation regime, only the source or destination's $2$D orientation can be jointly estimated with the source or destination's $2$D position in the far-field propagation regime. Also, a simulation of the FIM indicates that in the near-field, we can estimate the source's $3$D orientation angles with no beamforming, but in the far-field, we can not estimate the source's $2$D orientation angles when no beamforming is employed.
Don-Roberts Emenonye, Harpreet S. Dhillon, R. Michael Buehrer
2023-06-30T18:32:30Z
http://arxiv.org/abs/2307.00078v1
# On the Limits of Single Anchor Localization: Near-Field vs. Far-Field ###### Abstract It is well known that a single anchor can be used to determine the position and orientation of an agent communicating with it. However, it is not clear what information about the anchor or the agent is necessary to perform this localization, especially when the agent is in the near-field of the anchor. Hence, in this paper, to investigate the limits of localizing an agent with some uncertainty in the anchor location, we consider a wireless link consisting of source and destination nodes. More specifically, we present a Fisher information theoretical investigation of the possibility of estimating different combinations of the source and destination's position and orientation from the signal received at the destination. To present a comprehensive study, we perform this Fisher information theoretic investigation under both the near and far field propagation models. One of the key insights is that while the source or destination's 3D orientation can be jointly estimated with the source or destination's 3D position in the near-field propagation regime, only the source or destination's 2D orientation can be jointly estimated with the source or destination's 2D position in the far-field propagation regime. Also, a simulation of the FIM indicates that in the near-field, we can estimate the source's 3D orientation angles with no beamforming, but in the far-field, we can not estimate the source's 2D orientation angles when no beamforming is employed. 6G localization, anchor uncertainty, far-field, near-field, FIM. ## I Introduction Recently, due to the ubiquitous deployment of multi-antenna base stations, single-anchor localization has been proposed and studied with [1, 2, 3] and without a reconfigurable intelligent surface (RIS) [4, 5, 6]. Localization is usually performed under the assumption that the anchor location (position and orientation) is perfectly known [7]. However, in practical systems, this assumption might not hold. For example, in scenarios where unmanned aerial vehicles (UAV) act as anchors, there could be inherent uncertainty in the locations of the UAVs [8, 9]. Another example involves localization using RISs. RISs are being considered to aid localization by acting as virtual anchors; however, their ubiquitous deployment means that their locations can change (e.g., when they are placed on movable objects), resulting in uncertainty in their locations. Lastly, in indoor localization systems, the locations of the indoor anchors can easily be disturbed after deployment. Hence, in this paper, to investigate localization with anchor uncertainty, we present a Fisher information view of estimating different combinations of a source and destination's position and orientation under the near and far field propagation regimes. ### _Prior Art_ Prior literature on single-anchor localization involves deriving the fundamental limits for the accuracy achievable in estimating the position and orientation of an agent [1]. These bounds are extended to the case of 3D localization of an agent in [2]. In [3], the amount of information in the non-line of sight (NLOS) paths and their usefulness for localization is analyzed. The bounds of single-anchor localization with a RIS have been studied in [4]. These bounds are extended to account for near-field propagation in [5, 6]. In the context of anchor state uncertainty, localization has been investigated with and without a RIS. 
In [10], the positioning problem in the presence of anchor uncertainty is studied, the resulting non-convex optimization problem is relaxed to a second-order cone programming problem, and semidefinite programming is applied. The authors in [11] derive the geometric dilution of precision in the presence of anchor position uncertainty, and a trade-off is made between range errors and position errors by applying the modified spring mass method. The anchor position offset and the agent's position are estimated in [12] using the signal strength of the received signals. In [13], a rigorous investigation of the impact of anchor uncertainty on received signal strength-based localization techniques is presented. The bounds given in [13] serve as lower bounds to the algorithm in [12]. In [14], multipath propagation between the uncertain anchor and the agent is investigated, the error model of the anchor uncertainty is assumed, and importance sampling is used to obtain the agent's position. Uncertainties are considered in the case of RIS-assisted localization in [4, 5]. While the prior art primarily includes robust algorithms to handle uncertainty in anchors' position, a comprehensive Fisher information-based analysis on the estimation of the anchor orientation has yet to be studied. It is important to note the anchor orientation is particularly important as the localization of agents is now being considered with large antenna single anchors. Moreover, the effect of anchor location uncertainty has not been investigated under the near-field propagation regime. ### _Contributions_ In this paper, through the Fisher information matrix (FIM), we present a theoretical investigation of the limits of single-anchor localization by determining the combinations of positions and orientations of the source and destination nodes that can be estimated in the near and far field propagation regimes. Further, using the FIM, we present a lower bound for the source orientation and destination position accuracy. One key result from the FIM-based analysis is that in the near-field, the source or destination's \(3\)D orientations can be estimated jointly with either the source or destination's \(3\)D positions. Also, in the far-field, the source or destination's \(2\)D orientations can be estimated jointly with either the source or destination's \(2\)D positions. Another result is that while the presence of a beamforming matrix is not required in the near-field to estimate the source's \(3\)D orientation angles, a beamforming matrix is required in the far-field to estimate the source's \(2\)D orientation angles. _Notation:_ the transpose operator is \((\cdot)^{\rm T}\); the hermitian transpose operator is \((\cdot)^{\rm H}\); the submatrix in the matrix \(\mathbf{V}\), with rows in the range, \(g_{1}:v_{1}\), and the columns in the range \(g_{2}:v_{2}\) is extracted using the operation \([\mathbf{V}]_{[g_{1}:v_{1},g_{2}:v_{2}]}\); \({\rm Tr}(\cdot)\) is the matrix trace operator; \(\left\|\cdot\right\|\) denotes the Euclidean norm ; the positive definiteness of a matrix is characterized by \(\succ\) ; the first derivative operator is \(\nabla\) ; the expectation operator with respect to the random vector \(\mathbf{v}\) is \(\mathbb{E}_{\mathbf{v}}\{\cdot\}\). ## II System Model We consider a source with its centroid located at \(\mathbf{p}_{B}=[x_{B},y_{B},z_{B}]^{\rm T}\), and its \(b^{\text{th}}\) antenna element located at \(\mathbf{s}_{b}=[x_{b},y_{b},z_{b}]^{\rm T}\). 
The location of the centroid is defined with respect to the global origin, while the location specified by \(\mathbf{s}_{b}\) is defined with respect to \(\mathbf{p}_{B}\). This point \(\mathbf{s}_{b}\) can also be written as \(\mathbf{s}_{b}=\mathbf{Q}_{B}\tilde{\mathbf{s}}_{b}\), where \(\tilde{\mathbf{s}}_{b}=[\tilde{x}_{b},\tilde{y}_{b},\tilde{z}_{b}]^{\rm T}\) is the previously known position of the antenna coordinate with respect to \(\mathbf{p}_{B}\) before an orientation offset, \(\mathbf{\Phi}_{B}=[\alpha_{B},\psi_{B},\varphi_{B}]^{\rm T}\). The subsequent \(3\)D orientation matrix is defined as \(\mathbf{Q}_{B}\)[15]. There are \(N_{B}\) antennas at the source, and each antenna can be described with respect to the global origin as \(\mathbf{p}_{b}=\mathbf{p}_{B}+\mathbf{s}_{b}\). The destination is located at \(\mathbf{p}_{U}=[x_{U},y_{U},z_{U}]^{\rm T}\), and its \(u^{\text{th}}\) antenna element is located at \(\mathbf{s}_{u}=[x_{u},y_{u},z_{u}]^{\rm T}\). The corresponding vectors, \(\mathbf{p}_{U}\), \(\mathbf{s}_{u}\), \(\tilde{\mathbf{s}}_{u}\) and \(\mathbf{p}_{u}\) have similar definitions as the corresponding source's vectors. Note that the orientation angles and the matrix related to the destination are denoted by \(\mathbf{\Phi}_{U}\) and \(\mathbf{Q}_{U}\), respectively. The position of the destination's centroid located at \(\mathbf{p}_{U}\) can be described in relation to the position of the source's centroid located at \(\mathbf{p}_{B}\) as \(\mathbf{p}_{U}=\mathbf{p}_{B}+d_{BU}\mathbf{\Delta}_{BU}\), where \(d_{BU}\) is the distance from point \(\mathbf{p}_{B}\) to point \(\mathbf{p}_{U}\) and \(\mathbf{\Delta}_{BU}\) is the corresponding unit direction vector \(\mathbf{\Delta}_{BU}=[\cos\phi_{BU}\sin\theta_{BU},\sin\phi_{BU}\sin\theta_{BU}, \cos\theta_{BU}]^{\rm T}\). All points defined locally that describe the location of elements on the source antenna array with respect to the source's centroid can be written in the matrix form as \(\mathbf{S}_{B}=[\mathbf{s}_{1},\mathbf{s}_{2},\cdots,\mathbf{s}_{N_{B}}]\). Similarly, the points defined locally that describe the location of elements on the destination antenna array with respect to the destination's centroid can be written in the matrix form as \(\mathbf{S}_{U}=[\mathbf{s}_{1},\mathbf{s}_{2},\cdots,\mathbf{s}_{N_{U}}]\). Matrices \(\tilde{\mathbf{S}}_{B}\) and \(\tilde{\mathbf{S}}_{U}\) can be described similarly, by collecting the appropriate vectors \(\tilde{\mathbf{s}}_{b}\) and \(\tilde{\mathbf{s}}_{u}\). ### _Signal Model_ The communication from the source to the destination is achieved through the transmission of \(T\) symbols from the source with \(N_{B}\) transmit antennas to the destination with \(N_{U}\) receive antennas. During each transmission, the source precodes a data stream, \(\mathbf{x}\in\mathcal{C}^{N_{D}\times 1}\), to the \(N_{B}\) transmit antennas with a beamforming matrix \(\mathbf{F}_{t}\in\mathcal{C}^{N_{B}\times N_{D}}\) under the constraint \(\mathbb{E}\left\{\left\|\mathbf{x}\right\|^{2}\right\}=1\). The signal received during the \(t^{\text{th}}\) transmission is \[\mathbf{y}_{t}=\mathbf{H}\mathbf{F}_{t}\mathbf{x}+\mathbf{n}_{t},=\mathbf{\mu}_{t}+\mathbf{n}_{t}. \tag{1}\] In the above equation, \(\mathbf{\mu}_{t}\) is the noise-free part (useful part) of the signal, and \(\mathbf{n}_{t}\sim\mathcal{CN}(0,N_{0})\) represents the thermal noise local to the destination's antenna array. 
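The array geometry introduced above (centroids, local element offsets \(\tilde{\mathbf{s}}_{b}\), and the orientation offsets \(\mathbf{Q}_{B}\), \(\mathbf{Q}_{U}\)) is straightforward to set up numerically. The sketch below is ours, not the paper's code: it assumes a Z-Y-X Euler-angle convention for the orientation matrix (the paper follows [15], whose convention may differ), uniform linear arrays, and an arbitrary element spacing; the centroid positions and orientation angles are the ones used later in Section IV.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def orientation_matrix(Phi):
    """Q built from Phi = [alpha, psi, phi]; an assumed Z-Y-X convention, standing in for [15]."""
    alpha, psi, phi = Phi
    return rot_z(alpha) @ rot_y(psi) @ rot_x(phi)

def ula_local(n, spacing):
    """Columns of S_tilde: local element offsets of an n-element linear array before the orientation offset."""
    idx = (np.arange(n) - (n - 1) / 2) * spacing
    return np.vstack([idx, np.zeros(n), np.zeros(n)])          # shape (3, n)

spacing = 0.005                                                # assumed element spacing [m]
p_B, Phi_B = np.array([1.5, 1.0, 4.0]), np.array([1.1, 2.2, 0.7])
p_U, Phi_U = np.array([2.6, 2.15, 5.1]), np.array([0.1, 0.2, 0.1])

S_tilde_B, S_tilde_U = ula_local(100, spacing), ula_local(4, spacing)   # N_B = 100, N_U = 4
S_B = orientation_matrix(Phi_B) @ S_tilde_B                    # s_b = Q_B s~_b
S_U = orientation_matrix(Phi_U) @ S_tilde_U                    # s_u = Q_U s~_u
P_b = p_B[:, None] + S_B                                       # global element positions p_b = p_B + s_b
P_u = p_U[:, None] + S_U                                       # global element positions p_u = p_U + s_u

d_BU = np.linalg.norm(p_U - p_B)
Delta_BU = (p_U - p_B) / d_BU                                  # unit direction vector from source to destination
```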
The element in the \(u^{\text{th}}\) row and \(b^{\text{th}}\) column of the channel matrix \(\mathbf{H}\) is \([\mathbf{H}]_{[u,b]}=\beta e^{-j2\pi f_{c}\tau_{bu}}\). Here, \(\beta=\beta_{\rm R}+j\beta_{\rm I}\) is the complex path gain, \(f_{c}\) is the operating frequency, and \(\tau_{bu}\) is the propagation delay from the \(b^{\text{th}}\) transmit antenna located at \(\mathbf{p}_{b}\) on the source's antenna array to the receive antenna located at \(\mathbf{p}_{u}\) on the destination's antenna array. Now, the signal received at the destination's \(u^{\text{th}}\) receive antenna during the \(t^{\text{th}}\) transmission is \[\mathbf{y}_{t,u}=\sum_{b=1}^{N_{B}}\sum_{d=1}^{N_{D}}[\mathbf{F}_{t}]_{[b,d]}[\mathbf{x}]_{[d]}[\mathbf{H}]_{[u,b]}+[\mathbf{n}_{t}]_{[u]}. \tag{2}\] The definition of the delay as \(\tau_{bu}=\frac{\left\|\mathbf{p}_{u}-\mathbf{p}_{b}\right\|}{c}\) incorporates any spherical wavefront curvature present in the signal received at the destination. When the destination experiences substantial wavefront curvature, it is said to be located within the near-field propagation regime. At sufficiently large distances between the destination and the source, the spherical wavefront can be approximated by a plane wave. With this plane-wave approximation, the delay can be approximated as \(\tau_{bu}=\tau_{BU}+\Delta_{BU}^{\rm T}(\mathbf{s}_{u}-\mathbf{s}_{b})/c\). When this approximation holds, the destination is said to be located within the far-field propagation regime. The boundary between the near- and far-field propagation regimes is the Fraunhofer distance, which can be computed as \(d_{\rm f}=2D^{2}/\lambda\), with \(\lambda\) denoting the wavelength of the signal and \(D\) the larger of the source and destination array diameters [5]. While (1) and (2) adequately represent the signals received in the near-field, an approximation of the signals received in the far-field can be written as \[\mathbf{y}_{t}=\beta\mathbf{a}_{UB}(\Delta_{BU})\mathbf{a}_{BU}^{\text{H}}(\Delta_{BU})e^{-j2\pi f_{c}\tau_{BU}}\mathbf{F}_{t}\mathbf{x}+\mathbf{n}_{t}, \tag{3}\] where \(\mathbf{a}_{BU}(\Delta_{BU})=e^{-j\frac{2\pi}{\lambda}\mathbf{S}_{B}^{\text{T}}\Delta_{BU}}\) and \(\mathbf{a}_{UB}(\Delta_{BU})=e^{-j\frac{2\pi}{\lambda}\mathbf{S}_{U}^{\text{T}}\Delta_{BU}}\). Figure 1: An illustration showing a source communicating with a destination. ### _Source and Destination Position and Orientation Estimation_ In this letter, we provide the different combinations of source and destination position and orientation that can be estimated through the signals received across the \(N_{U}\) antennas during the \(T\) transmissions. We determine this by evaluating the FIM under the following parameterizations: case I) \(\mathbf{\eta}=[\mathbf{p}_{U},\mathbf{\Phi}_{U},\mathbf{\beta}]^{\text{T}}\), case II) \(\mathbf{\eta}=[\mathbf{p}_{U},\mathbf{\Phi}_{B},\mathbf{\beta}]^{\text{T}}\), case III) \(\mathbf{\eta}=[\mathbf{p}_{B},\mathbf{\Phi}_{U},\mathbf{\beta}]^{\text{T}}\), and case IV) \(\mathbf{\eta}=[\mathbf{p}_{B},\mathbf{\Phi}_{B},\mathbf{\beta}]^{\text{T}}\). Here, \(\mathbf{\beta}=[\beta_{\text{R}},\beta_{\text{I}}]^{\text{T}}\). Note that the location parameters for each individual case can be collected into the vector \(\mathbf{\zeta}\). The FIM computations are carried out under three scenarios: i) far-field model with beamforming, ii) near-field model with no beamforming, and iii) near-field model with beamforming.
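The near-field delays, the Fraunhofer criterion, and the plane-wave approximation can be compared numerically. Again, this is our own sketch under stated assumptions: a 28 GHz carrier (the paper does not specify one), half-wavelength uniform linear arrays, identity orientation offsets, and the centroid positions of Section IV.

```python
import numpy as np

c, f_c = 3e8, 28e9                     # speed of light; assumed carrier frequency [Hz]
lam = c / f_c
rng = np.random.default_rng(0)

def ula(n, spacing):
    idx = (np.arange(n) - (n - 1) / 2) * spacing
    return np.stack([idx, np.zeros(n), np.zeros(n)], axis=1)   # local offsets, shape (n, 3)

S_B, S_U = ula(100, lam / 2), ula(4, lam / 2)                  # N_B = 100, N_U = 4 (orientation offsets omitted)
p_B, p_U = np.array([1.5, 1.0, 4.0]), np.array([2.6, 2.15, 5.1])
P_b, P_u = p_B + S_B, p_U + S_U                                # global element positions

# Fraunhofer distance d_f = 2 D^2 / lambda, with D the larger array diameter.
def diameter(S):
    return np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1).max()

d_f = 2 * max(diameter(S_B), diameter(S_U)) ** 2 / lam
d_BU = np.linalg.norm(p_U - p_B)
print(f"d_BU = {d_BU:.2f} m, d_f = {d_f:.1f} m, near-field: {d_BU < d_f}")

# Exact (near-field) delays tau_bu = ||p_u - p_b|| / c and channel [H]_{u,b} = beta exp(-j 2 pi f_c tau_bu).
tau = np.linalg.norm(P_u[:, None, :] - P_b[None, :, :], axis=-1) / c        # shape (N_U, N_B)
beta = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
H = beta * np.exp(-1j * 2 * np.pi * f_c * tau)

# Plane-wave (far-field) approximation: tau_bu ~ tau_BU + Delta_BU^T (s_u - s_b) / c.
Delta_BU = (p_U - p_B) / d_BU
tau_ff = d_BU / c + ((S_U @ Delta_BU)[:, None] - (S_B @ Delta_BU)[None, :]) / c
print("max plane-wave delay error [carrier cycles]:", (np.abs(tau - tau_ff) * f_c).max())

# Received signal of Eq. (1), with a random precoder standing in for the DFT codebook of Section IV.
N_D, N0 = 16, 1e-2
x = rng.standard_normal(N_D) + 1j * rng.standard_normal(N_D)
x /= np.linalg.norm(x)                                         # satisfies E{||x||^2} = 1
F_t = rng.standard_normal((100, N_D)) + 1j * rng.standard_normal((100, N_D))
n_t = np.sqrt(N0 / 2) * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
y_t = H @ F_t @ x + n_t
```

With these assumed numbers the destination lies well inside the Fraunhofer distance of the 100-element array, illustrating the near-field setting simulated in Section IV.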
Note that a fourth case, the far-field model with identity beamforming matrices across the \(T\) transmissions, is not considered. This is because the joint estimation of the source orientation, \(\mathbf{\Phi}_{B}\), and \(\mathbf{\beta}\) is not feasible under this condition (see Appendix A). ## III Information in the Received Signal To analyze the amount of location information present in the received signal, we introduce the mathematical definition of the FIM for an unknown parameter vector, \(\mathbf{\eta}\), in the following definition. **Definition 1**.: _Based on a set of observations \(\mathbf{y}\), the Fisher information of a parameter vector, \(\mathbf{\eta}\), is written as_ \[\mathbf{J}_{\mathbf{\eta}}\triangleq-\mathbb{E}_{\mathbf{y}}\left[\frac{\partial^{2}\ln\chi(\mathbf{y}|\mathbf{\eta})}{\partial\mathbf{\eta}\partial\mathbf{\eta}^{\text{T}}}\right] \tag{4}\] _where \(\mathbb{E}_{\nu}\) is the expectation taken over the random variable \(\nu\), and \(\chi(\mathbf{y}|\mathbf{\eta})\) is the likelihood of \(\mathbf{y}\) conditioned on \(\mathbf{\eta}\). We note that the error covariance matrix of an unbiased estimate, \(\hat{\mathbf{\eta}}\), of an unknown parameter vector, \(\mathbf{\eta}\), satisfies the information inequality \(\mathbb{E}_{\mathbf{y}}\left\{(\hat{\mathbf{\eta}}-\mathbf{\eta})(\hat{\mathbf{\eta}}-\mathbf{\eta})^{\text{T}}\right\}\succeq\mathbf{J}_{\mathbf{\eta}}^{-1}\)._ The FIM for the parameter vector \(\mathbf{\eta}=[\mathbf{p}_{U},\mathbf{\Phi}_{B},\mathbf{\beta}]^{\text{T}}\) has the following structure \[\mathbf{J}_{\mathbf{\eta}}\triangleq\left[\begin{array}{ccc}\mathbf{J}_{\mathbf{p}_{U}\mathbf{p}_{U}}&\mathbf{J}_{\mathbf{p}_{U}\mathbf{\Phi}_{B}}&\mathbf{J}_{\mathbf{p}_{U}\mathbf{\beta}}\\ \mathbf{J}_{\mathbf{\Phi}_{B}\mathbf{p}_{U}}&\mathbf{J}_{\mathbf{\Phi}_{B}\mathbf{\Phi}_{B}}&\mathbf{J}_{\mathbf{\Phi}_{B}\mathbf{\beta}}\\ \mathbf{J}_{\mathbf{\beta}\mathbf{p}_{U}}&\mathbf{J}_{\mathbf{\beta}\mathbf{\Phi}_{B}}&\mathbf{J}_{\mathbf{\beta}\mathbf{\beta}}\end{array}\right]\in\mathcal{R}^{8\times 8}. \tag{5}\] The submatrices in the above matrix can be computed using \(\mathbf{J}_{\mathbf{\eta}_{v}\mathbf{\eta}_{w}}\triangleq\frac{2}{\sigma^{2}}\sum_{t=1}^{T}\Re\left\{\frac{\partial\mathbf{\mu}_{t}^{\rm H}}{\partial\mathbf{\eta}_{v}}\frac{\partial\mathbf{\mu}_{t}}{\partial\mathbf{\eta}_{w}}\right\}\), where \(\mathbf{\eta}_{v}\in\mathbf{\eta}\) and \(\mathbf{\eta}_{w}\in\mathbf{\eta}\) are both dummy variables, and \(1/\sigma^{2}\) is the SNR, which incorporates the pathloss and composite noise power. The required first derivatives are presented in the following sections.
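The block-wise construction of \(\mathbf{J}_{\mathbf{\eta}}\) is easy to mirror numerically. The following sketch is ours; it uses central finite differences of the noise-free signal \(\mathbf{\mu}_{t}(\mathbf{\eta})\) in place of the closed-form derivatives of the next sections, and it assumes the Gaussian-observation expression for the submatrices given above.

```python
import numpy as np

def fim(mu_fn, eta, T, sigma2, eps=1e-7):
    """J[v, w] = (2 / sigma^2) * sum_t Re{ (d mu_t / d eta_v)^H (d mu_t / d eta_w) }.
    mu_fn(eta, t) must return the complex, noise-free received vector mu_t for the parameter vector eta."""
    n = len(eta)
    J = np.zeros((n, n))
    for t in range(T):
        cols = []
        for v in range(n):                       # Jacobian of mu_t w.r.t. eta by central differences
            e = np.zeros(n)
            e[v] = eps
            cols.append((mu_fn(eta + e, t) - mu_fn(eta - e, t)) / (2 * eps))
        D = np.stack(cols, axis=1)               # shape (len(mu_t), n)
        J += (2.0 / sigma2) * np.real(D.conj().T @ D)
    return J

# Toy usage: mu_t depends only on the complex path gain beta = eta[0] + j eta[1]; the same call
# works unchanged for the full 8-dimensional eta of (5) once mu_fn implements the signal model.
J = fim(lambda eta, t: (eta[0] + 1j * eta[1]) * np.ones(4), np.array([1.0, 0.5]), T=20, sigma2=1e-2)
print(J)
```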
### _First Derivatives under the Far-Field Model_ The first derivative of the useful part of the received signal with respect to \(\nu\in[\mathbf{p}_{B},\mathbf{p}_{U}]\) under the far-field model is \[\nabla_{\nu}\mathbf{\mu}_{t,u}=\beta e^{-j\frac{2\pi}{\lambda}\mathbf{s}_{u}^{\text{T}}\Delta_{BU}}\mathbf{a}_{BU}^{\text{H}}(\Delta_{BU})\mathbf{K}_{\nu}e^{-j2\pi f_{c}\tau_{BU}}\mathbf{F}_{t}\mathbf{x},\] where \(\mathbf{K}_{\nu}\) is expressed in (6). The first derivatives of the useful part of the received signal with respect to \(\nu\in\mathbf{\Phi}_{U}\) and \(\nu\in\mathbf{\Phi}_{B}\) under the far-field model are \[\nabla_{\nu}\mathbf{\mu}_{t}=\beta\mathbf{\tilde{P}}_{\nu}\mathbf{a}_{UB}(\Delta_{BU})\mathbf{a}_{BU}^{\text{H}}(\Delta_{BU})e^{-j2\pi f_{c}\tau_{BU}}\mathbf{F}_{t}\mathbf{x},\] \[\nabla_{\nu}\mathbf{\mu}_{t,u}=\beta e^{-j\frac{2\pi}{\lambda}\mathbf{s}_{u}^{\text{T}}\Delta_{BU}}\mathbf{a}_{BU}^{\text{H}}(\Delta_{BU})\mathbf{P}_{\nu}e^{-j2\pi f_{c}\tau_{BU}}\mathbf{F}_{t}\mathbf{x},\] respectively, where \[\mathbf{\tilde{P}}_{\nu}=\text{diag}\Bigg{[}-\frac{j2\pi}{\lambda}(\nabla_{\nu}\mathbf{S}_{U})^{\text{T}}\Bigg{[}\frac{\mathbf{p}_{U}-\mathbf{p}_{B}}{d_{BU}}\Bigg{]}\Bigg{]},\] \[\mathbf{P}_{\nu}=\text{diag}\Bigg{[}\frac{j2\pi}{\lambda}\Bigg{[}\frac{\mathbf{p}_{U}-\mathbf{p}_{B}}{d_{BU}}\Bigg{]}^{\text{T}}\nabla_{\nu}\mathbf{S}_{B}\Bigg{]}.\] Also, \(\nabla_{\mathbf{\Phi}_{B}}\mathbf{S}_{B}=\nabla_{\mathbf{\Phi}_{B}}\mathbf{Q}_{B}\tilde{\mathbf{S}}_{B}\) and \(\nabla_{\mathbf{\Phi}_{U}}\mathbf{S}_{U}=\nabla_{\mathbf{\Phi}_{U}}\mathbf{Q}_{U}\tilde{\mathbf{S}}_{U}\). Finally, the first derivatives of the useful part of the received signal with respect to the real and imaginary parts of the complex path gain under the far-field model are \[\nabla_{\beta_{\rm R}}\mathbf{\mu}_{t}=\mathbf{a}_{UB}(\Delta_{BU})\mathbf{a}_{BU}^{\text{H}}(\Delta_{BU})\mathbf{F}_{t}\mathbf{x}e^{-j2\pi f_{c}\tau_{BU}},\] \[\nabla_{\beta_{\rm I}}\mathbf{\mu}_{t}=j\mathbf{a}_{UB}(\Delta_{BU})\mathbf{a}_{BU}^{\text{H}}(\Delta_{BU})\mathbf{F}_{t}\mathbf{x}e^{-j2\pi f_{c}\tau_{BU}}.\] The above first derivatives are used to compute the submatrices with a similar structure as that shown in (5) when the far-field model is used. ### _First Derivatives under the Near-Field Model_ The first derivatives of the useful part of the received signal with respect to \(\mathbf{\eta}\) under the near-field model are \[\nabla_{\mathbf{p}_{U}}\mathbf{\mu}_{t,u}=(-j2\pi f_{c})\beta\sum_{b=1}^{N_{B}}\nabla_{\mathbf{p}_{U}}\tau_{bu}\sum_{d=1}^{N_{D}}[\mathbf{F}_{t}]_{[b,d]}[\mathbf{x}]_{[d]}e^{-j2\pi f_{c}\tau_{bu}},\] \[\nabla_{\mathbf{p}_{B}}\mathbf{\mu}_{t,u}=(-j2\pi f_{c})\beta\sum_{b=1}^{N_{B}}\nabla_{\mathbf{p}_{B}}\tau_{bu}\sum_{d=1}^{N_{D}}[\mathbf{F}_{t}]_{[b,d]}[\mathbf{x}]_{[d]}e^{-j2\pi f_{c}\tau_{bu}},\] and analogous expressions hold for the remaining entries of \(\mathbf{\eta}\). These first derivatives are used to compute the submatrices with a similar structure as that shown in (5) when the near-field model is used. After computing \(\mathbf{J}_{\mathbf{\eta}}\), to focus on the available information concerning the location parameters, we present a mathematical description of the EFIM.
**Definition 2**.: _If the FIM of a parameter \(\mathbf{\eta}=[\mathbf{\eta}_{1}^{\mathrm{T}}\ \ \mathbf{\eta}_{2}^{\mathrm{T}}]^{\mathrm{T}}\) is specified by_ \[\mathbf{J}_{\mathbf{\eta}}=\left[\begin{array}{cc}\mathbf{J}_{\mathbf{\eta}_{1}\mathbf{\eta}_{1}}&\mathbf{J}_{\mathbf{\eta}_{1}\mathbf{\eta}_{2}}\\ \mathbf{J}_{\mathbf{\eta}_{1}\mathbf{\eta}_{2}}^{\mathrm{T}}&\mathbf{J}_{\mathbf{\eta}_{2}\mathbf{\eta}_{2}}\end{array}\right], \tag{9}\] _where \(\mathbf{\eta}\in\mathbb{R}^{N},\mathbf{\eta}_{1}\in\mathbb{R}^{n},\mathbf{J}_{\mathbf{\eta}_{1}\mathbf{\eta}_{1}}\in\mathbb{R}^{n\times n},\mathbf{J}_{\mathbf{\eta}_{1}\mathbf{\eta}_{2}}\in\mathbb{R}^{n\times(N-n)}\), and \(\mathbf{J}_{\mathbf{\eta}_{2}\mathbf{\eta}_{2}}\in\mathbb{R}^{(N-n)\times(N-n)}\) with \(n<N\), then the EFIM [5] of the parameter of interest \(\mathbf{\eta}_{1}\) is given by_ \[\mathbf{J}_{\mathbf{\eta}_{1}}^{\mathrm{e}}=\mathbf{J}_{\mathbf{\eta}_{1}\mathbf{\eta}_{1}}-\mathbf{J}_{\mathbf{\eta}_{1}\mathbf{\eta}_{2}}\mathbf{J}_{\mathbf{\eta}_{2}\mathbf{\eta}_{2}}^{-1}\mathbf{J}_{\mathbf{\eta}_{1}\mathbf{\eta}_{2}}^{\mathrm{T}}. \tag{10}\] Using Definition 2, the EFIM of the parameter vector \(\mathbf{\eta}\) is computed for different parameters of interest. For example, the EFIM when the parameter of interest is \(\mathbf{\zeta}=[\mathbf{p}_{U},\Phi_{U}]^{\mathrm{T}}\) is \(\mathbf{J}_{\mathbf{\zeta}}^{\mathrm{e}}\in\mathcal{R}^{6\times 6}\). Here, the nuisance parameter is the complex path gain. ## IV Results In this section, we use numerical simulations to find out which combinations of position and orientation parameters can be estimated - a parameter, \(\mathbf{\zeta}\), can be estimated if the corresponding EFIM, \(\mathbf{J}_{\mathbf{\zeta}}^{\mathrm{e}}\), is positive definite [5]. We also provide numerical position error bound (PEB) and orientation error bound (OEB) results for the case in which the source orientation and destination position are the unknown parameters. Our simulation framework consists of a source whose centroid is located at \(\mathbf{p}_{B}=[1.5,1.0,4.0]^{\mathrm{T}}\) with the orientation angles \(\mathbf{\Phi}_{B}=[1.1,2.2,0.7]^{\mathrm{T}}\). The position vectors are in meters, and the orientation vectors are in radians. The source has \(N_{B}=100\) antennas and the following number of transmit beams are considered \(N_{D}\in[16,32,48,64]^{\mathrm{T}}\). For each simulation, \(T=20\) symbols are transmitted, and the beamforming matrix \(\mathbf{F}_{t}\in\mathcal{C}^{N_{B}\times N_{D}}\) changes during each of the \(T\) transmit symbols. The rows of this beamforming matrix are selected from a discrete Fourier transform-based (DFT) codebook. The destination is located at \(\mathbf{p}_{U}=[2.6,2.15,5.1]^{\mathrm{T}}\) with the orientation angles \(\mathbf{\Phi}_{U}=[0.1,0.2,0.1]^{\mathrm{T}}\). The Fraunhofer distance indicates that the destination is experiencing near-field propagation. The incorrect case when the far-field model is applied in this near-field simulation setup is termed "far-field." The correct case when the near-field model is used is termed "near-field." With this simulation setup, we generate Table I. This table highlights different combinations of the source and destination location that can be estimated. The "not applicable" term is used to highlight the fact that the parameter is known.
When the term \(3\)D is used, it means that the \(3\)D version of that parameter can be estimated, and if the \(3\)D version of the parameter can be estimated, all lower dimensions can also be estimated. As evident in Table I, it is impossible to estimate either the \(3\)D position coordinates or the \(3\)D orientation angles with only the signal from the line of sight (LOS) path when the far-field model is incorrectly applied to the near-field setup. However, if the near-field setup is correctly applied, estimating the \(3\)D position coordinates or the \(3\)D orientation angles are feasible with the LOS signal even without a beamforming matrix. While a \(2\)D estimation of the source or destination's orientation angles is feasible when the far-field model is used and \(N_{U}>1\), it is important to note that estimating the source orientation angles is only possible in the far-field with beamforming (see Appendix A). This is in contrast with the near-field setup in which the estimation of the source's orientation angles is possible even with no beamforming provided that \(N_{U}>1\). In Figs. (a)a and (b)b, we present the PEB and OEB as a function of varying numbers of receive antennas. Also, in these figures, the term "FF" is used to distinguish the incorrect case when the far-field model is applied to the study from the case when the near-field model is correctly applied to the study. As expected, the spherical wavefront in the near-field model results in more accurate localization. From the figures, the spherical wavefront is more advantageous for the estimation of the orientation. ## V Conclusion This paper has examined the estimation of different combinations of a single-source and single destination's position and orientation. Through a study of the FIM, we have shown that while the source or destination's \(3\)D orientation can be jointly estimated with the source or destination's \(3\)D position in the near-field propagation regime, only the source or destination's \(2\)D orientation can be jointly estimated with the source or destination's \(2\)D position in the far-field propagation regime. Also, while without beamforming in the near-field, the source's \(3\)D orientation can be estimated, the source's \(2\)D orientation angles can not be estimated without beamforming in the far-field. Finally, a simulation of the PEB and OEB shows that the spherical information present in the near-field is much more useful for estimating orientation information. ## Appendix ### _Analysis of Joint Estimation of \([\mathbf{\Phi}_{B},\mathbf{\beta}]\) under the Far-Field Model_ We start the proof by dropping the subscript \(t\) and using the identity beamforming matrix across the \(T\) transmissions. 
The FIM, \(\mathbf{J}_{\mathbf{\eta}}\), under the parameterization \(\mathbf{\eta}=[\mathbf{\Phi}_{B},\mathbf{\beta}]^{\mathrm{T}}\), is obtained by using the appropriate first derivatives in Definition 1, and it has the following structure \[\mathbf{J}_{\mathbf{\eta}}=\left[\begin{array}{ccc}\mathbf{J}_{\mathbf{\Phi}_{B}}&\mathbf{J}_{[\mathbf{\Phi}_{B},\beta_{\rm R}]}&\mathbf{J}_{[\mathbf{\Phi}_{B},\beta_{\rm I}]}\\ \mathbf{J}_{[\mathbf{\Phi}_{B},\beta_{\rm R}]}^{\mathrm{T}}&\mathbf{J}_{\beta_{\rm R}}&0\\ \mathbf{J}_{[\mathbf{\Phi}_{B},\beta_{\rm I}]}^{\mathrm{T}}&0&\mathbf{J}_{\beta_{\rm I}}\end{array}\right],\] and the EFIM can be written as \[\mathbf{J}_{\mathbf{\Phi}_{B}}^{\mathrm{e}}=\mathbf{J}_{\mathbf{\Phi}_{B}}-[\mathbf{J}_{\beta_{\rm R}}]^{-1}\left(\mathbf{J}_{[\mathbf{\Phi}_{B},\beta_{\rm R}]}\mathbf{J}_{[\mathbf{\Phi}_{B},\beta_{\rm R}]}^{\mathrm{T}}+\mathbf{J}_{[\mathbf{\Phi}_{B},\beta_{\rm I}]}\mathbf{J}_{[\mathbf{\Phi}_{B},\beta_{\rm I}]}^{\mathrm{T}}\right), \tag{11}\] where we have used \(\mathbf{J}_{\beta_{\rm R}}=\mathbf{J}_{\beta_{\rm I}}\). Thus, \(\mathbf{J}_{\mathbf{\Phi}_{B}}^{\mathrm{e}}=\mathbf{J}_{\mathbf{\Phi}_{B}}-[\mathbf{J}_{\beta_{\rm R}}]^{-1}\mathbf{J}_{\beta_{\rm R}}\mathbf{J}_{\mathbf{\Phi}_{B}}\); the second equation results from noticing that \(\mathbf{J}_{\beta_{\rm R}}\mathbf{J}_{\mathbf{\Phi}_{B}}=\mathbf{J}_{[\mathbf{\Phi}_{B},\beta_{\rm R}]}\mathbf{J}_{[\mathbf{\Phi}_{B},\beta_{\rm R}]}^{\mathrm{T}}+\mathbf{J}_{[\mathbf{\Phi}_{B},\beta_{\rm I}]}\mathbf{J}_{[\mathbf{\Phi}_{B},\beta_{\rm I}]}^{\mathrm{T}}\). The proof follows as \(\mathbf{J}_{\mathbf{\Phi}_{B}}^{\mathrm{e}}=0\). Hence, with no beamforming, the source orientation cannot be estimated with the far-field propagation model.
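Definition 2, the positive-definiteness test used in Section IV, and the PEB/OEB can all be computed in a few lines. The sketch below is ours; it assumes the FIM is ordered with the \(n\) parameters of interest first and the nuisance parameters last, and that the bound is evaluated for a \(\mathbf{\zeta}\) that stacks a 3D position block before a 3D orientation block.

```python
import numpy as np

def efim(J, n):
    """Effective FIM of the first n parameters: the Schur complement over the nuisance block."""
    A, B, C = J[:n, :n], J[:n, n:], J[n:, n:]
    return A - B @ np.linalg.solve(C, B.T)

def estimable(J_e, tol=1e-9):
    """A parameter block can be estimated when its EFIM is positive definite."""
    return bool(np.all(np.linalg.eigvalsh(J_e) > tol))

def peb_oeb(J_e):
    """PEB [m] and OEB [rad] from the inverse EFIM of zeta = [3D position; 3D orientation]."""
    C = np.linalg.inv(J_e)
    return np.sqrt(np.trace(C[:3, :3])), np.sqrt(np.trace(C[3:6, 3:6]))
```

Applied to the FIM of \(\mathbf{\eta}=[\mathbf{\Phi}_{B},\mathbf{\beta}]^{\rm T}\) under the setting of the appendix (far-field model, identity beamforming), `estimable(efim(J, 3))` would return `False`, consistent with \(\mathbf{J}_{\mathbf{\Phi}_{B}}^{\rm e}=0\) above.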
2309.09296
Model-based Subsampling for Knowledge Graph Completion
Subsampling is effective in Knowledge Graph Embedding (KGE) for reducing overfitting caused by the sparsity in Knowledge Graph (KG) datasets. However, current subsampling approaches consider only frequencies of queries that consist of entities and their relations. Thus, the existing subsampling potentially underestimates the appearance probabilities of infrequent queries even if the frequencies of their entities or relations are high. To address this problem, we propose Model-based Subsampling (MBS) and Mixed Subsampling (MIX) to estimate their appearance probabilities through predictions of KGE models. Evaluation results on datasets FB15k-237, WN18RR, and YAGO3-10 showed that our proposed subsampling methods actually improved the KG completion performances for popular KGE models, RotatE, TransE, HAKE, ComplEx, and DistMult.
Xincan Feng, Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe
2023-09-17T15:12:50Z
http://arxiv.org/abs/2309.09296v1
# Model-based Subsampling for Knowledge Graph Completion ###### Abstract Subsampling is effective in Knowledge Graph Embedding (KGE) for reducing overfitting caused by the sparsity in Knowledge Graph (KG) datasets. However, current subsampling approaches consider only frequencies of queries that consist of entities and their relations. Thus, the existing subsampling potentially underestimates the appearance probabilities of infrequent queries even if the frequencies of their entities or relations are high. To address this problem, we propose Model-based Subsampling (MBS) and Mixed Subsampling (MIX) to estimate their appearance probabilities through predictions of KGE models. Evaluation results on datasets FB15k-237, WN18RR, and YAGO3-10 showed that our proposed subsampling methods actually improved the KG completion performances for popular KGE models, RotatE, TransE, HAKE, ComplEx, and DistMult. ## 1 Introduction A Knowledge Graph (KG) is a graph that contains entities and their relations as links. KGs are important resources for various NLP tasks, such as dialogue Moon et al. (2019), question-answering Lukovnikov et al. (2017), and natural language generation Guan et al. (2019), etc. However, covering all relations of entities in a KG by humans takes a lot of costs. Knowledge Graph Completion (KGC) tries to solve this problem by automatically completing lacking relations based on the observed ones. Letting \(e_{i}\) and \(e_{k}\) be entities, and \(r_{j}\) be their relation, KGC models predict the existence of a link \((e_{i},r_{j},e_{k})\) by filling the? in the possible links \((e_{i},r_{j},?)\) and \((?,r_{j},e_{k})\), where \((e_{i},r_{j})\) and \((r_{j},e_{k})\) are called queries, and the? are the corresponding answers. Currently, Knowledge Graph Embedding (KGE) is a dominant approach for KGC. KGE models represent entities and their relations as continuous vectors. Since the number of these vectors proportionally increases to the number of links in a KG, KGE commonly relies on Negative Sampling (NS) to reduce the computational cost in training. In NS, a KGE model learns a KG by discriminating between true links and false links created by sampling links in the KG. While NS can reduce the computational cost, it has the problem that the sampled links also reflect the bias of the original KG. As a solution, Sun et al. (2019) introduce subsampling Mikolov et al. (2013) into NS for KGE. In this usage, subsampling is a method of mitigating bias in a KG by discounting the appearance frequencies of links with high-frequent queries and reserving the appearance frequencies for links with low-frequent queries. Figure 1 shows the effectiveness of using subsampling. From this figure, we can understand that KGE models cannot perform well without subsampling on commonly used datasets such as FB15k-237 Toutanova and Figure 1: The averaged KGC performance (MRR) of KGE models1with and without subsampling on FB15k-237, WN18RR, and YAGO3-10. Chen, 2015), WN18RR Dettmers et al. (2018), and YAGO3-10 Dettmers et al. (2018). Furthermore, the improved MRR on FB15k-237, which has more sparse relations than the other datasets, indicates that subsampling actually works on the sparse dataset. However, the current subsampling approaches in KGE Sun et al. (2019); Kamigaito and Hayashi (2022) only consider the frequencies of queries. Thus, these approaches potentially underestimate the appearance probabilities of infrequent queries when the frequencies of their entities or relations are high. 
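Query frequencies of the kind discussed next (and summarized in Figure 2) can be tabulated directly from the training triples. A minimal sketch of ours, with a toy triple list standing in for the actual datasets:

```python
from collections import Counter

# Toy stand-in for the training triples (h, r, t) of FB15k-237 / WN18RR / YAGO3-10.
triples = [("e1", "r1", "e2"), ("e1", "r1", "e3"), ("e4", "r2", "e2"), ("e5", "r1", "e2")]

query_freq, entity_freq, relation_freq = Counter(), Counter(), Counter()
for h, r, t in triples:
    query_freq[(h, r)] += 1        # query (h, r, ?)
    query_freq[(r, t)] += 1        # query (?, r, t)
    entity_freq[h] += 1
    entity_freq[t] += 1
    relation_freq[r] += 1

singletons = [q for q, c in query_freq.items() if c == 1]
print(f"{len(singletons)} of {len(query_freq)} queries appear only once")
for q in singletons:               # frequencies of the entity / relation inside each single-occurrence query
    print(q, [entity_freq.get(x, 0) + relation_freq.get(x, 0) for x in q])
```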
Figure 2 shows the frequencies of entities and relations included in each query that appeared only once in training data. From the statistics, we can find that the current count-based subsampling (CBS) does not effectively use the frequencies of entities and relations in infrequent queries, although these have sufficient frequencies. Figure 2: Frequencies of entities and relations included in each query that appeared only once in training data of FB15k-237, WN18RR, and YAGO3-10\({}^{2}\). To deal with this problem, we propose Model-based Subsampling (MBS), which can handle such infrequent queries by estimating their appearance probabilities through predictions from KGE models in subsampling. Unlike in CBS, the frequencies estimated by MBS are not restricted by the observed frequencies in the training data, so we can expect an improvement in KGC performance when using MBS. In addition, we also propose Mixed Subsampling (MIX), which uses the frequencies of both CBS and MBS to boost their advantages while reducing their disadvantages. In our evaluation on the FB15k-237, WN18RR, and YAGO3-10 datasets, we applied our MBS and MIX to the popular KGE models RotatE Sun et al. (2019), TransE Bordes et al. (2013), HAKE Zhang et al. (2019), ComplEx Trouillon et al. (2016), and DistMult Yang et al. (2015). The evaluation results showed that MBS and MIX improved MRR, H@1, H@3, and H@10 over Count-based Subsampling (CBS) in each setting3. Footnote 3: Our code is available on [https://github.com/xincanfeng/ms_kge](https://github.com/xincanfeng/ms_kge). ## 2 Subsampling in KGE ### Problem Definitions and Notations We denote a link of a KG in the triplet format \((h,r,t)\). \(h\) is the head entity, \(t\) is the tail entity, and \(r\) is the relation of the head and tail entity. In a classic KG completion task, we input the query \((h,r,?)\) or \((?,r,t)\), and output the predicted head or tail entity corresponding to ? as the answer. More formally, let us denote the input query as \(x\) and its answer as \(y\), hereafter. A score function \(s_{\theta}(x,y)\) predicts \(p_{\theta}(y|x)\), the probability that a given query \(x\) is linked to an answer \(y\), based on a model \(\theta\). In general, we train \(\theta\) by predicting \(p_{\theta}(y|x)\) on \(|D|\) links, where \(D=\{(x_{1},y_{1}),\cdots,(x_{|D|},y_{|D|})\}\) is a set of observables that follow \(p_{d}(x,y)\). ### Negative Sampling in KGE Since calculating all possible \(y\) for a given \(x\) is computationally inefficient, the NS loss is commonly used for training KGE models. The NS loss in KGE, \(\ell_{kge}(\theta)\), is represented as follows: \[\ell_{kge}(\theta)\] \[= -\frac{1}{|D|}\sum_{(x,y)\in D}\Bigl{[}\log(\sigma(s_{\theta}(x,y)+\gamma))\] \[+\frac{1}{\nu}\sum_{y_{i}\sim p_{n}(y_{i}|x)}^{\nu}\log(\sigma(-s_{\theta}(x,y_{i})-\gamma))\Bigr{]}, \tag{1}\] where \(\sigma\) is the sigmoid function, \(p_{n}(y_{i}|x)\) is a noise distribution describing negative samples, \(\nu\) is the number of negative samples per positive sample \((x,y)\), and \(\gamma\) is a margin term to adjust the value range of the score function. \(p_{n}(y_{i}|x)\) has the role of adjusting the frequency of \(y_{i}\) (Kamigaito and Hayashi, 2021). ### Negative Sampling with Subsampling Subsampling (Mikolov et al., 2013) is a method to reduce the bias of training data by discounting high-frequent instances. Kamigaito and Hayashi (2022) show a general formulation that covers the currently proposed subsampling approaches in the NS loss for KGE by altering two terms, \(A_{cbs}\) and \(B_{cbs}\).
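Before turning to the subsampled variants, Eq. (1) itself can be written compactly with automatic differentiation. The PyTorch-style sketch below is ours; it assumes the positive scores \(s_{\theta}(x,y)\) and the \(\nu\) negative scores per link (already sampled from \(p_{n}(y_{i}|x)\)) are given as tensors, and does not yet apply any subsampling.

```python
import torch
import torch.nn.functional as F

def ns_loss(pos_scores, neg_scores, gamma):
    """Eq. (1): pos_scores has shape (|D|,) with s_theta(x, y) for the true links;
    neg_scores has shape (|D|, nu) with s_theta(x, y_i) for nu sampled negatives per link."""
    pos_term = F.logsigmoid(pos_scores + gamma)                # log sigma(s + gamma)
    neg_term = F.logsigmoid(-neg_scores - gamma).mean(dim=1)   # (1 / nu) * sum_i log sigma(-s_i - gamma)
    return -(pos_term + neg_term).mean()

# Toy check: |D| = 8 links, nu = 4 negatives per link, margin gamma = 9.0.
loss = ns_loss(torch.randn(8), torch.randn(8, 4), gamma=9.0)
print(loss.item())
```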
In that general formulation, the NS loss in KGE with subsampling, \(\ell_{cbs}(\theta)\), is represented as follows: \[\ell_{cbs}(\theta)\] \[= -\frac{1}{|D|}\sum_{(x,y)\in D}\Bigl{[}A_{cbs}\log(\sigma(s_{\theta}(x,y)+\gamma))\] \[+\frac{1}{\nu}\sum_{y_{i}\sim p_{n}(y_{i}|x)}^{\nu}B_{cbs}\log(\sigma(-s_{\theta}(x,y_{i})-\gamma))\Bigr{]}, \tag{2}\] where \(A_{cbs}\) adjusts the frequency of a true link \((x,y)\), and \(B_{cbs}\) adjusts the query \(x\) to adjust the frequency of a false link \((x,y_{i})\). Table 1 lists the currently proposed subsampling approaches: the original subsampling for word2vec (Mikolov et al., 2013) in KGE of Sun et al. (2019) (Base), the frequency-based subsampling of Kamigaito and Hayashi (2022) (Freq), and the unique-based subsampling of Kamigaito and Hayashi (2022) (Uniq). Here, \(\#\) denotes frequency, and \(\#(x,y)\) represents the frequency of \((x,y)\). Since the frequency of each link \((x,y)\) is at most one in a KG, the previous approaches use the following back-off approximation (Katz, 1987): \[\#(x,y)\approx\frac{\#(h_{i},r_{j})+\#(r_{j},t_{k})}{2}, \tag{3}\] where \((x,y)\) corresponds to the link \((h_{i},r_{j},t_{k})\), and \((h_{i},r_{j})\) and \((r_{j},t_{k})\) are the queries. Because these approaches rely heavily on counted query frequencies, we call the above conventional subsampling methods **Count-based Subsampling (CBS)** hereafter. ## 3 Proposed Methods As shown in Equation (3), CBS approximates the frequency of a link \(\#(x,y)\) by combining the counted frequencies of entity-relation pairs. Thus, CBS cannot estimate \(\#(x,y)\) well when at least one pair's frequency is low in the approximation. This kind of situation is caused by the sparseness problem in the KG datasets. To deal with this sparseness problem, we propose the **Model-based Subsampling** method (**MBS**) and the **Mixed Subsampling** method (**MIX**), as described in the following subsections. ### Model-based Subsampling (MBS) To avoid the problem caused by low-frequent entity-relation pairs, our MBS uses the estimated probabilities from a trained model \(\theta^{\prime}\) to calculate frequencies for each triplet and query. By using \(\theta^{\prime}\), the NS loss in KGE with MBS is represented as follows: \[\ell_{mbs}(\theta;\theta^{\prime})\] \[= -\frac{1}{|D|}\sum_{(x,y)\in D}\Bigl{[}A_{mbs}(\theta^{\prime})\log(\sigma(s_{\theta}(x,y)+\gamma))\] \[+\frac{1}{\nu}\sum_{y_{i}\sim p_{n}(y_{i}|x)}^{\nu}B_{mbs}(\theta^{\prime})\log(\sigma(-s_{\theta}(x,y_{i})-\gamma))\Bigr{]}, \tag{4}\] where the weights \(A_{mbs}(\theta^{\prime})\) and \(B_{mbs}(\theta^{\prime})\) are defined analogously to \(A_{cbs}\) and \(B_{cbs}\), but with model-estimated frequencies; for example,
\[B_{mbs}(\theta^{\prime})=\left\{\begin{array}{ll}\frac{\#(x,y)_{mbs}^{-\alpha}|D|}{\sum_{(x^{\prime},y^{\prime})\in D}\#(x^{\prime},y^{\prime})_{mbs}^{-\alpha}}&\text{(Base)}\\ \frac{\#x_{mbs}^{-\alpha}|D|}{\sum_{x^{\prime}\in D}\#x_{mbs}^{\prime\,-\alpha}}&\text{(Freq)}\\ \frac{\#x_{mbs}^{-\alpha}|D|}{\sum_{x^{\prime}\in D}\#x_{mbs}^{\prime\,-\alpha}}&\text{(Uniq)}\end{array}\right. \tag{6}\] where \(\alpha\) is a temperature term to adjust the distribution on \(A_{mbs}(\theta^{\prime})\) and \(B_{mbs}(\theta^{\prime})\). The frequencies \(\#(x,y)_{mbs}\) and \(\#x_{mbs}\), estimated by using \(score_{\theta^{\prime}}(x,y)\), are calculated as follows: \[\#(x,y)_{mbs} =|D|p_{\theta^{\prime}}(x,y), \tag{7}\] \[\#x_{mbs} =|D|\sum_{y_{i}\in D}p_{\theta^{\prime}}(x,y_{i}), \tag{8}\] \[p_{\theta^{\prime}}(x,y) =\frac{e^{score_{\theta^{\prime}}(x,y)}}{\sum_{(x^{\prime},y^{\prime})\in D}e^{score_{\theta^{\prime}}(x^{\prime},y^{\prime})}}. \tag{9}\] Hereafter, we refer to a model pre-trained for MBS as a sub-model. In contrast to the counted frequencies in Eq. (3), \(score_{\theta^{\prime}}(x,y)\) in Eq. (9) estimates these frequencies by sub-model inference, regardless of their actual counts. Hence, we can expect MBS to deal with the sparseness problem in CBS. However, the ability of MBS depends on the sub-model, and we investigated the performance through our evaluations (§4). ### Mixed Subsampling (MIX) As discussed in the language modeling context (Neubig and Dyer, 2016), count-based and model-based frequencies have different strengths and weaknesses. To boost the advantages of CBS and MBS while mitigating their disadvantages, MIX uses a mixture of the two distributions as follows: \[\ell_{mix}(\theta;\theta^{\prime})\] \[= -\frac{1}{|D|}\sum_{(x,y)\in D}\Big{[}A_{mix}(\theta^{\prime})\log(\sigma(s_{\theta}(x,y)+\gamma))\] \[+\frac{1}{\nu}\sum_{y_{i}\sim p_{n}(y_{i}|x)}^{\nu}B_{mix}(\theta^{\prime})\log(\sigma(-s_{\theta}(x,y_{i})-\gamma))\Big{]}, \tag{10}\] where \(A_{mix}(\theta^{\prime})\) is a mixture of \(A_{cbs}\) in Eq. (2) and \(A_{mbs}(\theta^{\prime})\) in Eq. (4), and \(B_{mix}(\theta^{\prime})\) is likewise a mixture of \(B_{cbs}\) in Eq. (2) and \(B_{mbs}(\theta^{\prime})\) in Eq. (6), as follows: \[A_{mix}(\theta^{\prime}) =\lambda A_{mbs}(\theta^{\prime})+(1-\lambda)A_{cbs} \tag{11}\] \[B_{mix}(\theta^{\prime}) =\lambda B_{mbs}(\theta^{\prime})+(1-\lambda)B_{cbs} \tag{12}\] where \(\lambda\) is a hyper-parameter to adjust the ratio of MBS and CBS. Note that MIX can be interpreted as a kind of multi-task learning4. Footnote 4: See Appendix B for the details. ## 4 Evaluation and Analysis ### Settings **Datasets** We used the three commonly used datasets, FB15k-237, WN18RR, and YAGO3-10, for the evaluation. Table 2 shows the statistics for each dataset. Unlike FB15k-237 and WN18RR, the dataset of YAGO3-10 only includes entities that have at least 10 relations, which alleviates the sparseness problem of KGs. Thus, we can investigate the effectiveness of MBS and MIX on the sparseness problem by comparing performances on these datasets. **Methods** We compared five popular KGE models, RotatE, TransE, HAKE, ComplEx, and DistMult, using the subsampling methods Base, Freq, and Uniq based on the losses of CBS (§2.3), our MBS (§3.1), and MIX (§3.2). Additionally, we conducted experiments with no subsampling (None) to investigate the efficacy of subsampling. In YAGO3-10, due to our limited computational resources and the existence of tuned hyper-parameters by Sun et al. (2019); Zhang et al.
(2019), we only used RotatE and HAKE for evaluation. MetricsWe evaluated these methods using the most conventional metrics in KGC, i.e., Mean Reciprocal Rank (MRR), Hits@1 (H@1), Hits@3 (H@3), and Hits@10 (H@10). We reported the average scores in three different runs by changing their seeds5 for each metric. We also reported the standard deviations of the scores by the three runs. Footnote 5: We fixed seed numbers for the three trials in the training model and sub-model correspondingly. Note that the appearance probabilities drawn in Figure 3 all use the same seed. Implementations and Hyper-parametersFor RotatE, TransE, ComplEx, and DistMult, we followed the implementations and hyper-parameters \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Dataset** & **\#Train** & **\#Valid** & **\#Test** & **Ent** & **Rel** \\ \hline FB15K-237 & 272,115 & 17,535 & 20,466 & 14,541 & 237 \\ WN18RR & 86,835 & 3,034 & 3,134 & 40,943 & 11 \\ YAGO3-10 & 1,079,040 & 5,000 & 5,000 & 123,188 & 37 \\ \hline \hline \end{tabular} \end{table} Table 2: Datasets statistics. #: Split in terms of number of triples; Ent: Entities; Rel: Relations; Exa: Examples. reported by Sun et al. (2019). For HAKE, we inherited the setting of Zhang et al. (2019). In our experiments, the performance of subsampling is influenced by the selection of the following hyper-parameters: (1) temperature \(\alpha\); (2) \(\lambda\), the ratio of MBS against CBS. For our proposed MBS subsampling, we chose \(\alpha\) from \(\{2.0,1.0,0.5,0.1,0.05,0.01\}\) based on validation MRR. For our proposed MIX subsampling, we inherited the best \(\alpha\) in MBS. Then, we chose the mix ratio \(\lambda\) from \(\{0.1,0.3,0.5,0.7,0.9\}\) based on validation MRR. In FB15k-237 and WN18RR, we chose the subs-model from RotatE, TransE, HAKE, ComplEx, and DistMult with the setting of Base and None based on the validation MRR. In YAGO3-10, we also chose the sub-model from RotatE and HAKE, similar to FB15k-237 and WN18RR. ### Results ResultsTable 3, 4, and 5 show the KGC performances on FB15k-237, WN18RR and YAGO3-10, respectively. Note that the results of Wilcoxon signed-rank test for performance differences between MBS/MIX and CBS show statistical significance with p-values less than 0.01 in all cases when MBS/MIX outperforms CBS. As we can see, the models trained with MIX or MBS achieved the best results in all models on FB15k-237 and WN18RR. However, in YAGO3-10, HAKE with Freq in CBS outperformed the results of MBS and MIX. Considering that the pre-process of YAGO3-10 filtered out entities with less than 10 relations in the dataset, we can conclude that MBS and MIX are effective on the sparse KGs like that of FB15k-237 and WN18RR. These results are along with our expectation that MBS and MIX can improve the completion performances in sparse KGs as introduced in SS1. In individual comparison for each metric, CBS sometimes outperformed MIX or MBS. This is because the estimated frequencies in MIX and MBS rely on selected sub-models. From these results, we can understand that MIX and MBS have the potential to improve the KG completion performances by carefully choosing their sub-model. AnalysisWe analyze the remaining question, i.e., which sub-model to choose for MBS. Table 3, 4, and 5 show the selected sub-models for each MBS (See SS4.1 in details), where ComplEx dominates over other models in FB15k-237 and WN18RR. To know the reason, we depict MBS frequencies of queries that have the bottom 100 CBS frequencies in Figure 3. 
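The frequency estimates behind these comparisons can be sketched in a few lines. The code below is our own illustration of the back-off approximation of Eq. (3), the model-based frequencies of Eqs. (7) and (9), and the MIX interpolation of Eq. (11); a random score stands in for a trained sub-model \(\theta^{\prime}\), the softmax over \(D\) is computed exactly (a real implementation would have to approximate it for large KGs), and the Base-variant weight of Eq. (6) is applied to both the counted and the model-based frequencies, whereas the paper's exact CBS weights are those of its Table 1.

```python
import numpy as np
from collections import Counter

triples = [("e1", "r1", "e2"), ("e1", "r1", "e3"), ("e4", "r2", "e2"), ("e5", "r1", "e2")]
D = len(triples)

# CBS: back-off approximation of Eq. (3), #(x, y) ~ (#(h, r) + #(r, t)) / 2.
q = Counter()
for h, r, t in triples:
    q[(h, r)] += 1
    q[(r, t)] += 1
freq_cbs = np.array([(q[(h, r)] + q[(r, t)]) / 2 for h, r, t in triples])

# MBS: Eqs. (7) and (9), frequencies induced by a sub-model score (random stand-in here).
rng = np.random.default_rng(0)
scores = rng.normal(size=D)                        # stand-in for score_theta'(x, y) over all links in D
p_model = np.exp(scores) / np.exp(scores).sum()    # Eq. (9): softmax over D
freq_mbs = D * p_model                             # Eq. (7): #(x, y)_mbs = |D| p_theta'(x, y)

# Base-variant subsampling weight and the MIX interpolation of Eq. (11).
def base_weight(freq, alpha):
    w = freq ** (-alpha)
    return w * D / w.sum()

alpha, lam = 0.5, 0.5
A_cbs, A_mbs = base_weight(freq_cbs, alpha), base_weight(freq_mbs, alpha)
A_mix = lam * A_mbs + (1 - lam) * A_cbs            # Eq. (11)
print(A_cbs, A_mbs, A_mix, sep="\n")
```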
In FB15k-237, we can see several spikes of frequencies in TransE, RotatE, and \begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \\ \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Subsampling**} & \multicolumn{3}{c}{**MRR**} & \multicolumn{3}{c}{**H@1**} & \multicolumn{3}{c}{**H@3**} & \multicolumn{3}{c}{**H@10**} & \multicolumn{3}{c}{**Submodeling**} \\ \cline{3-14} & & \multicolumn{1}{c}{**Mean**} & **SD** & **Mean** & **SD** & **Mean** & **SD** & **Mean** & **SD** & **Sub-model** & \(\alpha\) & \(\lambda\) \\ \hline \multirow{6}{*}{RotatE} & None & 49.2 & 0.2 & 39.6 & 0.2 & 55.0 & 0.2 & 67.2 & 0.3 & & & \\ \cline{2-14} & \multirow{3}{*}{Base} & CBS & 49.3 & 0.1 & 39.9 & 0.1 & 54.9 & 0.3 & 67.1 & 0.2 & & & \\ \cline{2-14} & & MBS & 49.5 & 0.2 & 40.0 & 0.3 & 55.4 & 0.0 & 66.8 & 0.2 & & & \\ & & MIX & **49.8** & 0.1 & **40.4** & 0.2 & **55.6** & 0.2 & **67.2** & 0.3 & RotatE & None & 0.5 & 0.7 \\ \cline{2-14} & \multirow{3}{*}{Freq} & CBS & 49.6 & 0.1 & 40.2 & 0.1 & 55.2 & 0.1 & 67.3 & 0.1 & & & \\ \cline{2-14} & & MBS & 50.1 & 0.2 & \({}^{\dagger}\)**41.0** & 0.2 & 55.6 & 0.2 & 67.1 & 0.1 & & & \\ & & MIX & \({}^{\dagger}\)**50.2** & 0.2 & \({}^{\dagger}\)**41.0** & 0.4 & \({}^{\dagger}\)**55.8** & 0.1 & **67.5** & 0.2 & HAKE & Base & 0.5 & 0.5 \\ \cline{2-14} & \multirow{3}{*}{Uniq} & CBS & **49.8** & 0.2 & **40.3** & 0.2 & **55.4** & 0.1 & \({}^{\dagger}\)**67.6** & 0.1 & & & \\ \cline{2-14} & & MBS & 49.5 & 0.2 & 39.9 & 0.2 & 55.2 & 0.3 & 67.4 & 0.2 & RotatE & Base & 0.5 & 0.5 \\ \cline{2-14} & & MIX & 49.7 & 0.2 & **40.3** & 0.2 & **55.4** & 0.2 & 67.5 & 0.2 & RotatE & Base & 0.5 & 0.5 \\ \hline \multirow{6}{*}{HAKE} & None & 53.6 & 0.1 & 45.0 & 0.3 & 58.9 & 0.3 & 69.0 & 0.0 & & & & \\ \cline{2-14} & & CBS & **54.3** & 0.1 & **45.9** & 0.2 & **59.6** & 0.2 & **69.3** & 0.1 & & & & \\ \cline{2-14} & \multirow{3}{*}{Base} & CBS & 53.6 & 0.3 & 44.9 & 0.4 & 58.9 & 0.2 & 68.8 & 0.1 & & & & \\ \cline{2-14} & & MIX & 54.0 & 0.1 & 45.4 & 0.1 & 59.3 & 0.3 & 69.2 & 0.1 & HAKE & None & 0.1 & 0.5 \\ \cline{2-14} & \multirow{3}{*}{Freq} & CBS & 54.5 & 0.3 & 46.1 & 0.3 & 59.8 & 0.5 & 69.4 & 0.3 & & & & \\ \cline{2-14} & & MBS & **54.8** & 0.1 & 46.5 & 0.2 & **60.0** & 0.3 & **69.7** & 0.1 & RotatE & None & 0.5 & 0.1 \\ \cline{2-14} & & MIX & **54.8** & 0.1 & **46.7** & 0.1 & 59.7 & 0.2 & 69.5 & 0.1 & RotatE & None & 0.5 & 0.1 \\ \cline{2-14} & \multirow{3}{*}{Uniq} & CBS & \({}^{\dagger}\)**55.1** & 0.1 & \({}^{\dagger}\)**46.8** & 0.2 & \({}^{\dagger}\)**60.1** & 0.3 & \({}^{\dagger}\)**70.0** & 0.2 & & & & \\ \cline{2-14} & & MBS & 54.8 & 0.1 & 46.5 & 0.2 & 60.0 & 0.3 & 69.7 & 0.1 & & & & \\ \cline{2-14} & & MIX & 54.9 & 0.1 & 46.6 & 0.1 & 60.0 & 0.2 & 69.9 & 0.2 & RotatE & None & 0.5 & 0.3 \\ \hline \hline \end{tabular} \end{table} Table 5: Results on YAGO3-10. The notations are the same as the ones in Table 3. HAKE that do not exist in ComplEx. In WN18RR, the peak frequencies of ComplEx with None are larger and broader than that of other sub-models. These results indicate that models in FB15k-237 and WN18RR, respectively, encountered problems of an over and lack of smoothing, and MBS dealt with this problem. 
Because sparseness is a problem when data is small, these are along with the fact that FB15k-237 has larger training data than WN18RR. Thus, choosing a suitable sub-model for a target dataset is important in MBS. DiscussionWe discuss how sub-model and hyper-parameter choices contribute to the improvement of KGE performance apart from our method. The choice of the sub-model and the \(\alpha\) played significant roles in the observed improvements because distributions from sub-model prediction depend on each sub-model and each dataset. Since we adopted the value of \(\alpha\) used in the past state-of-the-art method of Sun et al. (2019) and Zhang et al. (2019), we believe that the performance gains of MBS are not only caused by the values of \(\alpha\). Similarly, keeping \(\lambda\) constant in the MIX strategy may lead to certain improvements depending on used sub-models and datasets. However, as shown in Appendix B, \(\lambda\) has the role of adjusting the loss of multi-task learning, and thus, it may be more sensitive compared with \(\alpha\). ## 5 Related Work Mikolov et al. (2013) originally propose the NS loss to train their word embedding model, word2vec. Trouillon et al. (2016) introduce the Figure 3: Appearance probabilities (%) of queries in CBS and MBS that have the lowest 100 CBS frequencies for each setting, sorted left to right in descending order by their CBS frequencies. NS loss to KGE to reduce training time. Sun et al. (2019) extend the NS loss for KGE by introducing a margin term, normalization of negative samples, and newly proposed their noise distribution. Kamigaito and Hayashi (2021) claim the importance of dealing with the sparseness problem of KGs through their theoretical analysis of the NS loss in KGE. Furthermore, Kamigaito and Hayashi (2022) reveal that subsampling (Mikolov et al., 2013) can alleviate the sparseness problem in the NS for KGE. Similar to these works, our work aims to investigate and extend the NS loss used in KGE to improve KG performance. ## 6 Conclusion In this paper, we propose new subsampling approaches, MBS and MIX, that can deal with the problem of low-frequent entity-relation pairs in CBS by estimating their frequencies using the sub-model prediction. Evaluation results on FB15k-237 and WN18RR showed the improvement of KGC performances by MBS and MIX. Furthermore, our analysis also revealed that selecting an appropriate sub-model for the target dataset is important for improving KGC performances. ## Limitations Utilizing our model-based subsampling requires pre-training for choosing a suitable sub-model, and thus may require more than twice the computational budget. However, since we can use a small model as a sub-model, like the use of ComplEx as a sub-model for HAKE, there is a possibility that the actual computational cost becomes less than the doubled one. For calculating CBS frequencies, we only use the one with the arithmetic mean since we inherited the conventional subsampling methods as our baseline. Thus, we can consider various replacements not covered by this paper for the operation. However, even if we carefully choose the operation, CBS is essentially difficult to induce the appropriate appearance probabilities of low-frequent queries compared with our MBS, which can use vector-space embedding. Our experiments are carried out only on FB15k-237, WN18RR, and YAGO3-10 datasets. Thus, whether our method works for larger and noisier data is to be verified. 
Furthermore, although our method is generalizable to deep learning models, our current work is conducted purely on KGE models, and whether it works for general deep learning models as well is to be verified. ## Acknowledgements This work was supported by NAIST Touch Stone, i.e., JST SPRING Grant Number JPMJSP2140, and JSPS KAKENHI Grant Numbers JP21H05054 and JP23H03458.
2305.00601
$G$-invariant Bergman kernel and geometric quantization on complex manifolds with boundary
Let $M$ be a complex manifold with boundary $X$, which admits a holomorphic Lie group $G$-action preserving $X$. We establish a full asymptotic expansion for the $G$-invariant Bergman kernel under certain assumptions. As an application, we get $G$-invariant version of Fefferman's result about regularity of biholomorphic maps on strongly pseudoconvex domains of $\mathbb C^n$. Moreover, we show that the Guillemin-Sternberg map on a complex manifold with boundary is Fredholm by developing reduction to boundary technique, which establish ``quantization commutes with reduction" in this case.
Chin-Yu Hsiao, Rung-Tzung Huang, Xiaoshan Li, Guokuan Shao
2023-04-30T23:40:28Z
http://arxiv.org/abs/2305.00601v1
# \(G\)-invariant Bergman kernel and geometric quantization on complex manifolds with boundary ###### Abstract. Let \(M\) be a complex manifold with boundary \(X\), which admits a holomorphic Lie group \(G\)-action preserving \(X\). We establish a full asymptotic expansion for the \(G\)-invariant Bergman kernel under certain assumptions. As an application, we get \(G\)-invariant version of Fefferman's result about regularity of biholomorphic maps on strongly pseudoconvex domains of \(\mathbb{C}^{n}\). Moreover, we show that the Guillemin-Sternberg map on a complex manifold with boundary is Fredholm by developing reduction to boundary technique, which establish "quantization commutes with reduction" in this case. Key words and phrases:invariant Bergman kernel, geometric quantization, moment map, Fredholm operator 2020 Mathematics Subject Classification: Primary: 32A25, 53D50, 58J40 Chin-Yu Hsiao was partially supported by Taiwan Ministry of Science and Technology projects 108-2115-M-001-012-MY5, 109-2923-M-001-010-MY4 and Academia Sinica Investigator Award Rung-Tzung Huang was supported by Taiwan Ministry of Science and Technology project 109-2115-M-008-007-MY2 and 111-2115-M-008 -003 -MY2 Xiaoshan Li was supported by National Natural Science Foundation of China (Grant No. 12271411 and 11871380) Guokuan Shao was supported by National Natural Science Foundation of China (Grant No. 12001549) and the Fundamental Research Funds for the Central Universities, Sun Yat-sen University (Grant No. 22qntd2901). **(1)** Let \(M:=\big{\{}(z_{1},z_{2},z_{3})\in\mathbb{C}^{3};\,|z_{1}|^{4}+|z_{2}|^{2}+|z_{3}|^ {2}<1\big{\}}\). \(M\) admits an \(S^{1}\)-action: \[S^{1}\times M\to M,\ \ e^{i\theta}\cdot(z_{1},z_{2},z_{3})=(e^{-i\theta}z_{1},e^{i \theta}z_{2},e^{i\theta}z_{3}).\] **(2)** Let \[M:=\Big{\{}(z_{1},z_{2},z_{3},z_{4},z_{5},z_{6})\in\mathbb{C}^{6};\,(|z_{5}|^{4 }+|z_{6}|^{2})(\sum_{j=1}^{4}|z_{j}|^{2}+z_{1}z_{3}+z_{2}z_{4}+\overline{z}_{1} \overline{z}_{3}+\overline{z}_{2}\overline{z}_{4})<1\Big{\}}.\] Then, \(M\) admits a \(G:=S^{1}\times SU(2)\) action: \[(e^{i\theta},g)\cdot z=(w_{1},w_{2},\ldots,w_{6}),\] \[(w_{1},w_{2})^{t}:=g(z_{1},z_{2})^{t},\ \ (w_{3},w_{4})^{t}:= \overline{g}(z_{3},z_{4})^{t},\ \ (w_{5},w_{6})=(e^{-i\theta}z_{5},e^{i\theta}z_{6}),\] \[g\in SU(2),\ \ e^{i\theta}\in S^{1},\ \ z\in M,\] where \(z^{t}\) denotes the transpose of \(z\). In these examples, all the domains are weakly pseudoconvex but with group action and the boundary reduced spaces of these examples are strongly pseudoconvex CR manifolds (we refer the reader to [19, Section 2.5] for the details and to Section 2.3 below for the meaning of reduced spaces). In [19], the first author, Ma and Marinescu showed that the \(G\)-invariant Szego projection is a complex Fourier integral operator if the reduced space is a strongly pseudoconvex CR manifold (the whole CR manifold can be non strongly pseudoconvex). Thus, it is quite natural and interesting to study \(G\)-invariant Bergman projection on a non strongly pseudoconvex domain with group action. This is the motivation of this work. In this work, we completely study the \(G\)-invariant Bergman projection on a domain \(M\) (can be non strongly pseudoconvex) with group \(G\) action under the assumption that the boundary reduced space with respect to the group \(G\) action is non-degenerate. We show that the \(G\)-invariant Bergman projection on a such domain is a complex Fourier integral operator. 
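As a quick sanity check (ours, not part of the paper), take \(\rho(z)=|z_{1}|^{4}+|z_{2}|^{2}+|z_{3}|^{2}-1\) as a defining function of the domain in example (1). Then \[\rho\left(e^{i\theta}\cdot z\right)=|e^{-i\theta}z_{1}|^{4}+|e^{i\theta}z_{2}|^{2}+|e^{i\theta}z_{3}|^{2}-1=|z_{1}|^{4}+|z_{2}|^{2}+|z_{3}|^{2}-1=\rho(z),\] so the \(S^{1}\)-action indeed preserves both \(M\) and its boundary \(X\); this is exactly the invariance of the defining function used in the setting formulated below.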
As an application, we get \(G\)-invariant version of Fefferman's result about regularity of biholomorphic maps on strongly pseudoconvex domains of \(\mathbb{C}^{n}\). Since the study of \(G\)-invariant Bergman projection is closely related to geometric quantization, we also study geometric quantization on complex manifolds with boundary. We now formulate our main results. We refer to Section 2 for some notations and terminology used here. Let \(M\) be a relatively compact open subset with smooth boundary \(X\) of a complex manifold \(M^{\prime}\) of dimension \(n\), \(n\geq 3\). Let \(\rho\in C^{\infty}(M^{\prime},\mathbb{R})\) be a defining function of \(X\), that is, \[X=\{x\in M^{\prime};\,\rho(x)=0\},\ \ M=\{x\in M^{\prime};\,\rho(x)<0\}\] and \(d\rho(x)\neq 0\) at every point \(x\in X\). Then the manifold \(X\) is a CR manifold with natural CR structure \(T^{1,0}X:=T^{1,0}M^{\prime}\cap\mathbb{C}TX\), where \(T^{1,0}M^{\prime}\) denotes the holomorphic tangent bundle of \(M^{\prime}\). Suppose that \(M^{\prime}\) admits a compact \(d\)-dimensional Lie group \(G\) action. Let \(\mathfrak{g}\) denote the Lie algebra of \(G\). Let \(\mu:M^{\prime}\to\mathfrak{g}^{*}\), \(\mu_{X}:=\mu|_{X}:X\to\mathfrak{g}^{*}\) be the associated moment maps (cf. Definition 2.2). We will work in the following setting. **Assumption 1.1**.: _The \(G\)-action is holomorphic, preserves the boundary \(X\), \(0\) is a regular value of \(\mu_{X}\), \(G\) acts freely on \(\mu^{-1}(0)\cap X\), \(\mu^{-1}(0)\cap X\neq\emptyset\), the Levi form is positive or negative near \(\mu^{-1}(0)\cap X\)._ Note that the \(G\)-action is holomorphic means that the \(G\)-action preserves \(J\), where \(J\) is the complex structure map on \(T^{1,0}M^{\prime}\). The \(G\)-action preserves the boundary \(X\) means that we can find a defining function \(\rho\in C^{\infty}(M^{\prime},\mathbb{R})\) of \(X\) such that \(\rho(g\circ x)=\rho(x)\), for every \(x\in M^{\prime}\) and every \(g\in G\). We take a \(G\)-invariant Hermitian metric \(\langle\cdot\,|\,\cdot\,\rangle\) on \(\mathbb{C}TM^{\prime}\). The \(G\)-invariant Hermitian metric \(\langle\cdot\,|\,\cdot\,\rangle\) on \(\mathbb{C}TM^{\prime}\) induces a \(G\)-invariant Hermitian metric \(\langle\cdot\,|\,\cdot\,\rangle\) on \(\oplus_{1\leq p+q\leq n,p,q\in\mathbb{N}_{0}}T^{*p,q}M^{\prime}\), where \(T^{*p,q}M^{\prime}\) denotes the bundle of \((p,q)\) forms on \(M^{\prime}\). From now on, we fix a defining function \(\rho\in C^{\infty}(M^{\prime},\mathbb{R})\) of \(X\) such that \[\begin{split}&\langle\,d\rho(x)\,|\,d\rho(x)\,\rangle=1\text{ on }X,\\ &\rho(g\circ x)=\rho(x),\ \ \forall x\in M^{\prime},\ \ \forall g\in G.\end{split} \tag{1.1}\] Let \((\,\cdot\,|\,\cdot\,)_{M}\) be the \(L^{2}\) inner product on \(\Omega^{0,q}_{c}(M)\) given by \[(\,u\,|\,v\,)_{M}:=\int_{M}\langle\,u\,|\,v\,\rangle dv_{M^{\prime}},\ \ u,v\in\Omega^{0,q}_{c}(M), \tag{1.2}\] where \(dv_{M^{\prime}}\) is the volume form on \(M^{\prime}\) induced by \(\langle\,\cdot\,|\,\cdot\,\rangle\). Let \(L^{2}_{(0,q)}(M)\) be the \(L^{2}\) completion of \(\Omega^{0,q}_{c}(M)\) with respect to \((\,\cdot\,|\,\cdot\,)_{M}\). We write \(L^{2}(M):=L^{2}_{(0,0)}(M)\). Let \(\overline{\partial}:C^{\infty}(\overline{M})\to\Omega^{0,1}(\overline{M}\,)\) be the Cauchy-Riemann operator on \(\overline{M}\). 
We extend \(\overline{\partial}\) to \(L^{2}(M)\): \[\overline{\partial}:\operatorname{Dom}\overline{\partial}\subset L^{2}(M)\to L^{2}_{(0,1)}(M),\] where \(u\in\operatorname{Dom}\overline{\partial}\) if we can find \(u_{j}\in C^{\infty}(\overline{M})\), \(j=1,2,\ldots\), such that \(u_{j}\to u\) in \(L^{2}(M)\) as \(j\to+\infty\) and there is a \(v\in L^{2}_{(0,1)}(M)\) such that \(\overline{\partial}u_{j}\to v\) as \(j\to+\infty\). We set \(\overline{\partial}u:=v\). Let \[H^{0}(\overline{M}):=\operatorname{Ker}\overline{\partial}\subset L^{2}(M). \tag{1.3}\] Then \(H^{0}(\overline{M})\) is a (possibly infinite dimensional) \(G\)-representation; its \(G\)-invariant part consists of the \(G\)-invariant \(L^{2}\) holomorphic functions on \(\overline{M}\). Let \[H^{0}(\overline{M})^{G}:=\left\{u\in H^{0}(\overline{M});\,h^{*}u=u,\text{ for any }h\in G\right\}. \tag{1.4}\] Let \[B_{G}:L^{2}(M)\to H^{0}(\overline{M})^{G} \tag{1.5}\] be the orthogonal projection with respect to \((\,\cdot\,|\,\cdot\,)_{M}\) (\(G\)-invariant Bergman projection). The \(G\)-_invariant Bergman kernel_ \(B_{G}(x,y)\in\mathscr{D}^{\prime}(M\times M)\) is the distribution kernel of \(B_{G}\). We introduce some notations. For \(x\in X\), let \(\mathcal{L}_{x}\) denote the Levi form of \(X\) at \(x\) (see (2.9)) and let \(\det\mathcal{L}_{x}:=\lambda_{1}(x)\cdots\lambda_{n-1}(x)\), where \(\lambda_{j}(x)\), \(j=1,\ldots,n-1\), are the eigenvalues of \(\mathcal{L}_{x}\) with respect to \(\langle\,\cdot\,|\,\cdot\,\rangle\). For any \(\xi\in\mathfrak{g}\), we write \(\xi_{M^{\prime}}\) to denote the vector field on \(M^{\prime}\) induced by \(\xi\). That is, \[(\xi_{M^{\prime}}u)(x)=\tfrac{\partial}{\partial t}\left(u(\exp(t\xi)\circ x)\right)|_{t=0}\text{, for any }u\in C^{\infty}(M^{\prime}). \tag{1.6}\] For \(x\in M^{\prime}\), set \[\underline{\mathfrak{g}}_{x}=\text{Span }\left\{\xi_{M^{\prime}}(x);\,\xi\in\mathfrak{g}\,\right\}. \tag{1.7}\] Fix \(x\in\mu^{-1}(0)\cap X\) and consider the linear map \[\begin{array}{rcl}R_{x}:\underline{\mathfrak{g}}_{x}&\to&\underline{\mathfrak{g}}_{x},\\ u&\mapsto&R_{x}u,\ \ \langle\,R_{x}u\,|\,v\,\rangle=\langle\,d\omega_{0}(x)\,,\,Ju\wedge v\,\rangle,\end{array}\] where \(\omega_{0}(x)=J^{t}(d\rho)(x)\), \(J^{t}\) is the complex structure map on \(T^{*}M^{\prime}\). Let \(\det R_{x}=\mu_{1}(x)\cdots\mu_{d}(x)\), where \(\mu_{j}(x)\), \(j=1,2,\ldots,d\), are the eigenvalues of \(R_{x}\). Fix \(x\in\mu^{-1}(0)\cap X\) and put \(Y_{x}=\left\{g\circ x;\,g\in G\right\}\). \(Y_{x}\) is a \(d\)-dimensional submanifold of \(X\). The \(G\)-invariant Hermitian metric \(\langle\,\cdot\,|\,\cdot\,\rangle\) induces a volume form \(dv_{Y_{x}}\) on \(Y_{x}\). Put \[V_{\text{eff}}\,(x):=\int_{Y_{x}}dv_{Y_{x}}.\] The first main result of this work is the following

**Theorem 1.2**.: _With the notations and assumptions above (recall that we work with Assumption 1.1), let \(\tau\in C^{\infty}(\overline{M})\) with \(\operatorname{supp}\tau\cap\mu^{-1}(0)\cap X=\emptyset\). Then, \(\tau B_{G}\equiv 0\mod C^{\infty}(\overline{M}\times\overline{M})\), \(B_{G}\tau\equiv 0\mod C^{\infty}(\overline{M}\times\overline{M})\)._

_Let \(p\in\mu^{-1}(0)\cap X\). Let \(U\) be an open local coordinate patch of \(p\) in \(M^{\prime}\), \(D:=U\cap X\). If the Levi form is negative on \(D\), then_ \[B_{G}(z,w)\equiv 0\mod C^{\infty}((U\times U)\cap(\overline{M}\times\overline{M})). \tag{1.8}\] _Suppose that the Levi form is positive on \(D\).
Then,_ \[B_{G}(z,w)\equiv\int_{0}^{+\infty}e^{it\Psi(z,w)}b(z,w,t)dt\mod C^{\infty}((U\times U)\cap(\overline{M}\times\overline{M})), \tag{1.9}\] _where_ \[\begin{array}{l}b(z,w,t)\in S_{1,0}^{n-\frac{d}{2}}(((U\times U)\cap(\overline{M}\times\overline{M}))\times\mathbb{R}_{+}),\\ b(z,w,t)\sim\sum_{j=0}^{+\infty}t^{n-\frac{d}{2}-j}b_{j}(z,w)\ \ \text{in}\ S_{1,0}^{n-\frac{d}{2}}(((U\times U)\cap(\overline{M}\times\overline{M}))\times\mathbb{R}_{+}),\\ b_{j}(z,w)\in C^{\infty}((U\times U)\cap(\overline{M}\times\overline{M})),\ \ j=0,1,2,\ldots,\end{array} \tag{1.10}\] \[b_{0}(x,x)=2^{d}\frac{1}{V_{\text{eff}}\,(x)}\pi^{-n+\frac{d}{2}}|\det R_{x}|^{-\frac{1}{2}}|\det\mathcal{L}_{x}|,\ \ \forall x\in\mu^{-1}(0)\cap D, \tag{1.11}\] _and_ \[\begin{array}{l}\Psi(z,w)\in C^{\infty}((U\times U)\cap(\overline{M}\times\overline{M})),\ \ \text{Im}\,\Psi\geq 0,\\ \Psi(z,z)=0,\ z\in\mu^{-1}(0)\cap D,\\ \text{Im}\,\Psi(z,w)>0\ \text{if}\ (z,w)\notin\text{diag}\,((\mu^{-1}(0)\cap D)\times(\mu^{-1}(0)\cap D)),\\ d_{x}\Psi(x,x)=-\omega_{0}(x)-id\rho(x),\ \ d_{y}\Psi(x,x)=\omega_{0}(x)-id\rho(x),\ \ x\in\mu^{-1}(0)\cap D,\\ \Psi|_{D\times D}=\Phi,\ \text{where $\Phi$ is the phase as in [16, Theorem 1.5]}.\end{array} \tag{1.12}\] _Moreover, let \(z=(x_{1},\ldots,x_{2n-1},\rho)\) be local coordinates of \(M^{\prime}\) defined near \(p\) in \(M^{\prime}\) with \(x(p)=0\) and \(x=(x_{1},\ldots,x_{2n-1})\) are local coordinates of \(X\) defined near \(p\) in \(X\). Then,_ \[\Psi(z,w)=\Phi(x,y)-i\rho(z)(1+f(z))-i\rho(w)(1+\overline{f}(w))+O(|(z,w)|^{3})\ \text{near}\ (p,p), \tag{1.13}\] _where \(f\in C^{\infty}\), \(f=O(|z|)\)._

The above theorem lays a foundation for the study of Toeplitz quantization on complex manifolds with boundary. We refer the reader to the discussion before (2.10) for the meaning of \(F\equiv G\mod C^{\infty}((U\times U)\cap(\overline{M}\times\overline{M}))\). Before we formulate our main result about geometric quantization on complex manifolds with boundary, we give some historical remarks about geometric quantization theory. The famous geometric quantization conjecture of Guillemin and Sternberg [11] states that for a compact pre-quantizable symplectic manifold admitting a Hamiltonian action of a compact Lie group, the principle of "quantization commutes with reduction" holds. This conjecture was first proved independently by Meinrenken [24] and Vergne [32] for the case where the Lie group is abelian, and by Meinrenken [25] in the general case; then Tian-Zhang [30] gave a purely analytic proof in the general case with various generalizations, see [33] for a survey and complete references on this subject. In the case of a non-compact symplectic manifold \(M\) admitting an action of a compact Lie group \(G\), this question was solved by Ma-Zhang [22, 23] as a solution to a conjecture of Vergne in her ICM 2006 plenary lecture [34], see [21] for a survey. Paradan [29] gave a new proof, cf. also the recent work [13]. A natural choice for the quantum spaces of a compact symplectic manifold is the kernel of the Dirac operator. In [27], Ma-Zhang established the asymptotic expansion of the \(G\)-invariant Bergman kernel for a positive line bundle \(L\) over a compact symplectic manifold \(M\), and by using this asymptotic expansion they could establish the "quantization commutes with reduction" theorem when the power of the line bundle \(L\) is high enough.
In [16], the first and second authors established the asymptotic expansion of the \(G\)-invariant Szegő kernel for \((0,q)\) forms on a non-degenerate CR manifold, and they established the "quantization commutes with reduction" theorem when the CR manifold admits a circle action. The quantization of strongly pseudoconvex or more generally contact manifolds via the Szegő projector or its generalizations was developed by Boutet de Monvel and Guillemin [5] and can be applied to Kähler quantization by using the above construction (see e.g. [7, 28]). In [19], the first author, Ma and Marinescu studied the quantization of CR manifolds and the principle of "quantization commutes with reduction". An important difference between the CR setting and the Kähler/symplectic setting is that the quantum spaces in the case of compact Kähler/symplectic manifolds are finite dimensional, whereas for the compact strongly pseudoconvex CR manifolds considered there the quantum spaces, consisting of CR functions, are infinite dimensional. For manifolds with boundary, Tian-Zhang [31] extended the results in [30] to the case of a compact symplectic manifold with non-empty boundary, under the assumption that the preimage under the moment map of the regular value \(0\) in the dual of the Lie algebra does not touch the boundary. The quantum spaces considered in [31] are the kernel of the Dirac operator with Atiyah-Patodi-Singer type boundary conditions [2] and hence finite dimensional. Following the same lines as [19], in this paper we study the quantization of complex manifolds with boundary and the principle of "quantization commutes with reduction". The quantum spaces we consider are the spaces of \(L^{2}\) holomorphic functions and could be infinite dimensional. We now formulate our main results. By Assumption 1.1, \(\mu^{-1}_{X}(0)\) is a \(d\)-codimensional submanifold of \(X\). We decompose \(\mu^{-1}(0)\cap X\) into two parts \(\widehat{X}\) and \(\widetilde{X}\), on which the Levi form is positive and negative, respectively. From now on, we assume that \(\widehat{X}\) is non-empty. Let \[\widehat{X}_{G}:=\widehat{X}/G,\ \widetilde{X}_{G}=\widetilde{X}/G. \tag{1.14}\] It was proved in [19, Theorem 2.6] that \(\widehat{X}_{G}\) is a compact CR manifold. Let \[\overline{\partial}_{b}:\operatorname{Dom}\overline{\partial}_{b}\subset L^{2}(\widehat{X}_{G})\to L^{2}_{(0,1)}(\widehat{X}_{G})\] be the tangential Cauchy-Riemann operator. For every \(s\in\mathbb{R}\), let \(W^{s}(\overline{M})\) and \(W^{s}(\widehat{X}_{G})\) denote the Sobolev spaces of \(\overline{M}\) and \(\widehat{X}_{G}\) of order \(s\) (see the discussion after Definition 2.1 for the precise meaning of \(W^{s}(\overline{M})\)). Let \((\,\cdot\,|\,\cdot\,)_{\widehat{X}_{G}}\) be the \(L^{2}\) inner product on \(L^{2}(\widehat{X}_{G})\) induced naturally by \(\langle\,\cdot\,|\,\cdot\,\rangle\). For every \(s\in\mathbb{R}\), put \[H^{0}(\overline{M})_{s}:=\left\{u\in W^{s}(\overline{M});\,\overline{\partial}u=0\text{ in the sense of distributions}\right\},\] \[H^{0}_{b}(\widehat{X}_{G})_{s}:=\left\{u\in W^{s}(\widehat{X}_{G});\,\overline{\partial}_{b}u=0\text{ in the sense of distributions}\right\}, \tag{1.15}\] \[H^{0}(\overline{M})^{G}_{s}:=\left\{u\in H^{0}(\overline{M})_{s};\,h^{*}u=u\text{ in the sense of distributions for all }h\in G\right\}.\] We write \(H^{0}_{b}(\widehat{X}_{G}):=H^{0}_{b}(\widehat{X}_{G})_{0}\).
Let \(\iota_{\widehat{X}}:\widehat{X}\hookrightarrow X\) be the natural inclusion and let \(\iota^{*}_{\widehat{X}}:C^{\infty}(X)\to C^{\infty}(\widehat{X})\) be the pull-back by \(\iota_{\widehat{X}}\). Let \(\iota_{G,\widehat{X}}:C^{\infty}(\widehat{X})^{G}\to C^{\infty}(\widehat{X}_{G})\) be the natural identification. Let \[\tilde{\sigma}_{G}:H^{0}(\overline{M})^{G}\cap C^{\infty}(\overline{M})\to H^{0}_{b}(\widehat{X}_{G}),\qquad\tilde{\sigma}_{G}=\iota_{G,\widehat{X}}\circ\iota^{*}_{\widehat{X}}\circ\gamma, \tag{1.16}\] where \(\gamma\) denotes the operator of the restriction to the boundary \(X\). The map (1.16) is well defined. The map \(\tilde{\sigma}_{G}\) does not extend to a bounded operator on \(L^{2}\), so it is necessary to consider its extension to Sobolev spaces. We can check that \(\tilde{\sigma}_{G}\) extends by density to a bounded operator \[\tilde{\sigma}_{G}=\tilde{\sigma}_{G,s}:H^{0}(\overline{M})^{G}_{s}\to H^{0}_{b}(\widehat{X}_{G})_{s-\frac{d}{4}-\frac{1}{2}},\text{ for every }s\in\mathbb{R} \tag{1.17}\] (see Theorem 6.1 and Theorem 6.2 below). For every \(s\in\mathbb{R}\), put \[\operatorname{Coker}\tilde{\sigma}_{G,s}=\operatorname{Coker}\tilde{\sigma}_{G}:=\{u\in H^{0}_{b}(\widehat{X}_{G})_{s-\frac{d}{4}-\frac{1}{2}};\,(\,u\,|\,\tilde{\sigma}_{G,s}v)_{\widehat{X}_{G}}=0,\forall v\in H^{0}(\overline{M})^{G}_{s}\cap C^{\infty}(\overline{M})\}. \tag{1.18}\] The following is our second main result

**Theorem 1.3**.: _Let \(M\) be a relatively compact open subset with smooth boundary \(X\) of a complex manifold \(M^{\prime}\) of dimension \(n\), \(n\geq 3\). Let \(G\) be a compact Lie group acting on \(M^{\prime}\) such that Assumption 1.1 holds. With the notations used above, assume that \(\dim_{\,\mathbb{R}}\widehat{X}_{G}\geq 5\). Then, for every \(s\in\mathbb{R}\), the Guillemin-Sternberg map (1.17) is Fredholm. More precisely, \(\operatorname{Ker}\tilde{\sigma}_{G,s}\) and \(\mathrm{Coker}\,\tilde{\sigma}_{G,s}\) are finite dimensional subspaces of \(H^{0}(\overline{M})^{G}\cap C^{\infty}(\overline{M})^{G}\) and \(H^{0}_{b}(\widehat{X}_{G})\cap C^{\infty}(\widehat{X}_{G})\) respectively, and \(\mathrm{Ker}\,\tilde{\sigma}_{G,s}\) and \(\mathrm{Coker}\,\tilde{\sigma}_{G,s}\) are independent of \(s\)._

The assumption \(\dim_{\,\mathbb{R}}\widehat{X}_{G}\geq 5\) in Theorem 1.3 can be replaced by the assumption that \(\overline{\partial}_{b,\widehat{X}_{G}}\) has closed range, where \(\overline{\partial}_{b,\widehat{X}_{G}}\) denotes the tangential Cauchy-Riemann operator on \(\widehat{X}_{G}\). Theorem 1.3 tells us that, up to some finite dimensional spaces, the quantum space \(H^{0}(\overline{M})^{G}\) is isomorphic to the space of \(L^{2}\) CR functions on \(\widehat{X}_{G}\). Suppose that \[0\text{ is a regular value of }\mu,\,G\text{ acts freely on }\mu^{-1}(0). \tag{1.19}\] Under (1.19), \(\mu^{-1}(0)\) is a \(d\)-codimensional submanifold of \(M^{\prime}\). Let \[M^{\prime}_{G}:=\mu^{-1}(0)/G,\quad M_{G}:=(\mu^{-1}(0)\cap M)/G. \tag{1.20}\] In Theorem 2.6 below, we will show that \(M^{\prime}_{G}\) is a complex manifold and that \(M_{G}\) is a relatively compact open subset of \(M^{\prime}_{G}\) with smooth boundary \(X_{G}\). In fact, \(X_{G}=\widehat{X}_{G}\cup\widetilde{X}_{G}\) and thus the boundary \(X_{G}\) is non-degenerate; hence \(M_{G}\) is a domain in the complex manifold \(M^{\prime}_{G}\) with non-degenerate boundary.
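As a quick dimension count (an elementary observation added here for orientation; it is not part of the original formulation): since \(0\) is a regular value of \(\mu_{X}\) and \(G\) acts freely on \(\mu^{-1}(0)\cap X\), the open piece \(\widehat{X}\subset\mu^{-1}(0)\cap X\) has real dimension \(2n-1-d\), and hence

\[\dim_{\,\mathbb{R}}\widehat{X}_{G}=(2n-1-d)-d=2n-1-2d,\qquad\dim_{\,\mathbb{C}}M_{G}=n-d\ \text{ under (1.19), by Theorem 2.6 below}.\]

Consequently, the hypothesis \(\dim_{\,\mathbb{R}}\widehat{X}_{G}\geq 5\) in Theorem 1.3 and the hypothesis \(\dim_{\,\mathbb{C}}M_{G}\geq 3\) in Theorem 1.4 below are both equivalent to \(n-d\geq 3\).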
Let \(\iota:\mu^{-1}(0)\cap\overline{M}\hookrightarrow\overline{M}\) be the natural inclusion and let \(\iota^{*}:C^{\infty}(\overline{M})\to C^{\infty}(\mu^{-1}(0)\cap\overline{M})\) be the pull-back by \(\iota\). Let \(\iota_{G}:C^{\infty}(\mu^{-1}(0)\cap\overline{M})^{G}\to C^{\infty}(\overline{M}_{G})\) be the natural identification. Let \[\sigma_{G}:H^{0}(\overline{M})^{G}\cap C^{\infty}(\overline{M})\to H^{0}(\overline{M}_{G}),\qquad\sigma_{G}=\iota_{G}\circ\iota^{*}. \tag{1.21}\] The map (1.21) is well defined; see the construction of the complex reduction in Section 2.3. The map \(\sigma_{G}\) does not extend to a bounded operator on \(L^{2}\), so it is necessary to consider its extension to Sobolev spaces. Actually, we have \[\sigma_{G}=P_{M_{G}}\sigma_{1}(P^{*}P)^{-1}P^{*}\ \text{ on }H^{0}(\overline{M})^{G}\cap C^{\infty}(\overline{M}),\] where \(P\) and \(P_{M_{G}}\) are the Poisson operators on \(M\) and \(M_{G}\), respectively, and \(\sigma_{1}\) is the CR Guillemin-Sternberg map introduced in [19, (1.5)]. From [19, Theorem 5.3] and the regularity property of the Poisson operator (see Section 4), we can check that \(\sigma_{G}\) extends by density to a bounded operator \[\sigma_{G}=\sigma_{G,s}:H^{0}(\overline{M})^{G}_{s}\to H^{0}(\overline{M}_{G})_{s-\frac{d}{4}},\text{ for every }s\in\mathbb{R}. \tag{1.22}\] This operator can be thought of as a Guillemin-Sternberg map in the setting of complex manifolds with boundary. It maps the "first quantize and then reduce" space (the space of \(G\)-invariant Sobolev holomorphic functions on \(\overline{M}\)) to the "first reduce and then quantize" space (the space of Sobolev holomorphic functions on \(\overline{M}_{G}\)). Indeed, from the point of view of quantum mechanics, the Hilbert space structures play an essential role. It is natural, then, to ask whether the holomorphic Guillemin-Sternberg map (1.22) is Fredholm. Let \((\,\cdot\,|\,\cdot\,)_{M_{G}}\) be the \(L^{2}\) inner product on \(L^{2}(M_{G})\) induced naturally by \(\langle\,\cdot\,|\,\cdot\,\rangle\). For every \(s\in\mathbb{R}\), put \[\mathrm{Coker}\,\sigma_{G,s}=\mathrm{Coker}\,\sigma_{G}:=\{u\in H^{0}(\overline{M}_{G})_{s-\frac{d}{4}};\,(\,u\,|\,\sigma_{G}v)_{M_{G}}=0,\forall v\in H^{0}(\overline{M})^{G}_{s}\cap C^{\infty}(\overline{M})\}. \tag{1.23}\] The third main result of this work is the following.

**Theorem 1.4**.: _Let \(M\) be a relatively compact open subset with smooth boundary \(X\) of a complex manifold \(M^{\prime}\) of dimension \(n\), \(n\geq 3\). Let \(G\) be a compact Lie group acting on \(M^{\prime}\) such that Assumption 1.1 and (1.19) hold. With the notations used above, assume that \(\dim{{}_{\mathbb{C}}M_{G}}\geq 3\). Then, for every \(s\in{\mathbb{R}}\), the holomorphic Guillemin-Sternberg map (1.22) is Fredholm. More precisely, \(\operatorname{Ker}\sigma_{G,s}\) and \(\operatorname{Coker}\sigma_{G,s}\) are finite dimensional subspaces of \(H^{0}(\overline{M})^{G}\cap C^{\infty}(\overline{M})^{G}\) and \(H^{0}(\overline{M}_{G})\cap C^{\infty}(\overline{M}_{G})\) respectively, and \(\operatorname{Ker}\sigma_{G,s}\) and \(\operatorname{Coker}\sigma_{G,s}\) are independent of \(s\)._

It should be mentioned that the condition \(\dim{{}_{\mathbb{C}}M_{G}}\geq 3\) in Theorem 1.4 can be replaced by the condition that \(\overline{\partial}_{b,X_{G}}\) has closed range. Until further notice, we will not assume (1.19). Suppose that \(M^{\prime}\) admits another compact holomorphic Lie group action \(H\) such that \(H\) commutes with \(G\) and \(H\) preserves the boundary \(X\).
Recall that \(\mu^{-1}(0)\cap X=\widehat{X}\cup\widetilde{X}\), where the Levi form is positive on \(\widehat{X}\) and negative on \(\widetilde{X}\). Let \[\mathcal{R}=\{\mathcal{R}_{m};\,m=1,2,\ldots\}\] denote the set of all irreducible unitary representations of the group \(H\), including only one representation from each equivalence class. For each \(\mathcal{R}_{m}\), we write \(\mathcal{R}_{m}\) as a matrix \((\mathcal{R}_{m,j,k})_{j,k=1}^{d_{m}}\), where \(d_{m}\) is the dimension of \(\mathcal{R}_{m}\). Fix a Haar measure \(d\nu(h)\) on \(H\) so that \(\int_{H}d\nu(h)=1\). Given an irreducible unitary representation \(\mathcal{R}_{m}\), for every \(h\in H\) put \[\chi_{m}(h):=\operatorname{Tr}\,\left(\mathcal{R}_{m,j,k}(h)\right)_{j,k=1}^{d_{m}}=\sum_{j=1}^{d_{m}}\mathcal{R}_{m,j,j}(h).\] Let \(u\in C^{\infty}(M^{\prime})\) be a smooth function. The \(m\)-th Fourier component of \(u\) is given by \[u_{m}(x):=d_{m}\int_{H}(h^{*}u)(x)\overline{\chi_{m}(h)}d\nu(h)\in C^{\infty}(M^{\prime}).\] For every \(m\in{\mathbb{N}}\), put \[C^{\infty}_{m}(M^{\prime}):=\left\{f\in C^{\infty}(M^{\prime});\,\text{there is an $F\in C^{\infty}(M^{\prime})$ such that $f=F_{m}$ on $M^{\prime}$}\right\}.\] For every \(m\in{\mathbb{N}}\), we define \(C^{\infty}_{m}(\overline{M})\), \(C^{\infty}_{m}(\widehat{X}_{G})\) in the standard way. For every \(m\in{\mathbb{N}}\), let \[\begin{split} H^{0}(\overline{M})_{(m)}&:=H^{0}(\overline{M})\cap C^{\infty}_{m}(\overline{M}),\\ H^{0}(\overline{M})^{G}_{(m)}&:=H^{0}(\overline{M})^{G}\cap C^{\infty}_{m}(\overline{M}),\\ H^{0}_{b}(\widehat{X}_{G})_{(m)}&:=H^{0}_{b}(\widehat{X}_{G})\cap C^{\infty}_{m}(\widehat{X}_{G}).\end{split} \tag{1.24}\] Let \(\mathfrak{h}\) denote the Lie algebra of \(H\). For any \(\xi\in\mathfrak{h}\), as in (1.6), we write \(\xi_{M^{\prime},H}\) to denote the vector field on \(M^{\prime}\) induced by \(\xi\). For \(x\in M^{\prime}\), set \[\underline{\mathfrak{h}}_{x}=\operatorname{Span}\,\left\{\xi_{M^{\prime},H}(x);\,\xi\in\mathfrak{h}\,\right\}. \tag{1.25}\] We assume that \[T^{1,0}_{x}\widehat{X}\oplus T^{0,1}_{x}\widehat{X}\oplus\underline{\mathfrak{h}}_{x}={\mathbb{C}}T_{x}\widehat{X},\ \ \text{for every $x\in\widehat{X}$}, \tag{1.26}\] where \(T_{x}^{1,0}\widehat{X}:=T_{x}^{1,0}M^{\prime}\cap\mathbb{C}T_{x}\widehat{X}\), \(T_{x}^{0,1}\widehat{X}:=T_{x}^{0,1}M^{\prime}\cap\mathbb{C}T_{x}\widehat{X}\), and \(T^{1,0}M^{\prime}\) and \(T^{0,1}M^{\prime}\) denote the holomorphic tangent bundle of \(M^{\prime}\) and the anti-holomorphic tangent bundle of \(M^{\prime}\), respectively. We can repeat the proof of [15, Theorem 3.1, Appendix] with minor changes and deduce that \[\begin{split}&\dim H^{0}(\overline{M})_{(m)}<+\infty,\,\dim H^{0}(\overline{M})_{(m)}^{G}<+\infty,\,\dim H^{0}_{b}(\widehat{X}_{G})_{(m)}<+\infty,\,\text{for every }m\in\mathbb{N},\\ & H^{0}(\overline{M})=\oplus_{m\in\mathbb{N}}H^{0}(\overline{M})_{(m)},\ \ H^{0}(\overline{M})^{G}=\oplus_{m\in\mathbb{N}}H^{0}(\overline{M})_{(m)}^{G},\ \ H^{0}_{b}(\widehat{X}_{G})=\oplus_{m\in\mathbb{N}}H^{0}_{b}(\widehat{X}_{G})_{(m)}.\end{split} \tag{1.27}\] From Theorem 1.3, Theorem 1.4 and (1.27), we deduce

**Theorem 1.5**.: _With the same assumptions as in Theorem 1.3, suppose that \(M^{\prime}\) admits another compact holomorphic Lie group action \(H\) such that \(H\) commutes with \(G\) and \(H\) preserves the boundary \(X\). With the notations above, assume that (1.26) holds.
Then, for \(|m|\gg 1\), we have_ \[\dim H^{0}(\overline{M})_{(m)}^{G}=\dim H^{0}_{b}(\widehat{X}_{G})_{(m)}.\] _Assume further that (1.19) holds. Then, for \(|m|\gg 1\), we have_ \[\dim H^{0}(\overline{M})_{(m)}^{G}=\dim H^{0}(\overline{M}_{G})_{(m)}.\] As an application of Theorem 1.2, we establish a \(G\)-invariant version of Fefferman's result about the regularity of biholomorphic maps. Let \(M_{1}\), \(M_{2}\) be bounded domains in \(\mathbb{C}^{n}\). Assume that \(M_{j}\), \(j=1,2\), admit a compact holomorphic Lie group action \(G\). Let \(F:M_{1}\to M_{2}\) be a holomorphic map. \(F\) is said to be \(G\)-invariant if \(F(g\circ z)=F(z)\), for all \(z\in M_{1}\) and \(g\in G\). Then, from Theorem 1.2 and by using the argument in [3], we have (see Section 6 for the details)

**Theorem 1.6**.: _Let \(M_{1}\), \(M_{2}\) be bounded domains in \(\mathbb{C}^{n}\) with smooth boundary, \(n\geq 3\). Assume that \(M_{j}\) admits a compact holomorphic Lie group action \(G\) and Assumption 1.1 holds, for each \(j=1,2\). Let \(F:M_{1}\to M_{2}\) be a \(G\)-invariant holomorphic map. Assume that the induced map of \(F\) on the quotient space, still denoted by \(F:M_{1}/G\to M_{2}/G\), is onto and one-to-one, and that the differential of \(F\) is invertible everywhere on the regular part of \(M_{1}/G\). Then, \(F\) extends smoothly to the boundary._

At the end of this section, we give a simple example. Let \[M:=\left\{(z_{1},z_{2},z_{3},z_{4})\in\mathbb{C}^{4};\,|z_{1}|^{4}+\sum_{j=2}^{4}|z_{j}|^{2}<1\right\}.\] \(M\) admits an \(S^{1}\)-action: \[S^{1}\times M\to M,\ \ e^{i\theta}\cdot(z_{1},\ldots,z_{4})=(e^{-i\theta}z_{1},e^{i\theta}z_{2},\ldots,e^{i\theta}z_{4}).\] We can show that Assumption 1.1 holds in this example (see [19, Section 2.5] for the details). Moreover, it is straightforward to check that \(0\in\mathbb{C}^{4}\) is a critical point of \(\mu\) and hence (1.19) does not hold. In this example, we have Theorem 1.2 and Theorem 1.3. Since \(M_{G}\) has singularities, we do not know if we have Theorem 1.5. It is quite interesting to see if we have Theorem 1.5 for singular reduction. Let us consider the shell domain \[M:=\left\{(z_{1},z_{2},z_{3},z_{4})\in\mathbb{C}^{4};\,\frac{1}{2}<|z_{1}|^{4}+\sum_{j=2}^{4}|z_{j}|^{2}<1\right\}.\] Then, Assumption 1.1 and (1.19) hold in this example and we have Theorem 1.2, Theorem 1.3 and Theorem 1.5 for this example.

## 2. Preliminaries

### Some standard notations

We use the following notations: \(\mathbb{N}=\{1,2,\ldots\}\), \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\), \(\mathbb{R}\) is the set of real numbers, \(\overline{\mathbb{R}}_{+}:=\{x\in\mathbb{R};\,x\geq 0\}\). For a multiindex \(\alpha=(\alpha_{1},\ldots,\alpha_{m})\in\mathbb{N}_{0}^{m}\), we set \(|\alpha|=\alpha_{1}+\cdots+\alpha_{m}\). For \(x=(x_{1},\ldots,x_{m})\in\mathbb{R}^{m}\) we write \[x^{\alpha}=x_{1}^{\alpha_{1}}\ldots x_{m}^{\alpha_{m}},\quad\partial_{x_{j}}=\frac{\partial}{\partial x_{j}}\,,\quad\partial_{x}^{\alpha}=\partial_{x_{1}}^{\alpha_{1}}\ldots\partial_{x_{m}}^{\alpha_{m}}=\frac{\partial^{|\alpha|}}{\partial x^{\alpha}}\,,\] \[D_{x_{j}}=\frac{1}{i}\partial_{x_{j}}\,,\quad D_{x}^{\alpha}=D_{x_{1}}^{\alpha_{1}}\ldots D_{x_{m}}^{\alpha_{m}}\,,\quad D_{x}=\frac{1}{i}\partial_{x}\,.\] Let \(z=(z_{1},\ldots,z_{m})\), \(z_{j}=x_{2j-1}+ix_{2j}\), \(j=1,\ldots,m\), be coordinates of \(\mathbb{C}^{m}\), where \(x=(x_{1},\ldots,x_{2m})\in\mathbb{R}^{2m}\) are coordinates in \(\mathbb{R}^{2m}\).
We write \[z^{\alpha}=z_{1}^{\alpha_{1}}\ldots z_{m}^{\alpha_{m}}\,,\quad\overline{z}^{\alpha}=\overline{z}_{1}^{\alpha_{1}}\ldots\overline{z}_{m}^{\alpha_{m}}\,,\] \[\partial_{z_{j}}=\frac{\partial}{\partial z_{j}}=\frac{1}{2}\Big{(}\frac{\partial}{\partial x_{2j-1}}-i\frac{\partial}{\partial x_{2j}}\Big{)}\,,\quad\partial_{\overline{z}_{j}}=\frac{\partial}{\partial\overline{z}_{j}}=\frac{1}{2}\Big{(}\frac{\partial}{\partial x_{2j-1}}+i\frac{\partial}{\partial x_{2j}}\Big{)},\] \[\partial_{z}^{\alpha}=\partial_{z_{1}}^{\alpha_{1}}\ldots\partial_{z_{m}}^{\alpha_{m}}=\frac{\partial^{|\alpha|}}{\partial z^{\alpha}}\,,\quad\partial_{\overline{z}}^{\alpha}=\partial_{\overline{z}_{1}}^{\alpha_{1}}\ldots\partial_{\overline{z}_{m}}^{\alpha_{m}}=\frac{\partial^{|\alpha|}}{\partial\overline{z}^{\alpha}}\,.\] Let \(\Omega\) be a \(C^{\infty}\) orientable paracompact manifold. We let \(T\Omega\) and \(T^{*}\Omega\) denote the tangent bundle of \(\Omega\) and the cotangent bundle of \(\Omega\), respectively. The complexified tangent bundle of \(\Omega\) and the complexified cotangent bundle of \(\Omega\) will be denoted by \(\mathbb{C}T\Omega\) and \(\mathbb{C}T^{*}\Omega\), respectively. We write \(\langle\,\cdot\,,\cdot\,\rangle\) to denote the pointwise duality between \(T\Omega\) and \(T^{*}\Omega\). We extend \(\langle\,\cdot\,,\cdot\,\rangle\) bilinearly to \(\mathbb{C}T\Omega\times\mathbb{C}T^{*}\Omega\). Let \(E\) be a \(C^{\infty}\) complex vector bundle over \(\Omega\). The fiber of \(E\) at \(x\in\Omega\) will be denoted by \(E_{x}\). Let \(F\) be another vector bundle over \(\Omega\). We write \(F\boxtimes E^{*}\) to denote the vector bundle over \(\Omega\times\Omega\) with fiber over \((x,y)\in\Omega\times\Omega\) consisting of the linear maps from \(E_{y}\) to \(F_{x}\). Let \(Y\subset\Omega\) be an open set. The spaces of smooth sections of \(E\) over \(Y\) and distribution sections of \(E\) over \(Y\) will be denoted by \(C^{\infty}(Y,E)\) and \(\mathscr{D}^{\prime}(Y,E)\) respectively. Let \(\mathscr{E}^{\prime}(Y,E)\) be the subspace of \(\mathscr{D}^{\prime}(Y,E)\) whose elements have compact support in \(Y\) and set \(C^{\infty}_{c}(Y,E):=C^{\infty}(Y,E)\bigcap\mathscr{E}^{\prime}(Y,E)\). Fixing a volume form on \(Y\) and a Hermitian metric on \(E\), we get a natural \(L^{2}\) inner product \((\,\cdot\,|\,\cdot\,)\) on \(C^{\infty}_{c}(Y,E)\). Let \(L^{2}(Y,E)\) be the completion of \(C^{\infty}_{c}(Y,E)\) with respect to \((\,\cdot\,|\,\cdot\,)\); the \(L^{2}\) inner product \((\,\cdot\,|\,\cdot\,)\) extends to \(L^{2}(Y,E)\) by density. Let \(\|\cdot\|\) be the \(L^{2}\) norm corresponding to the \(L^{2}\) inner product \((\,\cdot\,|\,\cdot\,)\). For every \(s\in\mathbb{R}\), let \(L_{s}:\mathscr{D}^{\prime}(Y,E)\to\mathscr{D}^{\prime}(Y,E)\) be a properly supported classical elliptic pseudodifferential operator of order \(s\) on \(Y\) with values in \(E\). Define \[W^{s}(Y,E):=\left\{u\in\mathscr{D}^{\prime}(Y,E);\,L_{s}u\in L^{2}(Y,E)\right\}\] and for \(u\in W^{s}(Y,E)\), let \(\left\|u\right\|_{s}:=\left\|L_{s}u\right\|\). We call \(W^{s}(Y,E)\) the Sobolev space of order \(s\) of sections of \(E\) over \(Y\) (with respect to \(L_{s}\)) and for \(u\in W^{s}(Y,E)\), we call the number \(\left\|u\right\|_{s}\) the Sobolev norm of \(u\) of order \(s\) (with respect to \(L_{s}\)).
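To make this definition concrete (a standard illustration added here; it is not taken from the original text): for \(s=2\) one may take \(L_{2}=I-\Delta\), where \(\Delta\) is a Laplace-type operator acting on sections of \(E\) (for instance a Bochner Laplacian associated with a Riemannian metric on \(Y\) and a Hermitian connection on \(E\)). A differential operator is automatically properly supported, and \(I-\Delta\) is a classical elliptic operator of order \(2\), so in this case

\[W^{2}(Y,E)=\left\{u\in\mathscr{D}^{\prime}(Y,E);\,(I-\Delta)u\in L^{2}(Y,E)\right\},\qquad\left\|u\right\|_{2}=\left\|(I-\Delta)u\right\|,\]

and, by interior elliptic regularity, every \(u\in W^{2}(Y,E)\) lies locally in the usual Sobolev space of order \(2\).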
Put \[W^{s}_{\rm loc}\left(Y,E\right)=\left\{u\in\mathscr{D}^{\prime}(Y,E);\,\varphi u \in W^{s}(Y,E),\,\forall\varphi\in C^{\infty}_{c}(Y)\right\},\] \[W^{s}_{\rm comp}\left(Y,E\right)=W^{s}_{\rm loc}(Y,E)\cap\mathscr{E}^{\prime}( Y,E)\,.\] Let \(U,V\) be open sets of \(\Omega\). Let \(F:C^{\infty}_{c}(V)\to\mathscr{D}^{\prime}(U)\) be a continuous operator and let \(F(x,y)\in\mathscr{D}^{\prime}(U\times V)\) be the distribution kernel of \(F\). In this work, we will identify \(F\) with \(F(x,y)\). We say that \(F\) is a smoothing operator if \(F(x,y)\in C^{\infty}(U\times V)\). Note that the following conditions are equivalent. \[\begin{array}{l}F(x,y)\in C^{\infty}(U\times V).\\ F:\mathscr{E}^{\prime}(V)\to\mathscr{C}^{\infty}(U)\text{ is continuous.}\\ F:W^{-s}_{\rm comp}(V)\to W^{s}_{\rm loc}(U)\text{ is continuous for all }s\in\mathbb{N}_{0}.\end{array} \tag{2.1}\] For two continuous linear operators \(A,B:C^{\infty}_{c}(V)\to\mathscr{D}^{\prime}(U)\), we write \(A\equiv B\) (on \(U\times V\)) or \(A(x,y)\equiv B(x,y)\) (on \(U\times V\)) if \(A-B\) is a smoothing operator, where \(A(x,y),B(x,y)\in\mathscr{D}^{\prime}(U\times V)\) are the distribution kernels of \(A\) and \(B\), respectively. ### Complex manifolds with boundary Let \(M\) be a relatively compact open subset with smooth boundary \(X\) of a complex manifold \(M^{\prime}\) of dimension \(n\), \(n\geq 3\). Let \(\rho\in C^{\infty}(M^{\prime},\mathbb{R})\) be a defining function of \(X\), that is, \[X=\{x\in M^{\prime};\,\rho(x)=0\},\ \ M=\{x\in M^{\prime};\,\rho(x)<0\}\] and \(d\rho(x)\neq 0\) at every point \(x\in X\). Then the manifold \(X\) is a CR manifold with natural CR structure \(T^{1,0}X:=T^{1,0}M^{\prime}\cap\mathbb{C}TX\), where \(T^{1,0}M^{\prime}\) denotes the holomorphic tangent bundle of \(M^{\prime}\). Let \(T^{0,1}M^{\prime}:=\overline{T^{1,0}M^{\prime}}\), \(T^{0,1}X:=\overline{T^{1,0}X}\). Assume that \(M^{\prime}\) admits a holomorphic \(d\)-dimensional compact Lie group \(G\) action. From now on, we will use the same assumptions and notations as in Section 1. Recall that we work with Assumption 1.1. We take a \(G\)-invariant Hermitian metric \(\langle\cdot\,|\,\cdot\,\rangle\) on \(\mathbb{C}TM^{\prime}\). The \(G\)-invariant Hermitian metric \(\langle\cdot\,|\,\cdot\,\rangle\) on \(\mathbb{C}TM^{\prime}\) induces a \(G\)-invariant Hermitian metric \(\langle\cdot\,|\,\cdot\,\rangle\) on \(\mathbb{C}T^{*}M^{\prime}\). From now on, we fix a defining function \(\rho\in C^{\infty}(M^{\prime},\mathbb{R})\) of \(X\) such that \[\begin{array}{l}\langle\,d\rho(x)\,|\,d\rho(x)\,\rangle=1\text{ on }X,\\ \rho(g\circ x)=\rho(x),\ \ \forall x\in M^{\prime},\ \ \forall g\in G.\end{array} \tag{2.2}\] Let \(\frac{\partial}{\partial\rho}\in C^{\infty}(X,TM^{\prime})\) be the global real vector field on \(X\) given by \[\begin{array}{l}\left\langle\frac{\partial}{\partial\rho},d\rho\,\right\rangle =1\text{ on }X,\\ \left\langle\frac{\partial}{\partial\rho}(p)\,\big{|}\,v\,\right\rangle=0 \text{ at every }p\in X,\text{ for every }v\in T_{p}X.\end{array} \tag{2.3}\] Let \(J:TM^{\prime}\to TM^{\prime}\) be the complex structure map and put \[T=J\left(\frac{\partial}{\partial\rho}\,\right)\in C^{\infty}(X,TM^{\prime}\,). \tag{2.4}\] The \(G\)-invariant Hermitian metric \(\langle\,\cdot\,|\,\cdot\,\rangle\) on \(\mathbb{C}TM^{\prime}\) induces by duality a Hermitian metric on \(\mathbb{C}T^{*}M^{\prime}\) and Hermitian metrics on \(T^{*0,q}M^{\prime}\) the bundle of \((0,q)\) forms on \(M^{\prime}\), \(q=1,\dots,n\). 
We shall also denote these Hermitian metrics by \(\langle\,\cdot\,|\,\cdot\,\rangle\). Put \[T^{*1,0}X:=(T^{0,1}X\oplus\mathbb{C}T)^{\perp}\subset\mathbb{C}T^{*}X,\quad T^{*0,1}X:=(T^{1,0}X\oplus\mathbb{C}T)^{\perp}\subset\mathbb{C}T^{*}X.\] Put \[\omega_{0}=J^{t}(d\rho), \tag{2.5}\] where \(J^{t}\) is the complex structure map for the cotangent bundle \(T^{*}M^{\prime}\). Then, on \(X\), \(\omega_{0}\in C^{\infty}(X,T^{*}X)\) is the global one-form on \(X\) satisfying \[\begin{split}&\langle\omega_{0}(p),u\,\rangle=0,\,\text{for every}\,\,p\in X\,\,\text{and every}\,\,u\in T^{1,0}_{p}X\oplus T^{0,1}_{p}X,\\ &\langle\omega_{0},T\,\rangle=-1\,\,\text{on}\,\,X.\end{split} \tag{2.6}\] It is easy to see that under Assumption 1.1, the \(G\)-action preserves \(\omega_{0}\). We have the pointwise orthogonal decompositions: \[\begin{split}&\mathbb{C}T^{*}X=T^{*1,0}X\oplus T^{*0,1}X\oplus\{\lambda\omega_{0};\lambda\in\mathbb{C}\,\},\\ &\mathbb{C}TX=T^{1,0}X\oplus T^{0,1}X\oplus\{\lambda T;\lambda\in\mathbb{C}\,\}.\end{split} \tag{2.7}\] For \(p\in X\), the Levi form of \(X\) at \(p\) is the Hermitian quadratic form on \(T^{1,0}_{p}X\) given by \[\mathcal{L}_{p}(U,\overline{V})=-\frac{1}{2i}\langle\,d\omega_{0}(p)\,,\,U\wedge\overline{V}\,\rangle,\,\,\,\,\forall U,V\in T^{1,0}_{p}X. \tag{2.8}\] We can check that the Levi form on \(X\) defined in (2.8) is exactly \[\mathcal{L}_{p}(U,\overline{V})=\langle\partial\overline{\partial}\rho(p)\,,\,U\wedge\overline{V}\,\rangle,\,\,\,\,U,V\in T^{1,0}_{p}X. \tag{2.9}\]

**Definition 2.1**.: \(M\) _is called weakly (strongly) pseudoconvex at \(x\in X\) if \(\mathcal{L}_{x}\) is semi-positive (positive) definite on \(T^{1,0}_{x}X\). If \(\mathcal{L}_{x}\) is semi-positive (positive) definite at every point of \(X\), then \(M\) is called a weakly (strongly) pseudoconvex manifold._

Let \(A\) be a \(C^{\infty}\) vector bundle over \(M^{\prime}\). Let \(U\) be an open set in \(M^{\prime}\). Let \[\begin{split}& C^{\infty}(U\cap\overline{M},A),\,\,\,\,\mathscr{D}^{\prime}(U\cap\overline{M},A),\,\,\,\,C^{\infty}_{c}(U\cap\overline{M},A),\,\,\,\,\mathscr{E}^{\prime}(U\cap\overline{M},A),\\ & W^{s}(U\cap\overline{M},A),\,\,\,\,W^{s}_{\text{comp}}\,(U\cap\overline{M},A),\,\,\,\,W^{s}_{\text{loc}}\,(U\cap\overline{M},A),\end{split}\] (where \(s\in\mathbb{R}\)) denote the spaces of restrictions to \(U\cap\overline{M}\) of elements in \[\begin{split}& C^{\infty}(U\cap M^{\prime},A),\,\,\,\,\mathscr{D}^{\prime}(U\cap M^{\prime},A),\,\,\,\,C^{\infty}_{c}(U\cap M^{\prime},A),\,\,\,\,\mathscr{E}^{\prime}(U\cap M^{\prime},A),\\ & W^{s}(M^{\prime},A),\,\,\,\,W^{s}_{\text{comp}}\,(M^{\prime},A),\,\,\,\,W^{s}_{\text{loc}}\,(M^{\prime},A),\end{split}\] respectively. Write \[\begin{split}& L^{2}(U\cap M,A)=L^{2}(U\cap\overline{M},A):=W^{0}(U\cap\overline{M},A),\\ & L^{2}_{\text{comp}}\,(U\cap\overline{M},A):=W^{0}_{\text{comp}}\,(U\cap\overline{M},A),\,\,\,\,L^{2}_{\text{loc}}\,(U\cap\overline{M},A):=W^{0}_{\text{loc}}\,(U\cap\overline{M},A).\end{split}\] For every \(q=0,\ldots,n\), we denote \[\begin{split}\Omega^{0,q}(U\cap\overline{M}):=C^{\infty}(U\cap\overline{M},T^{*0,q}M^{\prime}),&\Omega^{0,q}(M^{\prime}):=C^{\infty}(M^{\prime},T^{*0,q}M^{\prime}),\\ \Omega^{0,q}_{c}(U\cap\overline{M}):=C^{\infty}_{c}(U\cap\overline{M},T^{*0,q}M^{\prime}),&\Omega^{0,q}_{c}(M):=C^{\infty}_{c}(M,T^{*0,q}M^{\prime}).\end{split}\] Let \(A\) and \(B\) be \(C^{\infty}\) vector bundles over \(M^{\prime}\). Let \(U\) be an open set in \(M^{\prime}\).
Let \[F_{1},F_{2}:C^{\infty}_{c}(U\cap M,A)\to\mathscr{D}^{\prime}(U\cap M,B)\] be continuous operators. Let \(F_{1}(x,y),F_{2}(x,y)\in\mathscr{D}^{\prime}((U\times U)\cap(M\times M),A\boxtimes B^{*})\) be the distribution kernels of \(F_{1}\) and \(F_{2}\) respectively. We write \[F_{1}\equiv F_{2}\ \ \text{mod}\ C^{\infty}((U\times U)\cap(\overline{M}\times\overline{M}))\] or \(F_{1}(x,y)\equiv F_{2}(x,y)\ \text{mod}\ C^{\infty}((U\times U)\cap(\overline{M}\times\overline{M}))\) if \(F_{1}(x,y)=F_{2}(x,y)+r(x,y)\), where \(r(x,y)\in C^{\infty}((U\times U)\cap(\overline{M}\times\overline{M}),A\boxtimes B^{*})\). Similarly, let \(\hat{F}_{1},\hat{F}_{2}:C^{\infty}_{c}(U\cap M,A)\to\mathscr{D}^{\prime}(U\cap X,B)\) be continuous operators. Let \(\hat{F}_{1}(x,y),\hat{F}_{2}(x,y)\in\mathscr{D}^{\prime}((U\times U)\cap(X\times M),A\boxtimes B^{*})\) be the distribution kernels of \(\hat{F}_{1}\) and \(\hat{F}_{2}\) respectively. We write \(\hat{F}_{1}\equiv\hat{F}_{2}\ \text{mod}\ C^{\infty}((U\times U)\cap(X\times\overline{M}))\) or \(\hat{F}_{1}(x,y)\equiv\hat{F}_{2}(x,y)\ \text{mod}\ C^{\infty}((U\times U)\cap(X\times\overline{M}))\) if \(\hat{F}_{1}(x,y)=\hat{F}_{2}(x,y)+\hat{r}(x,y)\), where \(\hat{r}(x,y)\in C^{\infty}((U\times U)\cap(X\times\overline{M}),A\boxtimes B^{*})\). Similarly, let \(\tilde{F}_{1},\tilde{F}_{2}:C^{\infty}_{c}(U\cap X,A)\to\mathscr{D}^{\prime}(U\cap M,B)\) be continuous operators. Let \[\tilde{F}_{1}(x,y),\tilde{F}_{2}(x,y)\in\mathscr{D}^{\prime}((U\times U)\cap(M\times X),A\boxtimes B^{*})\] be the distribution kernels of \(\tilde{F}_{1}\) and \(\tilde{F}_{2}\) respectively. We write \(\tilde{F}_{1}\equiv\tilde{F}_{2}\ \text{mod}\ C^{\infty}((U\times U)\cap(\overline{M}\times X))\) or \(\tilde{F}_{1}(x,y)\equiv\tilde{F}_{2}(x,y)\ \text{mod}\ C^{\infty}((U\times U)\cap(\overline{M}\times X))\) if \(\tilde{F}_{1}(x,y)=\tilde{F}_{2}(x,y)+\tilde{r}(x,y)\), where \(\tilde{r}(x,y)\in C^{\infty}((U\times U)\cap(\overline{M}\times X),A\boxtimes B^{*})\). Let \((\,\cdot\,|\,\cdot\,)_{M^{\prime}}\) and \((\,\cdot\,|\,\cdot\,)_{M}\) be the \(L^{2}\) inner products on \(\Omega^{0,q}_{c}(M^{\prime})\) and \(\Omega^{0,q}_{c}(M)\) respectively given by \[\begin{split}(\,u\,|\,v\,)_{M^{\prime}}:=\int_{M^{\prime}}\langle\,u\,|\,v\,\rangle dv_{M^{\prime}},& u,v\in\Omega^{0,q}_{c}(M^{\prime}),\\ (\,u\,|\,v\,)_{M}:=\int_{M}\langle\,u\,|\,v\,\rangle dv_{M^{\prime}},& u,v\in\Omega^{0,q}_{c}(M),\end{split} \tag{2.10}\] where \(dv_{M^{\prime}}\) is the volume form on \(M^{\prime}\) induced by \(\langle\,\cdot\,|\,\cdot\,\rangle\). Let \(L^{2}_{(0,q)}(M)\) and \(L^{2}_{(0,q)}(M^{\prime})\) be the \(L^{2}\) completions of \(\Omega^{0,q}_{c}(M)\) and \(\Omega^{0,q}_{c}(M^{\prime})\) with respect to \((\,\cdot\,|\,\cdot\,)_{M}\) and \((\,\cdot\,|\,\cdot\,)_{M^{\prime}}\) respectively. It is clear that \(\Omega^{0,q}(\overline{M})\subset L^{2}_{(0,q)}(M)\). We write \(L^{2}(M):=L^{2}_{(0,0)}(M)\). We extend \((\,\cdot\,|\,\cdot\,)_{M}\) and \((\,\cdot\,|\,\cdot\,)_{M^{\prime}}\) to \(L^{2}_{(0,q)}(M)\) and \(L^{2}_{(0,q)}(M^{\prime})\) in the standard way and let \(\left\|\cdot\right\|_{M}\) and \(\left\|\cdot\right\|_{M^{\prime}}\) be the corresponding \(L^{2}\) norms. Let \(T^{*0,q}X\) be the bundle of \((0,q)\) forms on \(X\).
Recall that for every \(x\in X\), we have \[T^{*0,q}_{x}X:=\left\{u\in T^{*0,q}_{x}M^{\prime};\,\langle\,u\,|\,\overline{\partial}\rho(x)\wedge g\,\rangle=0,\ \ \forall g\in T^{*0,q-1}_{x}M^{\prime}\right\}.\] Let \(\Omega^{0,q}(X)\) be the space of smooth \((0,q)\) forms on \(X\). Let \((\,\cdot\,|\,\cdot\,)_{X}\) be the \(L^{2}\) inner product on \(\Omega^{0,q}(X)\) given by \[(\,u\,|\,v\,)_{X}:=\int_{X}\langle\,u\,|\,v\,\rangle dv_{X}, \tag{2.11}\] where \(dv_{X}\) is the volume form on \(X\) induced by \(\langle\,\cdot\,|\,\cdot\,\rangle\). Let \(L^{2}_{(0,q)}(X)\) be the \(L^{2}\) completion of \(\Omega^{0,q}(X)\) with respect to \((\,\cdot\,|\,\cdot\,)_{X}\). We extend \((\,\cdot\,|\,\cdot\,)_{X}\) to \(L^{2}_{(0,q)}(X)\) in the standard way and let \(\left\|\cdot\right\|_{X}\) be the corresponding \(L^{2}\) norm. We write \(L^{2}(X):=L^{2}_{(0,0)}(X)\). Fix \(g\in G\). Let \(g^{*}:\Lambda^{r}_{x}(\mathbb{C}T^{*}M^{\prime})\to\Lambda^{r}_{g^{-1}\circ x}(\mathbb{C}T^{*}M^{\prime})\) be the pull-back map. Since \(G\) preserves \(J\), we have \[g^{*}:T^{*0,q}_{x}M^{\prime}\to T^{*0,q}_{g^{-1}\circ x}M^{\prime},\ \ \forall x\in M^{\prime}.\] Thus, for \(u\in\Omega^{0,q}(M^{\prime})\), we have \(g^{*}u\in\Omega^{0,q}(M^{\prime})\). Put \[\Omega^{0,q}(M^{\prime})^{G}:=\left\{u\in\Omega^{0,q}(M^{\prime});\,g^{*}u=u,\ \ \forall g\in G\right\}.\] For \(u\in L^{2}_{(0,q)}(M^{\prime})\) and \(g\in G\), we can also define \(g^{*}u\) in the standard way. Put \[L^{2}_{(0,q)}(M^{\prime})^{G}:=\left\{u\in L^{2}_{(0,q)}(M^{\prime});\,g^{*}u=u,\ \ \forall g\in G\right\}.\] Let \(\Omega^{0,q}(\overline{M})^{G}\) denote the space of restrictions to \(M\) of elements in \(\Omega^{0,q}(M^{\prime})^{G}\). Let \(L^{2}_{(0,q)}(M)^{G}\) be the completion of \(\Omega^{0,q}(\overline{M})^{G}\) with respect to \((\,\cdot\,|\,\cdot\,)_{M}\). Similarly, let \[\Omega^{0,q}(X)^{G}:=\{u\in\Omega^{0,q}(X);\,g^{*}u=u,\ \ \forall g\in G\}. \tag{2.12}\] Let \(L^{2}_{(0,q)}(X)^{G}\) be the completion of \(\Omega^{0,q}(X)^{G}\) with respect to \((\,\cdot\,|\,\cdot\,)_{X}\). We write \(L^{2}(X)^{G}:=L^{2}_{(0,0)}(X)^{G}\), \(L^{2}(M)^{G}:=L^{2}_{(0,0)}(M)^{G}\), \(L^{2}(M^{\prime})^{G}:=L^{2}_{(0,0)}(M^{\prime})^{G}\). For \(s\in\mathbb{R}\), we also use \(\left\|\cdot\right\|_{s,X}\) to denote the standard Sobolev norm on \(X\) of order \(s\). Let \(A\) be a vector bundle over \(M^{\prime}\). Let \(u\in W^{s}(\overline{M},A)\). We define \[\left\|u\right\|_{s,\overline{M}}:=\inf\left\{\left\|\widetilde{u}\right\|_{s,M^{\prime}};\,\widetilde{u}\in W^{s}(M^{\prime},A),\,\widetilde{u}|_{M}=u\right\}.\] We call \(\left\|u\right\|_{s,\overline{M}}\) the Sobolev norm of \(u\) of order \(s\) on \(\overline{M}\). Let \(s\) be a non-negative integer. We can also define a Sobolev norm of order \(s\) on \(\overline{M}\) as follows: Let \(x_{0}\in X\) and let \(U\) be an open neighborhood of \(x_{0}\) in \(M^{\prime}\) with local coordinates \(x=(x_{1},\ldots,x_{2n})\). Let \(u\in\mathscr{E}^{\prime}(U\cap\overline{M})\bigcap W^{s}(\overline{M},A)\). Let \(\widetilde{u}\in\mathscr{E}^{\prime}(U)\bigcap W^{s}(M^{\prime},A)\) with \(\widetilde{u}|_{M}=u\). We define the Sobolev norm of order \(s\) of \(u\) on \(\overline{M}\) by \[\left\|u\right\|^{2}_{(s),\overline{M}}:=\sum_{\alpha\in\mathbb{N}_{0}^{2n},\left|\alpha\right|\leq s}\int_{M}\lvert\partial_{x}^{\alpha}\widetilde{u}\rvert^{2}dv_{M^{\prime}}.
\tag{2.13}\] By using a partition of unity, for general \(u\in W^{s}(\overline{M},A)\), we define \(\left\|u\right\|^{2}_{(s),\overline{M}}\) in the standard way. It is well-known (see [12, Corollary B.2.6]) that the two norms \(\left\|\cdot\right\|_{s,\overline{M}}\) and \(\left\|\cdot\right\|_{(s),\overline{M}}\) are equivalent for every non-negative integer \(s\).

### The reduction of complex manifolds with boundary

As before, let \(\mathfrak{g}\) denote the Lie algebra of \(G\) and for any \(\xi\in\mathfrak{g}\), we write \(\xi_{M^{\prime}}\) to denote the vector field on \(M^{\prime}\) induced by \(\xi\) (see (1.6)). For \(x\in M^{\prime}\), recall that \(\underline{\mathfrak{g}}_{x}\) is given by (1.7).

**Definition 2.2**.: _The moment map associated to the form \(\omega_{0}\) is the map \(\mu:M^{\prime}\to\mathfrak{g}^{*}\) defined by_ \[\langle\mu(x),\xi\rangle=\omega_{0}(\xi_{M^{\prime}}(x)),\qquad x\in M^{\prime},\quad\xi\in\mathfrak{g}. \tag{2.14}\]

The proof of the following lemma is standard, cf., for example, [1, Theorem 6].

**Lemma 2.3**.: _The moment map \(\mu:M^{\prime}\to\mathfrak{g}^{*}\) is \(G\)-equivariant, so \(G\) acts on \(Y^{\prime}:=\mu^{-1}(0)\), where \(G\) acts on \(\mathfrak{g}^{*}\) through the co-adjoint representation._

Proof.: For all \(g\in G\), \(\xi\in\mathfrak{g}\) and \(x\in M^{\prime}\), we have \[\begin{split}\xi_{M^{\prime}}(g\circ x)&=\frac{d}{dt}\left(\exp(t\xi)\circ g\circ x\right)|_{t=0}\\ &=\frac{d}{dt}\left(g\circ g^{-1}\circ\exp(t\xi)\circ g\circ x\right)|_{t=0}\\ &=g_{*}\left(\operatorname{Ad}(g^{-1})\circ\xi\right)_{M^{\prime}}(x)\end{split}\] and hence \[\begin{split}\langle\mu(g\circ x),\xi\rangle&=\omega_{0}(\xi_{M^{\prime}}(g\circ x))\qquad\text{by (2.14)}\\ &=\omega_{0}\left(g_{*}\left(\operatorname{Ad}(g^{-1})\circ\xi\right)_{M^{\prime}}(x)\right)\qquad\text{by the computation above}\\ &=\omega_{0}\left(\left(\operatorname{Ad}(g^{-1})\circ\xi\right)_{M^{\prime}}(x)\right)\qquad\text{since the $G$-action preserves $\omega_{0}$}\\ &=\langle\mu(x),\operatorname{Ad}(g^{-1})\circ\xi\rangle\qquad\text{by (2.14)}\\ &=\langle\operatorname{Ad}(g)^{*}\mu(x),\xi\rangle.\end{split}\] Thus, the moment map \(\mu\) is \(G\)-equivariant. 

Note that \(\mu_{X}=\mu|_{X}\) is the CR moment map associated to \(\omega_{0}\) on \(X\), cf. [16, 19]. Suppose that \(\mu_{X}^{-1}(0)\neq\emptyset\). It is shown in [19, Lemma 2.5] that if \(G\) acts freely on \(\mu_{X}^{-1}(0)\) and the Levi form is positive on \(\mu_{X}^{-1}(0)\), then \(0\) is a regular value of \(\mu_{X}\). Set \(Y^{\prime}_{G}:=\mu^{-1}(0)/G\). In this section, we assume that (1.19) holds, so \(\mu^{-1}(0)\) is a smooth manifold. Since \(G\) acts freely on \(Y^{\prime}\), \(Y^{\prime}_{G}\) is a smooth manifold. Let \[g^{TM^{\prime}}=d\omega_{0}(\cdot,J\cdot).\] Then, \(g^{TM^{\prime}}\) is a non-degenerate quadratic form on \(TM^{\prime}\) near \(\mu^{-1}(0)\). Let \(T^{H}Y^{\prime}\) be the orthogonal complement of \(\underline{\mathfrak{g}}_{Y^{\prime}}\) in \(TY^{\prime}\) with respect to \(g^{TM^{\prime}}\), where \(\underline{\mathfrak{g}}_{Y^{\prime}}:=\underline{\mathfrak{g}}|_{Y^{\prime}}\). Then we have \[TY^{\prime}=T^{H}Y^{\prime}\oplus\underline{\mathfrak{g}}_{Y^{\prime}}.
\tag{2.16}\]

**Lemma 2.4**.: _We have_ \[JT^{H}Y^{\prime}=T^{H}Y^{\prime}=JTY^{\prime}\cap TY^{\prime}.\]

Proof.: Since \(G\) acts freely on \(Y^{\prime}\), the vector spaces \(\underline{\mathfrak{g}}_{x}\) defined in (1.7) form a vector bundle \(\underline{\mathfrak{g}}\) near \(\mu^{-1}(0)\). For \(x\in Y^{\prime}\), by (1.19) and the fact that \(d\omega_{0}(\cdot,J\cdot)\) is non-degenerate on \(T_{x}M^{\prime}\), we have that \(d\mu|_{TY^{\prime}}=0\) and that \(d\mu|_{J\underline{\mathfrak{g}}_{x}}:J\underline{\mathfrak{g}}_{x}\to\mathfrak{g}^{*}\) is surjective. Since \(\dim Y^{\prime}+\dim\underline{\mathfrak{g}}_{x}=\dim M^{\prime}\), we have \[J\underline{\mathfrak{g}}|_{Y^{\prime}}\oplus TY^{\prime}=TM^{\prime}|_{Y^{\prime}}. \tag{2.17}\] By (2.16) and (2.17), we have the \(G\)-equivariant orthogonal decomposition on \(Y^{\prime},\) \[TM^{\prime}|_{Y^{\prime}}=\underline{\mathfrak{g}}|_{Y^{\prime}}\oplus J\underline{\mathfrak{g}}|_{Y^{\prime}}\oplus T^{H}Y^{\prime}. \tag{2.18}\] Thus, from (2.18) and the fact that \(g^{TM^{\prime}}\) is \(J\)-invariant on \(TM^{\prime}|_{Y^{\prime}}\), we get \[JT^{H}Y^{\prime}=T^{H}Y^{\prime}=JTY^{\prime}\cap TY^{\prime}. \tag{2.19}\] 

Let \(\pi:Y^{\prime}\to Y^{\prime}_{G}\) and \(\iota:Y^{\prime}\hookrightarrow M^{\prime}\) be the natural quotient and inclusion, respectively, then there is a unique induced \(1\)-form \(\widetilde{\omega}_{0}\) on \(Y^{\prime}_{G}\) such that \(\pi^{*}\widetilde{\omega}_{0}=\iota^{*}\omega_{0}.\) Since \(T^{H}Y^{\prime}\) is preserved by \(J\), we can define the homomorphism \(J_{G}\) on \(TY^{\prime}_{G}\) in the following way: For \(V\in TY^{\prime}_{G},\) we denote by \(V^{H}\) its lift in \(T^{H}Y^{\prime},\) and we define \(J_{G}\) on \(Y^{\prime}_{G}\) by \[(J_{G}V)^{H}=J(V^{H}). \tag{2.20}\] Hence, we have \(J_{G}:TY^{\prime}_{G}\to TY^{\prime}_{G}\) such that \(J_{G}^{2}=-\operatorname{id}\), where \(\operatorname{id}\) denotes the identity map \(\operatorname{id}\,:\,TY^{\prime}_{G}\to TY^{\prime}_{G}.\) By complex linear extension of \(J_{G}\) to \(\mathbb{C}TY^{\prime}_{G},\) the \(\sqrt{-1}\)-eigenspace of \(J_{G}\) is given by \(T^{1,0}Y^{\prime}_{G}\,=\,\left\{V\in\mathbb{C}TY^{\prime}_{G}\,;\,J_{G}V\,=\,\sqrt{-1}V\right\}.\)

**Lemma 2.5**.: _The almost complex structure \(J_{G}\) is integrable, thus \((Y^{\prime}_{G},J_{G})\) is a complex manifold._

Proof.: Let \(u,v\in C^{\infty}(Y^{\prime}_{G},T^{1,0}Y^{\prime}_{G}),\) then we can find \(U,V\in C^{\infty}(Y^{\prime}_{G},TY^{\prime}_{G})\) such that \[u=U-\sqrt{-1}J_{G}U,\qquad v=V-\sqrt{-1}J_{G}V.\] By (2.20), we have \[u^{H}=U^{H}-\sqrt{-1}JU^{H},\quad v^{H}=V^{H}-\sqrt{-1}JV^{H}\in T^{1,0}M^{\prime}\cap\mathbb{C}TY^{\prime}.\] Since \(T^{1,0}M^{\prime}\) is integrable and it is clear that \([u^{H},v^{H}]\in\mathbb{C}TY^{\prime},\) we have \([u^{H},v^{H}]\in T^{1,0}M^{\prime}\cap\mathbb{C}TY^{\prime}.\) Hence, there is a \(W\in C^{\infty}(M^{\prime},TM^{\prime})\) such that \[[u^{H},v^{H}]=W-\sqrt{-1}JW.\] In particular, \(W,JW\in TY^{\prime}\). Thus, \(W\in TY^{\prime}\cap JTY^{\prime}=T^{H}Y^{\prime}\). Let \(Z\in TY^{\prime}_{G}\) be the vector field whose lift \(Z^{H}\in T^{H}Y^{\prime}\) satisfies \(Z^{H}=W\). Then we have \[[u,v]=\pi_{*}[u^{H},v^{H}]=\pi_{*}(Z^{H}-\sqrt{-1}JZ^{H})=Z-\sqrt{-1}J_{G}Z\in T^{1,0}Y^{\prime}_{G},\] i.e. we have \([C^{\infty}(Y^{\prime}_{G},T^{1,0}Y^{\prime}_{G}),C^{\infty}(Y^{\prime}_{G},T^{1,0}Y^{\prime}_{G})]\subset C^{\infty}(Y^{\prime}_{G},T^{1,0}Y^{\prime}_{G}).\) Therefore, \(J_{G}\) is integrable.
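To illustrate Definition 2.2 on the first example given at the end of Section 1 (a computation added here for illustration; we use the convention \((J^{t}\alpha)(\cdot)=\alpha(J\cdot)\), which is compatible with (2.6), and we take the \(S^{1}\)-invariant defining function \(\rho(z)=|z_{1}|^{4}+\sum_{j=2}^{4}|z_{j}|^{2}-1\), not normalized as in (1.1)): the \(S^{1}\)-action \(e^{i\theta}\cdot z=(e^{-i\theta}z_{1},e^{i\theta}z_{2},e^{i\theta}z_{3},e^{i\theta}z_{4})\) is generated by the vector field

\[\xi_{M^{\prime}}=-iz_{1}\partial_{z_{1}}+i\overline{z}_{1}\partial_{\overline{z}_{1}}+\sum_{j=2}^{4}\big{(}iz_{j}\partial_{z_{j}}-i\overline{z}_{j}\partial_{\overline{z}_{j}}\big{)},\]

and, identifying \(\mathfrak{g}\cong\mathbb{R}\) via this generator,

\[\langle\mu(z),\xi\rangle=\omega_{0}(\xi_{M^{\prime}}(z))=d\rho\big{(}J\xi_{M^{\prime}}(z)\big{)}=4|z_{1}|^{4}-2\big{(}|z_{2}|^{2}+|z_{3}|^{2}+|z_{4}|^{2}\big{)}.\]

Every term is of degree at least two, so \(\mu(0)=0\) and \(d\mu(0)=0\) for this choice of \(\rho\); this is consistent with the statement at the end of Section 1 that \(0\in\mathbb{C}^{4}\) is a critical point of \(\mu\), so that (1.19) fails for that domain.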
Let \(M^{\prime}_{G}:=\mu^{-1}(0)/G\), \(M_{G}:=(\mu^{-1}(0)\cap M)/G\), \(X_{G}:=(\mu^{-1}(0)\cap X)/G\). By combining Lemma 2.3 and Lemma 2.5, we have the following **Theorem 2.6**.: _Under (1.19), \(M^{\prime}_{G}\) is a complex manifold of dimension \(n-d\) and \(M_{G}\subset M^{\prime}_{G}\) is a relatively compact open subset of \(M^{\prime}_{G}\) with smooth boundary \(X_{G}\). In particular, the Levi form of \(X_{G}\) is negative or positive._ ## 3. \(G\)-invariant \(\overline{\partial}\)-Neumann problem In this section, we will study \(G\)-invariant \(\overline{\partial}\)-Neumann problem on \(M\). Until further notice, we fix \(q\in\{0,1,\ldots,n-1\}\). Let \(\overline{\partial}:\Omega^{0,q}(\overline{M})\to\Omega^{0,q+1}(\overline{M})\) be the Cauchy-Riemann operator. We extend \(\overline{\partial}\) to \(L^{2}_{(0,q)}(M)\): \[\overline{\partial}:\operatorname{Dom}\overline{\partial}\subset L^{2}_{(0,q) }(M)\to L^{2}_{(0,q+1)}(M),\] where \(u\in\operatorname{Dom}\overline{\partial}\) if we can find \(u_{j}\in\Omega^{0,q}(\overline{M})\), \(j=1,2,\ldots\), such that \(u_{j}\to u\) in \(L^{2}_{(0,q)}(M)\) as \(j\to+\infty\) and there is a \(v\in L^{2}_{(0,q+1)}(M)\) such that \(\overline{\partial}u_{j}\to v\) as \(j\to+\infty\). We set \(\overline{\partial}u:=v\). Let \[\overline{\partial}^{*}:\operatorname{Dom}\overline{\partial}^{*}\subset L^{2 }_{(0,q+1)}(M)\to L^{2}_{(0,q)}(M)\] be the Hilbert adjoint of \(\overline{\partial}\) with respect to \((\,\cdot\,|\,\cdot\,)_{M}\). The Gaffney extension of the \(\overline{\partial}\)-Neumann Laplacian is given by \[\square^{(q)}:\operatorname{Dom}\square^{(q)}\subset L^{2}_{(0,q)}(M)\to L^{ 2}_{(0,q)}(M), \tag{3.1}\] where \(\operatorname{Dom}\square^{(q)}:=\{u\in L^{2}_{(0,q)}(M);\,u\in\operatorname {Dom}\overline{\partial}\cap\operatorname{Dom}\overline{\partial}^{*}, \overline{\partial}u\in\operatorname{Dom}\overline{\partial}^{*},\overline{ \partial}^{*}u\in\operatorname{Dom}\overline{\partial}\}\) and \(\square^{(q)}u=(\overline{\partial}\,\overline{\partial}^{*}+\overline{ \partial}^{*}\,\overline{\partial})u\), \(u\in\operatorname{Dom}\square^{(q)}\). Put \[\operatorname{Ker}\square^{(q)}=\left\{u\in\operatorname{Dom}\square^{(q)}; \,\square^{(q)}u=0\right\}.\] It is easy to check that \[\operatorname{Ker}\square^{(q)}=\{u\in\operatorname{Dom}\square^{(q)};\, \overline{\partial}u=0,\overline{\partial}^{*}u=0\}. \tag{3.2}\] Since \(G\) preserves \(J\) and \((\,\cdot\,|\,\cdot\,)\) is \(G\)-invariant, it is straightforward to see that \[\begin{split}& g^{*}\overline{\partial}=\overline{\partial}g^{* }\ \ \text{on}\ \operatorname{Dom}\overline{\partial},\\ & g^{*}\overline{\partial}^{*}=\overline{\partial}^{*}g^{*}\ \ \text{on}\ \operatorname{Dom}\overline{\partial}^{*},\\ & g^{*}\square^{(q)}=\square^{(q)}g^{*}\ \ \text{on}\ \operatorname{Dom} \square^{(q)}.\end{split} \tag{3.3}\] Put \((\operatorname{Ker}\square^{(q)})^{G}:=\operatorname{Ker}\square^{(q)}\bigcap L ^{2}_{(0,q)}(M)^{G}\). Let \(\overline{\partial}\rho^{\wedge}:T^{*0,q}M^{\prime}\to T^{*0,q+1}M^{\prime}\) be the operator with wedge multiplication by \(\overline{\partial}\rho\) and let \(\overline{\partial}\rho^{\wedge,*}:T^{*0,q+1}M^{\prime}\to T^{*0,q}M^{\prime}\) be its adjoint with respect to \(\langle\,\cdot\,|\,\cdot\,\rangle\), that is, \[\langle\,\overline{\partial}\rho\wedge u\,|\,v\,\rangle=\langle\,u\,|\, \overline{\partial}\rho^{\wedge,*}v\,\rangle,\ \ u\in T^{*0,q}M^{\prime},\ \ v\in T^{*0,q+1}M^{\prime}. 
\tag{3.4}\] Denote by \(\gamma\) the operator of restriction to \(X\). By using the calculation on page 13 of [10], we can check that \[\begin{split}&\operatorname{Dom}\overline{\partial}^{*}\cap\Omega^{0,q+1}(\overline{M})=\{u\in\Omega^{0,q+1}(\overline{M});\,\gamma\overline{\partial}\rho^{\wedge,*}u=0\},\\ &\operatorname{Dom}\square^{(q)}\cap\Omega^{0,q}(\overline{M})=\{u\in\Omega^{0,q}(\overline{M});\,\gamma\overline{\partial}\rho^{\wedge,*}u=0,\gamma\overline{\partial}\rho^{\wedge,*}\overline{\partial}u=0\}.\end{split} \tag{3.5}\] Let \(\overline{\partial}^{*}_{f}:\Omega^{0,q+1}(M^{\prime})\to\Omega^{0,q}(M^{\prime})\) be the formal adjoint of \(\overline{\partial}\) with respect to \((\,\cdot\,|\,\cdot\,)_{M^{\prime}}\), that is, \[(\,\overline{\partial}u\,|\,v\,)_{M^{\prime}}=(\,u\,|\,\overline{\partial}^{*}_{f}v\,)_{M^{\prime}},\ \ \forall u\in\Omega^{0,q}_{c}(M^{\prime}),\ \ \forall v\in\Omega^{0,q+1}(M^{\prime}).\] It is easy to see that if \(u\in\operatorname{Dom}\overline{\partial}^{*}\cap\Omega^{0,q+1}(\overline{M})\), then \(\overline{\partial}^{*}u=\overline{\partial}^{*}_{f}u.\) Write \(\square^{(q)}_{f}=\overline{\partial}\,\overline{\partial}^{*}_{f}+\overline{\partial}^{*}_{f}\overline{\partial}\). Recall that we work with Assumption 1.1. Let \(\{\omega^{j}\}_{j=1}^{n}\) be an orthonormal frame of \(T^{*1,0}M^{\prime}\) in a neighborhood of \(X\) with \(\omega^{n}=\frac{\partial\rho}{|\partial\rho|}\). Let \(\{L_{j}\}_{j=1}^{n}\) be the dual frame of \(\{\omega^{j}\}_{j=1}^{n}\) with respect to \(\langle\cdot|\cdot\rangle\). It is straightforward to check that \[T=\frac{i}{\sqrt{2}}(L_{n}-\bar{L}_{n}). \tag{3.6}\] Denote by \(\bar{\partial}_{G}\) the operator \(\bar{\partial}\) restricted to \(L^{2}_{(0,q)}(M)^{G}\). As for \(\square^{(q)}\), we can define the \(G\)-invariant \(\bar{\partial}\)-Laplacian: \[\square^{(q)}_{G}:=\bar{\partial}^{*}_{G}\bar{\partial}_{G}+\bar{\partial}_{G}\bar{\partial}^{*}_{G}:\operatorname{Dom}\square^{(q)}_{G}\subset L^{2}_{(0,q)}(M)^{G}\to L^{2}_{(0,q)}(M)^{G} \tag{3.7}\] in the similar way, where \(\bar{\partial}^{*}_{G}:\operatorname{Dom}\bar{\partial}^{*}_{G}\subset L^{2}_{(0,q+1)}(M)^{G}\to L^{2}_{(0,q)}(M)^{G}\) is the Hilbert space adjoint of \(\bar{\partial}_{G}:\operatorname{Dom}\bar{\partial}_{G}\subset L^{2}_{(0,q)}(M)^{G}\to L^{2}_{(0,q+1)}(M)^{G}\).

**Lemma 3.1**.: _Fix \(q=1,\dots,n-2\). We have_ \[\|u\|_{1,\overline{M}}\leq C\Big{(}\|\square^{(q)}_{G}u\|_{M}+\|u\|_{M}\Big{)},\ \ \forall u\in\operatorname{Dom}\square^{(q)}_{G}\cap\Omega^{0,q}(\overline{M})^{G}, \tag{3.8}\] _where \(C>0\) is a constant._

Proof.: Fix \(p\in X\). Assume that \(p\in\mu^{-1}(0)\cap X\). There exists a neighborhood \(V\) of \(p\) in \(X\) such that the Levi form is positive or negative on \(V\). Let \(U\) be a neighborhood of \(p\) in \(M^{\prime}\) such that \(U\cap X=V\). Let \(u\in\operatorname{Dom}\square^{(q)}_{G}\cap\Omega^{0,q}(\overline{M})^{G}\). Let \(\chi\in C^{\infty}_{c}(U)\) and put \(v:=\chi u\). Since the Levi form is positive or negative on \(V\), one has \[\|v\|^{2}_{1,\overline{M}}\leq C\Big{(}\|\overline{\partial}v\|^{2}_{M}+\|\overline{\partial}^{*}v\|^{2}_{M}+\|v\|^{2}_{M}\Big{)}, \tag{3.9}\] where \(C>0\) is a constant independent of \(u\) (\(C\) depends on \(\chi\)). Now assume \(p\notin\mu^{-1}(0)\cap X\). Then there exists a neighborhood \(U\) of \(p\) in \(M^{\prime}\) such that \(\mu(\tilde{p})\neq 0,\forall\tilde{p}\in U\). Moreover, we assume that \(\overline{U}\cap\mu^{-1}(0)=\emptyset\).
Let \(z=(z_{1},\dots,z_{n})\) be holomorphic coordinates centered at \(p\) defined on an open neighborhood \(U\) of \(p\) in \(M^{\prime}\) such that (3.6) holds. We will use the same notations as in the discussion before (3.7). On \(U\), we write \(u=\sum^{\prime}_{|J|=q}u_{J}\overline{\omega}^{J}\), where \(\sum^{\prime}\) means that the summation is performed only over strictly increasing multiindices and, for \(J=(j_{1},\dots,j_{q})\), \(\overline{\omega}^{J}=\overline{\omega}^{j_{1}}\wedge\dots\wedge\overline{\omega}^{j_{q}}\). Let \(\chi\in C^{\infty}_{c}(U)\). For every strictly increasing multiindex \(J\), \(|J|=q\), put \(v_{J}:=\chi u_{J}\) and set \(v:=\sum^{\prime}_{|J|=q}v_{J}\overline{\omega}^{J}\). We have \[\|v\|^{2}_{1,\overline{M}}\leq C_{1}\Big{(}\sum^{\prime}_{|J|=q}\sum_{j=1}^{n}\|\overline{L}_{j}v_{J}\|^{2}_{M}+\sum^{\prime}_{|J|=q}\sum_{j=1}^{n}\|L_{j}v_{J}\|^{2}_{M}+\|v\|^{2}_{M}\Big{)}, \tag{3.10}\] where \(C_{1}>0\) is a constant. Moreover, it is easy to see that \[\begin{split}&\|\overline{\partial}v\|^{2}_{M}+\|\overline{\partial}^{*}v\|^{2}_{M}\\ &=\sum^{\prime}_{|J|=q}\sum_{j\notin J}\|\overline{L}_{j}v_{J}\|^{2}_{M}+\sum^{\prime}_{|J|=q}\sum_{j\in J}\|L_{j}v_{J}\|^{2}_{M}+O(\|v\|_{M}\cdot\|v\|_{1,\overline{M}}).\end{split} \tag{3.11}\] For \(j=1,\ldots,n-1\) and every strictly increasing multiindex \(J\), \(|J|=q\), we have \[\begin{split}&\left\|L_{j}v_{J}\right\|_{M}^{2}=\left\|\overline{L}_{j}v_{J}\right\|_{M}^{2}+O(\left\|v\right\|_{M}\left\|v\right\|_{1,\overline{M}}),\\ &\left\|\overline{L}_{j}v_{J}\right\|_{M}^{2}=\left\|L_{j}v_{J}\right\|_{M}^{2}+O(\left\|v\right\|_{M}\left\|v\right\|_{1,\overline{M}}).\end{split} \tag{3.12}\] From (3.11) and (3.12), we deduce that \[\begin{split}&\left\|\overline{\partial}v\right\|_{M}^{2}+\left\|\overline{\partial}^{*}v\right\|_{M}^{2}\\ &=\frac{1}{2}\sum^{\prime}_{|J|=q}\sum_{j=1}^{n-1}\Bigl{(}\|\overline{L}_{j}v_{J}\|_{M}^{2}+\|L_{j}v_{J}\|_{M}^{2}\Bigr{)}\\ &\quad+\sum^{\prime}_{|J|=q,\,n\notin J}\|\overline{L}_{n}v_{J}\|_{M}^{2}+\sum^{\prime}_{|J|=q,\,n\in J}\|L_{n}v_{J}\|_{M}^{2}+O(\|v\|_{M}\cdot\|v\|_{1,\overline{M}}).\end{split} \tag{3.13}\] From (3.10), (3.13), we see that if \(U\) is small enough, then \[\|v\|_{1,\overline{M}}^{2}\leq C_{2}\Bigl{(}\|Tv\|_{M}^{2}+\|v\|_{M}^{2}+\|\overline{\partial}v\|_{M}^{2}+\left\|\overline{\partial}^{*}v\right\|_{M}^{2}\Bigr{)}, \tag{3.14}\] where \(C_{2}>0\) is a constant and \(\|Tv\|_{M}^{2}:=\sum^{\prime}_{|J|=q}\|Tv_{J}\|^{2}\). Since \(p\notin\mu^{-1}(0)\cap X\), there exists \(\xi_{M}\in\underline{\mathfrak{g}}\) such that \(\langle\omega_{0},\xi_{M}\rangle\neq 0\) on \(\overline{U}\) when \(\overline{U}\) is sufficiently small. Then \[\xi_{M}|_{X}+\langle\omega_{0},\xi_{M}\rangle T|_{X}\in T^{1,0}X\bigoplus T^{0,1}X.\] Thus by Taylor's expansion, \[\xi_{M}+\langle\omega_{0},\xi_{M}\rangle T=\sum_{j=1}^{n-1}a_{j}L_{j}+\sum_{j=1}^{n-1}b_{j}\bar{L}_{j}+O(|z|)D,\] where \(a_{j}\), \(b_{j}\), \(j=1,\ldots,n-1\), are smooth functions and \(D\) is a first order differential operator. Since \(u\in\Omega^{0,q}(\overline{M})^{G}\), one has \(\xi_{M}v=O(\|u\|_{M})\). Then \[\langle\omega_{0},\xi_{M}\rangle Tv_{J}=-\xi_{M}v_{J}+\sum_{j=1}^{n-1}a_{j}L_{j}v_{J}+\sum_{j=1}^{n-1}b_{j}\bar{L}_{j}v_{J}+O(|z|)Dv_{J}. \tag{3.15}\] Note that \(\langle\omega_{0},\xi_{M}\rangle\neq 0\) on \(\overline{U}\).
Then we can assume that \(|\langle\omega_{0},\xi_{M}\rangle|\geq C>0\) on \(\overline{U}\), where \(C\) is a constant. Hence \[\|Tv\|_{M}^{2}\leq\hat{C}\Bigl{(}\sum_{j=1}^{n-1}\sideset{}{{}^{\prime}}{ \sum}_{|J|=q}^{\prime}\|\bar{L}_{j}v_{J}\|_{M}^{2}+\|u\|_{M}^{2}+\varepsilon_{ p}\|v\|_{1,\overline{M}}^{2}\Bigr{)}, \tag{3.16}\] where \(\hat{C}>0\) is a constant and \(\varepsilon_{p}>0\) is sufficiently small when \(U\) is chosen to be small. From (3.12), (3.14) and (3.16), we deduce that if \(U\) is small, then \[\|\chi g\|_{1,\overline{M}}^{2}\leq C\Bigl{(}\left\|\Box_{G}^{(q)}g\right\|_{M }^{2}+\|g\|_{M}^{2}\Bigr{)}, \tag{3.17}\] for all \(g\in\Omega^{0,q}(\overline{M})^{G}\cap\operatorname{Dom}\Box_{G}^{(q)}\), where \(C>0\) is a constant independent of \(g\) (\(C\) depends on \(\chi\)). As before, let \(u\in\Omega^{0,q}(\overline{M})^{G}\cap\operatorname{Dom}\square_{G}^{(q)}\) and let \(\hat{\chi}\in C_{c}^{\infty}(M^{\prime})\), \(\hat{\chi}\equiv 1\) near \(X\), \(\hat{\chi}\equiv 0\) outside some small neighborhood of \(X\) in \(M^{\prime}\). From (3.9), (3.17) and by using partition of unity, we have \[\|\hat{\chi}u\|_{1,\overline{M}}^{2}\leq\hat{C}\Big{(}\left\|\square_{G}^{(q)}u \right\|_{M}^{2}+\|u\|_{M}^{2}\Big{)}, \tag{3.18}\] where \(\hat{C}>0\) is a constant independent of \(u\). Since \(\square^{(q)}\) is elliptic away the boundary \(X\), \[\|(1-\hat{\chi})u\|_{1,\overline{M}}^{2}\leq\tilde{C}\Big{(}\left\|\square_{G} ^{(q)}u\right\|_{M}^{2}+\|u\|_{M}^{2}\Big{)}, \tag{3.19}\] where \(\tilde{C}>0\) is a constant independent of \(u\). From (3.18) and (3.19), the lemma follows. **Lemma 3.2**.: _Fix \(q=1,2,\ldots,n-2\). For all \(k\in\mathbb{N}\), there exists \(C_{k}>0\) such that_ \[\|f\|_{k,\overline{M}}^{2}\leq C_{k}(\|\square_{G}^{(q)}f\|_{k-1,\overline{M} }^{2}+\|f\|_{M}^{2}),\forall f\in\Omega^{0,q}(\overline{M})^{G}\cap \operatorname{Dom}\square_{G}^{(q)}. \tag{3.20}\] Proof.: Fix \(p\in X\). If \(p\in\mu^{-1}(0)\cap X\), then by Assumption 1.1, \(X\) is strongly pseudoconvex or strongly pseudoconcave near \(p\). Let \(U\) be a neighborhood of \(p\) in \(M^{\prime}\) such that \(U\cap X\) is strongly pseudoconvex or strongly pseudoconcave and choose a cut-off function \(\eta\in C_{c}^{\infty}(U)\). Then it is well-known that (see [8, Chapter 5]) for \(k\in\mathbb{N}\), \[\|\eta f\|_{k,\overline{M}}^{2}\leq C_{k}\Big{(}\|\square_{G}^{(q)}f\|_{k-1, \overline{M}}^{2}+\|f\|_{M}^{2}\Big{)},\forall f\in\Omega^{0,q}(\overline{M}) ^{G}\cap\operatorname{Dom}\square_{G}^{(q)}, \tag{3.21}\] where \(C_{k}>0\) is a constant independent of \(f\). Next we assume \(p\not\in\mu^{-1}(0)\cap X\). We can assume that there exists a neighborhood \(U\) of \(p\) in \(M^{\prime}\) such that \(\overline{U}\cap\mu^{-1}(0)=\emptyset\) and special boundary coordinates \((t_{1},t_{2},\cdots,t_{2n-1},\rho)\) centered at \(p\) such that \(t_{1},\cdots,t_{2n-1}\) restricted to \(X\) are coordinates for \(X\). For \(u\in C_{c}^{\infty}(U\cap\overline{M})\), the partial Fourier transform of \(u\) is defined by \[\hat{u}(\tau,\rho):=\int_{\mathbb{R}^{2n-1}}e^{-i<t,\tau>}u(t,\rho)dt,\] where \(t=(t_{1},\ldots,t_{2n-1})\). For \(s\in\mathbb{R}\), the tangential Sobolev norms \(|||u|||_{s}\) of \(u\) is defined by \[|||u|||_{s}^{2}=\int_{\mathbb{R}^{2n-1}}\int_{-\infty}^{0}(1+|\tau|^{2})^{s}| \hat{u}(\tau,\rho)|^{2}d\rho d\tau.\] For \(\delta>0\), \(M_{\delta}\) is defined by \(M_{\delta}:=\{z\in\overline{M}:\rho(z)>-\delta\}\). 
Choose a \(\delta\) sufficiently small such that the tangential Sobolev norm can be defined on \(M_{\delta}\) by the partition of unity and we will use \(|||\cdot|||_{s(M_{\delta})}\) to denote the tangential Sobolev norm on \(M_{\delta}\). By a similar argument in the proof of [8, Lemma 5.2.4], one has for every \(k\in\mathbb{N}\), \[\|f\|_{k,\overline{M}}\leq\hat{C}_{k}\Big{(}\|\square_{G}^{(q)}f\|_{k-1, \overline{M}}+|||f|||_{k(M_{\delta})}+\|f\|_{M}\Big{)},\forall f\in\Omega^{0, q}(\overline{M})^{G}\cap\operatorname{Dom}\square_{G}^{(q)}, \tag{3.22}\] where \(\hat{C}_{k}>0\) is a constant. Here, we do not need the condition that \(M\) is pseudoconvex as in [8, Lemma 5.2.4] since we have one more term \(|||f|||_{k(M_{\delta})}\) in the above estimate (3.22) which can be controlled by \(\|\square_{G}^{(q)}f\|_{k-1,\overline{M}}\) when \(M\) is a bounded strongly pseudoconvex domain. Choose \(\chi\in C_{c}^{\infty}(U\cap\overline{M})\). We have for every \(k\in\mathbb{N}\), \[\frac{1}{\tilde{C}_{k}}\sum_{|\alpha|\leq k-1}|||D_{t}^{\alpha}(\chi f)|||_{1} \leq|||\chi f|||_{k}\leq\tilde{C}_{k}\sum_{|\alpha|\leq k-1}|||D_{t}^{\alpha}( \chi f)|||_{1},\] for every \(f\in\Omega^{0,q}(\overline{M})^{G}\cap\operatorname{Dom}\square_{G}^{(q)}\), where \(\tilde{C}_{k}>1\) is a constant independent of \(f\). We prove the following **Claim:** Let \(k\in\mathbb{N}\). For any \(\varepsilon>0\), \(\varepsilon\ll 1\), we can take \(U\) small enough so that \[|||\chi f|||_{k}\leq C\Big{(}\|\square_{G}^{(q)}f\|_{k-1,\overline{M}}+\frac{ 1}{\varepsilon}\|f\|_{k-1,\overline{M}}+\varepsilon\|\chi f\|_{k,\overline{M} }\Big{)},\forall f\in\Omega^{0,q}(\overline{M})^{G}\cap\operatorname{Dom} \square_{G}^{(q)}, \tag{3.23}\] where \(C>0\) is a constant independent of \(\varepsilon\). Let \(\mathcal{T}^{k}\) denote a \(k\)-th order tangential differential operator of the form \(D_{t}^{\alpha}\) where \(|\alpha|=k\). Let \(f\in\Omega^{0,q}(\overline{M})^{G}\cap\operatorname{Dom}\square_{G}^{(q)}\). Recall that \(D_{t}\chi f\in\operatorname{Dom}\overline{\partial}^{*}\), for any \(D_{t}\). Then from (3.14) one has \[|||\mathcal{T}^{k-1}(\chi f)|||_{1} \leq C\Big{(}\|T\mathcal{T}^{k-1}(\chi f)\|_{M}+\|\mathcal{T}^{k -1}(\chi f)\|_{M}+\|\overline{\partial}\mathcal{T}^{k-1}(\chi f)\|_{M}+\| \overline{\partial}^{*}\mathcal{T}^{k-1}(\chi f)\|_{M}\Big{)} \tag{3.24}\] \[\leq C_{1}\Big{(}\|T\mathcal{T}^{k-1}(\chi f)\|_{M}+|||f|||_{k-1 (M_{\delta})}+\|\overline{\partial}\mathcal{T}^{k-1}(\chi f)\|_{M}+\| \overline{\partial}^{*}\mathcal{T}^{k-1}(\chi f)\|_{M}\Big{)},\] where \(C,C_{1}>0\) are constants. It follows from [8, (5.2.14)] with some minor modification that \[\begin{split}&\|\overline{\partial}\mathcal{T}^{k-1}(\chi f)\|_{M}^ {2}+\|\overline{\partial}^{*}\mathcal{T}^{k-1}(\chi f)\|_{M}^{2}\leq C_{2} \Big{(}|||f|||_{k-1(M_{\delta})}\cdot|||\square_{G}^{(q)}f|||_{k-1(M_{\delta} )}+\|f\|_{k-1,\overline{M}}^{2}\\ &\quad+\|f\|_{k-1,\overline{M}}\,|||\overline{\partial}(\chi f)||| _{k-1}+\|f\|_{k-1,\overline{M}}\,|||\overline{\partial}^{*}(\chi f)|||_{k-1} \Big{)},\end{split} \tag{3.25}\] where \(C_{2}>0\) is a constant. Next, we estimate \(\|T\mathcal{T}^{k-1}(\chi f)\|\). From (3.15), it follows that \[\langle\omega_{0},\xi_{M}\rangle T\mathcal{T}^{k-1}(\chi f)=-\xi_{M}\mathcal{ T}^{k-1}(\chi f)+\sum_{j=1}^{n-1}a_{j}L_{j}\mathcal{T}^{k-1}(\chi f)+\sum_{j=1}^{n-1 }b_{j}\overline{L}_{j}\mathcal{T}^{k-1}(\chi f)+O(|z|)D\mathcal{T}^{k-1}(\chi f). 
\tag{3.26}\] Note that \(|\langle\omega_{0},\xi_{M}\rangle|\geq C>0\) on \(\overline{U}\) with constant \(C>0\). Notice that \[\begin{split}\xi_{M}\mathcal{T}^{k-1}(\chi f)&=[\xi_ {M},\mathcal{T}^{k-1}](\chi f)+\mathcal{T}^{k-1}\xi_{M}(\chi f)\\ &=[\xi_{M},\mathcal{T}^{k-1}](\chi f)+\mathcal{T}^{k-1}[\xi_{M}, \chi]f+\mathcal{T}^{k-1}\chi\xi_{M}f.\end{split} \tag{3.27}\] Since \(f\in\Omega^{0,q}(\overline{M})^{G}\), then one has \(\xi_{M}f=0\). From this observation and (3.27), we have \[\left\|-\frac{1}{\langle\omega_{0},\xi_{M}\rangle}\xi_{M}\mathcal{T}^{k-1}(\chi f )\right\|_{M}\leq C_{4}|||f|||_{k-1(M_{\delta})}, \tag{3.28}\] where \(C_{4}>0\) is a constant. Fix \(j=1,\ldots,n-1\). It is straightforward to check that for every \(\varepsilon>0\), we have \[\begin{split}&\left\|L_{j}\mathcal{T}^{k-1}\chi f\right\|_{M}^{2}+ \left\|\overline{L}_{j}\mathcal{T}^{k-1}\chi f\right\|_{M}^{2}\\ &\leq C_{5}\Big{(}\left\|\overline{\partial}\mathcal{T}^{k-1}\chi f \right\|_{M}^{2}+\left\|\overline{\partial}^{*}\mathcal{T}^{k-1}\chi f\right\|_{M }^{2}+\frac{1}{\varepsilon}\left\|\mathcal{T}^{k-1}(\chi f)\right\|_{M}^{2}+ \varepsilon\left\|f\right\|_{k,\overline{M}}^{2}\Big{)},\end{split} \tag{3.29}\] where \(C_{5}>0\) is a constant independent of \(\varepsilon\). From (3.25) and (3.29), we deduce that \[\begin{split}&\left\|L_{j}\mathcal{T}^{k-1}\chi f\right\|_{M}^{2 }+\left\|\overline{L_{j}}\mathcal{T}^{k-1}\chi f\right\|_{M}^{2}\\ &\leq C_{6}\Big{(}|||f|||_{k-1(M_{\delta})}\cdot\||\Box_{G}^{(q )}f|||_{k-1(M_{\delta})}+\|f\|_{k-1,\overline{M}}^{2}\\ &\quad+\|f\|_{k-1,\overline{M}}|||\overline{\partial}(\chi f)||| _{k-1}+\|f\|_{k-1,\overline{M}}\,|||\overline{\partial}^{*}(\chi f)|||_{k-1} \\ &+\frac{1}{\varepsilon}\left\|\mathcal{T}^{k-1}(\chi f)\right\|^ {2}+\varepsilon\,\|f\|_{k,\overline{M}}^{2}\Big{)},\end{split} \tag{3.30}\] where \(C_{6}>0\) is a constant independent of \(\varepsilon\). From (3.26), (3.28) and (3.30), we deduce that for every \(\varepsilon>0\), \(\varepsilon\ll 1\), we can take \(U\) small enough so that \[\left\|T\mathcal{T}^{k-1}(\chi f)\right\|_{M}^{2}\leq C_{7}\Big{(}|||\Box_{G}^ {(q)}f|||_{k-1(M_{\delta})}^{2}+\frac{1}{\varepsilon}\,\|f\|_{k-1,\overline{ M}}^{2}+\varepsilon\,\|\chi f\|_{k,\overline{M}}^{2}\Big{)}. \tag{3.31}\] From (3.24), (3.25) and (3.31), we get the claim (3.23). From (3.21), (3.22), (3.23) and by using partition of unity, the lemma follows. From the above lemma and the techniques of elliptic regularization, one has **Theorem 3.3**.: _Fix \(q=1,\ldots,n-2\). Suppose \(u\in\operatorname{Dom}\Box_{G}^{(q)}\) and \(\Box_{G}^{(q)}u=v\). If \(v\in\Omega^{0,q}(\overline{M})^{G}\), then \(u\in\Omega^{0,q}(\overline{M})^{G}\). If \(v\in W^{s}(\overline{M},T^{*0,q}M^{\prime})\cap L^{2}_{(0,q)}(M)^{G}\), for some \(s\in\mathbb{N}_{0}\), then \(u\in W^{s+1}(\overline{M},T^{*0,q}M^{\prime})\cap L^{2}_{(0,q)}(M)^{G}\) and we have_ \[\left\|u\right\|_{s+1,\overline{M}}\leq C_{s}(\left\|v\right\|_{s,\overline{ M}}+\left\|u\right\|_{M}),\] _where \(C_{s}>0\) is a constant independent of \(u\)._ **Corollary 3.4**.: _For every \(q=0,\ldots,n-2\), \(\Box_{G}^{(q)}:\operatorname{Dom}\Box_{G}^{(q)}\subset L^{2}_{(0,q)}(M)^{G} \to L^{2}_{(0,q)}(M)^{G}\) has closed range. In particular, for \(1\leq q\leq n-2\), \(\operatorname{Ker}\Box_{G}^{(q)}\) is a finite dimensional subspace of \(\Omega^{0,q}(\overline{M})^{G}\)._ Let \(N_{G}^{(1)}:L^{2}_{(0,1)}(M)^{G}\to\operatorname{Dom}\Box_{G}^{(1)}\) be the partial inverse of \(\Box_{G}^{(1)}\). 
We have \[\Box_{G}^{(1)}N_{G}^{(1)}+B_{G}^{(1)}=I\text{ on }\,L^{2}_{(0,1)}(M)^{G},\] \[N_{G}^{(1)}\Box_{G}^{(1)}+B_{G}^{(1)}=I\text{ on }\,\operatorname{Dom}\Box_{G}^{(1)}.\] By Theorem 3.3, we conclude that \(N_{G}^{(1)}:W^{s}(\overline{M},T^{*0,1}M^{\prime})\cap L^{2}_{(0,1)}(M)^{G} \to W^{s+1}(\overline{M},T^{*0,1}M^{\prime})\cap L^{2}_{(0,1)}(M)^{G}\) is continuous, for all \(s\in\mathbb{N}_{0}\). In particular, \[N_{G}^{(1)}:\Omega^{0,1}(\overline{M})^{G}\to\Omega^{0,1}(\overline{M})^{G} \text{ is continuous}. \tag{3.32}\] Let \[B_{G}:L^{2}(M)\to\operatorname{Ker}\Box_{G}^{(0)} \tag{3.33}\] be the orthogonal projection (\(G\)-invariant Bergman projection). Note that \[H^{0}(\overline{M})^{G}=(\operatorname{Ker}\Box^{(q)})^{G}=\operatorname{Ker} \Box_{G}^{(0)}. \tag{3.34}\] Let \[Q_{G}:L^{2}(M)\to L^{2}(M)^{G}\] be the orthogonal projection. It is not difficult to see that \[Q_{G}:C^{\infty}(\overline{M})^{G}\to C^{\infty}(\overline{M})^{G}\text{ is continuous.} \tag{3.35}\] We can deduce from [8, Theorem 4.4.5] that \[B_{G}=(I-\overline{\partial}_{G}^{*}N_{G}^{(1)}\overline{\partial}_{G})\circ Q _{G}\text{ on }L^{2}(M). \tag{3.36}\] From (3.32), (3.35) and (3.36), we get **Theorem 3.5**.: _We have \(B_{G}:C^{\infty}(\overline{M})\to C^{\infty}(\overline{M})^{G}\) is continuous._ ## 4. The Poisson operators and reduction to the boundary Until further notice, we fix \(q\in\{0,1,\dots,n-1\}\). We first introduce some notations. We remind the reader that for \(s\in\mathbb{R}\), the space \(W^{s}(\overline{M},T^{*0,q}M^{\prime})\) was introduced in the discussion after Definition 2.1. Let \[\overline{\partial}_{f}^{*}:\Omega^{0,q+1}(M^{\prime})\to\Omega^{0,q}(M^{ \prime})\] be the formal adjoint of \(\overline{\partial}\) with respect to \((\,\cdot\,|\,\cdot\,)_{M^{\prime}}\). That is, \[(\,\overline{\partial}f\,|\,h\,)_{M^{\prime}}=(\,f\,|\,\overline{\partial}_{ f}^{*}h\,)_{M^{\prime}},\] \(f\in\Omega^{0,q}_{c}(M^{\prime})\), \(h\in\Omega^{0,q+1}(M^{\prime})\). Let \[\square_{f}^{(q)}=\overline{\partial}\,\overline{\partial}_{f}^{*}+\overline{ \partial}_{f}^{*}\,\overline{\partial}:\Omega^{0,q}(M^{\prime})\to\Omega^{0,q }(M^{\prime})\] denote the complex Laplace-Beltrami operator on \((0,q)\) forms. As before, let \(\gamma\) denote the operator of restriction to the boundary \(X\). Let us consider the map \[\begin{split} F^{(q)}:W^{2}(\overline{M},T^{*0,q}M^{\prime})& \to L^{2}_{(0,q)}(M)\oplus W^{\frac{3}{2}}(X,T^{*0,q}M^{\prime})\\ u&\mapsto(\square_{f}^{(q)}u,\gamma u).\end{split} \tag{4.1}\] It is well-known that (see [4]) \(\dim\operatorname{Ker}F^{(q)}<\infty\) and \(\operatorname{Ker}F^{(q)}\subset\Omega^{0,q}(\overline{M})\). Let \[K^{(q)}:W^{2}(\overline{M},T^{*0,q}M^{\prime})\to\operatorname{Ker}F^{(q)} \tag{4.2}\] be the orthogonal projection with respect to \((\,\cdot\,|\,\cdot\,)_{M}\). Put \(\tilde{\square}_{f}^{(q)}=\square_{f}^{(q)}+K^{(q)}\) and consider the map \[\begin{split}\tilde{F}^{(q)}:W^{2}(\overline{M},T^{*0,q}M^{ \prime})&\to L^{2}_{(0,q)}(M)\oplus W^{\frac{3}{2}}(X,T^{*0,q}M^{ \prime})\\ u&\mapsto(\tilde{\square}_{f}^{(q)}u,\gamma u). \end{split} \tag{4.3}\] Then \(\tilde{F}^{(q)}\) is injective (see [14, Part II, Chapter 3]). Let \[\tilde{P}:C^{\infty}(X,T^{*0,q}M^{\prime})\to\Omega^{0,q}(\overline{M}) \tag{4.4}\] be the Poisson operator for \(\tilde{\square}^{(q)}_{f}\) which is well-defined since (4.3) is injective. 
The Poisson operator \(\tilde{P}\) satisfies \[\begin{split}\tilde{\square}^{(q)}_{f}\tilde{P}u=0,& \forall u\in C^{\infty}(X,T^{*0,q}M^{\prime}),\\ \gamma\tilde{P}u=u,&\forall u\in C^{\infty}(X,T^{* 0,q}M^{\prime}).\end{split} \tag{4.5}\] It is known that \(\tilde{P}\) extends continuously \[\tilde{P}:W^{s}(X,T^{*0,q}M^{\prime})\to W^{s+\frac{1}{2}}(\overline{M},T^{*0,q }M^{\prime}),\ \ \forall s\in\mathbb{R}\] (see [4, Page 29]). Let \[\tilde{P}^{*}:\hat{\mathscr{D}}^{\prime}(\overline{M},T^{*0,q}M^{\prime}) \rightarrow\mathscr{D}^{\prime}(X,T^{*0,q}M^{\prime})\] be the operator defined by \[(\,\tilde{P}^{*}u\,|\,v\,)_{X}=(\,u\,|\,\tilde{P}v\,)_{M},\ \ u\in\hat{\mathscr{D}}^{ \prime}(\overline{M},T^{*0,q}M^{\prime}),\ \ v\in C^{\infty}(X,T^{*0,q}M^{\prime}),\] where \(\hat{\mathscr{D}}^{\prime}(\overline{M},T^{*0,q}M^{\prime})\) denotes the space of continuous linear map from \(\Omega^{0,q}(\overline{M})\) to \(\mathbb{C}\) with respect to \((\,\cdot\,|\,\cdot\,)_{M}\). It is well-known (see [4, page 30]) that \(\tilde{P}^{*}\) is continuous: \(\tilde{P}^{*}:W^{s}(\overline{M},T^{*0,q}M^{\prime})\to W^{s+\frac{1}{2}}(X,T^ {*0,q}M^{\prime})\) and \[\tilde{P}^{*}:\Omega^{0,q}(\overline{M})\to C^{\infty}(X,T^{*0,q}M^{ \prime}),\] for every \(s\in\mathbb{R}\). It is well-known that the operator \[\tilde{P}^{*}\tilde{P}:C^{\infty}(X,T^{*0,q}M^{\prime})\to C^{\infty}(X,T^{*0,q }M^{\prime})\] is a classical elliptic pseudodifferential operator of order \(-1\) and invertible since \(\tilde{P}\) is injective (see [4]). Moreover, the operator \[(\tilde{P}^{*}\tilde{P})^{-1}:C^{\infty}(X,T^{*0,q}M^{\prime})\to C^{ \infty}(X,T^{*0,q}M^{\prime})\] is a classical elliptic pseudodifferential operator of order \(1\). When \(q=0\), we simply write \(P\), \(P^{*}\) to denote \(\tilde{P}\), \(\tilde{P}^{*}\) respectively. We define a new inner product on \(W^{-\frac{1}{2}}(X,T^{*0,q}M^{\prime})\) as follows: \[[\,u\,|\,v\,]=(\,\tilde{P}u\,|\,\tilde{P}v)_{M},\ \ u,v\in W^{-\frac{1}{2}}(X,T^ {*0,q}M^{\prime}). \tag{4.6}\] Let \[Q:W^{-\frac{1}{2}}(X,T^{*0,q}M^{\prime})\rightarrow\operatorname{Ker}\overline {\partial}\rho^{\wedge,\star} \tag{4.7}\] be the orthogonal projection onto \(\operatorname{Ker}\overline{\partial}\rho^{\wedge,\star}\) with respect to \([\,\cdot\,|\,\cdot\,]\). We consider the following operator \[\overline{\partial}_{\beta}=Q\gamma\overline{\partial}\tilde{P}:\Omega^{0,q}(X )\rightarrow\Omega^{0,q+1}(X). \tag{4.8}\] The operator \(\overline{\partial}_{\beta}\) was introduced by the first author in [14]. Let \(\overline{\partial}^{t}_{\beta}:\Omega^{0,q+1}(X)\rightarrow\Omega^{0,q}(X)\) be the formal adjoint with respect to \([\,\cdot\,|\,\cdot\,]\). 
We recall the following (see [14, Part II, Lemma 5.1, equation (5.3) and Lemma 5.2]) **Proposition 4.1**.: _We have that \(\overline{\partial}_{\beta}\) and \(\overline{\partial}_{\beta}^{\dagger}\) are classical pseudodifferential operators of order \(1\),_ \[\overline{\partial}_{\beta}\circ\overline{\partial}_{\beta}=0\ \ \text{on}\ \Omega^{0,q}(X) \tag{4.9}\] _and_ \[\begin{array}{l}\overline{\partial}_{\beta}=\overline{\partial}_{b}\text{+ lower order terms},\\ \overline{\partial}_{\beta}^{\dagger}=\gamma\overline{\partial}_{f}^{*} \tilde{P}=\overline{\partial}_{b}^{*}\text{+lower order terms},\end{array} \tag{4.10}\] _where \(\overline{\partial}_{b}\) is the tangential Cauchy Riemann operator on \(X\) and \(\overline{\partial}_{b}^{*}\) is the adjoint of \(\overline{\partial}_{b}\) with respect to \((\,\cdot\,|\,\cdot\,)_{X}\)._ Let \[\square_{\beta}^{(0)}:=\overline{\partial}_{\beta}^{\dagger}\,\overline{ \partial}_{\beta}:\Omega^{0,q}(X)\to\Omega^{0,q}(X).\] Let \(D\subset X\) be an open set. Assume that the Levi form is positive on \(D\). By the same arguments of [14, PartII, Chapter 7], we can show that there are continuous operators \(N,\hat{S}:C^{\infty}_{c}(D)\to C^{\infty}(D)\), which are properly supported on \(D\), such that \[\begin{array}{l}\square_{\beta}^{(0)}N+\hat{S}\equiv I\ \text{on}\ D\times D,\\ \square_{\beta}^{(0)}\hat{S}\equiv 0\ \text{on}\ D\times D.\end{array} \tag{4.11}\] Moreover, \(N\) is a pseudodifferential operator of order \(-1\) type \((\frac{1}{2},\frac{1}{2})\) on \(D\). \(\hat{S}\) is a pseudodifferential operator of order \(0\) type \((\frac{1}{2},\frac{1}{2})\) on \(D\) and for any local coordinate patch \(D_{0}\subset D\), we have \[\hat{S}(x,y)\equiv\int_{0}^{\infty}e^{it\varphi(x,y)}s(x,y,t)dt\ \text{on}\ D_{0}\times D_{0}, \tag{4.12}\] where \(\varphi\in C^{\infty}(D_{0}\times D_{0})\) is the phase function as in [17], \(s(x,y,t)\sim\sum_{j=0}^{+\infty}s_{j}(x,y)t^{n-1-j}\) in \(S_{1,0}^{n-1}(D_{0}\times D_{0}\times\mathbb{R}_{+})\), \(s_{j}(x,y)\in C^{\infty}(D_{0}\times D_{0})\), \(j=0,1,\ldots\), \(s_{0}(x,x)=\frac{1}{2}\pi^{-n}|\det\mathcal{L}_{x}|\), for all \(x\in D_{0}\), where \(\det\mathcal{L}_{x}:=\lambda_{1}(x)\cdots\lambda_{n}(x)\), \(\lambda_{j}(x)\), \(j=1,\ldots,n\), are eigenvalues of \(\mathcal{L}_{x}\) with respect to \(\langle\,\cdot\,|\,\cdot\,\rangle\). Let \(U\) be an open neighborhood of \(\mu^{-1}(0)\cap X\) in \(M^{\prime}\) such that \(U=GU=\{gx|g\in G,x\in U\}\). Set \(D=U\cap X\), then \(D=GD\). Let \(\hat{S}_{G}:C^{\infty}_{c}(D)\to C^{\infty}(D)\) be the continuous operator with distribution kernel \[\hat{S}_{G}(x,y):=\int_{G}\hat{S}(x,gy)d\mu(g), \tag{4.13}\] where \(d\mu\) is the Haar measure on \(G\) with \(\int_{G}d\mu=1\). Then by using the method of [16], we conclude that for any local coordinate patch \(D_{0}\subset D\), if the Levi form is negative on \(D_{0}\), then \(\hat{S}_{G}\equiv 0\) on \(D_{0}\times D_{0}\). If the Levi form is positive on \(D_{0}\), then \[\hat{S}_{G}(x,y)\equiv\int_{0}^{\infty}e^{it\Phi(x,y)}a(x,y,t)dt\ \text{on}\ D_{0}\times D_{0}, \tag{4.14}\] where \(\Phi\in C^{\infty}(D_{0}\times D_{0})\) is the phase as in [16, Theorem 1.5] and \[\begin{array}{l}a(x,y,t)\sim\sum_{j=0}^{+\infty}a_{j}(x,y)t^{n-1-\frac{d}{2} -j}\ \text{in}\ S_{1,0}^{n-1-\frac{d}{2}}(D_{0}\times D_{0}\times\mathbb{R}_{+}),\\ a_{j}(x,y)\in C^{\infty}(D_{0}\times D_{0}),\,j=0,1,\ldots,\\ a_{0}(x,x)\ \text{is given by \@@cite[cite]{[\@@bibref{}{e:2010}{}{}]}},\text{ Theorem 1.6}].\end{array} \tag{4.15}\] ## 5. 
\(G\)-invariant Bergman kernel asymptotics We first introduce \(G\)-invariant Szego projection. Let \(\overline{\partial}_{b,G}:\Omega^{0,q}(X)^{G}\to\Omega^{0,q+1}(X)^{G}\) be the tangential Cauchy-Riemann operator on \(X\) acting on \(\Omega^{0,q}(X)^{G}\). We extend \(\overline{\partial}_{b,G}\) to \(L^{2}_{(0,q)}(X)^{G}\): \[\begin{split}\operatorname{Dom}\overline{\partial}_{b,G}& =\left\{u\in L^{2}_{(0,q)}(X)^{G};\,\overline{\partial}_{b,G}u \in L^{2}_{(0,q+1)}(X)^{G}\right\}\,,\\ \overline{\partial}_{b,G}&:\operatorname{Dom} \overline{\partial}_{b,G}\ni u\longmapsto\overline{\partial}_{b,G}u\in L^{2}_ {(0,q+1)}(X)^{G},\end{split} \tag{5.1}\] where \(\overline{\partial}_{b,G}u\) is defined in the sense of distributions. Let \[\overline{\partial}_{b,G}^{*}:\operatorname{Dom}\overline{\partial}_{b,G}^{*} \subset L^{2}_{(0,q+1)}(X)^{G}\to L^{2}_{(0,q)}(X)^{G}\] be the \(L^{2}\) adjoint of \(\overline{\partial}_{b,G}\). Let \[\square^{(0)}_{b,G}=\overline{\partial}_{b,G}^{*}\,\overline{\partial}_{b,G} :\operatorname{Dom}\square^{(0)}_{b,G}\subset L^{2}(X)^{G}\to L^{2}(X)^{G}, \tag{5.2}\] where \(\operatorname{Dom}\square^{(0)}_{b,G}=\left\{u\in L^{2}(X)^{G};\,u\in \operatorname{Dom}\overline{\partial}_{b,G},\overline{\partial}_{b,G}u\in \operatorname{Dom}\overline{\partial}_{b,G}^{*}\right\}\). Let \[S_{G}:L^{2}(X)\to\operatorname{Ker}\square^{(0)}_{b,G}\subset L^{2}(X)^{G} \tag{5.3}\] be the orthogonal projection (\(G\)-invariant Szego projection). It was proved in [19, Theorem 3.17] that \(\square^{(0)}_{b,G}\) has closed range. Let \[N_{b}:L^{2}(X)^{G}\to\operatorname{Dom}\square^{(0)}_{b,G}\] be the partial inverse of \(\square^{(0)}_{b,G}\). We have \[\begin{split} N_{b}\square^{(0)}_{b,G}+S_{G}=I\text{ on } \operatorname{Dom}\square^{(0)}_{b,G},\\ \square^{(0)}_{b,G}N_{b}+S_{G}=I\text{ on }L^{2}(X)^{G}.\end{split} \tag{5.4}\] Recall that \(B_{G}\) is the \(G\)-invariant Bergman projection (see (3.33)). We can now prove **Theorem 5.1**.: _Let \(\tau\in C^{\infty}(\overline{M})\) with \(\operatorname{supp}\tau\cap\mu^{-1}(0)\cap X=\emptyset\). Then, \(\tau B_{G}\equiv 0\mod C^{\infty}(\overline{M}\times\overline{M})\), \(B_{G}\tau\equiv 0\mod C^{\infty}(\overline{M}\times\overline{M})\)._ Proof.: Denote by \(\gamma\) the restriction from \(M^{\prime}\) to \(X\). It follows from the definition of \(\overline{\partial}_{b}\) that \(\overline{\partial}_{b,G}\gamma B_{G}=0\) and \(\square^{(0)}_{b,G}\gamma B_{G}=0\). From this observation and (5.4), we have \[\gamma B_{G}=(N_{b}\square^{(0)}_{b,G}+S_{G})\gamma B_{G}=S_{G}\gamma B_{G}, \tag{5.5}\] hence \[PS_{G}\gamma B_{G}=P\gamma B_{G}=B_{G} \tag{5.6}\] and \[P^{*}B_{G}=P^{*}P\gamma B_{G}.\] We deduce that \[\gamma B_{G}=(P^{*}P)^{-1}P^{*}B_{G}. \tag{5.7}\] By (5.6) and (5.7), we deduce that \[B_{G}=PS_{G}(P^{*}P)^{-1}P^{*}B_{G}. \tag{5.8}\] Assume that \(\tau\in C^{\infty}(\overline{M}),\operatorname{supp}\tau\cap\mu^{-1}(0)\cap X=\emptyset\). We have \[\tau B_{G}=\tau PS_{G}(P^{*}P)^{-1}P^{*}B_{G}. \tag{5.9}\] Let \(\tilde{\tau}\in C^{\infty}(X)\), \(\tilde{\tau}=1\) near \(\operatorname{supp}\tau\cap X\) and \(\tilde{\tau}=0\) on \(\mu^{-1}(0)\cap X\). Taking adjoint in (5.9), we have \[\begin{split} B_{G}\tau&=B_{G}P(P^{*}P)^{-1}S_{G}P^{ *}\tau\\ &=B_{G}P(P^{*}P)^{-1}S_{G}\tilde{\tau}P^{*}\tau+B_{G}P(P^{*}P)^{- 1}S_{G}(1-\tilde{\tau})P^{*}\tau.\end{split} \tag{5.10}\] By [19, Theorem 3.18], we know that \[S_{G}\tilde{\tau}\equiv 0. 
\tag{5.11}\] Moreover, from [18, Lemma 4.1], we have \[(1-\tilde{\tau})P^{*}\tau\equiv 0\mod C^{\infty}(X\times\overline{M}). \tag{5.12}\] From (5.10), (5.11), (5.12) and notice that \(B_{G}:C^{\infty}(\overline{M})\to C^{\infty}(\overline{M})^{G}\) is continuous (see Theorem 3.5), we deduce that \[B_{G}\tau:W^{s}(\overline{M})\to C^{\infty}(\overline{M}),\] for all \(s\in\mathbb{R}\). Hence \(B_{G}\tau\equiv 0\mod C^{\infty}(\overline{M}\times\overline{M})\). By taking adjoint, we deduce that \(\tau B_{G}\equiv 0\mod C^{\infty}(\overline{M}\times\overline{M})\). The theorem follows. We now study the distribution kernel of \(B_{G}\) near \(\mu^{-1}(0)\cap X\). For a Borel set \(\Sigma\subset\mathbb{R}\), denote by \(E(\Sigma)\) the spectral projection of \(\square^{(0)}\) associated to \(\Sigma\), where \(E\) is the spectral measure of \(\square^{(0)}\). Set \(H^{0}_{\leq\lambda}(\overline{M}):=RanE\big{(}(-\infty,\lambda]\big{)}\). Let \(B^{(0)}_{\leq\lambda}\) be the orthogonal projection onto \(H^{0}_{\leq\lambda}(\overline{M})\) and let \(B^{(0)}(x,y)\in\mathscr{D}^{\prime}(M\times M)\) be the distribution kernel of \(B^{(0)}_{\leq\lambda}\). First we recall the following theorem [18, Theorem 1.1]. **Theorem 5.2**.: _Let \(U\) be an open set of \(M^{\prime}\) with \(U\cap X\neq\emptyset\). Assume that the Levi form is negative on \(U\cap X\), then \(B^{(0)}_{\leq\lambda}\equiv 0\mod C^{\infty}((U\times U)\cap(\overline{M} \times\overline{M}))\). Assume that the Levi form is positive on \(U\cap X\). We have_ \[B^{(0)}_{\leq\lambda}\equiv\int_{0}^{\infty}e^{it\phi(x,y)t}b(x,y,t)dt\text{ mod }C^{\infty}\big{(}(U\times U)\cap(\overline{M}\times\overline{M})\big{)}, \tag{5.13}\] _where_ \[b(x,y,t)\sim\sum_{j=0}^{\infty}b_{j}(x,y)t^{n-j}\in S^{n}_{1,0}\big{(} (U\times U)\cap(\overline{M}\times\overline{M})\times]0,\infty[\big{)},\] \[b_{j}(x,y)\in C^{\infty}((U\times U)\cap(\overline{M}\times \overline{M})),\ \ j=0,1,\ldots,\] \[b_{0}(x,y)\neq 0, \tag{5.14}\] \[\phi\in C^{\infty}((U\times U)\cap(\overline{M}\times\overline{ M})),\ \ \mathrm{Im}\,\phi\geq 0,\] \[\phi(x,x)=0\mbox{, }x\in U\cap X\mbox{, }\phi(x,y)\neq 0\mbox{ if }(x,y)\notin\mathrm{diag}\,((U\times U)\cap(X\times X)),\] \[\mathrm{Im}\,\phi(x,y)>0\mbox{ if }(x,y)\notin(U\times U)\cap(X \times X),\] \[d_{x}\phi(x,x)=-\omega_{0}(x)-id\rho(x),\ \ d_{y}\phi(x,x)= \omega_{0}(x)-id\rho(x),\ \ \mbox{for every }x\in U\cap X\mbox{.}\] We refer the reader to [18, Theorem 5.26] for more properties of the phase \(\phi\) in (5.13). We also refer the reader to [18, (5.121)] for the explicit formula for \(b_{0}(x,x)\) in (5.14). From Corollary 3.4, we know that \(\square_{G}^{(0)}\) has closed range. From this observation, we can repeat the proof in [19, Theorem 3.17] and deduce that there exists a constant \(\lambda_{0}>0\) such that \[B_{G}=B_{\leq\lambda_{0}}^{(0)}\circ Q_{G}\ \ \mbox{on }L^{2}(M), \tag{5.15}\] \[B_{G}(x,y)=\int_{G}B_{\leq\lambda_{0}}(x,gy)d\mu(g).\] We recall some results in [18]. Fix \(p\in\mu^{-1}(0)\) and let \(U\) be an open set of \(p\) in \(M^{\prime}\) with \(U=GU\). Let \(D:=U\cap X\). Assume the Levi form is positive on \(D\). 
Let \[L:C^{\infty}_{c}(U\cap\overline{M})\to C^{\infty}(D)\] be a continuous operator such that \[L-(P^{*}P)^{-1}P^{*}\equiv 0\mod C^{\infty}((U\times U)\cap(X\times\overline{M})), \tag{5.16}\] \(L\) is properly supported on \(U\cap\overline{M}\), that is, for every \(\chi\in C^{\infty}_{c}(U\cap\overline{M})\), there is a \(\tau\in C^{\infty}_{c}(D)\) such that \(L\chi=\tau L\) on \(C^{\infty}_{c}(U\cap\overline{M})\) and for every \(\tau_{1}\in C^{\infty}_{c}(D)\), there is a \(\chi_{1}\in C^{\infty}_{c}(U\cap\overline{M})\) such that \(\tau_{1}L=L\chi_{1}\) on \(C^{\infty}_{c}(U\cap\overline{M})\) (see the discussion after [18, Theorem 5.18]). Since \(U\) and \(D\) are \(G\)-invariant, we can take \(L\) so that \[LQ_{G}=Q_{G,X}L\ \mbox{on }C^{\infty}_{c}(U\cap\overline{M}), \tag{5.17}\] where \(Q_{G,X}\) is the orthogonal projection from \(L^{2}(X)\) onto \(L^{2}(X)^{G}\). It was shown in [18, (5.111), Theorem 6.11] that \[B_{\leq\lambda_{0}}-P\hat{S}L\equiv 0\mod C^{\infty}((U\times U)\cap(\overline{ M}\times\overline{M})), \tag{5.18}\] where \(\hat{S}\) is as in (4.11). From (5.15) and (5.18), we deduce \[B_{G}-P\hat{S}LQ_{G}\equiv 0\mod C^{\infty}((U\times U)\cap(\overline{M} \times\overline{M})). \tag{5.19}\] From (5.17) and (5.19), we get **Theorem 5.3**.: _With the notations and assumptions used above, we have_ \[B_{G}-P\hat{S}_{G}L\equiv 0\mod C^{\infty}((U\times U)\cap(\overline{M}\times \overline{M})), \tag{5.20}\] _where \(\hat{S}_{G}\) is as in (4.13)._ By (4.14) and applying the procedure in [14, Part II, Proposition 7.8, Theorem 7.9] we get **Theorem 5.4**.: _Let \(p\in\mu^{-1}(0)\cap X\). Let \(U\) be an open local coordinate patch of \(p\) in \(M^{\prime}\), \(D:=U\cap X\). If the Levi form is negative on \(D\), then \(B_{G}\equiv 0\mod C^{\infty}((U\times U)\cap(\overline{M}\times\overline{M}))\). Assume that the Levi form is positive on \(D\). We have_ \[B_{G}(z,w)\equiv\int_{0}^{+\infty}e^{it\Psi(z,w)}b(z,w,t)dt\mod C^{\infty}((U \times U)\cap(\overline{M}\times\overline{M})), \tag{5.21}\] _where_ \[\begin{split}& b(z,w,t)\in S_{1,0}^{n-\frac{d}{2}}(((U\times U) \cap(\overline{M}\times\overline{M}))\times\mathbb{R}_{+}),\\ & b(z,w,t)\sim\sum_{j=0}^{+\infty}t^{n-\frac{d}{2}-j}b_{j}(z,w) \ \ \text{in}\ S_{1,0}^{n-\frac{d}{2}}(((U\times U)\cap(\overline{M}\times \overline{M}))\times\mathbb{R}_{+}),\\ & b_{j}(z,w)\in C^{\infty}((U\times U)\cap(\overline{M}\times \overline{M})),\ \ j=0,1,2,\dots,\\ & b_{0}(z,w)\neq 0,\end{split} \tag{5.22}\] _and_ \[\begin{split}&\Psi(z,w)\in C^{\infty}(((U\times U)\cap( \overline{M}\times\overline{M}))),\ \ \text{Im}\,\Psi\geq 0,\\ &\Psi(z,z)=0,\ z\in\mu^{-1}(0)\cap X,\\ &\text{Im}\,\Psi(z,w)>0\ \text{if}\ (z,w)\notin\text{diag}\,((\mu^{-1}(0) \cap D)\times(\mu^{-1}(0)\cap D)),\\ & d_{x}\Psi(x,x)=-\omega_{0}(x)-id\rho(x),\ \ d_{y}\Psi(x,x)=\omega_{0}(x)-id\rho(x),\ \ x\in\mu^{-1}(0)\cap D,\\ &\Psi|_{D\times D}=\Phi\text{, }\Phi\text{ is as in \eqref{eq:2.2.3}}.\end{split} \tag{5.23}\] _Moreover, let \(z=(x_{1},\dots,x_{2n-1},\rho)\) be local coordinates of \(M^{\prime}\) defined near \(p\) in \(M^{\prime}\) with \(x(p)=0\) and \(x=(x_{1},\dots,x_{2n-1})\) are local coordinates of \(X\) defined near \(p\) in \(X\). 
Then,_ \[\Psi(z,w)=\Phi(x,y)-i\rho(z)(1+f(z))-i\rho(w)(1+\overline{f}(w))+O(|(z,w)|^{3} )\ \text{near}\ (p,p), \tag{5.24}\] _where \(f\in C^{\infty}\), \(f=O(|z|)\)._ From [14, Part II, Proposition 7.10] we have the following **Theorem 5.5**.: _With the notations and assumptions used above, for \(b_{0}(z,w)\) in (5.22), we have_ \[b_{0}(x,x)=2a_{0}(x,x),\ \ \text{for every}\ x\in\mu^{-1}(0)\cap D,\] _where \(a_{0}\) is as in (4.15)._ End of the proof of Theorem 1.2.: From Theorem 5.1, Theorem 5.4 and Theorem 5.5, we get Theorem 1.2. Let \(D\) be an open set of \(\widehat{X}\) in \(X\) with \(D=GD\). Let \(\chi\in C^{\infty}_{c}(D)^{G}\), \(\chi\equiv 1\) near \(\mu^{-1}(0)\cap X\). Let \(\chi_{1}\in C^{\infty}_{c}(D)^{G}\) with \(\chi_{1}\equiv 1\) on \(\operatorname{supp}\chi\). Let \(\tilde{S}_{G}:=\chi_{1}\hat{S}_{G}\chi:C^{\infty}(X)\to C^{\infty}(X)^{G}\), where \(\hat{S}_{G}\) is as in (4.13). From Theorem 5.1 and (5.19), we get **Theorem 5.6**.: _With the notations and assumptions above, we have_ \[B_{G}\equiv P\tilde{S}_{G}(P^{*}P)^{-1}P^{*}\mod C^{\infty}(\overline{M}\times \overline{M}).\] ## 6. The proofs of Theorem 1.3, Theorem 1.4 and Theorem 1.6 For simplicity, until further notice, we assume that \(\mu^{-1}(0)\cap X\) is strongly pseudoconvex. For every \(s\in\mathbb{R}\), consider the map \[\begin{split}\hat{\sigma}_{s}=\hat{\sigma}:H^{0}(\overline{M})^{G }_{s}&\to H^{0}_{b}(X)^{G}_{s-\frac{1}{2}}\\ u&\mapsto(P^{*}P)^{-1}P^{*}u=\gamma u.\end{split} \tag{6.1}\] Since \((P^{*}P)^{-1}P^{*}:H^{s}(\overline{M})^{G}_{s}\to W^{s-\frac{1}{2}}(X)\) is continuous, for every \(s\in\mathbb{R}\), \(\hat{\sigma}\) is well-defined. We define \(\operatorname{Coker}\hat{\sigma}_{s}\) in the following way: \[\operatorname{Coker}\hat{\sigma}_{s}=\operatorname{Coker}\hat{\sigma}:=\{u\in H ^{0}_{b}(X)^{G}_{s-\frac{1}{2}};\,(\,u\,|\,\hat{\sigma}v)_{X}=0,\forall v\in H ^{0}(\overline{M})^{G}_{s}\cap C^{\infty}(\overline{M})\}. \tag{6.2}\] **Theorem 6.1**.: _We have that \(\operatorname{Ker}\hat{\sigma}_{s}=\{0\}\) and \(\operatorname{Coker}\hat{\sigma}_{s}\) is a finite dimensional subspace of \(C^{\infty}(X)^{G}\cap H^{0}_{b}(X)^{G}\). Moreover, \(\operatorname{Coker}\hat{\sigma}_{s}\) is independent of \(s\)._ Proof.: It is obvious that \(\operatorname{Ker}\hat{\sigma}=\{0\}\). We extend \(\hat{\sigma}\) to \(\hat{\sigma}:\hat{\mathscr{D}}^{\prime}(\overline{M})\to\mathscr{D}^{\prime}(X)\) by putting \[\hat{\sigma}u=(P^{*}P)^{-1}P^{*}B_{G}u=S_{G}(P^{*}P)^{-1}P^{*}B_{G}u,\;u\in \hat{\mathscr{D}}^{\prime}(\overline{M}). \tag{6.3}\] Recall that \(\hat{\mathscr{D}}^{\prime}(\overline{M})\) is the space of continuous linear maps from \(C^{\infty}(\overline{M})\) to \(\mathbb{C}\). Since \[B_{G}P(P^{*}P)^{-1}S_{G}=(S_{G}(P^{*}P)^{-1}P^{*}B_{G})^{*}\] maps \(C^{\infty}(X)\) to \(C^{\infty}(\overline{M})\), the definition (6.3) is well-defined, where \[(S_{G}(P^{*}P)^{-1}P^{*}B_{G})^{*}\] is the formal adjoint of \(S_{G}(P^{*}P)^{-1}P^{*}B_{G}\) with respect to \((\,\cdot\,|\,\cdot\,)_{X}\) and \((\,\cdot\,|\,\cdot\,)_{M}\). Let \(\hat{\sigma}^{*}:\mathscr{D}^{\prime}(X)\to\hat{\mathscr{D}}^{\prime}( \overline{M})\) be the formal adjoint of \(\hat{\sigma}\) with respect to the given \(L^{2}\) inner products \((\,\cdot\,|\,\cdot\,)_{M}\) and \((\,\cdot\,|\,\cdot\,)_{X}\). 
We have \[\hat{\sigma}\hat{\sigma}^{*}=S_{G}(P^{*}P)^{-1}P^{*}B_{G}P(P^{*}P)^{-1}S_{G}.\] From Theorem 5.6, we have \[\hat{\sigma}\hat{\sigma}^{*} =S_{G}(P^{*}P)^{-1}P^{*}(P\tilde{S}_{G}(P^{*}P)^{-1}P^{*}+F)P(P^{*} P)^{-1}S_{G}\] \[=S_{G}\tilde{S}_{G}(P^{*}P)^{-1}S_{G}+S_{G}\hat{F}S_{G},\] where \(F\equiv 0\mod C^{\infty}(\overline{M}\times\overline{M})\) and \(\hat{F}\equiv 0\). Thus \[S_{G}(P^{*}P)\hat{\sigma}\hat{\sigma}^{*}=S_{G}(P^{*}P)S_{G}\tilde{S}_{G}(P^{* }P)^{-1}S_{G}+S_{G}(P^{*}P)S_{G}\hat{F}S_{G}.\] From (4.14) and the complex stationary phase formula of Melin-Sjostrand [26], we deduce that \[S_{G}(P^{*}P)\hat{\sigma}\hat{\sigma}^{*}=S_{G}(I+R)S_{G}, \tag{6.4}\] where \(R\) is smoothing away \(\mu^{-1}(0)\cap X\) and for any \(p\in\mu^{-1}(0)\cap X\), let \(D\) be any small local coordinate patch of \(X\), \(p\in D\), if the Levi form is negative on \(D\), then \(R\equiv 0\). If the Levi form is positive on \(D\), we have \[R\equiv\int_{0}^{+\infty}e^{it\Psi(x,y)}r(x,y,t)dt,\] \(r(x,y,t)\in S_{1,0}^{n-1-\frac{d}{2}}(D\times D\times\mathbb{R}_{+})\), \(r(x,y,t)\sim\sum_{j=0}^{+\infty}t^{n-1-\frac{d}{2}-j}r_{j}(x,y)\) in \(S_{1,0}^{n-1-\frac{d}{2}}(D\times D\times\mathbb{R}_{+})\), \(r_{j}(x,y)\in C^{\infty}(D\times D)\), \(j=0,1,\ldots\), \(r_{0}(x,x)=0\), for every \(x\in\mu^{-1}(0)\cap D\). From [19, Theorem 4.9], we know that \(\operatorname{Ker}\left(I+R\right)\) is a finite dimensional subspace of \(C^{\infty}(X)\). It follows from \[\operatorname{Coker}\hat{\sigma}_{s}\subset\operatorname{Ker}\left(I+R\right) \cap H_{b}^{0}(X)_{s-\frac{1}{2}}^{G}\] that \(\operatorname{Coker}\hat{\sigma}_{s}\) is a finite dimensional subspace of \(C^{\infty}(X)^{G}\) and \(\operatorname{Coker}\hat{\sigma}_{s}\) is independent of \(s\). As before, let \(X_{G}:=(\mu^{-1}(0)\cap X)/G\). Let \(\iota_{X}:\mu^{-1}(0)\cap X\to X\) be the natural inclusion and let \(\iota_{X}^{*}:C^{\infty}(X)\to C^{\infty}(\mu^{-1}(0)\cap X)\) be the pull-back by \(\iota_{X}\). Let \(\iota_{X,G}:C^{\infty}(\mu^{-1}(0)\cap X)^{G}\to C^{\infty}(X_{G})\) be the natural identification. Put \[H_{b}^{0}(X)^{G}:=\left\{u\in L^{2}(X)^{G};\,\overline{\partial}_ {b}u=0\right\},\] \[H_{b}^{0}(X_{G}):=\left\{u\in L^{2}(X_{G});\,\overline{\partial} _{b,X_{G}}u=0\right\},\] where \(\overline{\partial}_{b,X_{G}}\) denotes the tangential Cauchy-Riemann operator on \(X_{G}\). For every \(s\in\mathbb{R}\), put \[H_{b}^{0}(X)_{s}^{G}:=\left\{u\in W^{s}(X);\,\overline{\partial}_ {b}u=0,\ \ g^{*}u=0,\ \ \forall g\in G\right\},\] \[H_{b}^{0}(X_{G})_{s}:=\left\{u\in W^{s}(X_{G});\,\overline{ \partial}_{b,X_{G}}u=0\right\}.\] Let \[\sigma_{1}:H_{b}^{0}(X)^{G} \to H_{b}^{0}(X_{G}),\] \[u \to\iota_{X,G}\circ\iota_{X}^{*}u.\] From [19, Theorem 5.3], \(\sigma_{1}\) extends by density to a bounded operator \[\sigma_{1}=\sigma_{1,s}:H_{b}^{0}(X)_{s}^{G}\to H_{b}^{0}(X_{G})_{s-\frac{d}{ 4}},\] for every \(s\in\mathbb{R}\). Let \(\triangle^{X}\) and \(\triangle^{X_{G}}\) be the (positive) Laplacians on \(X\) and \(X_{G}\) respectively. For \(s\in\mathbb{R}\), let \(\Lambda_{s}:=(I+\triangle^{X})^{\frac{s}{2}}\), \(\hat{\Lambda}_{s}:=(I+\triangle^{X_{G}})^{\frac{s}{2}}\). Fix \(s\in\mathbb{R}\). 
For \(u,v\in W^{s}(X)\), \(u^{\prime},v^{\prime}\in W^{s}(X_{G})\), we define the inner products \[(\,u\,|\,v\,)_{X,s}:=(\Lambda_{s}u\,|\,\Lambda_{s}v\,)_{X},\] \[(\,u^{\prime}\,|\,v^{\prime}\,)_{X_{G},s}:=(\hat{\Lambda}_{s}u^{\prime}\,|\, \hat{\Lambda}_{s}v^{\prime}\,)_{X_{G}}\] and let \(\left\|\cdot\right\|_{X,s}\) and \(\left\|\cdot\right\|_{X_{G},s}\) be the corresponding norms. For every \(s\in\mathbb{R}\), define \[(\operatorname{Im}\sigma_{1,s})^{\perp}:=\left\{u\in H_{b}^{0}(X_{G})_{s-\frac {d}{4}};\,(\,\sigma_{1,s}v\,|\,u\,)_{X_{G},s-\frac{d}{4}}=0,\ \ \forall v\in H_{b}^{0}(X)_{s}^{G}\right\}.\] The following theorem is the main result in [19] (see [19, Theorem 1.2]) **Theorem 6.2**.: _With the notations and assumptions used above, assume that \(\overline{\partial}_{b,X_{G}}\) has closed range. Then, for every \(s\in\mathbb{R}\), \(\operatorname{Ker}\sigma_{1,s}\) and \((\operatorname{Im}\sigma_{1,s})^{\perp}\) are finite dimensional subspaces of \(H^{0}_{b}(X)^{G}\cap C^{\infty}(X)^{G}\) and \(H^{0}_{b}(X_{G})\cap C^{\infty}(X_{G})\) respectively, \(\operatorname{Ker}\sigma_{1,s}\) and the index \(\dim\operatorname{Ker}\sigma_{1,s}-\dim\left(\operatorname{Im}\sigma_{1,s} \right)^{\perp}\) are independent of \(s\)._ For \(s\in\mathbb{R}\), define \[\operatorname{Coker}\sigma_{1}=\operatorname{Coker}\sigma_{1,s}:=\left\{u\in H ^{0}_{b}(X_{G})_{s-\frac{d}{4}};\,(\,u\,|\,\sigma_{1,s}v\,)_{X_{G}}=0,\ \ \forall v\in H^{0}_{b}(X)^{G}\cap C^{\infty}(X)^{G}\right\}, \tag{6.5}\] where \((\,\cdot\,|\,\cdot\,)_{X_{G}}\) is the \(L^{2}\) inner product on \(X_{G}\) induced by \(\langle\,\cdot\,|\,\cdot\,\rangle\). **Theorem 6.3**.: _With the notations and assumptions used above, assume that \(\overline{\partial}_{b,X_{G}}\) has closed range. Then, \(\operatorname{Coker}\sigma_{1,s}=(\operatorname{Im}\sigma_{1,\frac{d}{4}})^{\perp}\), for every \(s\in\mathbb{R}\). In particular, \(\operatorname{Coker}\sigma_{1,s}\) is a finite dimensional subspace of \(H^{0}_{b}(X_{G})\cap C^{\infty}(X_{G})\) and \(\operatorname{Coker}\sigma_{1,s}\) is independent of \(s\)._ Proof.: Let \(u\in\operatorname{Coker}\sigma_{1,s}\). By definition, we have \[(\,\sigma_{1,\frac{d}{4}}v\,|\,u\,)_{X_{G}}=0,\ \ \text{for every}\ v\in H^{0}_{b}(X)^{G}\cap C^{\infty}(X)^{G}. \tag{6.6}\] From (6.6) and the proof of [19, Theorem 1.2], we can check that \(u\in C^{\infty}(X_{G})\). Thus, (6.6) holds for all \(v\in H^{0}_{b}(X)^{G}_{\frac{d}{4}}\). Hence, \(u\in(\operatorname{Im}\sigma_{1,\frac{d}{4}})^{\perp}\). We have proved that \(\operatorname{Coker}\sigma_{1,s}\subset(\operatorname{Im}\sigma_{1,\frac{d}{4 }})^{\perp}\). It is clear that \((\operatorname{Im}\sigma_{1,\frac{d}{4}})^{\perp}\subset\operatorname{Coker} \sigma_{1,s}\). The theorem follows. End of the proof of Theorem 1.3.: For every \(s\in\mathbb{R}\), it is not difficult to see that \(\tilde{\sigma}_{G}=\tilde{\sigma}_{G,s}=\hat{\sigma}_{s}\circ\sigma_{1,s}\), where \(\tilde{\sigma}_{G,s}\) is given by (1.16), (1.17). From this observation, Theorem 6.1, Theorem 6.2 and Theorem 6.3, Theorem 1.3 follows. We now prove Theorem 1.4. We assume that (1.19) holds. Recall \(\mu^{-1}(0)\cap X=\widehat{X}\cup\widetilde{X}\) on which the Levi form is strongly pseudoconvex and pseudoconcave respectively. As before, let \(M_{G}=(\mu^{-1}(0)\cap M)/G\). For \(u\in H^{0}_{b}(\widehat{X}_{G})_{s},s\in\mathbb{R}\), we identify \(u\) with an element in \(H^{0}_{b}(X_{G})_{s}\) by putting \(u=0\) on \(\widetilde{X}_{G}\). 
For every \(s\in\mathbb{R}\), let \[\sigma_{2,s}=\sigma_{2}:H^{0}_{b}(\widehat{X}_{G})_{s} \to H^{0}(\overline{M}_{G})_{s+\frac{1}{2}},\] \[u \to B_{M_{G}}P_{M_{G}}u,\] where \(B_{M_{G}}\) and \(P_{M_{G}}\) are the Bergman projection on \(M_{G}\) and the Poisson operator on \(M_{G}\) respectively. For every \(s\in\mathbb{R}\), let \[\operatorname{Coker}\sigma_{2}=\operatorname{Coker}\sigma_{2,s}:=\left\{u\in H ^{0}(\overline{M}_{G})_{s+\frac{1}{2}};\,(\,u\,|\,\sigma_{2,s}v\,)_{M_{G}}=0, \ \ \forall v\in H^{0}_{b}(\widehat{X}_{G})\cap C^{\infty}(\widehat{X}_{G})\right\}. \tag{6.7}\] Now we are in a position to prove a key result. **Theorem 6.4**.: _With the notations and assumptions used above, assume that \(\overline{\partial}_{b,X_{G}}\) has closed range. Let \(s\in\mathbb{R}\). We have that \(\operatorname{Ker}\sigma_{2,s}\) and \(\operatorname{Coker}\sigma_{2,s}\) are finite dimensional subspaces of \(H^{0}_{b}(\widehat{X}_{G})\cap C^{\infty}(\widehat{X}_{G})\) and \(H^{0}(\overline{M}_{G})\cap C^{\infty}(\overline{M}_{G})\) respectively, \(\operatorname{Ker}\hat{\sigma}_{2,s}\) and \(\operatorname{Coker}\hat{\sigma}_{2,s}\) are independent of \(s\)._ Proof.: We extend \(\sigma_{2}\) to \(\sigma_{2}:\mathscr{D}^{\prime}(\widehat{X}_{G})\to\hat{\mathscr{D}}^{\prime}( \overline{M}_{G})\) by putting \[\sigma_{2}u=B_{M_{G}}P_{M_{G}}S_{\widehat{X}_{G}}u,\ \ u\in\mathscr{D}^{\prime}( \widehat{X}_{G}),\] where \(S_{\widehat{X}_{G}}\) denotes the Szego projection on \(\widehat{X}_{G}\). Let \(\sigma_{2}^{*}:\hat{\mathscr{D}}^{\prime}(\overline{M}_{G})\to\mathscr{D}^{ \prime}(\widehat{X}_{G})\) be the formal adjoint of \(\sigma_{2}\) with respect to \((\,\cdot\,|\,\cdot\,)_{\widehat{X}_{G}}\) and \((\,\cdot\,|\,\cdot\,)_{M_{G}}\). We have \[\sigma_{2}^{*}\sigma_{2}=S_{\widehat{X}_{G}}P_{M_{G}}^{*}B_{M_{G} }P_{M_{G}}S_{\widehat{X}_{G}} \tag{6.8}\] \[=S_{\widehat{X}_{G}}P_{M_{G}}^{*}\Big{(}P_{M_{G}}\hat{S}_{ \widehat{X}_{G}}(P_{M_{G}}^{*}P_{M_{G}})^{-1}P_{M_{G}}^{*}+F)P_{M_{G}}S_{ \widehat{X}_{G}}\] \[=S_{\widehat{X}_{G}}P_{M_{G}}^{*}P_{M_{G}}\hat{S}_{\widehat{X}_{ G}}S_{\widehat{X}_{G}}+S_{\widehat{X}_{G}}\hat{F}S_{\widehat{X}_{G}},\] where \(\hat{S}_{X_{G}}\) is the operator as in (4.12), \(F\equiv 0\mod C^{\infty}(\overline{M}_{G}\times\overline{M}_{G})\), \(\hat{F}\equiv 0\). From (6.8), it is straightforward to check that \[S_{\widehat{X}_{G}}(P_{M_{G}}^{*}P_{M_{G}})^{-1}\sigma_{2}^{*}\sigma_{2}=S_{ \widehat{X}_{G}}(I+\hat{R})S_{\widehat{X}_{G}},\] where \(\hat{R}\) is a complex Fourier integral operator of the same type, the same order, the same phase as \(S_{\widehat{X}_{G}}\) but the leading term vanishes at diagonal. From this observation, we can repeat the proof of [19, Theorem 4.15] with minor change and deduce that \(\operatorname{Ker}\,(I+\hat{R})\) is a finite dimensional subspace of \(C^{\infty}(\widehat{X}_{G})\). Thus, \(\operatorname{Ker}\hat{\sigma}_{2,s}\) is a finite dimensional subspace of \(H_{b}^{0}(\widehat{X}_{G})\cap C^{\infty}(\widehat{X}_{G})\). Now \[\sigma_{2}(P_{M_{G}}^{*}P_{M_{G}})^{-1}\sigma_{2}^{*}=B_{M_{G}}P_{M_{G}}S_{ \widehat{X}_{G}}(P_{M_{G}}^{*}P_{M_{G}})^{-1}S_{\widehat{X}_{G}}P_{M_{G}}^{*} B_{M_{G}}.\] From [14, Theorem 1.2], we have \[\sigma_{2}(P_{M_{G}}^{*}P_{M_{G}})^{-1}\sigma_{2}^{*}=B_{M_{G}}(I+R_{M_{G}})B_ {M_{G}}, \tag{6.9}\] where \(R_{M_{G}}\) is a complex Fourier integral operator of the same type, the same order, the same phase as \(B_{M_{G}}\), but the leading term vanishes at \(\operatorname{diag}\,(\widehat{X}_{G}\times\widehat{X}_{G})\). 
We can deduce from [19, Theorem 4.15] that \(\operatorname{Ker}\,(I+R_{M_{G}})\) is a finite dimensional subspace of \(C^{\infty}(\widehat{X}_{G})\). Hence, \(\operatorname{Coker}\hat{\sigma}_{2,s}\) is a finite dimensional subspace of \(H^{0}(\overline{M}_{G})\cap C^{\infty}(\overline{M}_{G})\). The proof follows. End of the proof of Theorem 1.4.: It is clear that for every \(s\in\mathbb{R}\), \[\sigma_{G}=\sigma_{G,s}=\sigma_{2}\circ\sigma_{1}\circ\hat{\sigma}:H^{0}( \overline{M})_{s}^{G}\to H^{0}(\overline{M}_{G})_{s-\frac{d}{4}},\] where \(\sigma_{G}\) is given by (1.21), (1.22). From Theorem 6.1, Theorem 6.2, Theorem 6.3 and Theorem 6.4, we deduce that \(\sigma\) is Fredholm, \(\operatorname{Ker}\sigma\) and \(\operatorname{Coker}\sigma\) are finite dimensional subspaces of \(H^{0}(\overline{M})^{G}\cap C^{\infty}(\overline{M})^{G}\) and \(H^{0}(\overline{M}_{G})\cap C^{\infty}(\overline{M}_{G})\) respectively. Moreover, \(\operatorname{Ker}\sigma\) and \(\operatorname{Coker}\sigma\) are independent of the choices of \(s\). The proof is completed. Proof of Theorem 1.6.: In the end of this section, we will prove Theorem 1.6. We will not assume (1.19). Let \(M_{1}\), \(M_{2}\) be bounded domains in \(\mathbb{C}^{n}\) with smooth boundary. Assume that \(M_{j}\) admits a compact holomorphic Lie group \(G\) action and \(M_{j}\) satisfies Assumption 1.1, for each \(j=1,2\). Let \[F:M_{1}\to M_{2},\] \[z\to(F_{1}(z),\ldots,F_{n}(z)),\] be the \(G\)-invariant map which satisfies the assumption of Theorem 1.6. We are going to prove that \(F\) extends smoothly to the boundary. Let \(\mu_{j}:M_{j}\to\mathfrak{g}^{*}\) be the corresponding moment map on \(M_{j}\) and let \(X_{j}\) be the boundary of \(M_{j}\), \(j=1,2\). **Lemma 6.5**.: _Let \(\tau\in C^{\infty}(\overline{M}_{1})\), \(\tau\equiv 1\) near \(\mu_{1}^{-1}(0)\cap X_{1}\). Then, \((1-\tau)F\) extends smoothly to the boundary._ Proof.: Fix \(j\in\{1,\ldots,n\}\). Since \(F_{j}(z)\) is bounded, \(F_{j}(z)\) is a \(G\)-invariant \(L^{2}\) holomorphic function on \(M_{1}\). We have \[(\tau F_{j})(z)=\tau(z)(B_{G,M_{1}}F_{j})(z)=((\tau B_{G,M_{1}})F_{j})(z), \tag{6.10}\] where \(B_{G,M_{1}}\) is the \(G\)-invariant Bergman projection on \(M_{1}\). In view of Theorem 1.2, we see that \(\tau B_{G,M_{1}}\equiv 0\mod C^{\infty}(\overline{M}_{1}\times\overline{M}_{1})\). From this observation and (6.10), we deduce that \(\tau F_{j}\in C^{\infty}(\overline{M}_{1})\). The lemma follows. From Theorem 1.2, we see that \[B_{G,M_{1}}(\cdot,w)\in C^{\infty}(\overline{M}_{1}),\ \ \text{for every}\ w\in M_{1}. \tag{6.11}\] Fix \(p\in\mu_{1}^{-1}(0)\cap X_{1}\). Let \(Z_{1},\ldots,Z_{n-d}\in C^{\infty}(U,T^{1,0}\mathbb{C}^{n})\) such that \[\text{span}\ \Big{\{}\eta-iJ\eta,Z_{1},\ldots,Z_{n-d};\,\eta\in\underline{ \mathfrak{g}}_{x}\Big{\}}=T_{x}^{1,0}\mathbb{C}^{n},\ \ \text{for every}\ x\in U, \tag{6.12}\] where \(U\) is a small open set of \(p\) in \(\mathbb{C}^{n}\) and \(J\) is the complex structure map on \(\mathbb{C}^{n}\). 
It follows from (1.9) and [17, Theorem 1.10] that there are \(f_{0},\ldots,f_{n-d}\in H^{0}(\overline{M}_{1})^{G}\cap C^{\infty}(\overline{ M}_{1})\) such that \[\det\ \Big{(}(a_{j,\ell})_{j,\ell=0}^{n-d}\Big{)}\neq 0, \tag{6.13}\] \[a_{0,\ell}=f_{\ell}(p),\ \ \ell=0,\ldots,n-d,\] \[a_{j,\ell}=(Z_{j}f_{\ell})(p),\ \ \ell=0,\ldots,n-d,j=1,\ldots,n-d.\] From (6.11) and (6.13), combined with the proof of [3, Part 3 of the proof of Lemma 1], we deduce that there are \(n-d+1\) points \(a_{0},a_{1},\ldots,a_{n-d}\) in \(M_{1}\) such that \[\det\ \Big{(}(b_{j,\ell})_{j,\ell=0}^{n-d}\Big{)}\neq 0, \tag{6.14}\] \[b_{0,\ell}=B_{G,M_{1}}(p,a_{\ell}),\ \ \ell=0,\ldots,n-d,\] \[b_{j,\ell}=(Z_{j,x}B_{G,M})(p,a_{\ell}),\ \ \ell=0,\ldots,n-d,j=1,\ldots,n-d.\] From Lemma 6.5, (6.14) and [3, Lemma 2], we get Theorem 1.6.
2303.17795
Signless Laplacian energies of non-commuting graphs of finite groups and related results
The non-commuting graph of a non-abelian group $G$ with center $Z(G)$ is a simple undirected graph whose vertex set is $G\setminus Z(G)$ and two vertices $x, y$ are adjacent if $xy \ne yx$. In this study, we compute Signless Laplacian spectrum and Signless Laplacian energy of non-commuting graphs of finite groups. We obtain several conditions such that the non-commuting graph of $G$ is Q-integral and observe relations between energy, Signless Laplacian energy and Laplacian energy. In addition, we look into the energetic hyper- and hypo-properties of non-commuting graphs of finite groups. We also assess whether the same graphs are Q-hyperenergetic and L-hyperenergetic.
Monalisha Sharma, Rajat Kanti Nath
2023-03-31T04:07:33Z
http://arxiv.org/abs/2303.17795v1
# Signless Laplacian energies of non-commuting graphs of finite groups and related results

###### Abstract

The non-commuting graph of a non-abelian group \(G\) with center \(Z(G)\) is a simple undirected graph whose vertex set is \(G\setminus Z(G)\) and two vertices \(x,y\) are adjacent if \(xy\neq yx\). In this study, we compute Signless Laplacian spectrum and Signless Laplacian energy of non-commuting graphs of finite groups. We obtain several conditions such that the non-commuting graph of \(G\) is Q-integral and observe relations between energy, Signless Laplacian energy and Laplacian energy. In addition, we look into the energetic hyper- and hypo-properties of non-commuting graphs of finite groups. We also assess whether the same graphs are Q-hyperenergetic and L-hyperenergetic.

_Department of Mathematical Sciences, Tezpur University, Napaam-784028, Sonitpur, Assam, India._ _Emails: [email protected] and [email protected]_

_Key words:_ Non-commuting graph, Spectrum, Energy. _2010 Mathematics Subject Classification:_ 20D60, 05C50, 15A18, 05C25.

## 1 Introduction

Recall that for a graph \(\mathcal{G}\), its spectrum, denoted by \(\mathrm{Spec}(\mathcal{G})\), is \(\{(\lambda_{1})^{k_{1}},(\lambda_{2})^{k_{2}},\ldots,(\lambda_{n})^{k_{n}}\}\), where \(\lambda_{i}\)'s are the eigenvalues of the adjacency matrix of \(\mathcal{G}\) with multiplicities \(k_{i}\) for \(1\leq i\leq n\), respectively. Let \(A(\mathcal{G})\) and \(D(\mathcal{G})\) be the adjacency matrix and degree matrix of \(\mathcal{G}\), respectively. Then \(L(\mathcal{G})=D(\mathcal{G})-A(\mathcal{G})\) is the Laplacian matrix of \(\mathcal{G}\). The set \(\{(\beta_{1})^{b_{1}},(\beta_{2})^{b_{2}},\ldots,(\beta_{m})^{b_{m}}\}\), where \(\beta_{j}\)'s are the eigenvalues of \(L(\mathcal{G})\) with multiplicities \(b_{j}\) for \(1\leq j\leq m\), respectively, is referred to as the Laplacian spectrum of \(\mathcal{G}\) and is denoted by \(\mathrm{L-spec}(\mathcal{G})\). Similarly, \(Q(\mathcal{G})=D(\mathcal{G})+A(\mathcal{G})\) is the Signless Laplacian matrix of \(\mathcal{G}\). The set \(\{(\gamma_{1})^{d_{1}},(\gamma_{2})^{d_{2}},\ldots,(\gamma_{l})^{d_{l}}\}\), where \(\gamma_{r}\)'s are the eigenvalues of \(Q(\mathcal{G})\) with multiplicities \(d_{r}\) for \(1\leq r\leq l\), respectively, is known as the Signless Laplacian spectrum of \(\mathcal{G}\) and is denoted by \(\mathrm{Q-spec}(\mathcal{G})\). A graph \(\mathcal{G}\) is called integral, L-integral and Q-integral if the eigenvalues of \(A(\mathcal{G})\), \(L(\mathcal{G})\) and \(Q(\mathcal{G})\), respectively, are integers. The energy, Laplacian energy and Signless Laplacian energy of \(\mathcal{G}\) are given by \(E(\mathcal{G})=\sum_{\lambda\in\mathrm{Spec}(\mathcal{G})}|\lambda|\), \(LE(\mathcal{G})=\sum_{\beta\in\mathrm{L-spec}(\mathcal{G})}\left|\beta-\frac{2|e(\mathcal{G})|}{|v(\mathcal{G})|}\right|\) and \(LE^{+}(\mathcal{G})=\sum_{\gamma\in\mathrm{Q-spec}(\mathcal{G})}\left|\gamma-\frac{2|e(\mathcal{G})|}{|v(\mathcal{G})|}\right|\), respectively, where \(v(\mathcal{G})\) and \(e(\mathcal{G})\) are the sets of vertices and edges of \(\mathcal{G}\), respectively. A finite graph \(\mathcal{G}\) is hyperenergetic if \(E(\mathcal{G})>E(K_{|v(\mathcal{G})|})\) and is hypoenergetic if \(E(\mathcal{G})<|v(\mathcal{G})|\). Likewise, the graph \(\mathcal{G}\) is L-hyperenergetic if \(LE(\mathcal{G})>LE(K_{|v(\mathcal{G})|})\) and is Q-hyperenergetic if \(LE^{+}(\mathcal{G})>LE^{+}(K_{|v(\mathcal{G})|})\). Gutman [20] and Walikar et al. [31]
pioneered the research of hyperenergetic graphs in 1999. Gutman [23] introduced hypoenergetic graphs in 2007. L-hyperenergetic and Q-hyperenergetic graphs were considered in [15]. Assume that \(G\) is a finite non-abelian group and that \(Z(G)=\{z\in G:zx=xz,\forall x\in G\}\) is the centre of \(G\). The non-commuting graph of \(G\), denoted by \(\Gamma_{G}\), is the simple undirected graph with vertex set \(G\setminus Z(G)\) in which two distinct vertices \(x\) and \(y\) are adjacent whenever \(xy\neq yx\). Erdős and Neumann [27] first considered this graph in 1976. Some further references on non-commuting graphs of finite groups are [1, 4, 6, 16, 25, 30]. Spectral aspects of this graph have also attracted the attention of numerous mathematicians. In [18], Ghorbani et al. calculated the spectrum of \(\Gamma_{G}\) for certain groups. The energy of the same was later explored by Ghorbani and Gharavi-Alkhansari in [17]. In [8], Dutta et al. computed the Laplacian spectrum of \(\Gamma_{G}\), following which Dutta and Nath [12] worked on the Laplacian energy of the same. The Signless Laplacian spectrum and energy of \(\Gamma_{G}\) have not yet been computed. At present, [2] is the only paper where Abdussakir et al. investigated the Signless Laplacian spectrum of non-commuting graphs of dihedral groups. However, various spectra and energies (including the Signless Laplacian spectrum and energy) of the complement of \(\Gamma_{G}\), known as the commuting graph of \(G\), are already computed in [7, 9, 10, 11, 13, 14, 28]. Various spectra and energies of non-commuting conjugacy class graphs of several families of finite groups are computed in [26]. In this paper, we consider the following questions and answer them to some extent.

**Question 1.1**.: _Which finite non-abelian groups give Q-integral non-commuting graphs?_

**Question 1.2**.: _Are there any finite non-abelian groups \(G\) such that \(\Gamma_{G}\) is hypoenergetic, hyperenergetic, L-hyperenergetic and Q-hyperenergetic?_

**Question 1.3**.: _Which finite non-abelian groups satisfy the following inequalities?_

* \(E(\Gamma_{G})\leq LE(\Gamma_{G})\)_._
* \(LE(\Gamma_{G})\leq LE^{+}(\Gamma_{G})\)_._

We compute Q-spec(\(\Gamma_{G}\)) and determine several conditions such that \(\Gamma_{G}\) is Q-integral. We also compute \(LE^{+}(\Gamma_{G})\) for many families of finite non-abelian groups and compare the various energies of \(\Gamma_{G}\). These comparisons are interesting because mathematicians want to know which graphs \(\mathcal{G}\) satisfy \(E(\mathcal{G})\leq LE(\mathcal{G})\) (see [7, 22, 24, 29]) and \(LE(\mathcal{G})\leq LE^{+}(\mathcal{G})\) (see [5, 7]). Using the energies of \(\Gamma_{G}\) for various classes of finite groups, we determine whether these graphs are hyperenergetic or hypoenergetic. In [19], Gutman posed Conjecture 1.4, which was later disproved by various mathematicians through counterexamples (see [21]).

**Conjecture 1.4**.: _Any finite graph \(\mathcal{G}\ncong K_{|v(\mathcal{G})|}\) is non-hyperenergetic._

It is worth mentioning that we also disprove Conjecture 1.4 by considering non-commuting graphs of finite groups (see Theorem 6.3(b)). We shall also determine finite groups such that their non-commuting graphs are Q-hyperenergetic and L-hyperenergetic. We conclude this section with the following results, which will help us to compute the spectrum, Laplacian spectrum and Signless Laplacian spectrum of \(\Gamma_{G}\) in the succeeding sections.
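All of the quantities defined above can also be evaluated numerically for a concrete group, which gives a useful sanity check on the closed-form expressions obtained later. The following Python sketch is a minimal illustration using NumPy; the pair encoding \((i,j)\leftrightarrow a^{i}b^{j}\) of \(D_{2m}\) and the helper names are assumptions of this sketch rather than notation used elsewhere. It builds the non-commuting graph of \(D_{6}\) from the group operation and computes \(E\), \(LE\) and \(LE^{+}\) directly from the definitions; its output agrees with the values for \(\Gamma_{D_{6}}\) appearing in Theorems 2.1 and 2.2 below.

```python
import numpy as np

# Dihedral group D_{2m}: encode a^i b^j as the pair (i, j), 0 <= i < m, j in {0, 1}.
# Multiplication rule: a^{i1} b^{j1} * a^{i2} b^{j2} = a^{i1 + (-1)^{j1} i2} b^{j1 + j2}.
def mult(x, y, m):
    (i1, j1), (i2, j2) = x, y
    return ((i1 + (-1) ** j1 * i2) % m, (j1 + j2) % 2)

def noncommuting_adjacency(m):
    """Adjacency matrix of the non-commuting graph of D_{2m}."""
    G = [(i, j) for i in range(m) for j in range(2)]
    Z = [x for x in G if all(mult(x, y, m) == mult(y, x, m) for y in G)]  # center Z(G)
    V = [x for x in G if x not in Z]                                       # vertex set G \ Z(G)
    A = np.zeros((len(V), len(V)))
    for r, x in enumerate(V):
        for c, y in enumerate(V):
            if r != c and mult(x, y, m) != mult(y, x, m):
                A[r, c] = 1.0
    return A

def energies(A):
    """Return (E, LE, LE^+) computed directly from the definitions."""
    n = A.shape[0]
    D = np.diag(A.sum(axis=1))
    avg = A.sum() / n  # 2|e(G)| / |v(G)|
    E = np.abs(np.linalg.eigvalsh(A)).sum()
    LE = np.abs(np.linalg.eigvalsh(D - A) - avg).sum()
    LEplus = np.abs(np.linalg.eigvalsh(D + A) - avg).sum()
    return E, LE, LEplus

print(energies(noncommuting_adjacency(3)))
# approx (7.2915, 8.4, 7.5446), i.e. 2 + 2*sqrt(7), 42/5 and 9/5 + sqrt(33)
```

Replacing the argument \(3\) by any \(m>2\) reproduces the corresponding closed-form values for \(\Gamma_{D_{2m}}\) derived in the next section.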
**Theorem 1.5** ([8, Corollary 2.3]).: _Let \(\mathcal{G}\) be a graph and \(\mathcal{G}=l_{1}K_{m_{1}}\cup l_{2}K_{m_{2}}\cup\cdots\cup l_{k}K_{m_{k}}\), where \(l_{i}K_{m_{i}}\) denotes the disjoint union of \(l_{i}\) copies of \(K_{m_{i}}\) for \(1\leq i\leq k\) and \(m_{1}<m_{2}<\cdots<m_{k}\). Then_ \[\text{L-spec}(\mathcal{G}^{c})=\left\{(0)^{1},\left(\sum_{i=1}^{k}l_{i}m_{i}-m_{k}\right)^{l_{k}(m_{k}-1)},\left(\sum_{i=1}^{k}l_{i}m_{i}-m_{k-1}\right)^{l_{k-1}(m_{k-1}-1)},\ldots,\right.\] \[\left.\left(\sum_{i=1}^{k}l_{i}m_{i}-m_{1}\right)^{l_{1}(m_{1}-1)},\left(\sum_{i=1}^{k}l_{i}m_{i}\right)^{\sum_{i=1}^{k}l_{i}-1}\right\}.\]

**Theorem 1.6** ([32, Corollary 2.3] and [33, Corollary 2.2]).: _Let \(\mathcal{G}\) be the complete \(r\)-partite graph \(K_{\underbrace{p_{1},\ldots,p_{1}}_{a_{1}},\ldots,\underbrace{p_{k},\ldots,p_{k}}_{a_{k}}}\) on \(n=\sum_{i=1}^{k}a_{i}p_{i}\) vertices, where \(r=\sum_{i=1}^{k}a_{i}\). Then, in particular, the characteristic polynomial of \(Q(\mathcal{G})\) is_ \[Q_{\mathcal{G}}(x)=\prod_{i=1}^{k}(x-n+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{k}(x-n+2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{k}\frac{a_{i}p_{i}}{x-n+2p_{i}}\right).\]

### The dihedral groups, \(D_{2m}\)

We consider \(D_{2m}:=\langle a,b:a^{m}=b^{2}=1;bab^{-1}=a^{-1}\rangle\), the dihedral groups of order \(2m\) (where \(m>2\)). Results regarding different energies of non-commuting graphs of \(D_{2m}\) are given below.

**Theorem 2.1** ([14, Corollary 4.1.7 and (4.3.e)]).: _Let \(G\) be isomorphic to \(D_{2m}\)._

1. _If_ \(m\) _is odd then_ \[E(\Gamma_{D_{2m}})=(m-1)+\sqrt{(m-1)(5m-1)}\;\;\text{and}\;\;LE(\Gamma_{D_{2m}})=\frac{2m(m-1)(m-2)+2m(2m-1)}{2m-1}.\]
2. _If_ \(m\) _is even then_ \[E(\Gamma_{D_{2m}})=(m-2)+\sqrt{(m-2)(5m-2)}\;\;\text{and}\;\;LE(\Gamma_{D_{2m}})=\frac{m(m-2)(m-4)+2m(m-1)}{m-1}.\]

**Theorem 2.2**.: _Let \(G\) be isomorphic to \(D_{2m}\), where \(m\) is odd. Then_ \[\text{\rm Q-spec}(\Gamma_{D_{2m}})=\left\{(m)^{m-2},(2m-3)^{m-1},\left(\frac{4m-3+\sqrt{8m^{2}-16m+9}}{2}\right)^{1},\left(\frac{4m-3-\sqrt{8m^{2}-16m+9}}{2}\right)^{1}\right\}\] _and_ \[LE^{+}(\Gamma_{D_{2m}})=\begin{cases}\frac{9}{5}+\sqrt{33},&\text{if}\;m=3\\ \frac{2m^{3}-10m^{2}+12m-3}{2m-1}+\sqrt{8m^{2}-16m+9},&\text{if}\;m\geq 5.\end{cases}\]

Proof.: If \(G\cong D_{2m}\) and \(m\) is odd then \(|v(\Gamma_{D_{2m}})|=2m-1\) and \(\Gamma_{D_{2m}}=K_{m\cdot 1,\,1\cdot(m-1)}\), the complete multipartite graph with \(m\) parts of size \(1\) (so \(a_{1}=m\), \(p_{1}=1\)) and one part of size \(m-1\) (so \(a_{2}=1\), \(p_{2}=m-1\)). Using Theorem 1.6(b), we have \[Q_{\Gamma_{D_{2m}}}(x)=\prod_{i=1}^{2}(x-(2m-1)+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{2}(x-(2m-1)+2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-(2m-1)+2p_{i}}\right)\] \[=(x-(2m-2))^{0}(x-m)^{m-2}(x-2m+3)^{m}(x-1)\left(1-\frac{m}{x-2m+3}-\frac{m-1}{x-1}\right)\] \[=(x-m)^{m-2}(x-(2m-3))^{m-1}(x^{2}-(4m-3)x+2m^{2}-2m).\] Thus, \(\text{\rm Q-spec}(\Gamma_{D_{2m}})=\left\{(m)^{m-2},(2m-3)^{m-1},\left(\frac{4m-3+\sqrt{8m^{2}-16m+9}}{2}\right)^{1},\left(\frac{4m-3-\sqrt{8m^{2}-16m+9}}{2}\right)^{1}\right\}\). The number of edges in \(\Gamma_{D_{2m}}^{c}\) is \(\frac{m^{2}-3m+2}{2}\). Thus, \(|e(\Gamma_{D_{2m}})|=\frac{(2m-1)(2m-1-1)}{2}-\frac{m^{2}-3m+2}{2}=\frac{3m(m-1)}{2}\).
Now \[\left|m-\frac{2|e(\Gamma_{D_{2m}})|}{|v(\Gamma_{D_{2m}})|}\right|=\left|\frac {-m(m-2)}{2m-1}\right|=\frac{m(m-2)}{2m-1},\] \[\left|2m-3-\frac{2|e(\Gamma_{D_{2m}})|}{|v(\Gamma_{D_{2m}})|}\right|=\left| \frac{m^{2}-5m+3}{2m-1}\right|=\begin{cases}\frac{3}{5},&\text{if}\;m=3\\ \frac{m^{2}-5m+3}{2m-1},&\text{if}\;m\geq 5,\end{cases}\] \[\left|\frac{1}{2}\left(4m-3+\sqrt{8m^{2}-16m+9}\right)-\frac{2|e( \Gamma_{D_{2m}})|}{|v(\Gamma_{D_{2m}})|}\right|= \left|\frac{1}{2}\left(\sqrt{8m^{2}-16m+9}+m-\frac{3}{2}+\frac{3}{4m-2 }\right)\right|\] \[= \frac{1}{2}\left(\sqrt{8m^{2}-16m+9}+m-\frac{3}{2}+\frac{3}{4m-2 }\right)\] and \[\left|\frac{1}{2}\left(4m-3-\sqrt{8m^{2}-16m+9}\right)-\frac{2|e( \Gamma_{D_{2m}})|}{|v(\Gamma_{D_{2m}})|}\right|= \left|\frac{1}{2}\left(-\sqrt{8m^{2}-16m+9}+m-\frac{3}{2}+\frac{3}{4m- 2}\right)\right|\] \[= \frac{1}{2}\left(\sqrt{8m^{2}-16m+9}-m+\frac{3}{2}-\frac{3}{4m-2 }\right).\] Therefore, for \(m=3\) we have \(LE^{+}(\Gamma_{D_{2m}})=\frac{9}{5}+\sqrt{33}\). For \(m\geq 5\) we have \[LE^{+}(\Gamma_{D_{2m}})= (m-2)\times\frac{m(m-2)}{2m-1}+(m-1)\times\frac{m^{2}-5m+3}{2m-1}+\] \[\frac{1}{2}\left(\sqrt{8m^{2}-16m+9}+m-\frac{3}{2}+\frac{3}{4m-2 }\right)+\frac{1}{2}\left(\sqrt{8m^{2}-16m+9}-m+\frac{3}{2}-\frac{3}{4m-2}\right)\] and the result follows on simplification. **Theorem 2.3**.: _Let \(G\) be isomorphic to \(D_{2m}\), where \(m\) is even. Then_ \[\text{Q-spec}(\Gamma_{D_{2m}})\] \[=\left\{(2m-4)^{\frac{m}{2}},(m)^{m-3},(2m-6)^{\frac{m}{2}-1}, \left(2m-3+\sqrt{2m^{2}-8m+9}\right)^{1},\left(2m-3-\sqrt{2m^{2}-8m+9}\right)^ {1}\right\}\] _and_ \[LE^{+}(\Gamma_{D_{2m}})=\begin{cases}\frac{m^{3}-4m^{2}+12}{2m-2}+2\sqrt{2m^{2 }-8m+9},&\text{if $4\leq m\leq 8$}\\ \frac{m^{3}-8m^{2}+16m-6}{m-1}+2\sqrt{2m^{2}-8m+9},&\text{if $m\geq 10$}.\end{cases}\] Proof.: If \(G\cong D_{2m}\) and \(m\) is even then \(|v(\Gamma_{D_{2m}})|=2m-2\) and \(\Gamma_{D_{2m}}=K_{\frac{m}{2},2,1,(m-2)}\). Using Theorem 1.6(b), we have \[Q_{\Gamma_{D_{2m}}}(x)= \prod_{i=1}^{2}(x-(2m-2)+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{2}( x-(2m-2)+2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-(2m-2)+2p_{i}}\right)\] \[= \left(x-(2m-4)\right)^{\frac{m}{2}}(x-m)^{m-3}(x-2m+6)^{\frac{m} {2}}(x-2)\left(1-\frac{m}{x-2m+6}-\frac{m-2}{x-2}\right)\] \[= \left(x-(2m-4)\right)^{\frac{m}{2}}(x-m)^{m-3}(x-(2m-6))^{\frac{ m}{2}-1}(x^{2}-(4m-6)x+2m^{2}-4m).\] Therefore \[\text{Q-spec}(\Gamma_{D_{2m}})\] \[= \left\{(2m-4)^{\frac{m}{2}},(m)^{m-3},(2m-6)^{\frac{m}{2}-1}, \left(2m-3+\sqrt{2m^{2}-8m+9}\right)^{1},\left(2m-3-\sqrt{2m^{2}-8m+9}\right)^ {1}\right\}.\] Number of edges in \(\Gamma_{D_{2m}}^{c}\) is \(\frac{m^{2}-4m+6}{2}\) and so \(|e(\Gamma_{D_{2m}})|=\frac{(2m-2)(2m-2-1)}{2}-\frac{m^{2}-4m+6}{2}=\frac{3m(m- 2)}{2}\). 
Now \[\left|2m-4-\frac{2|e(\Gamma_{D_{2m}})|}{|v(\Gamma_{D_{2m}})|}\right|=\left| \frac{(m-2)(m-4)}{2m-2}\right|=\frac{(m-2)(m-4)}{2m-2},\] \[\left|m-\frac{2|e(\Gamma_{D_{2m}})|}{|v(\Gamma_{D_{2m}})|}\right|=\left|\frac {(-m^{2}+4m)}{2m-2}\right|=\frac{(m^{2}-4m)}{2m-2},\] \[\left|2m-6-\frac{2|e(\Gamma_{D_{2m}})|}{|v(\Gamma_{D_{2m}})|}\right|=\left| \frac{(m^{2}-10m+12)}{2m-2}\right|=\begin{cases}\frac{(m^{2}+10m-12)}{2m-2}, &\text{if $m\leq 8$}\\ \frac{(m^{2}-10m+12)}{2m-2},&\text{if $m\geq 10$},\end{cases}\] \[\left|2m-3+\sqrt{2m^{2}-8m+9}-\frac{2|e(\Gamma_{D_{2m}})|}{|v( \Gamma_{D_{2m}})|}\right|= \left|\sqrt{2m^{2}-8m+9}+\frac{m}{2}-\frac{3}{2}+\frac{3}{2m-2}\right|\] \[= \sqrt{2m^{2}-8m+9}+\frac{m}{2}-\frac{3}{2}+\frac{3}{2m-2}\] and \[\left|2m-3-\sqrt{2m^{2}-8m+9}-\frac{2|e(\Gamma_{D_{2m}})|}{|v( \Gamma_{D_{2m}})|}\right|= \left|-\sqrt{2m^{2}-8m+9}+\frac{m}{2}-\frac{3}{2}+\frac{3}{2m-2}\right|\] \[= \sqrt{2m^{2}-8m+9}-\frac{m}{2}+\frac{3}{2}-\frac{3}{2m-2}.\] Therefore, for \(4\leq m\leq 8\), we have \[LE^{+}(\Gamma_{D_{2m}})= \frac{m}{2}\times\frac{(m-2)(m-4)}{2m-2}+(m-3)\times\frac{(m^{2}- 4m)}{2m-2}+\left(\frac{m}{2}-1\right)\times\frac{-(m^{2}-10m+12)}{2m-2}\] \[+\sqrt{2m^{2}-8m+9}+\frac{m}{2}-\frac{3}{2}+\frac{3}{2m-2}+\sqrt {2m^{2}-8m+9}-\frac{m}{2}+\frac{3}{2}-\frac{3}{2m-2}\] and for \(m\geq 10\), we have \[LE^{+}(\Gamma_{D_{2m}})= \frac{m}{2}\times\frac{(m-2)(m-4)}{2m-2}+(m-3)\times\frac{(m^{2}- 4m)}{2m-2}+\left(\frac{m}{2}-1\right)\times\frac{(m^{2}-10m+12)}{2m-2}\] \[+\sqrt{2m^{2}-8m+9}+\frac{m}{2}-\frac{3}{2}+\frac{3}{2m-2}+\sqrt {2m^{2}-8m+9}-\frac{m}{2}+\frac{3}{2}-\frac{3}{2m-2}.\] The required expressions for \(LE^{+}(\Gamma_{D_{2m}})\) can be obtained on simplification. **Theorem 2.4**.: _If \(G\) is isomorphic to \(D_{2m}\) then_ * \(E(\Gamma_{D_{2m}})\leq LE^{+}(\Gamma_{D_{2m}})\leq LE(\Gamma_{D_{2m}})\)_, equality holds if and only if_ \(G\cong D_{8}\)_._ * \(\Gamma_{D_{2m}}\) _is non-hypoenergetic as well as non-hyperenergetic._ * \(\Gamma_{D_{6}}\) _is L-hyperenergetic but not Q-hyperenergetic._ \(\Gamma_{D_{8}}\) _is not L-hyperenergetic and not Q-hyperenergetic. If_ \(m\neq 3,4\) _then_ \(\Gamma_{D_{2m}}\) _is Q-hyperenergetic and L-hyperenergetic._ Proof.: (a) **Case 1:**\(m\) is odd For \(m=3\), using Theorems 2.1 and 2.2, we have \(E(\Gamma_{D_{6}})=2+2\sqrt{7}\), \(LE(\Gamma_{D_{6}})=\frac{42}{5}\) and \(LE^{+}(\Gamma_{D_{6}})=\frac{9}{5}+\sqrt{33}\). Clearly, \(E(\Gamma(D_{6}))<LE^{+}(\Gamma(D_{6}))<LE(\Gamma(D_{6}))\). For \(m\geq 5\), using Theorems 2.1 and 2.2, we have \[LE(\Gamma_{D_{2m}})-LE^{+}(\Gamma_{D_{2m}})=\frac{8m^{2}-10m+3}{2m-1}-\sqrt{8 m^{2}-16m+9} \tag{2.1}\] and \[LE^{+}(\Gamma_{D_{2m}})-E(\Gamma_{D_{2m}})=\frac{2m^{2}(m-6)+15m-4}{2m-1}+ \sqrt{8m^{2}-16m+9}-\sqrt{5m^{2}-6m+1}. \tag{2.2}\] Since \(8m^{2}-10m+3>0\), \((2m-1)\sqrt{8m^{2}-16m+9}>0\) and \((8m^{2}-10m+3)^{2}-\left(\sqrt{8m^{2}-16m+9}\right)^{2}(2m-1)^{2}=32m^{3}(m-2 )+8m(5m-1)>0\) we have \(8m^{2}-10m+3-(2m-1)\sqrt{8m^{2}-16m+9}>0\). Therefore, by (2.1), \((2m-1)(LE(\Gamma_{D_{2m}})-LE^{+}(\Gamma_{D_{2m}}))>0\). Hence, \(LE(\Gamma_{D_{2m}})>LE^{+}(\Gamma_{D_{2m}})\). Again, \(\sqrt{8m^{2}-16m+9}>0\), \(\sqrt{5m^{2}-6m+1}>0\) and \(\left(\sqrt{8m^{2}-16m+9}\right)^{2}-\left(\sqrt{5m^{2}-6m+1}\right)^{2}=m(3m- 10)+8>0\). Therefore, \(\sqrt{8m^{2}-16m+9}-\sqrt{5m^{2}-6m+1}>0\). Since \(2m^{2}(m-6)+15m-4>0\) we have \(\frac{2m^{2}(m-6)+15m-4}{2m-1}+\sqrt{8m^{2}-16m+9}-\sqrt{5m^{2}-6m+1}>0\). Therefore, by (2.2), \(LE^{+}(\Gamma_{D_{2m}})>E(\Gamma_{D_{2m}})\). 
Hence, \(E(\Gamma_{D_{2m}})<LE^{+}(\Gamma_{D_{2m}})<LE(\Gamma_{D_{2m}})\). **Case 2:**\(m\) is even For \(4\leq m\leq 8\), using Theorems 2.1 and 2.3, we have \[LE(\Gamma_{D_{2m}})-LE^{+}(\Gamma_{D_{2m}})=\frac{m^{3}-4m^{2}+12m-12}{2m-2}- 2\sqrt{2m^{2}-8m+9} \tag{2.3}\] and \[LE^{+}(\Gamma_{D_{2m}})-E(\Gamma_{D_{2m}})=\frac{(m-4)(m^{2}-2m-2)}{m-1}+2 \sqrt{2m^{2}-8m+9}-\sqrt{5m^{2}-12m+4}. \tag{2.4}\] Since \(m^{3}-4m^{2}+12m-12>0\), \(2(2m-2)\sqrt{2m^{2}-8m+9}>0\) and \[\left(m^{3}-4m^{2}+12m-12\right)^{2}-\left(2\sqrt{2m^{2}-8m+9}\right)^{2}(2m- 2)^{2}=m(m-4)^{2}(m-2)(m^{2}+2m-4)\geq 0\] (equality holds if and only if \(m=4\)). It follows that \(m^{3}-4m^{2}+12m-12-2(2m-2)\sqrt{2m^{2}-8m+9}\geq 0\). Therefore, by (2.3), \((2m-2)\left(LE(\Gamma_{D_{2m}})-LE^{+}(\Gamma_{D_{2m}})\right)\geq 0\). Hence, \(LE(\Gamma_{D_{2m}})\geq LE^{+}(\Gamma_{D_{2m}})\) equality holds if and only if \(G\cong D_{8}\). Again, \(2\sqrt{2m^{2}-8m+9}>0,\sqrt{5m^{2}-12m+4}>0\) and \(\left(2\sqrt{2m^{2}-8m+9}\right)^{2}-\left(\sqrt{5m^{2}-12m+4}\right)^{2}=(m-4 )(3m-8)\geq 0\). Therefore, \(2\sqrt{2m^{2}-8m+9}-\sqrt{5m^{2}-12m+4}\geq 0\) (equality holds if and only if \(m=4\)). Therefore, by (2.4), \(LE^{+}(\Gamma_{D_{2m}})\geq E(\Gamma_{D_{2m}})\). Hence, \(E(\Gamma_{D_{2m}})\leq LE^{+}(\Gamma_{D_{2m}})\leq LE(\Gamma_{D_{2m}})\) equality holds if and only if \(G\cong D_{8}\). For \(m\geq 10\), using Theorems 2.1 and 2.3, we have \[LE(\Gamma_{D_{2m}})-LE^{+}(\Gamma_{D_{2m}})=\frac{4m^{2}-10m+6}{m-1}-2\sqrt{2 m^{2}-8m+9} \tag{2.5}\] and \[LE^{+}(\Gamma_{D_{2m}})-E(\Gamma_{D_{2m}})=\frac{m^{3}-9m^{2}+19m-8}{m-1}+2 \sqrt{2m^{2}-8m+9}-\sqrt{5m^{2}-12m+4}. \tag{2.6}\] Since \(4m^{2}-10m+6>0\), \(2(m-1)\sqrt{2m^{2}-8m+9}>0\) and \((4m^{2}-10m+6)^{2}-\left(2\sqrt{2m^{2}-8m+9}\right)^{2}(m-1)^{2}=8m^{3}(m-4)+8m(5m -2)>0\) we have \(4m^{2}-10m+6-2(m-1)\sqrt{2m^{2}-8m+9}>0\). Therefore, by (2.5), \((m-1)(LE(\Gamma_{D_{2m}})-LE^{+}(\Gamma_{D_{2m}}))>0\). Hence, \(LE(\Gamma_{D_{2m}})>LE^{+}(\Gamma_{D_{2m}})\). Again, \(2\sqrt{2m^{2}-8m+9}>0,\sqrt{5m^{2}-12m+4}>0\) and \(\left(2\sqrt{2m^{2}-8m+9}\right)^{2}-\left(\sqrt{5m^{2}-12m+4}\right)^{2}=m(3m- 10)+8>0\). Therefore, \(2\sqrt{2m^{2}-8m+9}-\sqrt{5m^{2}-12m+4}>0\). Since \(m^{3}-9m^{2}+19m-8>0\) we have \(\frac{m^{3}-9m^{2}+19m-8}{m-1}+2\sqrt{2m^{2}-8m+9}-\sqrt{5m^{2}-12m+4}>0\). Therefore, by (2.6), \(LE^{+}(\Gamma_{D_{2m}})>E(\Gamma_{D_{2m}})\). Hence, \(E(\Gamma_{D_{2m}})<LE^{+}(\Gamma_{D_{2m}})<LE(\Gamma_{D_{2m}})\). (b) **Case 1:**\(m\) is odd Here, \(|v(\Gamma_{D_{2m}})|=2m-1\) and \(E(K_{|v(\Gamma_{D_{2m}})|})=LE(K_{|v(\Gamma_{D_{2m}})|})=LE^{+}(K_{|v(\Gamma_ {D_{2m}})|})=4m-4\). Using Theorem 2.1, we have \[E(\Gamma_{D_{2m}})-|v(\Gamma_{D_{2m}})|=\sqrt{(m-1)(5m-1)}-m \tag{2.7}\] and \[E(K_{|v(\Gamma_{D_{2m}})|})-E(\Gamma_{D_{2m}})=3(m-1)-\sqrt{(m-1)(5m-1)}. \tag{2.8}\] Since \(\sqrt{(m-1)(5m-1)}>0\), \(m>0\) and \(\left(\sqrt{(m-1)(5m-1)}\right)^{2}-m^{2}=4m^{2}-6m+1>0\) we have \(\sqrt{(m-1)(5m-1)}-m>0\). Therefore, by (2.7), \(E(\Gamma_{D_{2m}})>|v(\Gamma_{D_{2m}})|\). Again, \(\sqrt{(m-1)(5m-1)}>0\), \(3(m-1)>0\) and \((3(m-1))^{2}-\left(\sqrt{(m-1)(5m-1)}\right)^{2}=4(m^{2}-3m+2)>0\) and so \(3(m-1)-\sqrt{(m-1)(5m-1)}>0\). Therefore, by (2.8), \(E(K_{|v(\Gamma_{D_{2m}})|})>E(\Gamma_{D_{2m}})\). **Case 2:**\(m\) is even Here, \(|v(\Gamma_{D_{2m}})|=2m-2\) and \(E(K_{|v(\Gamma_{D_{2m}})|})=LE(K_{|v(\Gamma_{D_{2m}})|})=LE^{+}(K_{|v(\Gamma_ {D_{2m}})|})=4m-6\). 
Using Theorem 2.1, we have \[E(\Gamma_{D_{2m}})-|v(\Gamma_{D_{2m}})|=\sqrt{(m-2)(5m-2)}-m \tag{2.9}\] and \[E(K_{|v(\Gamma_{D_{2m}})|})-E(\Gamma_{D_{2m}})=3(m-2)+2-\sqrt{(m-2)(5m-2)}. \tag{2.10}\] Since \(\sqrt{(m-2)(5m-2)}>0\), \(m>0\) and \(\left(\sqrt{(m-2)(5m-2)}\right)^{2}-m^{2}=4(m^{2}-3m+1)>0\) we have \(\sqrt{(m-2)(5m-2)}-m>0\). Therefore, by (2.9), \(E(\Gamma_{D_{2m}})>|v(\Gamma_{D_{2m}})|\). Again, \(\sqrt{(m-2)(5m-2)}>0\), \(3(m-2)+2>0\) and \[(3(m-2)+2)^{2}-\left(\sqrt{(m-2)(5m-2)}\right)^{2}=4(m^{2}-3m+3)>0\] and so \(3(m-2)+2-\sqrt{(m-2)(5m-2)}>0\). Therefore, by (2.10), \(E(K_{|v(\Gamma_{D_{2m}})|})>E(\Gamma_{D_{2m}})\). (c) **Case 1:**\(m\) is odd For \(m=3\), using Theorems 2.1 and 2.2, \(LE(\Gamma(D_{6}))=\frac{42}{5}\), \(LE^{+}(\Gamma(D_{6}))=\frac{9}{5}+\sqrt{33}\) and \(LE^{+}(K_{|v(\Gamma(D_{6}))|})=LE(K_{|v(\Gamma(D_{6}))|})=8\). Clearly, \[LE^{+}(\Gamma(D_{6}))<LE^{+}(K_{|v(\Gamma(D_{6})|})|)=LE(K_{|v(\Gamma(D_{6}))| })<LE(\Gamma(D_{6})).\] For \(m\geq 5\), using Theorem 2.2, we have \[LE^{+}(\Gamma_{D_{2m}})-LE^{+}(K_{|v(\Gamma_{D_{2m}})|})=\frac{2m^{2}(m-9)+24 m-7}{2m-1}+\sqrt{8m^{2}-16m+9}>0.\] Therefore, \(LE^{+}(\Gamma_{D_{2m}})>LE^{+}(K_{|v(\Gamma_{D_{2m}})|})\) which implies \(\Gamma_{D_{2m}}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{D_{2m}}\) is L-hyperenergetic. **Case 2:**\(m\) is even For \(m=4\), using Theorem 2.1, we have \(LE(\Gamma(D_{8}))=8\) and \(LE(K_{|v(\Gamma(D_{8}))|})=10\). Clearly, \(LE(\Gamma(D_{8}))\)\(<LE(K_{|v(\Gamma(D_{8}))|})\). Therefore, \(\Gamma_{D_{8}}\) is not L-hyperenergetic and not Q-hyperenergetic. Using Theorem 2.3, for \(m=6\) and \(8\), we have \[LE^{+}(\Gamma_{D_{2m}})-LE^{+}(K_{|v(\Gamma_{D_{2m}})|})=\frac{m^{2}(m-12)+20m} {2m-2}+2\sqrt{2m^{2}-8m+9}>0\] and for \(m\geq 10\), \(LE^{+}(\Gamma_{D_{2m}})-LE^{+}(K_{|v(\Gamma_{D_{2m}})|})=\frac{m^{2}(m-12)+26m-9} {m-1}+2\sqrt{2m^{2}-8m+9}>0\). Therefore, \(LE^{+}(\Gamma_{D_{2m}})>LE^{+}(K_{|v(\Gamma_{D_{2m}})|})\) which implies \(\Gamma_{D_{2m}}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{D_{2m}}\) is L-hyperenergetic. In Theorem 2.4, we compare \(E(\Gamma_{D_{2m}})\), \(LE(\Gamma_{D_{2m}})\) and \(LE^{+}(\Gamma_{D_{2m}})\). However, in the following figures, we show how close are they. ### The Quasidihedral groups, \(QD_{2^{n}}\) We consider \(QD_{2^{n}}:=\langle a,b:a^{2^{n-1}}=b^{2}=1,bab^{-1}=a^{2^{n-2}-1}\rangle\), the quasidihedral groups of order \(2^{n}\) (where \(n\geq 4\)). Results regarding different energies of non-commuting graphs of \(QD_{2^{n}}\) are given below. **Theorem 2.5** ([14, Result 1.2.16 (d)]and [12, Proposition 3.2]).: _Let \(G\) be isomorphic to \(QD_{2^{n}}\). Then_ \[E(\Gamma_{QD_{2^{n}}})=(2^{n-1}-2)+2\sqrt{(5\times 2^{n-2}-1)(2^{n-2}-1)}\, \text{ and }\,LE(\Gamma_{QD_{2^{n}}})=\frac{2^{3n-3}-2^{2n}+3\times 2^{n}}{2^{n-1}-1}.\] **Theorem 2.6**.: _Let \(G\) be isomorphic to \(QD_{2^{n}}\). Then_ \[\text{\rm Q-spec}(\Gamma_{QD_{2^{n}}})= \left\{(2^{n}-4)^{2^{n-2}},(2^{n}-2^{n-1})^{2^{n-1}-3},(2^{n}-6) ^{2^{n-2}-1},\left(2^{n}-3+\sqrt{2^{2n-1}-2^{n+2}+9}\right)^{1},\right.\] \[\left.\left(2^{n}-3-\sqrt{2^{2n-1}-2^{n+2}+9}\right)^{1}\right\}\] _and_ \[LE^{+}(\Gamma_{QD_{2^{n}}})=\begin{cases}\frac{134}{7}+2\sqrt{73},&\text{if }n=4\\ \frac{2^{3n-2}+2^{n+4}-2^{2n+2}-12}{2^{n}-2}+2\sqrt{2^{2n-1}-2^{n+2}+9},&\text{ if }n\geq 5.\end{cases}\] Proof.: If \(G\cong QD_{2^{n}}\) then \(|v(\Gamma_{QD_{2^{n}}})|=2^{n}-2\) and \(\Gamma_{QD_{2^{n}}}=K_{2^{n-2},2,1,(2^{n-1}-2)}\). 
Using Theorem 1.6(b), we have \[Q_{\Gamma_{QD_{2^{n}}}}(x)= \prod_{i=1}^{2}(x-(2^{n}-2)+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{ 2}(x-(2^{n}-2)+2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-(2^{n }-2)+2p_{i}}\right)\] \[= (x-2^{n}+4)^{2^{n-2}}(x-2^{n}+2^{n-1})^{2^{n-1}-3}(x-2^{n}+6)^{2^ {n-2}}(x-2)\left(1-\frac{2^{n-1}}{x-2^{n}+6}-\frac{2^{n-1}-2}{x-2}\right)\] \[= (x-(2^{n}-4))^{2^{n-2}}(x-(2^{n}-2^{n-1}))^{2^{n-1}-3}(x-(2^{n}-6 ))^{2^{n-2}-1}(x^{2}-(2^{n+1}-6)x+2^{2n-1}-2^{n+1}).\] Thus, \[\text{\rm Q-spec}(\Gamma_{QD_{2^{n}}})= \left\{(2^{n}-4)^{2^{n-2}},(2^{n}-2^{n-1})^{2^{n-1}-3},(2^{n}-6)^ {2^{n-2}-1},\left(2^{n}-3+\sqrt{2^{2n-1}-2^{n+2}+9}\right)^{1},\right.\] \[\left.\left(2^{n}-3-\sqrt{2^{2n-1}-2^{n+2}+9}\right)^{1}\right\}.\] Number of edges of \(\Gamma_{QD_{2^{n}}}^{\zeta}\) is \(2^{2n-3}-2^{n}+3\). Thus, \(|e(\Gamma_{QD_{2^{n}}})|=\frac{(2^{n}-2)(2^{n}-2-1)}{2}-(2^{2n-3}-2^{n}+3)\)\(=3(2^{2n-3}-2^{n-1})\). Now \[\left|2^{n}-4-\frac{2|e(\Gamma_{QD_{2^{n}}})|}{|v(\Gamma_{QD_{2^{n}}})|}\right| =\left|\frac{8+2^{2n-2}-3\times 2^{n}}{2^{n}-2}\right|=\frac{8+2^{2n-2}-3 \times 2^{n}}{2^{n}-2},\] Figure 1: Energies of \(\Gamma_{D_{2n}}\), \(m\) is odd Figure 2: Energies of \(\Gamma_{D_{2n}}\), \(m\) is even \[\left|2^{n}-6-\frac{2|e(\Gamma_{QD_{2^{n}}})|}{|v(\Gamma_{QD_{2^{n}}})|}\right|= \left|\frac{12+2^{2n-2}-5\times 2^{n}}{2^{n}-2}\right|=\begin{cases}\frac{2}{7},& \text{if }n=4\\ \frac{12+2^{2n-2}-5\times 2^{n}}{2^{n}-2},&\text{if }n\geq 5,\end{cases}\] \[\left|2^{n}-3+\sqrt{2^{2n-1}-2^{n+2}+9}-\frac{2|e(\Gamma_{QD_{2^{n}}})|}{|v( \Gamma_{QD_{2^{n}}})|}\right|= \left|\sqrt{2^{2n-1}-2^{n+2}+9}+\frac{2^{2n-2}-2^{n+1}+6}{2^{n}- 2}\right|\] \[= \sqrt{2^{2n-1}-2^{n+2}+9}+\frac{2^{2n-2}-2^{n+1}+6}{2^{n}-2}\] and \[\left|2^{n}-3-\sqrt{2^{2n-1}-2^{n+2}+9}-\frac{2|e(\Gamma_{QD_{2^ {n}}})|}{|v(\Gamma_{QD_{2^{n}}})|}\right|= \left|-\sqrt{2^{2n-1}-2^{n+2}+9}+\frac{2^{2n-2}-2^{n+1}+6}{2^{n} -2}\right|\] \[= \sqrt{2^{2n-1}-2^{n+2}+9}-\frac{2^{2n-2}-2^{n+1}+6}{2^{n}-2}.\] Therefore, for \(n=4\) we have \(LE^{+}(\Gamma_{QD_{2^{n}}})=\frac{134}{7}+2\sqrt{73}\). For \(n\geq 5\) we have \[LE^{+}(\Gamma_{QD_{2^{n}}})= (2^{n-2})\times\frac{8+2^{n}(2^{n-2}-3)}{2^{n}-2}+(2^{n-1}-3) \times\frac{2^{n+1}(2^{n-3}-1)}{2^{n}-2}+(2^{n-2}-1)\times\frac{12+2^{n}(2^{n -2}-5)}{2^{n}-2}\] \[+\sqrt{2^{2n-1}-2^{n+2}+9}+\frac{2^{2n-2}-2^{n+1}+6}{2^{n}-2}+ \sqrt{2^{2n-1}-2^{n+2}+9}-\frac{2^{2n-2}-2^{n+1}+6}{2^{n}-2}\] and the result follows on simplification. **Theorem 2.7**.: _If \(G\) is isomorphic to \(QD_{2^{n}}\) then_ * \(E(\Gamma_{QD_{2^{n}}})<LE^{+}(\Gamma_{QD_{2^{n}}})<LE(\Gamma_{QD_{2^{n}}})\)_._ * \(\Gamma_{QD_{2^{n}}}\) _is non-hypoenergetic as well as non-hyperenergetic._ * \(\Gamma_{QD_{2^{n}}}\) _is Q-hyperenergetic and L-hyperenergetic._ Proof.: (a) For \(n=4\), using Theorems 2.5 and 2.6, we have \(E(\Gamma_{QD_{2^{n}}})=6+2\sqrt{57}\), \(LE(\Gamma_{QD_{2^{n}}})=\frac{304}{7}\) and \(LE^{+}(\Gamma_{QD_{2^{n}}})=\frac{134}{7}+2\sqrt{73}\). Clearly, \(E(\Gamma_{QD_{16}})<LE^{+}(\Gamma_{QD_{16}})<LE(\Gamma_{QD_{16}})\). For \(n\geq 5\), using Theorems 2.5 and 2.6, we have \[LE(\Gamma_{QD_{2^{n}}})-LE^{+}(\Gamma_{QD_{2^{n}}})=\frac{12+2^{2n+1}-5\times 2 ^{n+1}}{2^{n}-2}-2\sqrt{2^{2n-1}-2^{n+2}+9} \tag{2.11}\] and \[LE^{+}(\Gamma_{QD_{2^{n}}})-E(\Gamma_{QD_{2^{n}}})=\frac{2^{2n-2}(2^{n}-18)+ 19\times 2^{n}-16}{2^{n}-2}+2\sqrt{2^{2n-1}-2^{n+2}+9}-2\sqrt{5\times 2^{2n-4}-3 \times 2^{n-1}+1}. 
\tag{2.12}\] Since \(12+2^{2n+1}-5\times 2^{n+1}>0\), \(2\sqrt{2^{2n-1}-2^{n+2}+9}(2^{n}-2)>0\) and \[(12+2^{2n+1}-5\times 2^{n+1})^{2}-\left(2\sqrt{2^{2n-1}-2^{n+2}+9}\right)^{2}(2^{n }-2)^{2}=2^{3n+1}(2^{n}-8)+2^{n+3}(5\times 2^{n}-4)>0\] we have \(12+2^{2n+1}-5\times 2^{n+1}-2(2^{n}-2)\sqrt{2^{2n-1}-2^{n+2}+9}>0\). Therefore, by (2.11), \((2^{n}-2)(LE(\Gamma_{QD_{2^{n}}})-LE^{+}(\Gamma_{QD_{2^{n}}}))>0\). Hence, \(LE(\Gamma_{QD_{2^{n}}})>LE^{+}(\Gamma_{QD_{2^{n}}})\). Again, \(\sqrt{2^{2n-1}-2^{n+2}+9}>0,\sqrt{5\times 2^{2n-4}-3\times 2^{n-1}+1}>0\) and \[\left(\sqrt{2^{2n-1}-2^{n+2}+9}\right)^{2}-\left(\sqrt{5\times 2^{2n-4}-3\times 2^{n- 1}+1}\right)^{2}=2^{n-4}(3\times 2^{n}-40)+8>0.\] Therefore, we have \(\sqrt{2^{2n-1}-2^{n+2}+9}-\sqrt{5\times 2^{2n-4}-3\times 2^{n-1}+1}>0\). Since \(2^{2n-2}(2^{n}-18)+19\times 2^{n}-16>0\) we have \(\frac{2^{2n-2}(2^{n}-18)+19\times 2^{n}-16}{2^{n}-2}+2\sqrt{2^{2n-1}-2^{n+2}+9}-2 \sqrt{5\times 2^{2n-4}-3\times 2^{n-1}+1}>0\). Therefore, by (2.12), \(LE^{+}(\Gamma_{QD_{2^{n}}})\geq E(\Gamma_{QD_{2^{n}}})\). Hence, \(E(\Gamma_{QD_{2^{n}}})<LE^{+}(\Gamma_{QD_{2^{n}}})<LE(\Gamma_{QD_{2^{n}}})\). (b) Here, \(|v(\Gamma_{QD_{2^{n}}})|=2^{n}-2\) and \(E(K_{|v(\Gamma_{QD_{2^{n}}})|})=LE(K_{|v(\Gamma_{QD_{2^{n}}})|})=LE^{+}(K_{|v( \Gamma_{QD_{2^{n}}})|})=2^{n+1}-6\). Using Theorem 2.5, we have \[E(\Gamma_{QD_{2^{n}}})-|v(\Gamma_{QD_{2^{n}}})|=2\left(\sqrt{(5\times 2^{n-2}-1)(2^{n -2}-1)}-(2^{n-1}-2^{n-2})\right) \tag{2.13}\] \[E(K_{|v(\Gamma_{QD_{2^{n}}})|})-E(\Gamma_{QD_{2^{n}}})=2\left(3\times 2^{n-2}-2- \sqrt{(5\times 2^{n-2}-1)(2^{n-2}-1)}\right). \tag{2.14}\] Since \(\sqrt{(5\times 2^{n-2}-1)(2^{n-2}-1)}>0\), \(2^{n-1}-2^{n-2}>0\) and \(\left(\sqrt{(5\times 2^{n-2}-1)(2^{n-2}-1)}\right)^{2}-(2^{n-1}-2^{n-2})^{2}=2^{n- 2}(4\times 2^{n-2}-6)+1>0\) we have \(\sqrt{(5\times 2^{n-2}-1)(2^{n-2}-1)}-(2^{n-1}-2^{n-2})>0\). Therefore, by (2.13), \(E(\Gamma_{QD_{2^{n}}})>|v(\Gamma_{QD_{2^{n}}})|\). Again, \(\sqrt{(5\times 2^{n-2}-1)(2^{n-2}-1)}>0\), \(3\times 2^{n-2}-2>0\) and \((3\times 2^{n-2}-2)^{2}-\left(\sqrt{(5\times 2^{n-2}-1)(2^{n-2}-1)}\right)^{2}=(2 ^{n-2}-3)(4\times 2^{n-2}+6)+21>0\) and so \(3\times 2^{n-2}-2-\sqrt{(5\times 2^{n-2}-1)(2^{n-2}-1)}>0\). Therefore, by (2.14), \(E(K_{|v(\Gamma_{QD_{2^{n}}})|})>E(\Gamma_{QD_{2^{n}}})\). (c) For \(n=4\), using Theorem 2.6, \(LE^{+}(\Gamma_{QD_{16}})-LE^{+}(K_{|v(\Gamma_{QD_{16}})|})=2\sqrt{73}-\frac{48 }{7}>0.\) Therefore, \(LE^{+}(\Gamma_{QD_{16}})>LE^{+}(K_{|v(\Gamma_{QD_{16}})|})\) which implies \(\Gamma_{QD_{16}}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{QD_{16}}\) is L-hypereergetic. For \(n\geq 5\), using Theorem 2.6, \[LE^{+}(\Gamma_{QD_{2^{n}}})-LE^{+}(K_{|v(\Gamma_{QD_{2^{n}}})|})=\frac{2^{2^{ n-2}(2^{n}-24)+2(13\times 2^{n}-12)}}{2^{n-2}}+2\sqrt{2^{2n-1}-2^{n+2}+9}>0.\] Therefore, \(LE^{+}(\Gamma_{QD_{2^{n}}})>LE^{+}(K_{|v(\Gamma_{QD_{2^{n}}})|})\) which implies \(\Gamma_{QD_{2^{n}}}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{QD_{2^{n}}}\) is L-hyperenergetic. In Theorem 2.7, we compare \(E(\Gamma_{QD_{2^{n}}})\), \(LE(\Gamma_{QD_{2^{n}}})\) and \(LE^{+}(\Gamma_{QD_{2^{n}}})\). However, in the following figures, we show how close are they. ### The groups \(M_{2rs}\) We consider \(M_{2rs}:=\langle a,b:a^{r}=b^{2s}=1,bab^{-1}=a^{-1}\rangle\), the groups of order \(2rs\) (where \(r\geq 3\) and \(s\geq 1\)). Results regarding different energies of non-commuting graphs of \(M_{2rs}\) are given below. 
**Theorem 2.8** ([14, Corollary 4.1.6 and (4.3.d)]).: _Let \(G\) be isomorphic to \(M_{2rs}\)._ * _If_ \(m\) _is odd then_ \[E(\Gamma_{M_{2rs}})=s(r-1)+s\sqrt{(r-1)(5r-1)}\,\text{ and }\,LE(\Gamma_{M_{2rs}})=\frac{s}{2r-1}\left(2r^{3}s-6r^{2}s+4rs+4r^{2}-2r\right).\] * _If_ \(m\) _is even then_ \[E(\Gamma_{M_{2rs}})=s(r-2)+s\sqrt{(r-2)(5r-2)}\,\text{ and }\,LE(\Gamma_{M_{2rs}})=\frac{s}{r-1}\left(r^{3}s-6r^{2}s+8rs+2r^{2}-2r\right).\] **Theorem 2.9**.: _Let \(G\) be isomorphic to \(M_{2rs}\), where \(r\) is odd. Then_ \[\text{Q-spec}(\Gamma_{M_{2rs}})= \left\{(2s(r-1))^{r(s-1)},(rs)^{(r-1)s-1},((2r-3)s)^{r-1},\left( \frac{s\left(4r-3+\sqrt{8r^{2}-16r+9}\right)}{2}\right)^{1},\right.\] \[\left.\left.\left(\frac{s\left(4r-3+\sqrt{8r^{2}-16r+9}\right)}{2 }\right)^{1}\right\}\right\}\] \[LE^{+}(\Gamma_{M_{2rs}})= \begin{cases}\frac{3s(4s-1)}{5}+s\sqrt{33},&\text{if }r=3\\ s\left(\frac{(2r(r-1)(r-2))s}{2r-1}-(2r-3)+\sqrt{8r^{2}-16r+9}\right),&\text{if }r \geq 5.\end{cases}\] Proof.: If \(G\cong M_{2rs}\), where \(r\) is odd, then \(|v(\Gamma_{M_{2rs}})|=(2r-1)s\) and \(\Gamma_{M_{2rs}}=K_{r.s,1.((r-1)s)}\). Using Theorem 1.6(b), we have \[Q_{\Gamma_{M_{2rs}}}(x)= \prod_{i=1}^{2}(x-(2rs-s)+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{2}(x -(2rs-s)+2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-(2rs-s)+2p_{i }}\right)\] \[= \left(x-2s(r-1)\right)^{r(s-1)}(x-rs)^{(r-1)s-1}(x-(2r-3)s)^{r}(x -s)\left(1-\frac{rs}{x-(2r-3)s}-\frac{(r-1)s}{x-s}\right)\] \[= \left(x-2s(r-1)\right)^{r(s-1)}(x-rs)^{(r-1)s-1}(x-(2r-3)s)^{r-1} (x^{2}-(4r-3)sx+(2r^{2}-2r)s^{2}).\] Thus, \(\text{Q-spec}(\Gamma_{M_{2rs}})=\left\{(2s(r-1))^{r(s-1)},(rs)^{(r-1)s-1},((2 r-3)s)^{r-1},\left(\frac{s(4r-3+\sqrt{8r^{2}-16r+9})}{2}\right)^{1},\right.\) \[\left.\left(\frac{s(4r-3-\sqrt{8r^{2}-16r+9})}{2}\right)^{1}\right\}\] Number of edges of \(\Gamma_{M_{2rs}}^{c}\) is \(\frac{(r^{2}-r+1)s^{2}-(2r-1)s}{2}\). Therefore, \[|e(\Gamma_{M_{2rs}})|=\frac{(2r-1)^{2}s^{2}-(2r-1)s}{2}-\frac{(r^{2}-r+1)s^{2 }-(2r-1)s}{2}=\frac{3r(r-1)s^{2}}{2}.\] Now \[\left|(2r-2)s-\frac{2|e(\Gamma_{M_{2rs}})|}{|v(\Gamma_{M_{2rs}})|}\right|= \left|\frac{(r-1)(r-2)s}{2r-1}\right|=\frac{(r-1)(r-2)s}{2r-1},\] \[\left|rs-\frac{2|e(\Gamma_{M_{2rs}})|}{|v(\Gamma_{M_{2rs}})|}\right|=\left| \frac{-r(r-2)s}{2r-1}\right|=\frac{r(r-2)s}{2r-1},\] \[\left|(2r-3)s-\frac{2|e(\Gamma_{M_{2rs}})|}{|v(\Gamma_{M_{2rs}})|}\right|= \left|\frac{(r^{2}-5r+3)s}{2r-1}\right|=\begin{cases}\frac{3\pi}{5},&\text{ if }r=3\\ \frac{(r^{2}-5r+3)s}{2r-1},&\text{if }r\geq 5,\end{cases}\] \[\left|\frac{s}{2}\left(4r-3+\sqrt{8r^{2}-16r+9}\right)-\frac{2|e( \Gamma_{M_{2rs}})|}{|v(\Gamma_{M_{2rs}})|}\right|= \left|\frac{s}{2}\left(\sqrt{8r^{2}-16r+9}+r-\frac{3}{2}+\frac{3}{4r-2} \right)\right|\] \[= \frac{s}{2}\left(\sqrt{8r^{2}-16r+9}+r-\frac{3}{2}+\frac{3}{4r-2 }\right)\] and \[\left|\frac{s}{2}\left(4r-3-\sqrt{8r^{2}-16r+9}\right)-\frac{2|e( \Gamma_{M_{2rs}})|}{|v(\Gamma_{M_{2rs}})|}\right|= \left|\frac{s}{2}\left(-\sqrt{8r^{2}-16r+9}+r-\frac{3}{2}+\frac{3}{4 r-2}\right)\right|\] \[= \frac{s}{2}\left(\sqrt{8r^{2}-16r+9}-r+\frac{3}{2}-\frac{3}{4r-2 }\right).\] Therefore, for \(n=3\), we have \(LE^{+}(\Gamma_{M_{2rs}})=\frac{3s(4s-1)}{5}+s\sqrt{33}\). 
For \(r\geq 5\), we have \[LE^{+}(\Gamma_{M_{2rs}})= r(s-1)\times\frac{(r-1)(r-2)s}{2r-1}+((r-1)s-1)\times\frac{r(r-2)s}{2r-1 }+(r-1)\times\frac{(r^{2}-5r+3)s}{2r-1}+\] \[\frac{s}{2}\left(\sqrt{8r^{2}-16r+9}+r-\frac{3}{2}+\frac{3}{4r-2 }\right)+\frac{s}{2}\left(\sqrt{8r^{2}-16r+9}-r+\frac{3}{2}-\frac{3}{4r-2}\right)\] and the result follows on simplification. **Theorem 2.10**.: _Let \(G\) be isomorphic to \(M_{2rs}\), where \(r\) is even. Then_ \[\text{Q-spec}(\Gamma_{M_{2rs}})= \left\{(2s(r-2))^{rs-\frac{r}{2}},(rs)^{rs-2s-1},(2s(r-3))^{\frac {r}{2}-1},\left(4rs-6s+2s\sqrt{2r^{2}-8r+9}\right)^{1},\right.\] \[\left.\left(4rs-6s-2s\sqrt{2r^{2}-8r+9}\right)^{1}\right\}\] \[LE^{+}(\Gamma_{M_{2rs}})= \left\{\begin{array}{ll}\frac{s(r^{3}s-6r^{2}s+8rs-\frac{s}{r}+4r^ {2}-8r+6)}{r-1}+2s\sqrt{2r^{2}-8r+9},&\text{if }4\leq r\leq 8\\ \frac{s(r^{3}s-6r^{2}s+8rs-2r^{2}+8r-6)}{r-1}+2s\sqrt{2r^{2}-8r+9},&\text{if }r \geq 10.\end{array}\right.\] Proof.: If \(G\cong M_{2rs}\) and \(r\) is even then \(|v(\Gamma_{M_{2rs}})|=2s(r-1)\) and \(\Gamma_{M_{2rs}}=K_{\frac{r}{2},(2s),1,((\frac{r}{2}-1)2s)}\). Using Theorem 1.6(b), we have \[Q_{\Gamma_{M_{2rs}}}(x)= \prod_{i=1}^{2}(x-2(rs-s)+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{2}( x-2(rs-s)+2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-2(rs-s)+2p_{i} }\right)\] \[= \left(x-2s(r-2)\right)^{rs-\frac{r}{2})}(x-rs)^{rs-2s-1}(x-2s(r- 3))^{\frac{r}{2}}(x-2s)\left(1-\frac{rs}{x-2s(r-3)}-\frac{rs-2s}{x-2s}\right)\] \[= \left(x-2s(r-2)\right)^{rs-\frac{r}{2}}(x-rs)^{rs-2s-1}(x-2s(r-3 ))^{\frac{r}{2}-1}(x^{2}-(4r-6)sx+rs^{2}(2r-4)).\] Thus, \[\text{Q-spec}(\Gamma_{M_{2rs}})= \left\{(2s(r-2))^{rs-\frac{r}{2}},(rs)^{rs-2s-1},(2s(r-3))^{\frac{ r}{2}-1},\left(4rs-6s+2s\sqrt{2r^{2}-8r+9}\right)^{1},\right.\] \[\left.\left(4rs-6s-2s\sqrt{2r^{2}-8r+9}\right)^{1}\right\}.\] Number of edges of \(\Gamma_{M_{2rs}}^{c}\) is \(\frac{(r^{2}-2r+4)s^{2}-2(r-1)s}{2}\). Thus, \(|e(\Gamma_{M_{2rs}})|=\frac{2(r-1)s(2(r-1)s-1)}{2}-\frac{(r^{2}-r+1)s^{2}-2(r- 1)s}{2}\)\(=\frac{3r(r-2)s^{2}}{2}\). 
Now \[\left|2s(r-2)-\frac{2|e(\Gamma_{M_{2rs}})|}{|v(\Gamma_{M_{2rs}})|}\right|= \left|\frac{(r-2)(r-4)s}{2r-2}\right|=\frac{(r-2)(r-4)s}{2r-2},\] \[\left|rs-\frac{2|e(\Gamma_{M_{2rs}})|}{|v(\Gamma_{M_{2rs}})|}\right|=\left| \frac{-r(r-4)s}{2r-2}\right|=\frac{r(r-4)s}{2r-2},\] \[\left|(2r-3)s-\frac{2|e(\Gamma_{M_{2rs}})|}{|v(\Gamma_{M_{2rs}})|}\right|= \left|\frac{(r^{2}-10r+12)s}{2r-2}\right|=\begin{cases}\frac{(r-r^{2}+10r-12 )s}{2r-2},&\text{if }4\leq r\leq 8\\ \frac{(r^{2}-10r+12)s}{2r-2},&\text{if }r\geq 10,\end{cases}\] \[\left|4rs-6s+2s\sqrt{2r^{2}-8r+9}-\frac{2|e(\Gamma_{M_{2rs}})|}{|v( \Gamma_{M_{2rs}})|}\right|= \left|\frac{rs}{2}-\frac{3s}{2}+\frac{3s}{2r-2}+s\sqrt{2r^{2}-8r+9}\right|\] \[= \frac{rs}{2}-\frac{3s}{2}+\frac{3s}{2r-2}+s\sqrt{2r^{2}-8r+9}\] and \[\left|4rs-6s-2s\sqrt{2r^{2}-8r+9}-\frac{2|e(\Gamma_{M_{2rs}})|}{|v (\Gamma_{M_{2rs}})|}\right|= \left|\frac{rs}{2}-\frac{3s}{2}+\frac{3s}{2r-2}-s\sqrt{2r^{2}-8r+9}\right|\] \[= -\frac{rs}{2}+\frac{3s}{2}-\frac{3s}{2r-2}+s\sqrt{2r^{2}-8r+9}.\] Therefore, for \(4\leq r\leq 8\), we have \[LE^{+}(\Gamma_{M_{2rs}})= \left(rs-\frac{r}{2}\right)\times\frac{(r-2)(r-4)s}{2r-2}+(rs-2s-1 )\times\frac{r(r-4)s}{2r-2}+\left(\frac{r}{2}-1\right)\times\frac{(-r^{2}+10r-1 2)s}{2r-2}\] \[+\frac{rs}{2}-\frac{3s}{2}+\frac{3s}{2r-2}+s\sqrt{2r^{2}-8r+9}- \frac{rs}{2}+\frac{3s}{2}-\frac{3s}{2r-2}+s\sqrt{2r^{2}-8r+9}\] and for \(r\geq 10\), we have \[LE^{+}(\Gamma_{M_{2rs}})= \left(rs-\frac{r}{2}\right)\times\frac{(r-2)(r-4)s}{2r-2}+(rs-2s-1 )\times\frac{r(r-4)s}{2r-2}+\left(\frac{r}{2}-1\right)\times\frac{(r^{2}-10r+12 )s}{2r-2}\] \[+\frac{rs}{2}-\frac{3s}{2}+\frac{3s}{2r-2}+s\sqrt{2r^{2}-8r+9}- \frac{rs}{2}+\frac{3s}{2}-\frac{3s}{2r-2}+s\sqrt{2r^{2}-8r+9}.\] Hence, the results follow on simplification. **Theorem 2.11**.: _If \(G\) is isomorphic to \(M_{2rs}\) then_ * \(E(\Gamma_{M_{2rs}})\leq LE^{+}(\Gamma_{M_{2rs}})\leq LE(\Gamma_{M_{2rs}})\)_, equality holds if and only if_ \(G\cong M_{8s}\)_._ * \(\Gamma_{M_{2rs}}\) _is non-hypoenergetic as well as non-hyperenergetic._ * \(\Gamma_{M_{6}}\) _is L-hyperenergetic but not Q-hyperenergetic._ \(\Gamma_{M_{8s}}\) _is not L-hyperenergetic and not Q-hyperenergetic. If_ \(2rs\neq 6\) _and_ \(8s\) _then_ \(\Gamma_{M_{2rs}}\) _is Q-hyperenergetic and L-hyperenergetic._ Proof.: (a) **Case 1:**\(r\) is odd Using Theorems 2.8 and 2.9, for \(r=3\), \(LE(\Gamma_{M_{2rs}})-LE^{+}(\Gamma_{M_{2rs}})=\frac{33s}{5}-s\sqrt{33}>0\) and \(LE^{+}(\Gamma_{M_{2rs}})-E(\Gamma_{M_{2rs}})=\frac{12s^{2}-13s}{5}+(\sqrt{33}- 2\sqrt{7})s>0\). For \(r\geq 5\), using Theorems 2.8 and 2.9, we have \[LE(\Gamma_{M_{2rs}})-LE^{+}(\Gamma_{M_{2rs}})=\frac{s(8r^{2}-10r+3)}{2r-1}-s \sqrt{8r^{2}-16r+9} \tag{2.15}\] and \[LE^{+}(\Gamma_{M_{2rs}})-E(\Gamma_{M_{2rs}})=s\left(\frac{(2r^{3}-6r^{2}+4r)s- 6r^{2}+11r-4}{2r-1}+\sqrt{8r^{2}-16r+9}-\sqrt{5r^{2}-6r+1}\right). \tag{2.16}\] Since \(8r^{2}-10r+3>0\), \((2r-1)\sqrt{8r^{2}-16r+9}>0\) and \((8r^{2}-10r+3)^{2}-(2r-1)^{2}(8r^{2}-16r+9)=32r^{4}-64r^{3}+40r^{2}-8r>0\) we have \(8r^{2}-10r+3-(2r-1)\sqrt{8r^{2}-16r+9}>0\). Therefore, by (2.15), \((2r-1)(LE(\Gamma_{M_{2rs}})-LE^{+}(\Gamma_{M_{2rs}}))>0\). Hence, \(LE(\Gamma_{M_{2rs}})>LE^{+}(\Gamma_{M_{2s}})\). Again, \(\sqrt{8r^{2}-16r+9}>0,\sqrt{5r^{2}-6r+1}>0\) and \((\sqrt{8r^{2}-16r+9})^{2}-(\sqrt{5r^{2}-6r+1})^{2}=r(3r-10)+s>0\). Therefore, \(\sqrt{8r^{2}-16r+9}-\sqrt{5r^{2}-6r+1}>0\). Since \(2r^{3}-6r^{2}+4r>6r^{2}-11r+4\) we have \(\frac{(2r^{3}-6r^{2}+4r)s-6r^{2}+11r-4}{2r-1}+\sqrt{8r^{2}-16r+9}-\sqrt{5r^{2 }-6r+1}>0\). 
Therefore, by (2.16), \(LE^{+}(\Gamma_{M_{2rs}})>E(\Gamma_{M_{2rs}})\). Hence, \(E(\Gamma_{M_{2rs}})<LE^{+}(\Gamma_{M_{2rs}})<LE(\Gamma_{M_{2rs}})\). **Case 2:**\(r\) is even For \(4\leq r\leq 8\), using Theorems 2.8 and 2.10, we have \[LE(\Gamma_{M_{2rs}})-LE^{+}(\Gamma_{M_{2rs}})=\frac{s}{r-1}\left(\frac{r^{3}} {2}-2r^{2}+6r-6\right)-2s\sqrt{2r^{2}-8r+9} \tag{2.17}\] and \[LE^{+}(\Gamma_{M_{2rs}})-E(\Gamma_{M_{2rs}})=\frac{(r^{3}-6r^{2}+8r)s^{2}- \frac{r^{3}s}{2}+3r^{2}s-5rs+4s}{r-1}+2s\sqrt{2r^{2}-8r+9}-s\sqrt{5r^{2}-12r+ 4}. \tag{2.18}\] Since \(\frac{r^{3}}{2}-2r^{2}+6r-6>0\), \(2(r-1)\sqrt{2r^{2}-8r+9}>0\) and \((\frac{r^{3}}{2}-2r^{2}+6r-6)^{2}-4(r-1)^{2}(2r^{2}-8r+9)=\frac{r^{5}(r-8)}{4 }+2r^{4}+6r^{2}(3r-8)+32r\geq 0\) (equality holds if and only if \(r=4\)) we have \(\frac{r^{3}}{2}-2r^{2}+6r-6-2(r-1)\sqrt{2r^{2}-8r+9}\geq 0\). Therefore, by (2.17), \((r-1)(LE(\Gamma_{M_{2rs}})-LE^{+}(\Gamma_{M_{2rs}}))\geq 0\). Hence, \(LE(\Gamma_{M_{2rs}})\geq LE^{+}(\Gamma_{M_{2rs}})\) equality holds if and only if \(G\cong M_{8s}\). Again, \(2\sqrt{2r^{2}-8r+9}>0,\sqrt{5r^{2}-12r+4}>0\) and \((2\sqrt{2r^{2}-8r+9})^{2}-(\sqrt{5r^{2}-12r+4})^{2}=(r-4)(3r-8)\geq 0\) (equality holds if and only if \(r=4\)). Therefore, \(2\sqrt{2r^{2}-8r+9}-\sqrt{5r^{2}-12r+4}\geq 0\). Since \(r^{3}-6r^{2}+8r\geq\frac{r^{3}}{2}-3r^{2}+5r-4\) we have \(\frac{(r^{3}-6r^{2}+8r)s^{2}-\frac{r^{3}s}{2}+3r^{2}s-5rs+4s}{r-1}+2s\sqrt{2r^{2 }-8r+9}-s\sqrt{5r^{2}-12r+4}\geq 0\). Therefore, by (2.18), \(LE^{+}(\Gamma_{M_{2rs}})\geq E(\Gamma_{M_{2rs}})\). Hence, \(E(\Gamma_{M_{2rs}})\leq LE^{+}(\Gamma_{M_{2rs}})\leq LE(\Gamma_{M_{2rs}})\) equality holds if and only if \(G\cong M_{8s}\). For \(r\geq 10\), using Theorems 2.8 and 2.10, we have \[LE(\Gamma_{M_{2rs}})-LE^{+}(\Gamma_{M_{2rs}})=2s\left(\frac{2r^{2}-5r+3}{r-1}- \sqrt{2r^{2}-8r+9}\right) \tag{2.19}\] and \[LE^{+}(\Gamma_{M_{2rs}})-E(\Gamma_{M_{2rs}})=s\left(\frac{(r^{3}-6r^{2}+8r)s-3r^ {2}+11r-8}{r-1}+2\sqrt{2r^{2}-8r+9}-\sqrt{5r^{2}-12r+4}\right). \tag{2.20}\] Since \(2r^{2}-5r+3>0\), \((r-1)\sqrt{2r^{2}-8r+9}>0\) and \((2r^{2}-5r+3)^{2}-(r-1)^{2}(2r^{2}-8r+9)=2r(r-2)(r-1)^{2}>0\) we have \(2r^{2}-5r+3-(r-1)\sqrt{2r^{2}-8r+9}>0\). Therefore, by (2.19), \((r-1)(LE(\Gamma_{M_{2rs}})-LE^{+}(\Gamma_{M_{2rs}}))>0\). Hence, \(LE(\Gamma_{M_{2rs}})>LE^{+}(\Gamma_{M_{2rs}})\). Again, \(2\sqrt{2r^{2}-8r+9}>0,\sqrt{5r^{2}-12r+4}>0\) and \((2\sqrt{2r^{2}-8r+9})^{2}-(\sqrt{5r^{2}-12r+4})^{2}=(r-4)(3r-8)>0\). Therefore, \(2\sqrt{2r^{2}-8r+9}-\sqrt{5r^{2}-12r+4}>0\). Since \(r^{3}-6r^{2}+8r>3r^{2}-11r+8\) we have \(\frac{(r^{3}-6r^{2}+8r)s-3r^{2}+11r-8}{r-1}+2\sqrt{2r^{2}-8r+9}-\sqrt{5r^{2}-12r+4}>0\). Therefore, by (2.20), \(LE^{+}(\Gamma_{M_{2rs}})>E(\Gamma_{M_{2rs}})\). Hence, \(E(\Gamma_{M_{2rs}})<LE^{+}(\Gamma_{M_{2rs}})<LE(\Gamma_{M_{2rs}})\). (b) **Case 1:**\(r\) is odd Here, \(|v(\Gamma_{M_{2rs}})|=2rs-s\) and \(E(K_{|v(\Gamma_{M_{2rs}})|})=LE(K_{|v(\Gamma_{M_{2rs}})|})=LE^{+}(K_{|v(\Gamma_ {M_{2rs}})|})=4rs-2s-2\). Using Theorem 2.8, we have \[E(\Gamma_{M_{2rs}})-|v(\Gamma_{M_{2rs}})|=s(\sqrt{(r-1)(5r-1)}-r) \tag{2.21}\] and \[E(K_{|v(\Gamma_{M_{2rs}})|})-E(\Gamma_{M_{2rs}})=3rs-s-2-s\sqrt{(r-1)(5r-1)}. \tag{2.22}\] Since \(\sqrt{(r-1)(5r-1)}>0\), \(r>0\) and \(\left(\sqrt{(r-1)(5r-1)}\right)^{2}-(r)^{2}=2r(2r-3)+1>0\) we have \(\sqrt{(r-1)(5r-1)}-r>0\). Therefore, by (2.21), \(E(\Gamma_{M_{2rs}})>|v(\Gamma_{M_{2rs}})|\). 
Again, \(s\sqrt{(r-1)(5r-1)}>0\), \(3rs-s-2>0\) and \((3rs-s-2)^{2}-\left(s\sqrt{(r-1)(5r-1)}\right)^{2}=4rs(rs-3)+4(s+1)>0\)and so \(3rs-s-2-s\sqrt{(r-1)(5r-1)}>0\). Therefore, by (2.22), \(E(K_{|v(\Gamma_{M_{2rs}})|})>E(\Gamma_{M_{2rs}})\). **Case 2:**\(r\) is even Here, \(|v(\Gamma_{M_{2rs}})|=2rs-2s\) and \(E(K_{|v(\Gamma_{M_{2rs}})|})=LE(K_{|v(\Gamma_{M_{2rs}})|})=LE^{+}(K_{|v(\Gamma _{M_{2rs}})|})=4rs-4s-2\). Using Theorem 2.8, we have \[E(\Gamma_{M_{2rs}})-|v(\Gamma_{M_{2rs}})|=s(\sqrt{(r-2)(5r-2)}-r) \tag{2.23}\] and \[E(K_{|v(\Gamma_{M_{2rs}})|})-E(\Gamma_{M_{2rs}})=3rs-2s-2-s\sqrt{(r-2)(5r-2)}. \tag{2.24}\] Since \(\sqrt{(r-2)(5r-2)}>0\), \(r>0\) and \(\left(\sqrt{(r-2)(5r-2)}\right)^{2}-r^{2}=4(r(r-3)+1)>0\) we have \(\sqrt{(r-2)(5r-2)}-r>0\). Therefore, by (2.23), \(E(\Gamma_{M_{2rs}})>|v(\Gamma_{M_{2rs}})|\). Again, \(s\sqrt{(r-2)(5r-2)}>0\), \(3rs-2s-2>0\) and \((3rs-2s-2)^{2}-\left(s\sqrt{(r-2)(5r-2)}\right)^{2}=4rs(rs-3)+4(2s+1)>0\) and so \(3rs-2s-2-s\sqrt{(r-2)(5r-2)}>0\). Therefore, by (2.24), \(E(K_{|v(\Gamma_{M_{2rs}})|})>E(\Gamma_{M_{2rs}})\). (c) **Case 1:**\(r\) is odd For \(r=3\), using Theorem 2.9, we have \(LE^{+}(\Gamma_{M_{2rs}})-LE^{+}(K_{|v(\Gamma_{M_{2rs}})|})=\frac{12\sigma^{2}- 53r}{5}+2+s\sqrt{33}>0\) for all \(s\neq 1\). Therefore, for \(r=3\) and \(s\neq 1\), \(LE^{+}(\Gamma_{M_{2rs}})>LE^{+}(K_{|v(\Gamma_{M_{2rs}})|})\) which implies \(\Gamma_{M_{2rs}}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{M_{2rs}}\) is L-hyperenergetic. If \(r=3\) and \(s=1\), then \(G\cong D_{6}\) so result follows from Theorem 2.4(c). For \(r\geq 5\), using Theorem 2.9, we have \(LE^{+}(\Gamma_{M_{2rs}})-LE^{+}(K_{|v(\Gamma_{M_{2rs}})|})=\frac{(2r^{3}-6r^{2} +4r)s^{2}-(12r^{2}-16r+5)s}{2r-1}+s\sqrt{8r^{2}-16r+9}+2>0\). Therefore, \(LE^{+}(\Gamma_{M_{2rs}})>LE^{+}(K_{|v(\Gamma_{M_{2rs}})|})\) which implies \(\Gamma_{M_{2rs}}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{M_{2rs}}\) is L-hyperenergetic. **Case 2:**\(r\) is even For \(r=4\) and \(s\neq 1\), using Theorem 2.8, we have \(LE(K_{|v(\Gamma_{M_{2rs}})|})-LE(\Gamma_{M_{2rs}})=\frac{12s}{7}-2>0\). Therefore, \(\Gamma_{M_{s}}\), is not L-hyperenergetic and consequently part (a) implies that it is not Q-hyperenergetic. If \(r=4\) and \(s=1\), then \(G\cong D_{8}\) so result follows from Theorem 2.4(c). Using Theorem 2.10, for \(4<r\leq 8\), we get \[LE^{+}(\Gamma_{M_{2rs}})-LE^{+}(K_{|v(\Gamma_{M_{2rs}})|})=\frac{(r^{3}-6r^{2} +8r)s^{2}-(\frac{r^{3}}{2}-2)s}{r-1}+2s\sqrt{2r^{2}-8r+9}+2>0.\] Therefore, \(LE^{+}(\Gamma_{M_{2rs}})>LE^{+}(K_{|v(\Gamma_{M_{2rs}})|})\) which implies \(\Gamma_{M_{2rs}}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{M_{2rs}}\) is L-hyperenergetic. Using Theorem 2.10, for \(r\geq 10\), we get \[LE^{+}(\Gamma_{M_{2rs}})-LE^{+}(K_{|v(\Gamma_{M_{2rs}})|})=\frac{(r^{3}-6r^{2} +8r)s^{2}-(6r^{2}-16r+10)s}{r-1}+2s\sqrt{2r^{2}-8r+9}+2>0.\] Therefore, \(LE^{+}(\Gamma_{M_{2rs}})>LE^{+}(K_{|v(\Gamma_{M_{2rs}})|})\) which implies \(\Gamma_{M_{2rs}}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{M_{2rs}}\) is L-hyperenergetic. In Theorem 2.11, we compare \(E(\Gamma_{M_{2rs}})\), \(LE(\Gamma_{M_{2rs}})\) and \(LE^{+}(\Gamma_{M_{2rs}})\). However, in the following figures, we show how close are they. ### The dicyclic groups, \(Q_{4n}\) We consider \(Q_{4n}:=\langle x,y:x^{2n}=y^{4}=1;x^{n}=y^{2};y^{-1}xy=x^{-1}\rangle\), the dicyclic groups of order \(4n\) (where \(n\geq 2\)). 
Results regarding different energies of non-commuting graphs of \(Q_{4n}\) are given below. **Theorem 2.12** ([14, Corollary 4.1.8 and (4.3.f)]).: _Let \(G\) be isomorphic to \(Q_{4n}\). Then_ \[E(\Gamma_{Q_{4n}})=2\left((n-1)+\sqrt{(n-1)(5n-1)}\right)\ \text{and}\ LE( \Gamma_{Q_{4n}})=\frac{8n(n-1)(n-2)+4n(2n-1)}{2n-1}.\] **Theorem 2.13**.: _Let \(G\) be isomorphic to \(Q_{4n}\). Then_ \[\text{Q-spec}(\Gamma_{Q_{4n}})=\left\{(4n-4)^{n},(2n)^{2n-3},(4n-6)^{n-1},(4n- 3+\sqrt{8n^{2}-16n+9})^{1},\left(4n-3-\sqrt{8n^{2}-16n+9}\right)^{1}\right\}\] _and_ \[LE^{+}(\Gamma_{Q_{4n}})=\begin{cases}\frac{4n^{3}-8n^{2}+6}{2n-1}+2\sqrt{8n^{ 2}-16n+9},&\text{if $n\leq 4$}\\ \frac{8n^{3}-32n^{2}+32n-6}{2n-1}+2\sqrt{8n^{2}-16n+9},&\text{if $n\geq 5$}.\end{cases}\] Proof.: If \(G\cong Q_{4n}\) then \(|v(\Gamma_{Q_{4n}})|=4n-2\) and \(\Gamma_{Q_{4n}}=K_{n.2,1.(2n-2)}\). Using Theorem 1.6(b), we have \[Q_{\Gamma_{Q_{4n}}}(x)= \prod_{i=1}^{2}(x-(4n-2)+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{2}( x-(4n-2)+2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-(4n-2)+2p_{i}}\right)\] \[= (x-(4n-4))^{n}(x-2n)^{2n-3}(x-4n+6)^{n}(x-2)\left(1-\frac{2n}{x-4 n+6}-\frac{2n-2}{x-2}\right)\] \[= (x-(4n-4))^{n}(x-2n)^{2n-3}(x-(4n-6))^{n-1}(x^{2}-(8n-6)x+8n^{2} -8n).\] Thus, \[\text{Q-spec}(\Gamma_{Q_{4n}})=\left\{(4n-4)^{n},(2n)^{2n-3},(4n-6)^{n-1}, \left(4n-3+\sqrt{8n^{2}-16n+9}\right)^{1},(4n-3-\sqrt{8n^{2}-16n+9})^{1}\right\}.\] Number of edges of \(\Gamma_{Q_{4n}}^{c}\) is \(2n^{2}-4n+3\). Thus, \(|e(\Gamma_{Q_{4n}})|=\frac{(4n-2)(4n-2-1)}{2}-(2n^{2}-4n+3)=6n(n-1)\). Now \[\left|4n-4-\frac{2|e(\Gamma_{Q_{4n}})|}{|v(\Gamma_{Q_{4n}})|}\right|=\left| \frac{(2n-4)(n-1)}{2n-1}\right|=\frac{(2n-4)(n-1)}{2n-1},\] \[\left|2n-\frac{2|e(\Gamma_{Q_{4n}})|}{|v(\Gamma_{Q_{4n}})|}\right|=\left| \frac{-2n(n-2)}{2n-1}\right|=\frac{2n(n-2)}{2n-1},\] \[\left|4n-6-\frac{2|e(\Gamma_{Q_{4n}})|}{|v(\Gamma_{Q_{4n}})|}\right|=\left| \frac{2(n^{2}-5n+3)}{2n-1}\right|=\begin{cases}\frac{-2(n^{2}-5n+3)}{2n-1},& \text{if $n\leq 4$}\\ \frac{2(n^{2}-5n+3)}{2n-1},&\text{if $n\geq 5$},\end{cases}\] \[\left|4n-3+\sqrt{8n^{2}-16n+9}-\frac{2|e(\Gamma_{Q_{4n}})|}{|v( \Gamma_{Q_{4n}})|}\right|= \left|\sqrt{8n^{2}-16n+9}+n-\frac{3}{2}+\frac{3}{4n-2}\right|\] \[= \sqrt{8n^{2}-16n+9}+n-\frac{3}{2}+\frac{3}{4n-2}\] \[\left|4n-3-\sqrt{8n^{2}-16n+9}-\frac{2|e(\Gamma_{Q_{4n}})|}{|v(\Gamma_{Q_{4n}})| }\right|= \left|-\sqrt{8n^{2}-16n+9}+n-\frac{3}{2}+\frac{3}{4n-2}\right|\] \[= \sqrt{8n^{2}-16n+9}-n+\frac{3}{2}-\frac{3}{4n-2}.\] Therefore, for \(n\leq 4\) we have \[LE^{+}(\Gamma_{Q_{4n}})= n\times\frac{(2n-4)(n-1)}{2n-1}+(2n-3)\times\frac{2n(n-2)}{2n-1}+(n-1) \times\frac{-2(n^{2}-5n+3)}{2n-1}+\] \[\sqrt{8n^{2}-16n+9}+n-\frac{3}{2}+\frac{3}{4n-2}+\sqrt{8n^{2}-16n +9}-n+\frac{3}{2}-\frac{3}{4n-2}\] and for \(n\geq 5\) we have \[LE^{+}(\Gamma_{Q_{4n}})= n\times\frac{(2n-4)(n-1)}{2n-1}+(2n-3)\times\frac{2n(n-2)}{2n-1}+(n-1) \times\frac{2(n^{2}-5n+3)}{2n-1}+\] \[\sqrt{8n^{2}-16n+9}+n-\frac{3}{2}+\frac{3}{4n-2}+\sqrt{8n^{2}-16 n+9}-n+\frac{3}{2}-\frac{3}{4n-2}\] Hence, the results follow on simplification. **Theorem 2.14**.: _If \(G\) is isomorphic to \(Q_{4n}\) then_ * \(E(\Gamma_{Q_{4n}})\leq LE^{+}(\Gamma_{Q_{4n}})\leq LE(\Gamma_{Q_{4n}})\)_, equality holds if and only if_ \(G\cong Q_{8}\)_._ * \(\Gamma_{Q_{4n}}\) _is non-hypoenergetic as well as non-hyperenergetic._ * \(\Gamma_{Q_{8}}\) _is not L-hyperenergetic and not Q-hyperenergetic. 
If_ \(n\neq 2\) _then_ \(\Gamma_{Q_{4n}}\) _is Q-hyperenergetic and L-hyperenergetic._ Proof.: (a) For \(n\leq 4\), using Theorems 2.12 and 2.13, we have \[LE(\Gamma_{Q_{4n}})-LE^{+}(\Gamma_{Q_{4n}})=2\left(\frac{2n^{3}-4n^{2}+6n-3}{2 n-1}-\sqrt{8n^{2}-16n+9}\right) \tag{2.25}\] and \[LE^{+}(\Gamma_{Q_{4n}})-E(\Gamma_{Q_{4n}})=\frac{2(n-2)(2n^{2}-2n-1)}{2n-1}+2 \sqrt{8n^{2}-16n+9}-2\sqrt{5n^{2}-6n+1}. \tag{2.26}\] Since \(2n^{3}-4n^{2}+6n-3>0\), \((2n-1)\sqrt{8n^{2}-16n+9}>0\) and \((2n^{3}-4n^{2}+6n-3)^{2}-\left(\sqrt{8n^{2}-16n+9}\right)^{2}(2n-1)^{2}=8n^{5} (n-4)+16n^{4}+24n^{2}(3n-4)+32n\geq 0\) (equality holds if and only if \(n=2\)) we have \(2n^{3}-4n^{2}+6n-3-(2n-1)\sqrt{8n^{2}-16n+9}\geq 0\). Therefore, by (2.25), \((2n-1)(LE(\Gamma_{Q_{4n}})-LE^{+}(\Gamma_{Q_{4n}}))\geq 0\). Hence, \(LE(\Gamma_{Q_{4n}})\geq LE^{+}(\Gamma_{Q_{4n}})\) equality holds if and only if \(G\cong Q_{8}\). Again, \(\sqrt{8n^{2}-16n+9}>0,\sqrt{5n^{2}-6n+1}>0\) and \(\left(\sqrt{8n^{2}-16n+9}\right)^{2}-\left(\sqrt{5n^{2}-6n+1}\right)^{2}=n(3n- 10)+8\geq 0\) (equality holds if and only if \(n=2\)). Therefore, \(\sqrt{8n^{2}-16n+9}-\sqrt{5n^{2}-6n+1}\geq 0\). Since \((n-2)(2n^{2}-2n-1)\geq 0\) we have \(\frac{2(n-2)(2n^{2}-2n-1)}{2n-1}+2\sqrt{8n^{2}-16n+9}-2\sqrt{5n^{2}-6n+1}\geq 0\) (equality holds if and only if \(n=2\)). Therefore, by (2.26), \(LE^{+}(\Gamma_{Q_{4n}})\geq E(\Gamma_{Q_{4n}})\). Hence, \(E(\Gamma_{Q_{4n}})\leq LE^{+}(\Gamma_{Q_{4n}})\leq LE(\Gamma_{Q_{4n}})\) equality holds if and only if \(G\cong Q_{8}\). For \(n\geq 5\), using Theorems 2.12 and 2.13, we have \[LE(\Gamma_{Q_{4n}})-LE^{+}(\Gamma_{Q_{4n}})=2\left(\frac{8n^{2}-10n+3}{2n-1}- \sqrt{8n^{2}-16n+9}\right) \tag{2.27}\] and \[LE^{+}(\Gamma_{Q_{4n}})-E(\Gamma_{Q_{4n}})=\frac{2(2n^{2}(4n-9)+19n-4)}{2n-1}+ 2\sqrt{8n^{2}-16n+9}-2\sqrt{5n^{2}-6n+1}. \tag{2.28}\] Since \(8n^{2}-10n+3>0\), \((2n-1)\sqrt{8n^{2}-16n+9}>0\) and \((8n^{2}-10n+3)^{2}-\left(\sqrt{8n^{2}-16n+9}\right)^{2}(2n-1)^{2}=32n^{3}(n-2)+ 8n(5n-1)>0\) we have \(8n^{2}-10n+3-(2n-1)\sqrt{8n^{2}-16n+9}>0\). Therefore, by (2.27), \((2n-1)(LE(\Gamma_{Q_{4n}})-LE^{+}(\Gamma_{Q_{4n}}))>0\). Hence, \(LE(\Gamma_{Q_{4n}})>LE^{+}(\Gamma_{Q_{4n}})\). Again, \(\sqrt{8n^{2}-16n+9}>0,\sqrt{5n^{2}-6n+1}>0\) and \(\left(\sqrt{8n^{2}-16n+9}\right)^{2}-\left(\sqrt{5n^{2}-6n+1}\right)^{2}=n(3n- 10)+8>0\). Therefore, \(\sqrt{8n^{2}-16n+9}-\sqrt{5n^{2}-6n+1}>0\). Since \(2n^{2}(4n-9)+19n-4>0\) we have \(\frac{2(2n^{2}(4n-9)+19n-4)}{2n-1}+2\sqrt{8n^{2}-16n+9}-2\sqrt{5n^{2}-6n+1}>0\). Therefore, by (2.28), \(LE^{+}(\Gamma_{Q_{4n}})>E(\Gamma_{Q_{4n}})\). Hence, \(E(\Gamma_{Q_{4n}})<LE^{+}(\Gamma_{Q_{4n}})<LE(\Gamma_{Q_{4n}})\). (b) Here, \(|v(\Gamma_{Q_{4n}})|=4n-2=2(2n-1)\) and \(E(K_{|v(\Gamma_{Q_{4n}})|})=LE(K_{|v(\Gamma_{Q_{4n}})|})=LE^{+}(K_{|v(\Gamma_{ Q_{4n}})|})=8n-6\). Using Theorem 2.12, \[E(\Gamma_{Q_{4n}})-|v(\Gamma_{Q_{4n}})|=2(\sqrt{(n-1)(5n-1)}-n) \tag{2.29}\] and \[E(K_{|v(\Gamma_{Q_{4n}})|})-E(\Gamma_{Q_{4n}})=2(3(n-1)+1-\sqrt{(n-1)(5n-1)}). \tag{2.30}\] Since \(\sqrt{(n-1)(5n-1)}>0\), \(n>0\) and \(\Big{(}\sqrt{(n-1)(5n-1)}\Big{)}^{2}-n^{2}=2n(2n-3)+1>0\) we have \(\sqrt{(n-1)(5n-1)}-n>0\). Therefore, by (2.29), \(E(\Gamma_{Q_{4n}})>|v(\Gamma_{Q_{4n}})|\). Again, \(\sqrt{(n-1)(5n-1)}>0\), \(3(n-1)+1>0\) and \((3(n-1)+1)^{2}-\Big{(}\sqrt{(n-1)(5n-1)}\Big{)}^{2}=2n(2n-3)+3>0\) and so \(3(n-1)+1-\sqrt{(n-1)(5n-1)}>0\). Therefore, by (2.30), \(E(K_{|v(\Gamma_{Q_{4n}})|})>E(\Gamma_{Q_{4n}})\). 
(c) For \(n=2\), using Theorem 2.12, \(LE(\Gamma_{Q_{8}})=8\) and \(LE(K_{|v(\Gamma_{Q_{8}})|})=10\). Clearly, \(LE(\Gamma_{Q_{8}})<LE(K_{|v(\Gamma_{Q_{8}})|})\). Therefore, \(\Gamma_{Q_{8}}\) is not L-hyperenergetic and consequently part (a) implies that \(\Gamma_{Q_{8}}\) is not Q-hyperenergetic. Using Theorem 2.13, for \(2<n\leq 4\), \[LE^{+}(\Gamma_{Q_{4n}})-LE^{+}(K_{|v(\Gamma_{Q_{4n}})|})=\frac{4n(n-1)(n-5)}{2 n-1}+2\sqrt{8n^{2}-16n+9}>0.\] Also, for \(n\geq 5\), \(LE^{+}(\Gamma_{Q_{4n}})-LE^{+}(K_{|v(\Gamma_{Q_{4n}})|})=\frac{8n^{2}(n-6)+52n -12}{2n-1}+2\sqrt{8n^{2}-16n+9}>0\). Therefore, \(LE^{+}(\Gamma_{Q_{4n}})>LE^{+}(K_{|v(\Gamma_{Q_{4n}})|})\) which implies \(\Gamma_{Q_{4n}}\) is Q-hyperenergetic and consequently part (a) implies that \(\Gamma_{Q_{4n}}\) is L-hyperenergetic. Hence, the result holds. ### The groups \(U_{6n}\) We consider \(U_{6n}:=\langle x,y:x^{2n}=y^{3}=1;x^{-1}yx=y^{-1}\rangle\), the groups of order \(6n\). Results regarding different energies of non-commuting graphs of \(U_{6n}\) are given below. **Theorem 2.15** ([14, Corollary 4.1.9 and putting \(m=3\) in (4.3.c)]).: _Let \(G\) be isomorphic to \(U_{6n}\). Then \(E(\Gamma_{U_{6n}}))=2n(1+\sqrt{7})\) and \(LE(\Gamma_{U_{6n}}))=\frac{12n^{2}+30n}{5}\)._ **Theorem 2.16**.: _Let \(G\) be isomorphic to \(U_{6n}\). Then_ \[\text{\rm Q-spec}(\Gamma_{U_{6n}})=\left\{(3n)^{2n+1},(4n)^{3n-3},\left(\frac{ (9+\sqrt{33})n}{2}\right)^{1},\left(\frac{(9-\sqrt{33})n}{2}\right)^{1}\right\}\] _and_ \[LE^{+}(\Gamma_{U_{6n}})=\frac{12n^{2}-3n}{5}+\sqrt{33}n.\] Proof.: If \(G\cong U_{6n}\) then \(|v(\Gamma_{U_{6n}})|=5n\) and \(\Gamma_{U_{6n}}=K_{1.2n,3.n}\). Using Theorem 1.6(b), we have \[Q_{\Gamma_{U_{6n}}}(x)= \prod_{i=1}^{2}(x-5n+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{2}(x-5n +2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-5n+2p_{i}}\right)\] \[= (x-3n)^{2n-1}(x-4n)^{3n-3}(x-n)(x-3n)^{3}\left(1-\frac{2n}{x-n}- \frac{3n}{x-3n}\right)\] \[= (x-3n)^{2n+1}(x-4n)^{3n-3}(x^{2}-9nx+12n^{2}).\] Thus, \(\text{\rm Q-spec}(\Gamma_{U_{6n}})=\left\{(3n)^{2n+1},(4n)^{3n-3},\left(\frac {(9+\sqrt{33})n}{2}\right)^{1},\left(\frac{(9-\sqrt{33})n}{2}\right)^{1}\right\}\). Number of edges of \(\Gamma_{U_{6n}}^{c}\) is \(\frac{7n^{2}-5n}{2}\). Thus, \(|e(\Gamma_{U_{6n}})|=\frac{5n(5n-1)}{2}-\frac{7n^{2}-5n}{2}=\frac{18n^{2}}{2}\). Now \[\left|3n-\frac{2|e(\Gamma_{U_{6n}})|}{|v(\Gamma_{U_{6n}})|}\right|=\left|\frac{ -3n}{5}\right|=\frac{3n}{5},\ \ \ \left|4n-\frac{2|e(\Gamma_{U_{6n}})|}{|v(\Gamma_{U_{6n}})|} \right|=\left|\frac{2n}{5}\right|=\frac{2n}{5},\] \[\left|\frac{(9+\sqrt{33})n}{5}-\frac{2|e(\Gamma_{U_{6n}})|}{|v(\Gamma_{U_{6n}})|} \right|=\left|\frac{(9+5\sqrt{33})n}{10}\right|=\frac{(9+5\sqrt{33})n}{10}\] \[\left|\frac{(9-\sqrt{33})n}{5}-\frac{2|e(\Gamma_{U_{6n}})|}{|v(\Gamma_{U_{6n}})|} \right|=\left|\frac{(9-5\sqrt{33})n}{10}\right|=\frac{(5\sqrt{33}-9)n}{10}.\] Therefore, \(LE^{+}(\Gamma_{U_{6n}})=(2n+1)\times\frac{3n}{5}+(3n-3)\times\frac{2n}{5}+ \frac{(9+5\sqrt{3})n}{10}+\frac{(5\sqrt{33}-9)n}{10}\) and the result follows on simplification. 
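The Q-spectrum obtained in Theorem 2.16 can be checked numerically by constructing \(\Gamma_{U_{6n}}\) directly from the defining presentation. The following short script is an informal sketch added for illustration only: writing every element of \(U_{6n}\) as \(y^{j}x^{i}\) and using the relation \(x^{-1}yx=y^{-1}\) (so that \((y^{j}x^{i})(y^{k}x^{l})=y^{j+(-1)^{i}k}x^{i+l}\)), it builds the non-commuting graph for \(n=2\) and prints the eigenvalues of \(Q=D+A\).

```python
import numpy as np
from itertools import product

# Informal numerical check of Theorem 2.16 (added for illustration).  A pair (j, i)
# stands for the element y^j x^i of U_{6n}; the relation x^{-1} y x = y^{-1} gives the
# multiplication rule used below.
n = 2
elems = list(product(range(3), range(2 * n)))
def mul(g, h):
    j, i = g
    k, l = h
    return ((j + (-1) ** i * k) % 3, (i + l) % (2 * n))
center = [g for g in elems if all(mul(g, h) == mul(h, g) for h in elems)]
verts = [g for g in elems if g not in center]            # the 5n non-central elements
A = np.array([[1.0 if mul(u, v) != mul(v, u) else 0.0 for v in verts] for u in verts])
Q = np.diag(A.sum(axis=1)) + A                           # signless Laplacian Q = D + A
print(sorted(round(float(q), 4) for q in np.linalg.eigvalsh(Q)))
# Theorem 2.16 gives {(3n)^{2n+1}, (4n)^{3n-3}, ((9 ± sqrt(33))n/2)^1}; for n = 2 this is
# {6^5, 8^3, 9 - sqrt(33) ≈ 3.2554, 9 + sqrt(33) ≈ 14.7446}.
```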
**Theorem 2.17**.: _If \(G\) is isomorphic to \(U_{6n}\) then_ * \(E(\Gamma_{U_{6n}})<LE^{+}(\Gamma_{U_{6n}})<LE(\Gamma_{U_{6n}})\)_._ * \(\Gamma_{U_{6n}}\) _is non-hypoenergetic as well as non-hypeenergetic._ * \(\Gamma_{U_{6n}}\) _is Q-hypeenergetic and L-hypeenergetic._ Proof.: (a) Using Theorems 2.15 and 2.16, we have \[LE(\Gamma_{U_{6n}})-LE^{+}(\Gamma_{U_{6n}})=\frac{33n}{5}-\sqrt{33}n>0 \tag{2.31}\] and \[LE^{+}(\Gamma_{U_{6n}})-E(\Gamma_{U_{6n}})=\frac{12n^{2}-13n}{5}+(\sqrt{33}-2 \sqrt{7})n>0. \tag{2.32}\] Thus, the conclusion is drawn from equations (2.31) and (2.32). (b) Here, \(|v(\Gamma_{U_{6n}}))|=5n\) and \(E(K_{|v(\Gamma_{U_{6n}})|)})=LE(K_{|v(\Gamma_{U_{6n}})|)})=LE^{+}(K_{|v( \Gamma_{U_{6n}}))|})=10n-2\). Using Theorem 2.15, we have \[E(\Gamma_{U_{6n}})-|v(\Gamma_{U_{6n}})|=(2\sqrt{7}-3)n>0 \tag{2.33}\] and \[E(K_{|v(\Gamma_{U_{6n}})|)}-E(\Gamma_{U_{6n}})=(8-2\sqrt{7})n-2>0. \tag{2.34}\] Thus, the conclusion is drawn from equations (2.33) and (2.34). (c) Using Theorem 2.16, we have \(LE^{+}(\Gamma_{U_{6n}})-LE^{+}(K_{|v(\Gamma_{U_{6n}})|})=\frac{12n^{2}-53n+10 }{5}+n\sqrt{33}>0\). Therefore, \(LE^{+}(\Gamma_{U_{6n}})>LE^{+}(K_{|v(\Gamma_{U_{6n}})|})\) which implies \(\Gamma_{U_{6n}}\) is Q-hypeenergetic and consequently part (a) implies \(\Gamma_{U_{6n}}\) is L-hypeenergetic. In Theorems 2.14 and 2.17, we compare \(E(\Gamma_{G})\), \(LE(\Gamma_{G})\) and \(LE^{+}(\Gamma_{G})\) if \(G\cong Q_{4n}\) and \(U_{6n}\) respectively. However, in the following figures, we show how close are they for both the groups. It can be seen that if \(G\) is isomorphic to \(D_{2m}\), \(QD_{2^{n}}\), \(M_{2rs}\), \(Q_{4n}\) or \(U_{6n}\), then the central quotient of \(G\) is also isomorphic to some dihedral group. Therefore, we conclude this section with the following theorems for the non-commuting graphs of the groups \(G\) such that \(\frac{G}{Z(G)}\cong D_{2m}\). **Theorem 2.18** ([14, Theorem 4.1.5]).: _Let \(\frac{G}{Z(G)}\) be isomorphic to \(D_{2m}\) (\(m\geq 3\)) and \(|Z(G)|=n\). Then_ \[E(\Gamma_{G})=n\left((m-1)+\sqrt{(m-1)(5m-1)}\right)\ \ \text{and}\ LE(\Gamma_{G})=\frac{n}{2m-1} \left((2m^{3}-6m^{2}+4m)n+4m^{2}-2m\right).\] Figure 7: Energies of \(\Gamma_{Q_{4n}}\) Figure 8: Energies of \(\Gamma_{U_{6n}}\) **Theorem 2.19**.: _Let \(\frac{G}{2(G)}\) be isomorphic to \(D_{2m}\) (\(m\geq 3\)) and \(|Z(G)|=n\). Then_ \[\text{Q-spec}(\Gamma_{G})= \left\{((2m-2)n)^{m(n-1)},(mn)^{(m-1)n-1},((2m-3)n)^{m-1},\left( \frac{n(4m-3+\sqrt{8m^{2}-16m+9})}{2}\right)^{1},\right.\] \[\left.\left(\frac{n(4m-3-\sqrt{8m^{2}-16m+9})}{2}\right)^{1}\right\}\] _and_ \[LE^{+}(\Gamma_{G})=\begin{cases}\frac{12n^{2}-3n}{5}+n\sqrt{33},&\text{if }m=3\\ \frac{48n^{2}-29n}{7}+n\sqrt{73},&\text{if }m=4\\ \frac{(2m^{3}-6m^{2}+4m)n^{2}-(4m^{2}-8m+3)n}{2m-1}+n\sqrt{8m^{2}-16m+9},&\text {if }m\geq 5.\end{cases}\] Proof.: If \(\frac{G}{2(G)}\cong D_{2m}\) then \(|v(\Gamma_{G})|=(2m-1)n\) and \(\Gamma_{G}=K_{m.n,1.((m-1)n)}\). 
Using Theorem 1.6(b), we have \[\begin{split} Q_{\Gamma_{G}}(x)=&\prod_{i=1}^{2}(x-(2 m-1)n+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{2}(x-(2m-1)n+2p_{i})^{a_{i}}\left(1- \sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-(2m-1)n+2p_{i}}\right)\\ =&\left(x-(2m-2)n\right)^{m(n-1)}(x-mn)^{(m-1)n-1}(x-(2 m-3)n)^{m}(x-n)\left(1-\frac{mn}{x-(2m-3)n}-\frac{(m-1)n}{x-n}\right)\\ =&\left(x-(2m-2)n\right)^{m(n-1)}(x-mn)^{(m-1)n-1}(x-(2 m-3)n)^{m-1}(x^{2}-(4m-3)nx+(2m^{2}-2m)n^{2}).\end{split}\] Thus, \[\text{Q-spec}(\Gamma_{G})= \left\{((2m-2)n)^{m(n-1)},(mn)^{(m-1)n-1},((2m-3)n)^{m-1},\left( \frac{n(4m-3+\sqrt{8m^{2}-16m+9})}{2}\right)^{1},\right.\] \[\left.\left(\frac{n(4m-3-\sqrt{8m^{2}-16m+9})}{2}\right)^{1} \right\}.\] Number of edges of \(\Gamma_{G}^{c}\) is \(\frac{(m^{2}-m+1)n^{2}-(2m-1)n}{2}\). Thus, \(|e(\Gamma_{G})|=\frac{(2m-1)^{2}n^{2}-(2m-1)n}{2}-\frac{(m^{2}-m+1)n^{2}-(2m-1 )n}{2}\)\(=\frac{3m(m-1)n^{2}}{2}\). Now \[\left|(2m-2)n-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|= \left|\frac{(m-1)(m-2)n}{2m-1}\right|=\frac{(m-1)(m-2)n}{2m-1},\] \[\left|mn-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left| \frac{-m(m-2)n}{2m-1}\right|=\frac{m(m-2)n}{2m-1},\] \[\left|(2m-3)n-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|= \left|\frac{(m^{2}-5m+3)n}{2m-1}\right|=\begin{cases}\frac{(-m^{2}+5m-3)n}{ 2m-1},&\text{if }m\leq 4\\ \frac{(m^{2}-5m+3)n}{2m-1},&\text{if }m\geq 5,\end{cases}\] \[\left|\frac{n}{2}\left(4m-3+\sqrt{8m^{2}-16m+9}\right)-\frac{2|e (\Gamma_{G})|}{|v(\Gamma_{G})|}\right|= \left|\frac{n}{2}\left(\sqrt{8m^{2}-16m+9}+m-\frac{3}{2}+\frac{3}{ 4m-2}\right)\right|\] \[= \frac{n}{2}\left(\sqrt{8m^{2}-16m+9}+m-\frac{3}{2}+\frac{3}{4m-2 }\right)\] and \[\left|\frac{n}{2}\left(4m-3-\sqrt{8m^{2}-16m+9}\right)-\frac{2|e (\Gamma_{G})|}{|v(\Gamma_{G})|}\right|= \left|\frac{n}{2}\left(-\sqrt{8m^{2}-16m+9}+m-\frac{3}{2}+\frac{3}{ 4m-2}\right)\right|\] \[= \frac{n}{2}\left(\sqrt{8m^{2}-16m+9}-m+\frac{3}{2}-\frac{3}{4m-2 }\right).\] Therefore, for \(m\leq 4\) we have \[LE^{+}(\Gamma_{G})= m(n-1)\times\frac{(m-1)(m-2)n}{2m-1}+((m-1)n-1)\times\frac{m(m-2)n}{2 m-1}+(m-1)\times\frac{(-m^{2}+5m-3)n}{2m-1}+\] \[\frac{n}{2}\left(\sqrt{8m^{2}-16m+9}+m-\frac{3}{2}+\frac{3}{4m-2} \right)+\frac{n}{2}\left(\sqrt{8m^{2}-16m+9}-m+\frac{3}{2}-\frac{3}{4m-2}\right)\] and for \(m\geq 5\) we have \[LE^{+}(\Gamma_{G})= m(n-1)\times\frac{(m-1)(m-2)n}{2m-1}+((m-1)n-1)\times\frac{m(m-2)n }{2m-1}+(m-1)\times\frac{(m^{2}-5m+3)n}{2m-1}+\] \[\frac{n}{2}\left(\sqrt{8m^{2}-16m+9}+m-\frac{3}{2}+\frac{3}{4m-2} \right)+\frac{n}{2}\left(\sqrt{8m^{2}-16m+9}-m+\frac{3}{2}-\frac{3}{4m-2}\right)\] Hence, the results follow on simplification. **Theorem 2.20**.: _If \(\frac{G}{Z(G)}\) is isomorphic to \(D_{2m}\) (\(m\geq 3\)) and \(|Z(G)|=n\) then_ * \(E(\Gamma_{G})<LE^{+}(\Gamma_{G})<LE(\Gamma_{G})\)_._ * \(\Gamma_{G}\) _is non-hypoenergetic as well as non-hyperenergetic._ * \(\Gamma_{G}\) _is L-hyperenergetic but not Q-hyperenergetic if_ \(m=3\) _and_ \(|Z(G)|=1\)_. For_ \(m=3,4\) _and_ \(|Z(G)|\neq 1\) _or_ \(m\geq 5\) _and_ \(|Z(G)|\geq 1\)_,_ \(\Gamma_{G}\) _is Q-hyperenergetic and L-hyperenergetic._ Proof.: (a) For \(m=3\), using Theorems 2.18 and 2.19, we have \(LE(\Gamma_{G})-LE^{+}(\Gamma_{G})=\frac{32n}{5}-n\sqrt{33}>0\) and \(LE^{+}(\Gamma_{G})-E(\Gamma_{G})=\frac{12n^{2}-13n}{5}+(\sqrt{33}-2\sqrt{7})n>0\). For \(m=4\), using Theorems 2.18 and 2.19, we have \(LE(\Gamma_{G})-LE^{+}(\Gamma_{G})=\frac{85n}{7}-n\sqrt{73}>0\) and \(LE^{+}(\Gamma_{G})-E(\Gamma_{G})=\frac{48n^{2}-50n}{5}+(\sqrt{73}-\sqrt{57})n>0\). 
For \(m\geq 5\), using Theorems 2.18 and 2.19, we have \[LE(\Gamma_{G})-LE^{+}(\Gamma_{G})=\frac{n(8m^{2}-10m+3)}{2m-1}-n\sqrt{8m^{2}- 16m+9} \tag{2.35}\] and \[LE^{+}(\Gamma_{G})-E(\Gamma_{G})=n\left(\frac{(2m^{3}-6m^{2}+4m)n-6m^{2}+11m-4 }{2m-1}+\sqrt{8m^{2}-16m+9}-\sqrt{5m^{2}-6m+1}\right). \tag{2.36}\] Since \(8m^{2}-10m+3>0\), \((2m-1)\sqrt{8m^{2}-16m+9}>0\) and \((8m^{2}-10m+3)^{2}-(2m-1)^{2}(8m^{2}-16m+9)=32m^{4}-64m^{3}+40m^{2}-8m>0\) we have \(8m^{2}-10m+3-(2m-1)\sqrt{8m^{2}-16m+9}>0\). Therefore, by (2.35), \((2m-1)(LE(\Gamma_{G})-LE^{+}(\Gamma_{G}))>0\). Hence, \(LE(\Gamma_{G})>LE^{+}(\Gamma_{G})\). Again, \(\sqrt{8m^{2}-16m+9}>0,\sqrt{5m^{2}-6m+1}>0\) and \((\sqrt{8m^{2}-16m+9})^{2}-(\sqrt{5m^{2}-6m+1})^{2}=m(3m-10)+8>0\). Therefore, \(\sqrt{8m^{2}-16m+9}-\sqrt{5m^{2}-6m+1}>0\). Since \(2m^{3}-6m^{2}+4m>6m^{2}-11m+4\) we have \(\frac{(2m^{3}-6m^{2}+4m)n-6m^{2}+11m-4}{2m-1}+\sqrt{8m^{2}-16m+9}-\sqrt{5m^{2}- 6m+1}>0\). Therefore, by (2.36), \(LE^{+}(\Gamma_{G})>E(\Gamma_{G})\). Hence, \(E(\Gamma_{G})<LE^{+}(\Gamma_{G})<LE(\Gamma_{G})\). (b) Here, \(|v(\Gamma_{G})|=2mn-n\) and \(E(K_{|v(\Gamma_{G})|})=LE(K_{|v(\Gamma_{G})|})=LE^{+}(K_{|v(\Gamma_{G})|})=4mn -2n-2.\) Using Theorem 2.18, we have \[E(\Gamma_{G})-|v(\Gamma_{G})|=n(\sqrt{(m-1)(5m-1)}-m) \tag{2.37}\] and \[E(K_{|v(\Gamma_{G})|})-E(\Gamma_{G})=3mn-n-2-n\sqrt{(m-1)(5m-1)}. \tag{2.38}\] Since \(\sqrt{(m-1)(5m-1)}>0,m>0\) and \((\sqrt{(m-1)(5m-1)})^{2}-m^{2}=2m(2m-3)+1>0\) we have \(\sqrt{(m-1)(5m-1)}-m>0\). Therefore, by (2.37), \(E(\Gamma_{G})>|v(\Gamma_{G})|\). Again, \(n\sqrt{(m-1)(5m-1)}>0\), \(3mn-n-2>0\) and \((3mn-n-2)^{2}-\left(n\sqrt{(m-1)(5m-1)}\right)^{2}=4mn(mn-3)+4(n+1)>0\) and so \(3mn-n-2-n\sqrt{(m-1)(5m-1)}>0\). Therefore, by (2.38), \(E(K_{|v(\Gamma_{G})|})>E(\Gamma_{G})\). (c) Using Theorem 2.19, for \(m=3\), we have \(LE^{+}(\Gamma_{G})-LE^{+}(K_{|v(\Gamma_{G})|})=\frac{12n^{2}-53n}{7}+2+n\sqrt{ 33}>0\) for all \(n\neq 1\). Therefore, for \(m=3\) and \(n\neq 1\), \(LE^{+}(\Gamma_{G}))>LE^{+}(K_{|v(\Gamma_{G})|}))\) which implies \(\Gamma_{G}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{G}\) is L-hyperenergetic. If \(m=3\) and \(n=1\), then \(G\cong D_{6}\) so result follows from Theorem 2.4(c). Using Theorem 2.19, for \(m=4\), we have \(LE^{+}(\Gamma_{G})-LE^{+}(K_{|v(\Gamma_{G})|})=\frac{48n^{2}-127n}{7}+2+n\sqrt{73}>\) \(0\) for all \(n\neq 1\). Therefore, for \(m=4\) and \(n\neq 1\), \(LE^{+}(\Gamma_{G}))>LE^{+}(K_{|v(\Gamma_{G})|)})\) which implies \(\Gamma_{G}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{G}\) is L-hyperenergetic. The case \(m=4\) and \(n=1\) does not arise since \(|Z(D_{8})|=2\). For \(r\geq 5\), using Theorem 2.19, we have \(LE^{+}(\Gamma_{G})-LE^{+}(K_{|v(\Gamma_{G})|)}=\frac{(2m^{3}-6m^{2}+4m)n^{2}-( 12m^{2}-16m+5)n}{2m-1}+n\sqrt{8m^{2}-16m+9}+2>0.\) Therefore, \(LE^{+}(\Gamma_{G})>LE^{+}(K_{|v(\Gamma_{G})|)})\) which implies \(\Gamma_{G}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{G}\) is L-hyperenergetic. In Theorem 2.20, we compare \(E(\Gamma_{G})\), \(LE(\Gamma_{G})\) and \(LE^{+}(\Gamma_{G})\). However, in the following figures, we show how close are they. ## 3 \(\frac{G}{Z(G)}\) is isomorphic to \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) We compute the Signless Laplacian spectrum and Signless Laplacian energy of \(\Gamma_{G}\) considering the group \(G\) whose central quotient is isomorphic to \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\), where \(p\) is a prime. 
Further, we compare the energy, Laplacian energy and Signless Laplacian energy of \(\Gamma_{G}\) and look into the hyper- and hypo-properties of \(\Gamma_{G}\).

**Theorem 3.1** ([12, Theorem 2.2] and [14, Theorem 4.1.1 (c)]).: _Let \(\frac{G}{Z(G)}\) be isomorphic to \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\). Then \(E(\Gamma_{G})=LE(\Gamma_{G})=2p(p-1)|Z(G)|\). In particular, if \(G\) is non-abelian and \(|G|=p^{3}\) then \(E(\Gamma_{G})=LE(\Gamma_{G})=2p^{2}(p-1)\)._

**Theorem 3.2**.: _Let \(\frac{G}{Z(G)}\) be isomorphic to \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\). Then_

\[\text{Q-spec}(\Gamma_{G})=\left\{(pn(p-1))^{(p^{2}-1)n-(p+1)},\left(n(p-1)^{2}\right)^{p},(2pn(p-1))^{1}\right\},\text{ where }|Z(G)|=n,\]

_and \(LE^{+}(\Gamma_{G})=2p(p-1)|Z(G)|\). In particular, if \(G\) is non-abelian and \(|G|=p^{3}\) then \(LE^{+}(\Gamma_{G})=2p^{2}(p-1)\)._

Proof.: If \(\frac{G}{Z(G)}\cong\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) then \(|v(\Gamma_{G})|=(p^{2}-1)n\) and \(\Gamma_{G}=K_{(p+1).(p-1)n}\), where \(|Z(G)|=n\). Using Theorem 1.6(b), we have

\[Q_{\Gamma_{G}}(x)=(x-(p^{2}-1)n+(p-1)n)^{(p+1)((p-1)n-1)}(x-(p^{2}-1)n+2(p-1)n)^{p+1}\left(1-\frac{(p^{2}-1)n}{x-(p^{2}-1)n+2(p-1)n}\right)\]
\[=(x-pn(p-1))^{(p^{2}-1)n-(p+1)}(x-n(p-1)^{2})^{p}(x-2pn(p-1)).\]

Thus, \(\text{Q-spec}(\Gamma_{G})=\left\{(pn(p-1))^{(p^{2}-1)n-(p+1)},\left(n(p-1)^{2}\right)^{p},(2pn(p-1))^{1}\right\}\). Number of edges of \(\Gamma_{G}^{c}\) is \(\frac{n(p^{2}-1)(pn-n-1)}{2}\). Therefore,

\[|e(\Gamma_{G})|=\frac{n^{2}(p^{2}-1)^{2}-n(p^{2}-1)}{2}-\frac{n(p^{2}-1)(pn-n-1)}{2}=\frac{(p^{2}-p)(p^{2}-1)n^{2}}{2}.\]

Now

\[\left|pn(p-1)-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left|pn(p-1)-(p^{2}-p)n\right|=0,\quad\left|n(p-1)^{2}-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left|n-pn\right|=pn-n\]

and

\[\left|2pn(p-1)-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left|p^{2}n-pn\right|=p^{2}n-pn.\]

Therefore, \(LE^{+}(\Gamma_{G})=\left((p^{2}-1)n-(p+1)\right)\times 0+p\times(pn-n)+p^{2}n-pn=2pn(p-1)\). In particular, if \(G\) is non-abelian and \(|G|=p^{3}\) then \(n=p\). Therefore, \(LE^{+}(\Gamma_{G})=2p^{2}(p-1)\).

**Theorem 3.3**.: _If \(\frac{G}{Z(G)}\cong\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) then_

* \(E(\Gamma_{G})=LE(\Gamma_{G})=LE^{+}(\Gamma_{G})\)_._
* \(\Gamma_{G}\) _is non-hypoenergetic, non-hyperenergetic and not Q-hyperenergetic._

_In particular, if \(G\) is non-abelian and \(|G|=p^{3}\) then \(E(\Gamma_{G})=LE(\Gamma_{G})=LE^{+}(\Gamma_{G})\) and \(\Gamma_{G}\) is non-hypoenergetic, non-hyperenergetic, not L-hyperenergetic as well as not Q-hyperenergetic._

Proof.: (a) For \(\frac{G}{Z(G)}\cong\mathbb{Z}_{p}\times\mathbb{Z}_{p}\), from Theorems 3.1 and 3.2, we have \(E(\Gamma_{G})=LE(\Gamma_{G})=LE^{+}(\Gamma_{G})=2p(p-1)|Z(G)|\), and hence the result follows.

(b) Here, \(|v(\Gamma_{G})|=(p^{2}-1)|Z(G)|\). Thus, \(E(K_{|v(\Gamma_{G})|})=LE(K_{|v(\Gamma_{G})|})=LE^{+}(K_{|v(\Gamma_{G})|})=2p(p-1)|Z(G)|+2(p-1)|Z(G)|-2\). Therefore, by Theorems 3.1 and 3.2, we have \(E(K_{|v(\Gamma_{G})|})-E(\Gamma_{G})=LE(K_{|v(\Gamma_{G})|})-LE(\Gamma_{G})=LE^{+}(K_{|v(\Gamma_{G})|})-LE^{+}(\Gamma_{G})=2(p-1)|Z(G)|-2>0\). Also, \(E(\Gamma_{G})-|v(\Gamma_{G})|=(p-1)^{2}|Z(G)|>0\). Hence, the results follow. In particular, if \(G\) is non-abelian and \(|G|=p^{3}\), then \(|Z(G)|=p\) in the above cases and so the results hold.
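As an informal illustration of Theorem 3.2 (added here; it plays no role in the proofs), one may take for \(G\) the group of \(3\times 3\) upper unitriangular matrices over \(\mathbb{Z}_{p}\), a non-abelian group of order \(p^{3}\) whose central quotient is \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\), and compute the Signless Laplacian spectrum of its non-commuting graph numerically. For \(p=3\) (so \(n=|Z(G)|=3\)) Theorem 3.2 predicts \(\text{Q-spec}(\Gamma_{G})=\{(18)^{20},(12)^{3},(36)^{1}\}\).

```python
import numpy as np
from itertools import product

# Informal check of Theorem 3.2 (added for illustration).  G is the Heisenberg group of
# order p^3, realised as upper unitriangular 3x3 matrices over Z_p; here |Z(G)| = p.
p = 3
elems = [np.array([[1, a, b], [0, 1, c], [0, 0, 1]]) for a, b, c in product(range(p), repeat=3)]
mul = lambda X, Y: (X @ Y) % p
key = lambda X: tuple(X.flatten())
center_keys = {key(X) for X in elems if all(key(mul(X, Y)) == key(mul(Y, X)) for Y in elems)}
verts = [X for X in elems if key(X) not in center_keys]
A = np.array([[0.0 if key(mul(X, Y)) == key(mul(Y, X)) else 1.0 for Y in verts] for X in verts])
Q = np.diag(A.sum(axis=1)) + A
vals, counts = np.unique(np.round(np.linalg.eigvalsh(Q), 6), return_counts=True)
print({float(v): int(c) for v, c in zip(vals, counts)})   # expected: {12.0: 3, 18.0: 20, 36.0: 1}
```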
## 4 \(\frac{G}{Z(G)}\) is isomorphic to \(Sz(2)\)

We compute spectrum, energy, Signless Laplacian Spectrum and Signless Laplacian energy of \(\Gamma_{G}\) considering the group \(G\) whose central quotient is isomorphic to the Suzuki group of order \(20\) denoted by \(Sz(2)\). Further we compare energy, Laplacian energy and Signless Laplacian energy of \(\Gamma_{G}\) and look into the hyper- and hypo-properties of \(\Gamma_{G}\).

**Theorem 4.1** ([12, Theorem 2.1]).: _Let \(\frac{G}{Z(G)}\cong Sz(2)\). Then \(LE(\Gamma_{G})=\left(\frac{120}{19}n+30\right)n\), where \(|Z(G)|=n\)._

**Theorem 4.2**.: _Let \(\frac{G}{Z(G)}\cong Sz(2)\). Then_
* \(\mathrm{Spec}(\Gamma_{G})=\left\{0^{19n-6},(-3n)^{4},\left(2n\left(3+2\sqrt{6}\right)\right)^{1},\left(2n\left(3-2\sqrt{6}\right)\right)^{1}\right\}\) _and_ \(E(\Gamma_{G})=4n(3+2\sqrt{6})\)_, where_ \(n=|Z(G)|\)_._
* \(\mathrm{Q-spec}(\Gamma_{G})=\left\{(16n)^{15n-5},(15n)^{4n-1},(13n)^{4},\left(\frac{n(43+\sqrt{409})}{2}\right)^{1},\left(\frac{n(43-\sqrt{409})}{2}\right)^{1}\right\}\) _and_ \(LE^{+}(\Gamma_{G})=\frac{120n^{2}+177n}{19}+\sqrt{409}n\)_, where_ \(n=|Z(G)|\)_._

Proof.: If \(\frac{G}{Z(G)}\cong Sz(2)\) and \(|Z(G)|=n\) then \(\Gamma_{G}=K_{5.3n,1.4n}\) and it is a complete \(6\)-partite graph with \(19n\) vertices.

(a) Using Theorem 1.6(a), the characteristic polynomial of \(\Gamma_{G}\) is
\[P_{\Gamma_{G}}(x)=x^{19n-6}(x+3n)^{4}(x^{2}-12nx-60n^{2}).\]
Therefore, \(\mathrm{Spec}(\Gamma_{G})=\left\{0^{19n-6},(-3n)^{4},\left(2n\left(3+2\sqrt{6}\right)\right)^{1},\left(2n\left(3-2\sqrt{6}\right)\right)^{1}\right\}\) and \(E(\Gamma_{G})=4n\left(3+2\sqrt{6}\right)\).

(b) Using Theorem 1.6(b), we have
\[Q_{\Gamma_{G}}(x)= \prod_{i=1}^{2}(x-19n+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{2}(x-19n+2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-19n+2p_{i}}\right)\]
\[= \left(x-19n+3n\right)^{5(3n-1)}(x-19n+4n)^{4n-1}(x-19n+2\times 3n)^{5}(x-19n+2\times 4n)^{1}\times\left(1-\frac{5\times 3n}{x-19n+2\times 3n}-\frac{4n}{x-19n+2\times 4n}\right)\]
\[= \left(x-16n\right)^{15n-5}(x-15n)^{4n-1}(x-13n)^{4}(x^{2}-43nx+360n^{2}).\]
Thus, \(\mathrm{Q-spec}(\Gamma_{G})=\left\{(16n)^{15n-5},(15n)^{4n-1},(13n)^{4},\left(\frac{n(43+\sqrt{409})}{2}\right)^{1},\left(\frac{n(43-\sqrt{409})}{2}\right)^{1}\right\}\). Number of edges of \(\Gamma_{G}^{c}\) is \(\frac{61n^{2}-19n}{2}\). Thus, \(|e(\Gamma_{G})|=\frac{19n(19n-1)}{2}-\frac{61n^{2}-19n}{2}=150n^{2}\). Now
\[\left|16n-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left|\frac{4n}{19}\right|=\frac{4n}{19},\quad\left|15n-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left|\frac{-15n}{19}\right|=\frac{15n}{19},\]
\[\left|13n-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left|\frac{-53n}{19}\right|=\frac{53n}{19},\]
\[\left|\frac{(43+\sqrt{409})n}{2}-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left|\frac{(217+19\sqrt{409})n}{38}\right|=\frac{(217+19\sqrt{409})n}{38}\]
and
\[\left|\frac{(43-\sqrt{409})n}{2}-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left|\frac{(217-19\sqrt{409})n}{38}\right|=\frac{(19\sqrt{409}-217)n}{38}.\]
Therefore, \(LE^{+}(\Gamma_{G})=(15n-5)\times\frac{4n}{19}+(4n-1)\times\frac{15n}{19}+4\times\frac{53n}{19}+\frac{(217+19\sqrt{409})n}{38}+\frac{(19\sqrt{409}-217)n}{38}\) and the result follows on simplification.
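As a hedged numerical illustration (not part of the original text; it assumes Python with `numpy` only), the smallest case \(n=|Z(G)|=1\), i.e. \(\Gamma_{Sz(2)}=K_{5.3,1.4}\), can be checked directly against Theorem 4.2:

```python
# Spot-check of Theorem 4.2 at n = |Z(G)| = 1 (illustrative sketch):
# Gamma_{Sz(2)} = K_{5.3,1.4}, a complete 6-partite graph on 19 vertices.
import numpy as np

parts = [3] * 5 + [4]
N = sum(parts)
A = np.ones((N, N)) - np.eye(N)
b = np.cumsum([0] + parts)
for s, e in zip(b[:-1], b[1:]):
    A[s:e, s:e] = 0.0

d = A.sum(axis=1)
avg = d.sum() / N                                    # 2|e|/|v| = 300/19
q = np.sort(np.linalg.eigvalsh(np.diag(d) + A))[::-1]
print(np.round(q, 4))                                # (43+sqrt(409))/2, 16 (x10), 15 (x3), 13 (x4), (43-sqrt(409))/2
print(np.abs(np.linalg.eigvalsh(A)).sum(), 4 * (3 + 2 * np.sqrt(6)))   # E(Gamma)
print(np.abs(q - avg).sum(), (120 + 177) / 19 + np.sqrt(409))          # LE+(Gamma)
```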
**Theorem 4.3**.: _If \(\frac{G}{Z(G)}\) is isomorphic to \(Sz(2)\) then_ * \(E(\Gamma_{G})<LE^{+}(\Gamma_{G})<LE(\Gamma_{G})\)_._ * \(\Gamma_{G}\) _is non-hypoenergetic as well as non-hyperenergetic._ * \(\Gamma_{Sz(2)}\) _is not Q-hyperenergetic but is L-hyperenergetic. If_ \(G\ncong Sz(2)\) _then_ \(\Gamma_{G}\) _is Q-hyperenergetic and L-hyperenergetic._ Proof.: (a) Using Theorems 4.2 and 4.1, we have \(LE(\Gamma_{G})-LE^{+}(\Gamma_{G})=\left(\frac{393}{19}-\sqrt{409}\right)n>0\) and \(LE^{+}(\Gamma_{G})-E(\Gamma_{G})=\frac{3n(40n-17)}{19}+(\sqrt{409}-8\sqrt{6})n>0\), where \(n=|Z(G)|\). Hence, the result follows. (b) Here, \(|v(\Gamma_{G})|=19n\), \(n=|Z(G)|\) and \(E(K_{|v(\Gamma_{G})|})=LE(K_{|v(\Gamma_{G})|})=LE^{+}(K_{|v(\Gamma_{G})|})=38n-2\). Using Theorem 4.2(a), we have \(E(\Gamma_{G})-|v(\Gamma_{G})|=(8\sqrt{6}-7)n>0\) and also \(E(K_{|v(\Gamma_{G})|})-E(\Gamma_{G})=2(13-4\sqrt{6})n-2>0\). Hence, the result follows. (c) For \(n=|Z(G)|=1\), from Proposition 4.3.13 of [14], we have \(LE(\Gamma_{Sz(2)})=\frac{600}{19}>36=LE(K_{|v(\Gamma_{Sz(2)})|})\). Hence, for \(n=1\), \(\Gamma_{G}\) is L-hyperenergetic. For \(n=|Z(G)|>1\), using Theorem 4.2(b), we have \(LE^{+}(\Gamma_{G})-LE^{+}(K_{|v(\Gamma_{G})|})=\frac{5n(24n-109)+38}{19}+n \sqrt{409}>0\). Therefore, \(LE^{+}(\Gamma_{G})>LE^{+}(K_{|v(\Gamma_{G})|})\) which implies \(\Gamma_{G}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{G}\) is L-hyperenergetic. For \(\frac{G}{Z(G)}\cong Sz(2)\), the following figures also demonstrate that among the three energies, \(E(\Gamma_{G})\) is the least and the fact that although \(LE^{+}(\Gamma_{G})<LE(\Gamma_{G})\) but these two energies are very close to each other. ## 5 Some more classes of groups In this section we discuss results on energy, Laplacian energy and Signless Laplacian energy of non-commuting graph of certain well-known classes of finite groups. ### The Hanaki groups We consider the Hanaki groups \[A(n,\mathcal{V})=\left\{U(a,b)=\left[\begin{array}{ccc}1&0&0\\ a&1&0\\ b&\mathcal{V}(a)&1\end{array}\right]:a,b\in GF(2^{n})\right\}\quad(n\geq 2),\] under matrix multiplication given by \(U(a,b)U(a^{\prime},b^{\prime})=U(a+a^{\prime},b+b^{\prime}+a^{\prime}\mathcal{ V}(a))\) (here \(\mathcal{V}\) be the Frobenius automorphism of \(GF(2^{n})\), i.e., \(\mathcal{V}(x)=x^{2}\) for all \(x\in GF(2^{n})\)) and \[A(n,p)=\left\{V(a,b,c)=\left[\begin{array}{ccc}1&0&0\\ a&1&0\\ b&c&1\end{array}\right]:a,b,c\in GF(p^{n})\right\}\left\{\text{$p$ is any prime}\right\}\] under matrix multiplication \(V(a,b,c)V(a^{\prime},b^{\prime},c^{\prime})=V(a+a^{\prime},b+b^{\prime}+ca^{ \prime},c+c^{\prime})\). In this section, we compute Signless Laplacian spectrum and Signless Laplacian energy of non-commuting graph of the groups \(A(n,\mathcal{V})\) and \(A(n,p)\). Further we compare Signless Laplacian energy of \(\Gamma_{G}\) with its predetermined energy, Laplacian energy and look into the hyper- and hypo-properties of \(\Gamma_{G}\) if \(G\) is isomorphic to \(A(n,\mathcal{V})\) and \(A(n,p)\). 
**Theorem 5.1** ([12, Proposition 3.5] and [14, Theorem 4.1.3]).: _If \(G\) denotes the Hanaki group \(A(n,\mathcal{V})\), then_ \[E(\Gamma_{G})=LE(\Gamma_{G})=2^{2n+1}-2^{n+2}.\] **Theorem 5.2**.: _If \(G\) is isomorphic to the Hanaki group \(A(n,\mathcal{V})\) then \(\text{Q-spec}(\Gamma_{G})=\{(2^{2n}-2^{n+1})^{2^{2n}-2^{n+1}+1},\)\((2^{2n}-3\times 2^{n})^{2^{n}-2},(2^{2n+1}-2^{n+2})^{\}\}\) and \(LE^{+}(\Gamma_{G})=2^{2n+1}-2^{n+2}\)._ Proof.: If \(G\) is isomorphic to the Hanaki group \(A(n,\mathcal{V})\) then \(|v(\Gamma_{G})|=2^{2n}-2^{n}\) and \(\Gamma_{G}=K_{(2^{n}-1).2^{n}}\). Using Theorem 1.6(b), we have \[Q_{\Gamma_{G}}(x)= (x-(2^{2n}-2^{n})+2^{n})^{(2^{n}-1)(2^{n}-1)}(x-(2^{2n}-2^{n})+2 \times 2^{n})^{(2^{n}-1)}\left(1-\frac{(2^{n}-1).2^{n}}{x-(2^{2n}-2^{n})+2 \times 2^{n}}\right)\] \[= (x-(2^{2n}-2^{n+1}))^{(2^{n}-1)^{2}}(x-(2^{2n}-3\times 2^{n}))^{2^{n }-2}(x-(2^{2n+1}-2^{n+2})).\] Thus, \(\text{Q-spec}(\Gamma_{G})=\{(2^{2n}-2^{n+1})^{2^{2n}-2^{n+1}+1},(2^{2n}-3\times 2 ^{n})^{2^{n}-2},(2^{2n+1}-2^{n+2})^{\}\}\). Number of edges of \(\Gamma_{G}^{c}\) is \(2^{n-2}(2^{2n}-2^{n+1}+1)\). Thus, \(|e(\Gamma_{G})|=\frac{(2^{2n}-2^{n})(2^{2n}-2^{n}-1)}{2}-2^{n-2}(2^{2n}-2^{n+1} +1)\)\(=2^{4n-1}-3\times 2^{3n-1}+2^{2n}\). Now, \[\left|2^{2n}-2^{n+1}-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left| \frac{2^{3n+1}-2^{3n+1}}{2^{2n}-2^{n}}\right|=0,\quad\left|2^{2n}-3\times 2^{n}- \frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left|\frac{2^{2n}-2^{3n}}{2^{ 2n}-2^{n}}\right|=\frac{2^{3n}-2^{2n}}{2^{2n}-2^{n}}\] and \[\left|2^{2n+1}-2^{n+2}-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left| \frac{2^{4n}-3\times 2^{3n}+2^{2n+1}}{2^{2n}-2^{n}}\right|=\frac{2^{4n}-3\times 2^{3n}+2 ^{2n+1}}{2^{2n}-2^{n}}.\] Therefore, \(LE^{+}(\Gamma_{G})=(2^{2n}-2^{n+1}+1)\times 0+(2^{n}-2)\times\frac{2^{3n}-2^{2n}}{2^{2n}-2 ^{n}}+\frac{2^{4n}-3\times 2^{3n}+2^{2n+1}}{2^{2n}-2^{n}}\) and the result follows on simplification. **Theorem 5.3**.: _If \(G\) is isomorphic to the Hanaki group \(A(n,\mathcal{V})\) then_ * \(E(\Gamma_{G})=LE(\Gamma_{G})=LE^{+}(\Gamma_{G})\)_._ * \(\Gamma_{G}\) _is non-hypoenergetic, non-hyperenergetic, not L-hyperenergetic and not Q-hyperenergetic._ Proof.: (a) Using Theorems 5.1 and 5.2, we have \(E(\Gamma_{G})=LE(\Gamma_{G})=LE^{+}(\Gamma_{G})=2^{n+1}(2^{n}-2)\) and hence the result follows. (b) Here, \(|v(\Gamma_{G})|=2^{n}(2^{n}-1)\) and \(E(K_{|v(\Gamma_{G})|})=LE(K_{|v(\Gamma_{G})|})=LE^{+}(K_{|v(\Gamma_{G})|})=2^{n +1}(2^{n}-1)-2\). Therefore, by Theorems 5.1 and 5.2, we have \(E(\Gamma_{G})-|v(\Gamma_{G})|=2^{n}(2^{n}-3)>0\), for \(n>1\) and \(E(K_{|v(\Gamma_{G})|})-E(\Gamma_{G})=LE(K_{|v(\Gamma_{G})|})-LE(\Gamma_{G})=LE^{+ }(K_{|v(\Gamma_{G})|})-LE^{+}(\Gamma_{G})=2(2^{n}-1)>0\). Hence, the results follow. **Theorem 5.4** ([12, Proposition 3.6] and [14, Theorem 4.1.4]).: _If \(G\) denotes the Hanaki group \(A(n,p)\) then_ \[E(\Gamma_{G})=LE(\Gamma_{G})=2(p^{3n}-p^{2n}).\] **Theorem 5.5**.: _If \(G\) is isomorphic to the Hanaki group \(A(n,p)\) then \(\text{Q-spec}(\Gamma_{G})=\{(p^{3n}-p^{2n})^{(p^{n}+1)(p^{2n}-p^{n}-1)},\)\((p^{3n}-2p^{2n}+p^{n})^{p^{n}},(2p^{3n}-2p^{2n})^{1}\}\) and \(LE^{+}(\Gamma_{G})=2p^{2n}(p^{n}-1)\)._ Proof.: If \(G\) is isomorphic to the Hanaki group \(A(n,p)\) then \(|v(\Gamma_{G})|=p^{3n}-p^{n}\) and \(\Gamma_{G}=K_{(p^{n}+1).(p^{2n}-p^{n})}\). 
Using Theorem 1.6(b), we have \[Q_{\Gamma_{G}}(x)= (x-(p^{3n}-p^{n})+(p^{2n}-p^{n}))^{(p^{n}+1)(p^{2n}-p^{n}-1)}(x-(p ^{3n}-p^{n})+2(p^{2n}-p^{n}))^{(p^{n}+1)}\] \[\times\left(1-\frac{p^{3n}-p^{n}}{x-(p^{3n}-p^{n})+2(p^{2n}-p^{n})}\right)\] \[= (x-(p^{3n}-p^{2n}))^{(p^{n}+1)(p^{2n}-p^{n}-1)}(x-(p^{3n}-2p^{2n} +p^{n}))^{p^{n}}(x-(2p^{3n}-2p^{2n})).\] Thus, \(\text{Q-spec}(\Gamma_{G})=\{(p^{3n}-p^{2n})^{(p^{n}+1)(p^{2n}-p^{n}-1)},(p^{3n }-2p^{2n}+p^{n})^{p^{n}},(2p^{3n}-2p^{2n})^{1}\}\). Number of edges of \(\Gamma_{G}^{c}\) is \(\frac{(p^{3n}-p^{n})(p^{2n}-p^{n}-1)}{2}\). Thus, \(|e(\Gamma_{G})|=\frac{(p^{3n}-p^{n})(p^{3n}-p^{n}-1)}{2}-\frac{(p^{3n}-p^{n})( p^{2n}-p^{n}-1)}{2}\) \(=\frac{p^{6n}-p^{3n}-p^{4n}+p^{3n}}{2}\). Now \[\left|p^{3n}-2p^{2n}-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left| \frac{0}{p^{3n}-p^{n}}\right|=0,\] \[\left|p^{3n}-2p^{2n}+p^{n}-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|= \left|\frac{-p^{5n}+p^{4n}+p^{3n}-p^{2n}}{p^{3n}-p^{n}}\right|=\frac{p^{5n}-p ^{4n}-p^{3n}+p^{2n}}{p^{3n}-p^{n}}\] and \[\left|2p^{3n}-2p^{2n}-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left| \frac{p^{6n}-p^{5n}-p^{4n}+p^{3n}}{p^{3n}-p^{n}}\right|=\frac{p^{6n}-p^{5n}-p ^{4n}+p^{3n}}{p^{3n}-p^{n}}.\] Therefore, \(LE^{+}(\Gamma_{G})=(p^{n}+1)(p^{2n}-p^{n}-1)\times 0+p^{n}\times\frac{p^{5n}-p ^{4n}-p^{3n}+p^{2n}}{p^{4n}-p^{5n}}+\frac{p^{6n}-p^{5n}-p^{4n}+p^{3n}}{p^{3n}- p^{n}}\) and the result follows on simplification. **Theorem 5.6**.: _If \(G\) is isomorphic to the Hanaki group \(A(n,p)\) then_ * \(E(\Gamma_{G})=LE(\Gamma_{G})=LE^{+}(\Gamma_{G})\)_._ * \(\Gamma_{G}\) _is non-hypoenergetic, non-hyperenergetic, not L-hyperenergetic and not Q-hyperenergetic._ Proof.: (a) Using Theorems 5.4 and 5.5 we have \(E(\Gamma_{G})=LE(\Gamma_{G})=LE^{+}(\Gamma_{G})=2p^{2n}(p^{n}-1)\) and hence the result follows. (b) Here, \(|v(\Gamma_{G})|=p^{3n}-p^{n}\) and \(E(K_{|v(\Gamma_{G})|})=LE(K_{|v(\Gamma_{G})|})=LE^{+}(K_{|v(\Gamma_{G})|})=2(p ^{2n}(p^{n}-1)+p^{2n}-p^{n}-1)\). Therefore, by Theorems 5.4 and 5.5, we have \(E(\Gamma_{G})-|v(\Gamma_{G})|=p^{n}(p^{n}-1)^{2}>0\) and \(E(K_{|v(\Gamma_{G})|})-E(\Gamma_{G})=LE(K_{|v(\Gamma_{G})|})-LE(\Gamma_{G})= LE^{+}(K_{|v(\Gamma_{G})|})-LE^{+}(\Gamma_{G})=2(p^{n}(p^{n}-1)-1)>0\). Hence, the results follow. ### The semi-dihedral groups, \(SD_{8n}\) We consider \(SD_{8n}:=\langle a,b:a^{4n}=b^{2}=1,bab^{-1}=a^{2n-1}\rangle\), the semi-dihedral groups of order \(8n\) (where \(n>1\)). Results regarding different energies of non-commuting graph of \(SD_{8n}\) are given below. **Theorem 5.7** ([14, Theorem 4.1.1 (b) and Theorem 4.3.2]).: _Let \(G\) be isomorphic to \(SD_{8n}\)._ * _If_ \(n\) _is odd then_ \[E(\Gamma_{SD_{8n}})=4(n-1)+4\sqrt{(n-1)(5n-1)}\,\text{ and }\,LE(\Gamma_{SD_{8n}})=\frac{8n(4n^{2}-10n+7)}{2n-1}.\] * _If_ \(n\) _is even then_ \[E(\Gamma_{SD_{8n}})=2(2n-1)+2\sqrt{(2n-1)(10n-1)}\,\text{ and }\,LE(\Gamma_{SD_{8n}})=\frac{8n(8n^{2}-8n+3)}{4n-1}.\] **Theorem 5.8**.: _Let \(G\) be isomorphic to \(SD_{8n}\), where \(n\) is odd. Then_ \[\text{Q-spec}(\Gamma_{SD_{8n}})= \left\{(8n-8)^{3n},(4n)^{4n-5},(8n-12)^{n-1},\left(8n-6+2\sqrt{8n^ {2}-16n+9}\right)^{1},\right.\] \[\left.\left(8n-6-2\sqrt{8n^{2}-16n+9}\right)^{1}\right\}\] _and_ \[LE^{+}(\Gamma_{SD_{8n}})=\begin{cases}36+4\sqrt{33},&\text{if }n=3\\ \frac{32n^{3}-112n^{2}+96n-12}{2n-1}+4\sqrt{8n^{2}-16n+9},&\text{if }n\geq 5.\end{cases}\] Proof.: If \(G\cong SD_{8n}\) and \(n\) is odd then \(|v(\Gamma_{SD_{8n}})|=8n-4\) and \(\Gamma_{SD_{8n}}=K_{n,4,1,(4n-4)}\). 
Using Theorem 1.6(b), we have \[Q_{\Gamma_{SD_{8n}}}(x)= \prod_{i=1}^{2}(x-(8n-4)+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{2}(x- (8n-4)+2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-(8n-4)+2p_{i}}\right)\] \[= \left(x-8n+8\right)^{3n}(x-4n)^{4n-5}(x-8n+12)^{n}(x-4)\left(1- \frac{4n}{x-8n+12}-\frac{4n-4}{x-4}\right)\] \[= \left(x-(8n-8)\right)^{3n}(x-4n)^{4n-5}(x-(8n-12))^{n-1}(x^{2}-(1 6n-12)x+32n^{2}-32n).\] Thus, \(\text{Q-spec}(\Gamma_{SD_{8n}})=\left\{(8n-8)^{3n},(4n)^{4n-5},(8n-12)^{n-1}, \left(8n-6+2\sqrt{8n^{2}-16n}+9\right)^{1},\right.\) \[\left.\left(8n-6-2\sqrt{8n^{2}-16n+9}\right)^{1}\right\}\] Number of edges of \(\Gamma_{SD_{8n}}^{5}\) is \(8n^{2}-12n+10\). Thus, \(|e(\Gamma_{SD_{8n}})|=\frac{(8n-4)(8n-4-1)}{2}-(8n^{2}-12n+10)=24n(n-1)\). Now \[\left|8n-8-\frac{2|e(\Gamma_{SD_{8n}})|}{|v(\Gamma_{SD_{8n}})|} \right|=\left|\frac{4(n-1)(n-2)}{2n-1}\right|=\frac{4(n-1)(n-2)}{2n-1},\] \[\left|4n-\frac{2|e(\Gamma_{SD_{8n}})|}{|v(\Gamma_{SD_{8n}})|} \right|=\left|\frac{4n(2-n)}{2n-1}\right|=\frac{4n(n-2)}{2n-1},\] \[\left|8n-12-\frac{2|e(\Gamma_{SD_{8n}})|}{|v(\Gamma_{SD_{8n}})|} \right|=\left|\frac{4(n^{2}-5n+3)}{2n-1}\right|=\begin{cases}\frac{4(-n^{2}+5 n-3)}{2n-1},&\text{if }n\leq 3\\ \frac{4(n^{2}-5n+3)}{2n-1},&\text{if }n\geq 5,\end{cases}\] \[\left|8n-6+2\sqrt{8n^{2}-16n+9}-\frac{2|e(\Gamma_{SD_{8n}})|}{|v( \Gamma_{SD_{8n}})|}\right|= \left|2\sqrt{8n^{2}-16n+9}+(2n-3)+\frac{3}{2n-1}\right|\] \[= 2\sqrt{8n^{2}-16n+9}+(2n-3)+\frac{3}{2n-1}\] and \[\left|8n-6-2\sqrt{8n^{2}-16n+9}-\frac{2|e(\Gamma_{SD_{8n}})|}{|v( \Gamma_{SD_{8n}})|}\right|= \left|-2\sqrt{8n^{2}-16n+9}+(2n-3)+\frac{3}{2n-1}\right|\] \[= 2\sqrt{8n^{2}-16n+9}-(2n-3)-\frac{3}{2n-1}.\] Therefore, for \(n=3\) we have \(LE^{+}(\Gamma_{SD_{8n}})=36+4\sqrt{33}\). For \(n\geq 5\) we have \[LE^{+}(\Gamma_{SD_{8n}})= 3n\times\frac{4(n-1)(n-2)}{2n-1}+(4n-5)\times\frac{4n(n-2)}{2n-1 }+(n-1)\times\frac{4(n^{2}-5n+3)}{2n-1}+\] \[2\sqrt{8n^{2}-16n+9}-2n-3+\frac{3}{2n-1}+2\sqrt{8n^{2}-16n+9}+2n+3 -\frac{3}{2n-1}\] and the result follows on simplification. **Theorem 5.9**.: _Let \(G\) be isomorphic to \(SD_{8n}\), where \(n\) is even. Then_ \[\text{Q-spec}(\Gamma_{SD_{8n}})= \left\{(8n-4)^{2n},(4n)^{4n-3},(8n-6)^{2n-1},\left(8n-3+\sqrt{32n^ {2}-32n+9}\right)^{1},\right.\] \[\left.\left(8n-3-\sqrt{32n^{2}-32n+9}\right)^{1}\right\}\] _and_ \[LE^{+}(\Gamma_{SD_{8n}})=\begin{cases}\frac{134}{7}+2\sqrt{73},&\text{if }n=2\\ \frac{64n^{3}-128n^{2}+64n-6}{4n-1}+2\sqrt{32n^{2}-32n+9},&\text{if }n\geq 4. \end{cases}\] Proof.: If \(G\cong SD_{8n}\) and \(n\) is even then \(|v(\Gamma_{SD_{8n}})|=8n-2\) and \(\Gamma_{SD_{8n}}=K_{2n,2,1,(4n-2)}\). Using Theorem 1.6(b), we have \[Q_{\Gamma_{SD_{8n}}}(x)= \prod_{i=1}^{2}(x-(8n-2)+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{2}(x-( 8n-2)+2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-(8n-2)+2p_{i}}\right)\] \[= \left(x-8n+4\right)^{2n}(x-4n)^{4n-3}(x-8n+6)^{2n}(x-2)\left(1- \frac{4n}{x-8n+6}-\frac{4n-2}{x-2}\right)\] \[= \left(x-(8n-4)\right)^{2n}(x-4n)^{4n-3}(x-(8n-6))^{2n-1}(x^{2}-(1 6n-6)x+32n^{2}-16n).\] Thus, \(\text{Q-spec}(\Gamma_{SD_{8n}})=\left\{(8n-4)^{2n},(4n)^{4n-3},(8n-6)^{2n-1}, \left(8n-3+\sqrt{32n^{2}-32n+9}\right)^{1},\right.\) \[\left.\left(8n-3-\sqrt{32n^{2}-32n+9}\right)^{1}\right\}\] Number of edges of \(\Gamma_{SD_{8n}}^{c}\) is \(8n^{2}-8n+3\). Therefore, \(|e(\Gamma_{SD_{8n}})|=\frac{(8n-2)(8n-2-1)}{2}-(8n^{2}-8n+3)\)\(=12n(2n-1)\). 
Now \[\left.\left|8n-4-\frac{2|e(\Gamma_{SD_{8n}})|}{|v(\Gamma_{SD_{8n}})|}\right|= \left|\frac{(8n-4)(n-1)}{4n-1}\right|=\frac{(8n-4)(n-1)}{4n-1},\right.\] \[\left.\left|4n-\frac{2|e(\Gamma_{SD_{8n}})|}{|v(\Gamma_{SD_{8n}})|}\right|= \left|\frac{-8n(n-1)}{4n-1}\right|=\frac{8n(n-1)}{4n-1},\right.\] \[\left.\left|8n-6-\frac{2|e(\Gamma_{SD_{8n}})|}{|v(\Gamma_{SD_{8n}})|}\right|= \left|\frac{(8n^{2}-20n+6)}{4n-1}\right|=\begin{cases}\frac{2}{7},&\text{if }n=2\\ \frac{(8n^{2}-20n+6)}{4n-1},&\text{if }n\geq 4,\end{cases}\] \[\left.\left|8n-3+\sqrt{32n^{2}-32n+9}-\frac{2|e(\Gamma_{SD_{8n}})|}{|v( \Gamma_{SD_{8n}})|}\right|= \left|\sqrt{32n^{2}-32n+9}+2n-\frac{3}{2}+\frac{3}{8n-2}\right|\right.\] \[= \sqrt{32n^{2}-32n+9}+2n-\frac{3}{2}+\frac{3}{8n-2}\] and \[\left.\left|8n-3-\sqrt{32n^{2}-32n+9}-\frac{2|e(\Gamma_{SD_{8n}})|}{|v( \Gamma_{SD_{8n}})|}\right|= \left|-\sqrt{32n^{2}-32n+9}+2n-\frac{3}{2}+\frac{3}{8n-2}\right|\right.\] \[= \sqrt{32n^{2}-32n+9}-2n+\frac{3}{2}-\frac{3}{8n-2}.\] Therefore, for \(n=2\), we have \(LE^{+}(\Gamma_{SD_{8n}})=\frac{134}{7}+2\sqrt{73}\). For \(n\geq 4\), we have \[LE^{+}(\Gamma_{SD_{8n}})= 2n\times\frac{(8n-4)(n-1)}{4n-1}+(4n-3)\times\frac{8n(n-1)}{4n-1 }+(2n-1)\times\frac{(8n^{2}-20n+6)}{4n-1}+\] \[\sqrt{32n^{2}-32n+9}+2n-\frac{3}{2}+\frac{3}{8n-2}+\sqrt{32n^{2}- 32n+9}-2n+\frac{3}{2}-\frac{3}{8n-2}\] and the result follows on simplification. **Theorem 5.10**.: _If \(G\) is isomorphic to \(SD_{8n}\) then_ 1. \(E(\Gamma_{SD_{8n}})<LE^{+}(\Gamma_{SD_{8n}})<LE(\Gamma_{SD_{8n}})\)_._ 2. \(\Gamma_{SD_{8n}}\) _is non-hypoenergetic as well as non-hyperenergetic._ 3. \(\Gamma_{SD_{8n}}\) _is Q-hyperenergetic and L-hyperenergetic._ Proof.: (a) **Case 1:**\(n\) is odd For \(n=3\), using Theorems 5.7 and 5.8, we have \(E(\Gamma_{SD_{24}})=8+8\sqrt{7}\), \(LE(\Gamma_{SD_{24}})=\frac{312}{5}\) and \(LE^{+}(\Gamma_{SD_{24}})=36+4\sqrt{33}\). Clearly, \(E(\Gamma_{SD_{24}})<LE^{+}(\Gamma_{SD_{24}})<LE(\Gamma_{SD_{24}})\). For \(n\geq 5\), using Theorems 5.7 and 5.8, we have \[LE(\Gamma_{SD_{8n}})-LE^{+}(\Gamma_{SD_{8n}})=\frac{32n^{2}-40n+12}{2n-1}-4 \sqrt{8n^{2}-16n+9} \tag{5.1}\] and \[LE^{+}(\Gamma_{SD_{8n}})-E(\Gamma_{SD_{8n}})=\frac{32n^{3}-116n^{2}+102n-14}{2 n-1}+4\sqrt{8n^{2}-16n+9}-4\sqrt{5n^{2}-6n+1}. \tag{5.2}\] Since \(32n^{2}-40n+12>0\), \(4(2n-1)\sqrt{8n^{2}-16n+9}>0\) and \((32n^{2}-40n+12)^{2}-\left(4\sqrt{8n^{2}-16n+9}\right)^{2}(2n-1)^{2}=512n^{3}(n -2)+128n(5n-1)>0\) we have \[32n^{2}-40n+12-4(2n-1)\sqrt{8n^{2}-16n+9}>0.\] Therefore, by (5.1), \((2n-1)(LE(\Gamma_{SD_{8n}})-LE^{+}(\Gamma_{SD_{8n}}))>0\). Hence, \(LE(\Gamma_{SD_{8n}})>LE^{+}(\Gamma_{SD_{8n}})\). Again, \(\sqrt{8n^{2}-16n+9}>0,\sqrt{5n^{2}-6n+1}>0\) and \(\left(\sqrt{8n^{2}-16n+9}\right)^{2}-\left(\sqrt{5n^{2}-6n+1}\right)^{2}=n(3n-10) +8>0\). Therefore, \(\sqrt{8n^{2}-16n+9}-\sqrt{5n^{2}-6n+1}>0\). Since \(32n^{3}-116n^{2}+102n-14>0\) we have \(\frac{32n^{3}-116n^{2}+102n-14}{2n-1}+4\sqrt{8n^{2}-16n+9}-4\sqrt{5n^{2}-6n+1 }>0\). Therefore, by (5.2), \(LE^{+}(\Gamma_{SD_{8n}})>E(\Gamma_{SD_{8n}})\). Hence, \(E(\Gamma_{SD_{8n}})<LE^{+}(\Gamma_{SD_{8n}})<LE(\Gamma_{SD_{8n}})\). **Case 2:**\(n\) is even For \(n=2\), using Theorems 5.7 and 5.9, we have \(E(\Gamma_{SD_{16}})=6+2\sqrt{57}\), \(LE(\Gamma_{SD_{16}})=\frac{304}{7}\) and \(LE^{+}(\Gamma_{SD_{16}})=\frac{134}{7}+2\sqrt{73}\). Clearly, \(E(\Gamma_{SD_{16}})<LE^{+}(\Gamma_{SD_{16}})<LE(\Gamma_{SD_{16}})\). 
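Both cases can be spot-checked numerically; for instance, the following sketch (illustrative only, not part of the original text; `numpy` only) reproduces for \(n=3\) the values \(E(\Gamma_{SD_{24}})=8+8\sqrt{7}\), \(LE(\Gamma_{SD_{24}})=\frac{312}{5}\) and \(LE^{+}(\Gamma_{SD_{24}})=36+4\sqrt{33}\) that are used in the next proof.

```python
# Spot-check of Theorems 5.7 and 5.8 at n = 3 (illustrative sketch):
# Gamma_{SD_24} = K_{3.4,1.8} on 8n - 4 = 20 vertices.
import numpy as np

parts = [4, 4, 4, 8]
N = sum(parts)
A = np.ones((N, N)) - np.eye(N)
b = np.cumsum([0] + parts)
for s, e in zip(b[:-1], b[1:]):
    A[s:e, s:e] = 0.0

d = A.sum(axis=1)
avg = d.sum() / N                                             # 2|e|/|v| = 14.4
q = np.linalg.eigvalsh(np.diag(d) + A)
print(np.round(np.sort(q)[::-1], 4))                          # 18+2*sqrt(33), 16 (x9), 12 (x9), 18-2*sqrt(33)
print(np.abs(np.linalg.eigvalsh(A)).sum(), 8 + 8 * np.sqrt(7))            # E(Gamma_{SD_24})
print(np.abs(np.linalg.eigvalsh(np.diag(d) - A) - avg).sum(), 312 / 5)    # LE(Gamma_{SD_24})
print(np.abs(q - avg).sum(), 36 + 4 * np.sqrt(33))                        # LE+(Gamma_{SD_24})
```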
For \(n\geq 4\), using Theorems 5.7 and 5.9, we have \[LE(\Gamma_{SD_{8n}})-LE^{+}(\Gamma_{SD_{8n}})=\frac{64n^{2}-40n+6}{4n-1}-2\sqrt {32n^{2}-32n+9} \tag{5.3}\] and \[LE^{+}(\Gamma_{SD_{8n}})-E(\Gamma_{SD_{8n}})=\frac{16n^{2}(4n-9)+76n-8}{4n-1}+ 2\sqrt{32n^{2}-32n+9}-2\sqrt{20n^{2}-12n+1}. \tag{5.4}\] Since \(64n^{2}-40n+6>0\), \(2(4n-1)\sqrt{32n^{2}-32n+9}>0\) and \((64n^{2}-40n+6)^{2}-(2\sqrt{32n^{2}-32n+9})^{2}(4n-1)^{2}=2048n^{3}(n-1)+64n( 10n-1)>0\) we have \(64n^{2}-40n+6-2(4n-1)\sqrt{32n^{2}-32n+9}>0\). Therefore, by (5.3), \((4n-1)(LE^{+}(\Gamma_{SD_{8n}})-LE(\Gamma_{SD_{8n}}))>0\). Hence, \(LE(\Gamma_{SD_{8n}})>LE^{+}(\Gamma_{SD_{8n}})\). Again, \(\sqrt{32n^{2}-32n+9}>0,\sqrt{20n^{2}-12n+1}>0\) and \(\left(\sqrt{32n^{2}-32n+9}\right)^{2}-\left(\sqrt{20n^{2}-12n+1}\right)^{2}=4n (3n-5)+8>0\). Therefore, \(\sqrt{32n^{2}-32n+9}-\sqrt{20n^{2}-12n+1}>0\). Since \(16n^{2}(4n-9)+76n-8>0\) we have \(\frac{16n^{2}(4n-9)+76n-8}{2n-1}+2\sqrt{32n^{2}-32n+9}-2\sqrt{20n^{2}-12n+1}>0\). Therefore, by (5.4), \(LE^{+}(\Gamma_{SD_{8n}})>E(\Gamma_{SD_{8n}})\). Hence, \(E(\Gamma_{SD_{8n}})<LE^{+}(\Gamma_{SD_{8n}})<LE(\Gamma_{SD_{8n}})\). (b) **Case 1:**\(n\) is odd Here \(|v(\Gamma_{SD_{8n}})|=8n-4\) and \(E(K_{|v(\Gamma_{SD_{8n}})|})=LE(K_{|v(\Gamma_{SD_{8n}})|})=LE^{+}(K_{|v(\Gamma _{SD_{8n}})|})=16n-10\). Using Theorem 5.7, we have \[E(\Gamma_{SD_{8n}})-|v(\Gamma_{SD_{8n}})|=4\sqrt{(n-1)(5n-1)}-4n \tag{5.5}\] and \[E(K_{|v(\Gamma_{SD_{8n}})|})-E(\Gamma_{SD_{8n}})=12n-6-4\sqrt{(n-1)(5n-1)}. \tag{5.6}\] Since \(4\sqrt{(n-1)(5n-1)}>0\), \(4n>0\) and \(\left(4\sqrt{(n-1)(5n-1)}\right)^{2}-(4n)^{2}=16(4n^{2}-6n+1)>0\) we have \(4\sqrt{(n-1)(5n-1)}-4n>0\). Therefore, by (5.5), \(E(\Gamma_{SD_{8n}})>|v(\Gamma_{SD_{8n}})|\). Again, \(4\sqrt{(n-1)(5n-1)}>0\), \(12n-6>0\) and \((12n-6)^{2}-\left(4\sqrt{(n-1)(5n-1)}\right)^{2}=4(16n^{2}-12n+5)>0\) and so \(12n-6-4\sqrt{(n-1)(5n-1)}>0\). Therefore, by (5.6), \(E(K_{|v(\Gamma_{SD_{8n}})|})>E(\Gamma_{SD_{8n}})\). **Case 2:**\(n\) is even Here \(|v(\Gamma_{SD_{8n}})|=8n-2\) and \(E(K_{|v(\Gamma_{SD_{8n}})|})=LE(K_{|v(\Gamma_{SD_{8n}})|})=LE^{+}(K_{|v(\Gamma_{SD _{8n}})|})=16n-16\). Using Theorem 5.7, we have \[E(\Gamma_{SD_{8n}})-|v(\Gamma_{SD_{8n}})|=2\left(\sqrt{(2n-1)(10n-1)}-2n\right) \tag{5.7}\] and \[E(K_{|v(\Gamma_{SD_{8n}})|})-E(\Gamma_{SD_{8n}})=2\left(3(2n-1)+1-\sqrt{(2n-1)( 10n-1)}\right). \tag{5.8}\] Since \(\sqrt{(2n-1)(10n-1)}>0\), \(2n>0\) and \(\left(\sqrt{(2n-1)(10n-1)}\right)^{2}-(2n)^{2}=4n(4n-3)+1>0\) we have \(\sqrt{(2n-1)(10n-1)}-2n>0\). Therefore, by (5.7), \(E(\Gamma_{SD_{8n}})>|v(\Gamma_{SD_{8n}})|\). Again, \(\sqrt{(2n-1)(10n-1)}>0\), \(3(2n-1)+1>0\) and \((3(2n-1)+1)^{2}-\left(\sqrt{(2n-1)(10n-1)}\right)^{2}=4n(4n-3)+3>0\) and so \(3(2n-1)+1-\sqrt{(2n-1)(10n-1)}>0\). Therefore, by (5.8), \(E(K_{|v(\Gamma_{SD_{8n}})|})>E(\Gamma_{SD_{8n}})\). (c) **Case 1:**\(n\) is odd Using Theorem 5.8, for \(n=3\) we have \(LE^{+}(\Gamma(SD_{24}))=36+4\sqrt{33}\) and \(LE^{+}(K_{|v(\Gamma(SD_{44}))|})=38\). Also, for \(n\geq 5\) we have \[LE^{+}(\Gamma_{SD_{8n}})-LE^{+}(K_{|v(\Gamma_{SD_{8n}})|})=\frac{2(8n^{2}(2n-9)+66 n-11)}{2n-1}+4\sqrt{8n^{2}-14n+9}>0.\] Therefore, \(LE^{+}(\Gamma_{SD_{8n}})>LE^{+}(K_{|v(\Gamma_{SD_{8n}})|})\) which implies \(\Gamma_{SD_{8n}}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{SD_{8n}}\) is L-hyperenergetic. **Case 2:**\(n\) is even For \(n=2\) we have \(LE^{+}(K_{|v(\Gamma_{SD_{16}})|})=16\) and using Theorem 5.9, \(LE^{+}(\Gamma_{SD_{16}})=\frac{134}{7}+2\sqrt{73}\). 
Therefore, \(\Gamma_{SD_{16}}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{SD_{16}}\) is L-hyperenergetic. For \(n\geq 4\), using Theorem 5.9, we have \[LE^{+}(\Gamma_{SD_{8n}})-LE^{+}(K_{|v(\Gamma_{SD_{8n}})|})=\frac{64n^{2}(n-3)+ 144n-22}{4n-1}+2\sqrt{32n^{2}-32n+9}>0.\] Therefore, \(LE^{+}(\Gamma_{SD_{8n}})>LE^{+}(K_{|v(\Gamma_{SD_{8n}})|})\) which implies \(\Gamma_{SD_{8n}}\) is Q-hyperenergetic and consequently part (a) implies \(\Gamma_{SD_{8n}}\) is L-hyperenergetic. In Theorem 5.10, we compare \(E(\Gamma_{SD_{8n}})\), \(LE(\Gamma_{SD_{8n}})\) and \(LE^{+}(\Gamma_{SD_{8n}})\). However, in the following figures, we show how close are they. ### The groups, \(V_{8n}\) We consider \(V_{8n}:=\langle a,b:a^{4n}=b^{4}=1,b^{-1}ab^{-1}=bab=a^{-1}\rangle\), the groups of order \(8n\) (where \(n>1\)). We compute Signless Laplacian spectrum, Signless Laplacian energy, Laplacian spectrum, Laplacian energy and spectrum and energy of \(\Gamma_{V_{8n}}\). If \(n\) is odd then energy and Laplacian energy of \(\Gamma_{V_{8n}}\) are as given in the following theorem. **Theorem 5.11** ([14, Theorem 4.1.1 (a) and Theorem 4.3.1]).: _Let \(G\) be isomorphic to \(V_{8n}\), where \(n\) is odd. Then_ \[E(\Gamma_{V_{8n}})=2(2n-1)+2\sqrt{(2n-1)(10n-1)}\;\;\text{and}\;\;LE(\Gamma_{V _{8n}})=\frac{8n(8n^{2}-8n+3)}{4n-1}.\] **Theorem 5.12**.: _Let \(G\) be isomorphic to \(V_{8n}\), where \(n\) is odd. Then_ \[\text{\rm Q-spec}(\Gamma_{V_{8n}})= \left\{(8n-4)^{2n},(4n)^{4n-3},(8n-6)^{2n-1},\left(8n-3+\sqrt{32n ^{2}-32n+9}\right)^{1},\right.\] \[\left.\left(8n-3-\sqrt{32n^{2}-32n+9}\right)^{1}\right\}\] _and \(LE^{+}(\Gamma_{V_{8n}})=\frac{64n^{3}-128n^{2}+64n-6}{4n-1}+2\sqrt{32n^{2}-32n +9}\)._ Proof.: If \(G\cong V_{8n}\) and \(n\) is odd then \(|v(\Gamma_{V_{8n}})|=8n-2\) and \(\Gamma_{V_{8n}}=K_{2n,2,1,(4n-2)}\). Using Theorem 1.6(b), we have \[Q_{\Gamma_{V_{8n}}}(x)= \prod_{i=1}^{2}(x-(8n-2)+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{2}(x- (8n-2)+2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-(8n-2)+2p_{i}}\right)\] \[= \left(x-8n+4\right)^{2n}(x-4n)^{4n-3}(x-8n+6)^{2n}(x-2)\left(1- \frac{4n}{x-8n+6}-\frac{4n-2}{x-2}\right)\] \[= \left(x-(8n-4)\right)^{2n}(x-4n)^{4n-3}(x-(8n-6))^{2n-1}(x^{2}-( 16n-6)x+32n^{2}-16n).\] Thus, \(\text{Q-spec}(\Gamma_{V_{8n}})=\left\{(8n-4)^{2n},(4n)^{4n-3},(8n-6)^{2n-1},\left(8 n-3+\sqrt{32n^{2}-32n+9}\right)^{1},\right.\) \[\left.\left(8n-3-\sqrt{32n^{2}-32n+9}\right)^{1}\right\}\!.\] Number of edges of \(\Gamma_{V_{8n}}^{\Gamma}\) is \(8n^{2}-8n+3\). Therefore, \(|e(\Gamma_{V_{8n}})|=\frac{(8n-2)(8n-2-1)}{2}-(8n^{2}-8n+3)=12n(2n-1)\). 
Now \[\left|8n-4-\frac{2|e(\Gamma_{V_{8n}})|}{|v(\Gamma_{V_{8n}})|}\right|=\left| \frac{(8n-4)(n-1)}{4n-1}\right|=\frac{(8n-4)(n-1)}{4n-1},\] \[\left|4n-\frac{2|e(\Gamma_{V_{8n}})|}{|v(\Gamma_{V_{8n}})|}\right|=\left|\frac {-8n(n-1)}{4n-1}\right|=\frac{8n(n-1)}{4n-1},\] \[\left|8n-6-\frac{2|e(\Gamma_{V_{8n}})|}{|v(\Gamma_{V_{8n}})|}\right|=\left| \frac{(8n^{2}-20n+6)}{4n-1}\right|=\frac{8n^{2}-20n+6}{4n-1},\] \[\left|8n-3+\sqrt{32n^{2}-32n+9}-\frac{2|e(\Gamma_{SD_{8n}})|}{|v( \Gamma_{SD_{8n}})|}\right|= \left|\sqrt{32n^{2}-32n+9}+2n-\frac{3}{2}+\frac{3}{8n-2}\right|\] \[= \sqrt{32n^{2}-32n+9}+2n-\frac{3}{2}+\frac{3}{8n-2}\] and \[\left|8n-3-\sqrt{32n^{2}-32n+9}-\frac{2|e(\Gamma_{SD_{8n}})|}{|v( \Gamma_{SD_{8n}})|}\right|= \left|-\sqrt{32n^{2}-32n+9}+2n-\frac{3}{2}+\frac{3}{8n-2}\right|\] \[= \sqrt{32n^{2}-32n+9}-2n+\frac{3}{2}-\frac{3}{8n-2}.\] Therefore, for \(n\geq 3\), we have \[LE^{+}(\Gamma_{V_{8n}})= 2n\times\frac{(8n-4)(n-1)}{4n-1}+(4n-3)\times\frac{8n(n-1)}{4n-1 }+(2n-1)\times\frac{8n^{2}-20n+6}{4n-1}+\] \[\sqrt{32n^{2}-32n+9}+2n-\frac{3}{2}+\frac{3}{8n-2}+\sqrt{32n^{2}- 32n+9}-2n+\frac{3}{2}-\frac{3}{8n-2}\] and the result follows on simplification. **Theorem 5.13**.: _Let \(G\) be isomorphic to \(V_{8n}\), where \(n\) is even. Then_ \[\text{Q-spec}(\Gamma_{V_{8n}})\] \[\qquad= \left\{(8n-8)^{3n},(4n)^{4n-5},(8n-12)^{n-1},\left(8n-6+2\sqrt{8 n^{2}-16n+9}\right)^{1},\left(8n-6-2\sqrt{8n^{2}-16n+9}\right)^{1}\right\},\] \[\text{L-spec}(\Gamma_{V_{8n}})=\left\{0,(4n)^{4n-5},(8n-8)^{3n},(8 n-4)^{n}\right\}\text{ and }\] \[\text{Spec}(\Gamma_{V_{8n}})=\left\{0^{7n-5},(-4)^{n-1},\left(2(n -1)+2\sqrt{(n-1)(5n-1)}\right)^{1},\left(2(n-1)+2\sqrt{(n-1)(5n-1)}\right)^{1} \right\}.\] _Further_ \[LE^{+}(\Gamma_{V_{8n}})=\begin{cases}\frac{24n^{3}-64n^{2}+32n+12}{2n-1}+4 \sqrt{8n^{2}-16n+9},&\text{if $n\leq 4$}\\ \frac{32n^{3}-112n^{2}+96n-12}{2n-1}+4\sqrt{8n^{2}-16n+9},&\text{if $n\geq 6$,} \end{cases}\] \[LE(\Gamma_{V_{8n}})=\frac{8n(4n^{2}-10n+7)}{2n-1}\text{ and }\,E(\Gamma_{V_{8n}})=4(n-1)+4 \sqrt{(n-1)(5n-1)}.\] Proof.: If \(G\cong V_{8n}\) and \(n\) is even then \(|v(\Gamma_{V_{8n}})|=8n-4\) and \(\Gamma_{V_{8n}}=K_{n.4,1.(4n-4)}\). Using Theorem 1.6(b), we have \[Q_{\Gamma_{V_{8n}}}(x)= \prod_{i=1}^{2}(x-(8n-4)+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{2}(x -(8n-4)+2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-(8n-4)+2p_{i}}\right)\] \[= \left(x-8n+8\right)^{3n}(x-4n)^{4n-5}(x-8n+12)^{n}(x-4)\left(1- \frac{4n}{x-8n+12}-\frac{4n-4}{x-4}\right)\] \[= \left(x-(8n-8)\right)^{3n}(x-4n)^{4n-5}(x-(8n-12))^{n-1}(x^{2}-( 16n-12)x+32n^{2}-32n).\] Thus, \(\text{Q-spec}(\Gamma_{V_{8n}})=\left\{(8n-8)^{3n},(4n)^{4n-5},(8n-12)^{n-1},\left(8 n-6+2\sqrt{8n^{2}-16n+9}\right)^{1},\right.\) \[\left.\left(8n-6-2\sqrt{8n^{2}-16n+9}\right)^{1}\right\}\text{.}\] Number of edges of \(\Gamma_{V_{8n}}^{\text{c}}\) is \(8n^{2}-12n+10\) and so \(\left.\left|e(\Gamma_{V_{8n}})\right|=\frac{(8n-4)(8n-4-1)}{2}-(8n^{2}-12n+10)= 24n(n-1)\right.\). 
Now \[\left.\left|8n-8-\frac{2\left|e(\Gamma_{V_{8n}})\right|}{\left|v(\Gamma_{V_{8n }})\right|}\right|=\left|\frac{4(n-1)(n-2)}{2n-1}\right|=\frac{4(n-1)(n-2)}{2 n-1},\right.\] \[\left.\left|4n-\frac{2\left|e(\Gamma_{V_{8n}})\right|}{\left|v(\Gamma_{V_{8n }})\right|}\right|=\left|\frac{4(n-2)}{2n-1}\right|=\frac{4n(n-2)}{2n-1},\] \[\left.\left|8n-12-\frac{2\left|e(\Gamma_{V_{8n}})\right|}{\left|v(\Gamma_{V_{8 n}})\right|}\right|=\left|\frac{4(n^{2}-5n+3)}{2n-1}\right|=\frac{4(n^{2}-5n +3)}{2n-1},\text{\ \ if }n\leq 4\right.\] \[\left.\left|8n-6+2\sqrt{8n^{2}-16n+9}-\frac{2\left|e(\Gamma_{V_{8n}})\right|} {\left|v(\Gamma_{V_{8n}})\right|}\right|= \left|2\sqrt{8n^{2}-16n+9}+(2n-3)+\frac{3}{2n-1}\right|\] \[= 2\sqrt{8n^{2}-16n+9}+(2n-3)+\frac{3}{2n-1}\] and \[\left.\left|8n-6-2\sqrt{8n^{2}-16n+9}-\frac{2\left|e(\Gamma_{V_{8 n}})\right|}{\left|v(\Gamma_{V_{8n}})\right|}\right|= \left|-2\sqrt{8n^{2}-16n+9}+(2n-3)+\frac{3}{2n-1}\right|\] \[= 2\sqrt{8n^{2}-16n+9}-(2n-3)-\frac{3}{2n-1}.\] Therefore, for \(n\leq 4\), we have \[LE^{+}(\Gamma_{V_{8n}})= 3n\times\frac{4(n-1)(n-2)}{2n-1}+(4n-5)\times\frac{4n(n-2)}{2n- 1}+(n-1)\times\frac{4(-n^{2}+5n-3)}{2n-1}+\] \[2\sqrt{8n^{2}-16n+9}-2n-3+\frac{3}{2n-1}+2\sqrt{8n^{2}-16n+9}+2n +3-\frac{3}{2n-1}.\] For \(n\geq 6\), we have \[LE^{+}(\Gamma_{V_{8n}})= 3n\times\frac{4(n-1)(n-2)}{2n-1}+(4n-5)\times\frac{4n(n-2)}{2n- 1}+(n-1)\times\frac{4(n^{2}-5n+3)}{2n-1}+\] \[2\sqrt{8n^{2}-16n+9}-2n-3+\frac{3}{2n-1}+2\sqrt{8n^{2}-16n+9}+2n +3-\frac{3}{2n-1}.\] Thus we get the required expressions for \(LE^{+}(\Gamma_{V_{8n}})\) on simplification. Since \(\Gamma_{V_{8n}}^{\text{c}}=nK_{4}\cup K_{4n-4}\), using Theorem 1.5, we have \[\text{L-spec}(\Gamma_{V_{8n}})= \left\{(0)^{1},\left(\sum_{i=1}^{2}l_{i}m_{i}-m_{2}\right)^{l_{2} (m_{2}-1)},\left(\sum_{i=1}^{2}l_{i}m_{i}-m_{1}\right)^{l_{1}(m_{1}-1)},\left( \sum_{i=1}^{2}l_{i}m_{i}\right)^{\sum_{i=1}^{2}l_{i}-1}\right\}\] \[= \{(0)^{1},(n.4+4n-4-4n+4)^{1(4n-4-1)},(n.4+4n-4-4)^{n(4-1)},(n.4+4n -4)^{n+1-1}\}\] \[= \{(0)^{1},(4n)^{4n-5},(8n-8)^{3n},(8n-4)^{n}\}.\] Now \[\left|0-\frac{2\left|e(\Gamma_{V_{8n}})\right|}{\left|v(\Gamma_{V_{8n}}) \right|}\right|=\left|\frac{-12n(n-1)}{2n-1}\right|=\frac{12n(n-1)}{2n-1}, \text{\ \ }\left|4n-\frac{2\left|e(\Gamma_{V_{8n}})\right|}{\left|v(\Gamma_{V_{8n}}) \right|}\right|=\left|\frac{4n(2-n)}{2n-1}\right|=\frac{4n(n-2)}{2n-1},\] \[\left|8n-8-\frac{2\left|e(\Gamma_{V_{8n}})\right|}{\left|v(\Gamma_{V_{8n}}) \right|}\right|=\left|\frac{4(n-1)(n-2)}{2n-1}\right|=\frac{4(n-1)(n-2)}{2n-1}\] and \[\left|8n-4-\frac{2\left|e(\Gamma_{V_{8n}})\right|}{\left|v(\Gamma_{V_{8n}}) \right|}\right|=\left|\frac{4(n^{2}-n+1)}{2n-1}\right|=\frac{4(n^{2}-n+1)}{2n-1}.\] Therefore, \(LE(\Gamma_{V_{\rm S_{n}}})=1\times\frac{12n(n-1)}{2n-1}+(4n-5)\times\frac{4n(n-2 )}{2n-1}+3n\times\frac{4(n-1)(n-2)}{2n-1}+n\times\frac{4(n^{2}-n+1)}{2n-1}\) and we get the required expression for \(LE(\Gamma_{V_{\rm S_{n}}})\) on simplification. Since \(\Gamma_{V_{\rm S_{n}}}\) is a complete \((n+1)\)-partite graph with \(8n-4\) vertices and \(\Gamma_{V_{\rm S_{n}}}=K_{n,4,1,(4n-4)}\). Therefore, using Theorem 1.6(a), the characteristic polynomial of \(\Gamma_{V_{\rm S_{n}}}\) is \[P_{\Gamma_{V_{\rm S_{n}}}}(x) =x^{(8n-4)-(n+1)}(x+4)^{n-1}(x+4n-4)^{1-1}(x^{2}-(4n-4)x-16n^{2}-16n)\] \[=x^{7n-5}(x+4)^{n-1}(x^{2}-(4n-4)x-16n^{2}-16n).\] Thus, \({\rm Spec}(\Gamma_{V_{\rm S_{n}}})=\bigg{\{}(0)^{7n-5},(-4)^{n-1},\Big{(}2(n- 1)+2\sqrt{(n-1)(5n-1)}\Big{)}^{1},\Big{(}2(n-1)+2\sqrt{(n-1)(5n-1)}\Big{)}^{1} \bigg{\}}\). 
Therefore, \(E(\Gamma_{V_{\rm S_{n}}})=(7n-5)\times|0|+(n-1)\times|-4|+|2(n-1)+2\sqrt{(n-1 )(5n-1)}|+|2(n-1)-2\sqrt{(n-1)(5n-1)}|\) and we get the required expression for \(E(\Gamma_{V_{\rm S_{n}}})\) on simplification. **Theorem 5.14**.: _If \(G\) is isomorphic to \(V_{8n}\) then_ * \(E(\Gamma_{V_{\rm S_{n}}})\leq LE^{+}(\Gamma_{V_{\rm S_{n}}})\leq LE(\Gamma_{V_ {\rm S_{n}}})\)_; equality holds if and only if_ \(G\cong V_{16}\)_._ * \(\Gamma_{V_{\rm S_{n}}}\) _is non-hypoenergetic as well as non-hyperenergetic._ * \(\Gamma_{V_{\rm S_{n}}}\) _is not L-hyperenergetic and not Q-hyperenergetic. If_ \(n\neq 2\) _then_ \(\Gamma_{V_{\rm S_{n}}}\) _is Q-hyperenergetic and L-hyperenergetic._ Proof.: (a)**Case 1:**\(n\) is odd Using Theorems 5.11 and 5.12, we have \[LE(\Gamma_{V_{\rm S_{n}}})-LE^{+}(\Gamma_{V_{\rm S_{n}}})=\frac{64n^{2}-40n+6} {4n-1}-2\sqrt{32n^{2}-32n+9} \tag{5.9}\] and \[LE^{+}(\Gamma_{V_{\rm S_{n}}})-E(\Gamma_{V_{\rm S_{n}}})=\frac{16n^{2}(4n-9)+ 76n-8}{4n-1}+2\sqrt{32n^{2}-32n+9}-2\sqrt{20n^{2}-12n+1}. \tag{5.10}\] Since \(64n^{2}-40n+6>0\), \(2(4n-1)\sqrt{32n^{2}-32n+9}>0\) and \((64n^{2}-40n+6)^{2}-(2\sqrt{32n^{2}-32n+9})^{2}(4n-1)^{2}=2048n^{3}(n-1)+64n( 10n-1)>0\) we have \(64n^{2}-40n+6-2(4n-1)\sqrt{32n^{2}-32n+9}>0\). Therefore, by (5.9), \((4n-1)(LE^{+}(\Gamma_{V_{\rm S_{n}}})-LE(\Gamma_{V_{\rm S_{n}}}))>0\). Hence, \(LE(\Gamma_{V_{\rm S_{n}}})>LE^{+}(\Gamma_{V_{\rm S_{n}}})\). Again, \(\sqrt{32n^{2}-32n+9}>0,\sqrt{20n^{2}-12n+1}>0\) and \(\big{(}\sqrt{32n^{2}-32n+9}\big{)}^{2}-\big{(}\sqrt{20n^{2}-12n+1}\big{)}^{2}=4 n(3n-5)+8>0\). Therefore, \(\sqrt{32n^{2}-32n+9}-\sqrt{20n^{2}-12n+1}>0\). Since \(16n^{2}(4n-9)+76n-8>0\) we have \(\frac{16n^{2}(4n-9)+76n-8}{2n-1}+2\sqrt{32n^{2}-32n+9}-2\sqrt{20n^{2}-12n+1}>0\). Therefore, by (5.10), \(LE^{+}(\Gamma_{V_{\rm S_{n}}})>E(\Gamma_{V_{\rm S_{n}}})\). Hence, \(E(\Gamma_{V_{\rm S_{n}}})<LE^{+}(\Gamma_{V_{\rm S_{n}}})<LE(\Gamma_{V_{\rm S_{n }}})\). **Case 2:**\(n\) is even Using Theorems 5.11 and 5.13, for \(n\leq 4\), we have \[LE(\Gamma_{V_{\rm S_{n}}})-LE^{+}(\Gamma_{V_{\rm S_{n}}})=\frac{8n^{3}-16n^{2}+ 24n-12}{2n-1}-4\sqrt{8n^{2}-16n+9} \tag{5.11}\] and \[LE^{+}(\Gamma_{V_{\rm S_{n}}})-E(\Gamma_{V_{\rm S_{n}}})=\frac{4(n-2)(6n^{2}-6 n-1)}{2n-1}+4\sqrt{8n^{2}-16n+9}-4\sqrt{5n^{2}-6n+1}. \tag{5.12}\] Since \(8n^{3}-16n^{2}+24n-12>0\), \(4(2n-1)\sqrt{8n^{2}-16n+9}>0\) and \[(8n^{3}-16n^{2}+24n-12)^{2}-\big{(}4\sqrt{8n^{2}-16n+9}\big{)}^{2}\,(2n-1)^{2}=64 n(n-2)^{2}(n-1)(n^{2}+n-1)\geq 0\] (equality holds if and only if \(n=2\)) we have \(8n^{3}-16n^{2}+24n-12-4(2n-1)\sqrt{8n^{2}-16n+9}\geq 0\). Therefore, by (5.11), \((2n-1)(LE(\Gamma_{V_{\rm S_{n}}})-LE^{+}(\Gamma_{V_{\rm S_{n}}}))\geq 0\). Hence, \(LE(\Gamma_{V_{\rm S_{n}}})\geq LE^{+}(\Gamma_{V_{\rm S_{n}}})\) equality holds if and only if \(G\cong V_{16}\). Again, \(\sqrt{8n^{2}-16n+9}>0,\sqrt{5n^{2}-6n+1}>0\) and \(\big{(}\sqrt{8n^{2}-16n+9}\big{)}^{2}-\big{(}\sqrt{5n^{2}-6n+1}\big{)}^{2}=n(3n-10)+8 \geq 0\) (equality holds if and only if \(n=2\)). Therefore, \(\sqrt{8n^{2}-16n+9}-\sqrt{5n^{2}-6n+1}\geq 0\). Since \(4(n-2)(6n^{2}-6n-1)\geq 0\) we have \(\frac{4(n-2)(6n^{2}-6n-1)}{2n-1}+4\sqrt{8n^{2}-16n+9}-4\sqrt{5n^{2}-6n+1}\geq 0\). Therefore, by (5.12), \(LE^{+}(\Gamma_{V_{\rm S_{n}}})\geq E(\Gamma_{V_{\rm S_{n}}})\). Hence, \(E(\Gamma_{V_{\rm S_{n}}})\leq LE^{+}(\Gamma_{V_{\rm S_{n}}})\leq LE(\Gamma_{V_ {\rm S_{n}}})\) equality holds if and only if \(G\cong V_{16}\). 
Using Theorems 5.11 and 5.13, for \(n\geq 6\), we have \[LE(\Gamma_{V_{8n}})-LE^{+}(\Gamma_{V_{8n}})=\frac{32n^{2}-40n+12}{2n-1}-4\sqrt{8n ^{2}-16n+9} \tag{5.13}\] and \[LE^{+}(\Gamma_{V_{8n}})-E(\Gamma_{V_{8n}})=\frac{32n^{3}-116n^{2}+102n-14}{2n-1} +4\sqrt{8n^{2}-16n+9}-4\sqrt{5n^{2}-6n+1}. \tag{5.14}\] Since \(32n^{2}-40n+12>0\), \(4(2n-1)\sqrt{8n^{2}-16n+9}>0\) and \((32n^{2}-40n+12)^{2}-\left(4\sqrt{8n^{2}-16n+9}\right)^{2}(2n-1)^{2}=512n^{3}( n-2)+128n(5n-1)>0\) we have \(32n^{2}-40n+12-4(2n-1)\sqrt{8n^{2}-16n+9}>0\). Therefore, by (5.13), \((2n-1)(LE(\Gamma_{V_{8n}})-LE^{+}(\Gamma_{V_{8n}}))>0\). Hence, \(LE(\Gamma_{V_{8n}})>LE^{+}(\Gamma_{V_{8n}})\). Again, \(\sqrt{8n^{2}-16n+9}>0,\sqrt{5n^{2}-6n+1}>0\) and \(\left(\sqrt{8n^{2}-16n+9}\right)^{2}-\left(\sqrt{5n^{2}-6n+1}\right)^{2}=n(3n-1 0)+8>0\). Therefore, \(\sqrt{8n^{2}-16n+9}-\sqrt{5n^{2}-6n+1}>0\). Since \(32n^{3}-116n^{2}+102n-14>0\) we have \(\frac{32n^{3}-116n^{2}+102n-14}{2n-1}+4\sqrt{8n^{2}-16n+9}-4\sqrt{5n^{2}-6n+1}>0\). Therefore, by (5.14), \(LE^{+}(\Gamma_{V_{8n}})>E(\Gamma_{V_{8n}})\). Hence, \(E(\Gamma_{V_{8n}})<LE^{+}(\Gamma_{V_{8n}})<LE(\Gamma_{V_{8n}})\). (b) **Case 1:**\(n\) is odd Here \(|v(\Gamma_{V_{8n}})|=8n-2\) and \(E(K_{|v(\Gamma_{V_{8n}})|})=LE(K_{|v(\Gamma_{V_{8n}})|})=LE^{+}(K_{|v(\Gamma_{ V_{8n}})|})=16n-16\). Using Theorem 5.11, we have \[E(\Gamma_{V_{8n}})-|v(\Gamma_{V_{8n}})|=2\left(\sqrt{(2n-1)(10n-1)}-2n\right) \tag{5.15}\] and \[E(K_{|v(\Gamma_{V_{8n}})|})-E(\Gamma_{V_{8n}})=2\left(3(2n-1)+1-\sqrt{(2n-1)( 10n-1)}\right). \tag{5.16}\] Since \(\sqrt{(2n-1)(10n-1)}>0\), \(2n>0\) and \(\left(\sqrt{(2n-1)(10n-1)}\right)^{2}-(2n)^{2}=4n(4n-3)+1>0\) we have \(\sqrt{(2n-1)(10n-1)}-2n>0\). Therefore, by (5.15), \(E(K_{|v(\Gamma_{V_{8n}})|})=\)\(16n-10\). Using Theorem 5.11, we have \[E(\Gamma_{V_{8n}})-|v(\Gamma_{V_{8n}})|=4\sqrt{(n-1)(5n-1)}-4n \tag{5.17}\] and \[E(K_{|v(\Gamma_{V_{8n}})|})-E(\Gamma_{V_{8n}})=12n-6-4\sqrt{(n-1)(5n-1)}. \tag{5.18}\] Since \(4\sqrt{(n-1)(5n-1)}>0\), \(4n>0\) and \(\left(4\sqrt{(n-1)(5n-1)}\right)^{2}-(4n)^{2}=16(4n^{2}-6n+1)>0\) we have \(4\sqrt{(n-1)(5n-1)}-4n>0\). Therefore, by (5.17), \(E(\Gamma_{V_{8n}})>|v(\Gamma_{V_{8n}})|\). Again, \(4\sqrt{(n-1)(5n-1)}>0\), \(12n-6>0\) and \((12n-6)^{2}-\left(4\sqrt{(n-1)(5n-1)}\right)^{2}=4(16n^{2}-12n+5)>0\) and so \(12n-6-4\sqrt{(n-1)(5n-1)}>0\). Therefore, by (5.18), \(E(K_{|v(\Gamma_{V_{8n}})|})>E(\Gamma_{V_{8n}})\). (c) **Case 1:**\(n\) is odd Using Theorem 5.12 we have \(LE^{+}(\Gamma_{V_{8n}})-LE^{+}(K_{|v(\Gamma_{V_{8n}})|})=\frac{64n^{2}(n-3)+1 44n-22}{4n-1}+2\sqrt{32n^{2}-32n+9}>0\). Therefore, \(LE^{+}(\Gamma_{V_{8n}})>LE^{+}(K_{|v(\Gamma_{V_{8n}})|})\) which implies \(\Gamma_{V_{8n}}\) is Q-hyperemergetic and consequently part (a) implies \(\Gamma_{V_{8n}}\) is L-hyperemergetic. **Case 2:**\(n\) is even Using Theorem 5.13, for \(n=2\), we have \(LE(\Gamma_{V_{8n}})=16\) and \(LE(K_{|v(\Gamma_{V_{8n}})|})=22\). Clearly, \(LE(\Gamma_{V_{8n}})<LE(K_{|v(\Gamma_{V_{8n}})|})\). For \(n\leq 4\), \[LE^{+}(\Gamma_{V_{8n}})-LE^{+}(K_{|v(\Gamma_{V_{8n}})|})=\frac{4(6n^{2}(n-4)+20 n-1)}{2n-1}+4\sqrt{8n^{2}-16n+9}>0\] for all \(n\neq 2\). Therefore, for all \(n\neq 2\), \(LE^{+}(\Gamma_{V_{8n}})>LE^{+}(K_{|v(\Gamma_{V_{8n}})|})\) which implies \(\Gamma_{V_{8n}}\) is Q-hyperemergetic and consequently part (a) implies \(\Gamma_{V_{8n}}\) is L-hyperemergetic. For \(n\geq 6\), \[LE^{+}(\Gamma_{V_{8n}})-LE^{+}(K_{|v(\Gamma_{V_{8n}})|})=\frac{2(8n^{2}(2n-9)+66 n-11)}{2n-1}+4\sqrt{8n^{2}-16n+9}>0.\] Hence, the result holds. 
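A small numerical check of Theorems 5.11 and 5.12 at \(n=3\) (an illustrative sketch, not part of the original text; `numpy` only):

```python
# Spot-check at n = 3 (odd): Gamma_{V_24} = K_{2n.2,1.(4n-2)} on 8n - 2 = 22 vertices.
import numpy as np

n = 3
parts = [2] * (2 * n) + [4 * n - 2]
N = sum(parts)
A = np.ones((N, N)) - np.eye(N)
b = np.cumsum([0] + parts)
for s, e in zip(b[:-1], b[1:]):
    A[s:e, s:e] = 0.0

d = A.sum(axis=1)
avg = d.sum() / N
q = np.linalg.eigvalsh(np.diag(d) + A)
print(np.round(np.sort(q)[::-1], 4))     # 21+sqrt(201), 20 (x6), 18 (x5), 12 (x9), 21-sqrt(201)
print(np.abs(np.linalg.eigvalsh(A)).sum(),
      2 * (2 * n - 1) + 2 * np.sqrt((2 * n - 1) * (10 * n - 1)))                 # E(Gamma_{V_24})
print(np.abs(q - avg).sum(),
      (64 * n**3 - 128 * n**2 + 64 * n - 6) / (4 * n - 1) + 2 * np.sqrt(32 * n**2 - 32 * n + 9))  # LE+
```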
In Theorem 5.14, we compare \(E(\Gamma_{V_{8n}})\), \(LE(\Gamma_{V_{8n}})\) and \(LE^{+}(\Gamma_{V_{8n}})\). However, in the following figures, we show how close they are.

Figure 15: Energies of \(\Gamma_{V_{8n}}\), where \(n\) is odd.

Figure 16: Energies of \(\Gamma_{V_{8n}}\), where \(n\) is even.

### The Frobenius groups of order \(pq\)

We consider \(F_{p,q}:=\langle a,b\colon a^{p}=b^{q}=1;b^{-1}ab=a^{u}\rangle\), the Frobenius groups of order \(pq\), where \(p\) and \(q\) are two primes such that \(q|(p-1)\) and \(u\) is an integer such that \(\bar{u}\in\mathbb{Z}_{p}\setminus\{\bar{0}\}\) having order \(q\). Results regarding different energies of non-commuting graph of \(F_{p,q}\) are given below.

**Theorem 5.15** ([14, (4.1.e) and (4.3.b)]).: _Let \(G\) be isomorphic to \(F_{p,q}\). Then_
\[E(\Gamma_{G})=\alpha+\sqrt{\alpha^{2}+4p\alpha}\;\;\text{and}\;\;LE(\Gamma_{G})=\frac{2p^{2}\alpha+2p(q-1)^{2}}{pq-1},\;\text{where}\;\alpha=(p-1)(q-1).\]

**Theorem 5.16**.: _Let \(G\) be isomorphic to \(F_{p,q}\). Then_
\[\text{Q-spec}(\Gamma_{G})=\left\{(pq-p)^{p-2},(pq-q)^{pq-2p},(pq-2q+1)^{p-1},\left(\frac{A}{2}\right)^{1},\left(\frac{B}{2}\right)^{1}\right\},\]
_where \(A=3pq-2p-2q+1+\sqrt{pq(pq-2)+4(p-q)(pq-p-q+1)+1}\) and \(B=3pq-2p-2q+1-\sqrt{pq(pq-2)+4(p-q)(pq-p-q+1)+1}\) and_
\[LE^{+}(\Gamma_{G})=\frac{2p^{3}q-p^{2}q^{2}-2pq^{2}-6pq-4p^{3}+6p^{2}+2q-1}{pq-1}+\sqrt{pq(pq-2)+4(p-q)(pq-p-q+1)+1}.\]

Proof.: If \(G\cong F_{p,q}\) then \(|v(\Gamma_{G})|=pq-1\) and \(\Gamma_{G}=K_{1.(p-1),p.(q-1)}\). Using Theorem 1.6(b), we have
\[Q_{\Gamma_{G}}(x)= \prod_{i=1}^{2}(x-(pq-1)+p_{i})^{a_{i}(p_{i}-1)}\prod_{i=1}^{2}(x-(pq-1)+2p_{i})^{a_{i}}\left(1-\sum_{i=1}^{2}\frac{a_{i}p_{i}}{x-(pq-1)+2p_{i}}\right)\]
\[= (x-pq+p)^{p-2}(x-pq+q)^{pq-2p}(x-pq+2p-1)(x-pq+2q-1)^{p}\times\left(1-\frac{p-1}{x-pq+2p-1}-\frac{pq-p}{x-pq+2q-1}\right)\]
\[= (x-(pq-p))^{p-2}(x-(pq-q))^{pq-2p}(x-(pq-2q+1))^{p-1}\times(x^{2}-(3pq-2p-2q+1)x+2p^{2}q^{2}-4p^{2}q+2pq-2pq^{2}+2p^{2}-2p+2q-1).\]
Thus, \(\text{Q-spec}(\Gamma_{G})=\left\{(pq-p)^{p-2},(pq-q)^{pq-2p},(pq-2q+1)^{p-1},\left(\frac{A}{2}\right)^{1},\left(\frac{B}{2}\right)^{1}\right\}\). Number of edges of \(\Gamma_{G}^{c}\) is \(\frac{p^{2}-3p+2+pq^{2}-3pq+2p}{2}\). Thus, \(|e(\Gamma_{G})|=\frac{pq-1}{2}(pq-2)-\frac{p^{2}-3p+2+pq^{2}-3pq+2p}{2}=\frac{(p^{2}-p)(q^{2}-1)}{2}\). Now,
\[\left|pq-p-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left|\frac{(q-1)(pq-p^{2})}{pq-1}\right|=\frac{(q-1)(p^{2}-pq)}{pq-1},\]
\[\left|pq-q-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left|\frac{(p-q)(p-1)}{pq-1}\right|=\frac{(p-q)(p-1)}{pq-1},\]
\[\left|A-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left|\frac{p^{2}q^{2}+2p^{2}+p+2q-2p^{2}q-2pq-1}{2(pq-1)}+\frac{C}{2}\right|=\frac{p^{2}q^{2}+2p^{2}+p+2q-2p^{2}q-2pq-1}{2(pq-1)}+\frac{C}{2}\]
and
\[\left|B-\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}\right|=\left|\frac{p^{2}q^{2}+2p^{2}+p+2q-2p^{2}q-2pq-1}{2(pq-1)}-\frac{C}{2}\right|=\frac{-(p^{2}q^{2}+2p^{2}+p+2q-2p^{2}q-2pq-1)}{2(pq-1)}+\frac{C}{2},\]
where \(C=\sqrt{pq(pq-2)+4(p-q)(pq-p-q+1)+1}\). Therefore,
\[LE^{+}(\Gamma_{G})= (p-2)\times\frac{(q-1)(p^{2}-pq)}{pq-1}+(pq-2p)\times\frac{(p-q)(p-1)}{pq-1}+(p-1)\times\frac{(pq^{2}+p+1)-(p^{2}+2q)}{pq-1}+\frac{p^{2}q^{2}+2p^{2}+p+2q-2p^{2}q-2pq-1}{2(pq-1)}+\frac{C}{2}-\left(\frac{p^{2}q^{2}+2p^{2}+p+2q-2p^{2}q-2pq-1}{2(pq-1)}\right)+\frac{C}{2}\]
and the result follows on simplification.
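The signless Laplacian spectrum of Theorem 5.16 can be verified numerically for small parameters; the sketch below (illustrative only, not part of the original text; `numpy` only) does this for \((p,q)=(7,3)\), i.e. the Frobenius group of order \(21\).

```python
# Check Q-spec(Gamma_{F_{p,q}}) of Theorem 5.16 at (p, q) = (7, 3): Gamma = K_{1.(p-1),p.(q-1)}.
import numpy as np

p, q = 7, 3
parts = [p - 1] + [q - 1] * p                    # one part of size 6, seven parts of size 2
N = sum(parts)                                   # pq - 1 = 20 vertices
A = np.ones((N, N)) - np.eye(N)
b = np.cumsum([0] + parts)
for s, e in zip(b[:-1], b[1:]):
    A[s:e, s:e] = 0.0

d = A.sum(axis=1)
qs = np.sort(np.linalg.eigvalsh(np.diag(d) + A))[::-1]
print(np.round(qs, 4))
# expected: A/2 = 22+2*sqrt(37), pq-q = 18 (x7), pq-2q+1 = 16 (x6), pq-p = 14 (x5), B/2 = 22-2*sqrt(37)
```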
## 6 Some implications of the preceding findings It was shown in [8] that the non-commuting graphs of the groups considered in this paper are L-integral. Also, in [14, Chapter 4], several conditions were obtained such that the non-commuting graphs of these groups are integral. In view of Theorems 2.16 and 4.2, it follows that \(\Gamma_{G}\) is not Q-integral if \(G\cong U_{6n}\) or \(\frac{G}{Z(G)}\cong Sz(2)\). However, \(\Gamma_{G}\) is Q-integral if \(G\cong A(n,\mathcal{V})\), \(A(n,p)\) or \(\frac{G}{Z(G)}\cong Z_{p}\times\mathbb{Z}_{p}\) (follows from Theorems 3.2, 5.2 and 5.5). As a consequence of our results we also have the following theorem related to Question 1.1. **Theorem 6.1**.: \(\Gamma_{G}\) _is Q-integral if_ * \(G\cong D_{2r},M_{2rs}\) _or_ \(SD_{8r}\)_,_ \(r\) _is odd and_ \(8r^{2}-16r+9\) _is a perfect square._ * \(G\cong V_{8n}\)_,_ \(n\) _is even and_ \(8n^{2}-16n+9\) _is a perfect square._ * \(G\cong Q_{4n}\) _or_ \(\frac{G}{Z(G)}\cong D_{2n}\) _and_ \(8n^{2}-16n+9\) _is a perfect square._ * \(G\cong D_{2r}\) _or_ \(M_{2rs}\)_,_ \(r\) _is even and_ \(2r^{2}-8r+9\) _is a perfect square._ * \(G\cong V_{8n}\)_,_ \(n\) _is odd and_ \(32n^{2}-32n+9\) _is a perfect square._ * \(G\cong SD_{8n}\)_,_ \(n\) _is even and_ \(32n^{2}-32n+9\) _is a perfect square._ * \(G\cong QD_{2^{n}}\) _and_ \(2^{2n-1}-2^{n+2}+9\) _is a perfect square._ In the following table we give some positive integers such that \(8n^{2}-16n+9\), \(2n^{2}-8n+9\) and \(32n^{2}-32n+9\) are perfect squares. It may be interesting to obtain general terms of such sequences of positive integers. \begin{tabular}{|c|c|c|c|c|c|} \hline \(n\) & \(\sqrt{8n^{2}-16n+9}\) & \(n\) & \(\sqrt{2n^{2}-8n+9}\) & \(n\) & \(\sqrt{32n^{2}-32n+9}\) \\ \hline 1 & 1 & 2 & 1 & 1 & 3 \\ \hline 2 & 3 & 4 & 3 & 18 & 99 \\ \hline 7 & 17 & 14 & 17 & 595 & 3363 \\ \hline 36 & 99 & 72 & 99 & 20196 & 114243 \\ \hline 205 & 577 & 410 & 577 & 686053 & 3880899 \\ \hline 1190 & 3363 & 2380 & 3363 & 23305590 & 131836323 \\ \hline 6931 & 19601 & 13862 & 19601 & & \\ \hline 40392 & 114243 & 80784 & 114243 & & \\ \hline 235417 & 665857 & 470834 & 665857 & & \\ \hline 1372106 & 3880899 & 2744212 & 3880899 & & \\ \hline 7997215 & 22619537 & 15994430 & 22619537 & & \\ \hline 46611180 & 131836323 & 93222360 & 131836323 & & \\ \hline 271669861 & 768398401 & 543339722 & 768398401 & & \\ \hline \end{tabular} Table 1 As consequences of our results we also have the following theorems related to Questions 1.2 and 1.3. **Theorem 6.2**.: _Let \(G\) be a finite non-abelian group. Then_ * \(E(\Gamma_{G})=LE(\Gamma_{G})=LE^{+}(\Gamma_{G})\) _if_ \(G\cong D_{8},Q_{8},M_{8s},A(n,\mathcal{V}),A(n,p)\)_,_ \(V_{16}\) _a non-abelian group of order_ \(p^{3}\)_. Also, if_ \(\frac{G}{Z(G)}\cong\mathbb{Z}_{p}\times\mathbb{Z}_{p}\)_, where_ \(p\) _is a prime, then_ \(E(\Gamma_{G})=LE(\Gamma_{G})=LE^{+}(\Gamma_{G})\)_._ * \(E(\Gamma_{G})<LE^{+}(\Gamma_{G})<LE(\Gamma_{G})\) _if_ \(G\cong D_{2m}(m\neq 4)\)_,_ \(QD_{2^{n}}\)_,_ \(M_{2rs}(r\neq 4)\)_,_ \(Q_{4n}(n\neq 2)\)_,_ \(U_{6n}\)_,_ \(SD_{8n}\) _and_ \(V_{8n}\) _(_\(n\neq 2\)_). Also, if_ \(\frac{G}{Z(G)}\cong D_{2m}(m\geq 3)\) _and_ \(Sz(2)\) _then_ \(E(\Gamma_{G})<LE^{+}(\Gamma_{G})<LE(\Gamma_{G})\)_._ * \(\Gamma_{G}\) _is non-hypoenergetic as well as non-hyperenergetic if_ \(G\cong D_{2m},QD_{2^{n}},M_{2rs},Q_{4n},U_{6n},A(n,\mathcal{V})\) _,_ \(A(n,p),SD_{8n}\) _and_ \(V_{8n}\)_. 
Also, if_ \(\frac{G}{Z(G)}\cong D_{2m},\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) _and_ \(Sz(2)\)_, where_ \(m\geq 3\) _and_ \(p\) _is a prime, then_ \(\Gamma_{G}\) _is non-hypoenergetic as well as non-hyperenergetic._ * \(\Gamma_{G}\) _is L-hyperenergetic but not Q-hyperenergetic if_ \(G\cong D_{6},M_{6}\) _and_ \(Sz(2)\)_._ * \(\Gamma_{G}\) _is neither L-hyperenergetic nor Q-hyperenergetic if_ \(G\cong D_{8},M_{8s},Q_{8},A(n,\mathcal{V}),A(n,p),V_{16}\)_. Also, if_ \(\frac{G}{Z(G)}\cong\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) _then_ \(\Gamma_{G}\) _is neither L-hyperenergetic nor Q-hyperenergetic._ * \(\Gamma_{G}\) _is L-hyperenergetic as well as Q-hyperenergetic if_ \(G\cong D_{2m}(m\neq 3,4)\)_,_ \(QD_{2^{n}}\)_,_ \(M_{2rs}(2rs\neq 6,8s)\)_,_ \(Q_{4n}(n\neq 2)\)_,_ \(U_{6n}\)_,_ \(SD_{8n}\) _and_ \(V_{8n}(n\neq 2)\)_. Also, if_ \(\frac{G}{Z(G)}\cong Sz(2)\) _(_\(G\not\cong Sz(2)\)_) and_ \(D_{2m}\) _(_\(m=3,4\) _and_ \(|Z(G)|\neq 1\) _or_ \(m\geq 5\) _and_ \(|Z(G)|\geq 1\)_) then_ \(\Gamma_{G}\) _is L-hyperenergetic as well as Q-hyperenergetic._ In the following theorem we get an example of a graph (non-commuting graph of the symmetric group of degree 4) disproving Conjecture 1.4. **Theorem 6.3**.: _Let the commuting graph of a finite group \(G\) be planar. Then_ * \(\Gamma_{G}\) _is Q-integral if_ \(G\cong D_{8},Q_{8},\mathbb{Z}_{2}\times D_{8},\mathbb{Z}_{2}\times Q_{8}, \mathcal{M}_{16},\mathbb{Z}_{4}\rtimes\mathbb{Z}_{4},D_{8}*\mathbb{Z}_{4}\) _or SG_\((16,3)\)_, otherwise not Q-integral._ * \(\Gamma_{G}\) _is non-hypoenergetic._ * \(\Gamma_{G}\) _is hyperenergetic if_ \(G\cong S_{4}\)_, otherwise non-hyperenergetic._ * \(\Gamma_{G}\) _is Q-hyperenergetic if_ \(G\cong D_{10},D_{12},Q_{12},A_{4},A_{5},S_{4}\) _or_ \(SL(2,3)\)_, otherwise not Q-hyperenergetic_ * \(\Gamma_{G}\) _is L-hyperenergetic if_ \(G\cong D_{6}\)_,_ \(D_{10}\)_,_ \(D_{12}\)_,_ \(Q_{12}\)_,_ \(A_{4}\)_,_ \(A_{5}\)_,_ \(S_{4}\)_,_ \(SL(2,3)\) _or_ \(Sz(2)\)_, otherwise not L-hyperenergetic._ Proof.: If the commuting graph of \(G\) is planar then, by [3, Theorem 2.2], \(G\cong D_{6},D_{8},D_{10},D_{12},Q_{8},Q_{12},\mathbb{Z}_{2}\times D_{8}, \mathcal{Z}_{2}\times Q_{8},\mathcal{M}_{16},\mathbb{Z}_{4}\rtimes\mathbb{Z}_{4 },D_{8}*\mathbb{Z}_{4},SG(16,3),A_{4},A_{5},S_{4},SL(2,3),Sz(2)\). By Theorems 2.2, 2.3 and 2.13, \(\Gamma_{G}\) is Q-integral if \(G\cong D_{8},Q_{8}\) and not Q-integral if \(G\cong D_{6},D_{10},D_{12},Q_{12}\). If \(G\cong D_{6}\), then from Theorem 2.4, \(\Gamma_{G}\) is non-hypoenergetic, non-hyperenergetic but S=L-hyperenergetic. If \(G\cong D_{10},D_{12}\), then from Theorem 2.4, \(\Gamma_{G}\) is non-hypoenergetic, non-hyperenergetic, but is Q-hyperenergetic and L-hyperenergetic. If \(G\cong D_{8}\), then from Theorem 2.4, \(\Gamma_{G}\) is non-hypoenergetic, non-hyperenergetic, non-hyperenergetic, not Q-hyperenergetic and not L-hyperenergetic. If \(G\cong Q_{8}\), then from Theorem 2.14, \(\Gamma_{G}\) is non-hypoenergetic, non-hyperenergetic, not Q-hyperenergetic, and not L-hyperenergetic. If \(G\cong Q_{12}\), then from Theorem 2.14 \(\Gamma_{G}\) is non-hypoenergetic, non-hyperenergetic, but is Q-hyperenergetic and L-hyperenergetic. If \(G\cong\mathbb{Z}_{2}\times D_{8},\mathbb{Z}_{2}\times Q_{8},\mathcal{M}_{16}, \mathbb{Z}_{4}\rtimes\mathbb{Z}_{4},D_{8}*\mathbb{Z}_{4},SG(16,3)\), then \(\frac{G}{Z(G)}\cong\mathbb{Z}_{2}\times\mathbb{Z}_{2}\). 
Using Theorems 3.2 and 3.3, for \(p=2\), we get \(\Gamma_{G}\) is Q-integral but not hypoenergetic, hyperenergetic, Q-hyperenergetic as well as L-hyperenergetic. If \(G\cong A_{4}\) then from Proposition 4.3.13 of [14], we have \(E(\Gamma_{A_{4}})=6+2\sqrt{33}\) and \(LE(\Gamma_{A_{4}})=\frac{224}{11}\). Now, \(|v(\Gamma_{A_{4}})|=11\) so \(E(K_{|v(\Gamma_{A_{4}})|})=LE^{+}(K_{|v(\Gamma_{A_{4}})|})=LE(K_{|v(\Gamma_{A_{4}})| })=20\). Here, \(\Gamma_{A_{4}}=K_{4,2,1.3}\) so using Theorem 1.6(b) we get \[\text{Q-spec}(\Gamma_{A_{4}})=\left\{(9)^{4},(8)^{2},(7)^{3},\left(\frac{23+ \sqrt{145}}{2}\right)^{1},\left(\frac{23-\sqrt{145}}{2}\right)^{1}\right\}.\] It follows that \(\Gamma_{A_{4}}\) is not Q-integral. We have \(|e(\Gamma_{A_{4}})|=48\) and so \(\frac{2|e(\Gamma_{A_{4}})|}{|v(\Gamma_{A_{4}}|}=\frac{96}{11}\). Therefore, \(\left|9-\frac{96}{11}\right|=\frac{3}{11}\), \(\left|8-\frac{96}{11}\right|=\frac{8}{11}\), \(\left|7-\frac{96}{11}\right|=\frac{19}{11}\), \(\left|\frac{23+\sqrt{145}}{2}-\frac{96}{11}\right|=\frac{61}{22}+\frac{\sqrt{145 }}{2}\) and \(\left|\frac{23-\sqrt{145}}{2}-\frac{96}{11}\right|=-\frac{61}{22}+\frac{\sqrt{145 }}{2}\). Thus, \[LE^{+}(\Gamma_{A_{4}})=4\times\frac{3}{11}+2\times\frac{8}{11}+3\times\frac{19 }{11}+\frac{61}{22}+\frac{\sqrt{145}}{2}-\frac{61}{22}+\frac{\sqrt{145}}{2}= \frac{85}{11}+\sqrt{145}>20.\] Hence, \(\Gamma_{A_{4}}\) is non-hypoenergetic, non If \(G\cong A_{5}\) then from Proposition 4.3.13 of [14], we have \(E(\Gamma_{A_{5}})=111.89\) and \(LE(\Gamma_{A_{5}})=\frac{5850}{59}\). Now, \(|v(\Gamma_{A_{5}})|=59\) so \(E(K_{|v(\Gamma_{A_{5}})|})=LE^{+}(K_{|v(\Gamma_{A_{5}})|})=LE(K_{|v(\Gamma_{A_{ 5}})|})=116\). Here, \(\Gamma_{A_{5}}=K_{5.3,10.2,6.4}\) so using Theorem 1.6(b) we get \[\text{Q-spec}(\Gamma_{A_{5}})=\{(56)^{10},(57)^{10},(55)^{27},(53)^{4},(51)^{5 },(x_{1})^{1},(x_{2})^{1},(x_{3})^{1}\},\] where \(x_{1}\approx 52.03252,x_{2}\approx 54.05266\) and \(x_{3}\approx 111.91482\) are the roots of the equation \(x^{3}-218x^{2}+14685x-314760=0\). It follows that \(\Gamma_{A_{5}}\) is not Q-integral. We have \(|e(\Gamma_{A_{5}})|=1650\) and so \(\frac{2|e(\Gamma_{A_{5}})|}{|v(\Gamma_{A_{5}})|}=\frac{3900}{59}\). Therefore, \(\left|57-\frac{3300}{59}\right|=\frac{63}{59}\), \(\left|56-\frac{3300}{59}\right|=\frac{4}{59}\), \(\left|55-\frac{3300}{59}\right|=\frac{55}{59}\), \(\left|53-\frac{3300}{59}\right|=\frac{173}{59}\), \(\left|51-\frac{3300}{59}\right|=-\frac{291}{59}+\frac{7145}{2}\), \(\left|x_{1}-\frac{3300}{59}\right|=-(x_{1}-\frac{350}{59})\), \(\left|x_{2}-\frac{3300}{59}\right|=-(x_{2}-\frac{3300}{59})\) and \(\left|x_{3}-\frac{3300}{59}\right|=x_{3}-\frac{3300}{59}\). Thus, \[LE^{+}(\Gamma_{A_{5}})=\frac{7602}{59}-x_{1}-x_{2}+x_{3}>116.\] Hence, \(\Gamma_{A_{5}}\) is non-hypoenergetic, non-hyperenergetic but is Q-hyperenergetic and L-hyperenergetic. If \(G\cong S_{4}\) then from Proposition 4.3.13 of [14], we have \(E(\Gamma_{S_{4}})=35.866+4\sqrt{5}\) and \(LE(\Gamma_{S_{4}})=\frac{1072}{23}+4\sqrt{13}\). Now, \(|v(\Gamma_{S_{4}})|=23\) so \(E(K_{|v(\Gamma_{S_{4}})|})=LE^{+}(K_{|v(\Gamma_{S_{4}})|})=LE(K_{|v(\Gamma_{S_ {4}})|})=44\). Using GAP [34], the characteristic polynomial of \(Q(\Gamma_{S_{4}})\) is \[Q_{\Gamma_{S_{4}}}(x)=x(x+20)^{4}(x+21)^{7}(x+23)^{7}(x^{2}+40x+394)^{2}\] and so \[\text{Q-spec}(\Gamma_{S_{4}})=\left\{(0)^{1},(-20)^{4},(-21)^{7},(-23)^{7}, \left(-20+\sqrt{6}\right)^{2},\left(-20-\sqrt{6}\right)^{2}\right\}.\] It follows that \(\Gamma_{S_{4}}\) is not Q-integral. 
We have \(|e(\Gamma_{S_{4}})|=228\) and so \(\frac{2|e(\Gamma_{S_{4}})|}{|v(\Gamma_{S_{4}})|}=\frac{456}{23}\). Therefore, \(\left|0-\frac{456}{23}\right|=\frac{456}{23}\), \(\left|20-\frac{456}{23}\right|=\frac{4}{23}\), \(\left|21-\frac{456}{23}\right|=\frac{27}{23}\), \(\left|23-\frac{456}{23}\right|=\frac{73}{23}\), \(\left|-20+\sqrt{6}-\frac{456}{23}\right|=\frac{916}{23}-\sqrt{6}\) and \(\left|-20-\sqrt{6}-\frac{456}{23}\right|=\frac{916}{23}+\sqrt{6}\). Thus, \[LE^{+}(\Gamma_{S_{4}})=\frac{456}{23}+4\times\frac{4}{23}+7\times\frac{27}{23}+ 7\times\frac{73}{23}+2\times\left(\frac{916}{23}-\sqrt{6}\right)+2\times\left( \frac{916}{23}+\sqrt{6}\right)=\frac{4836}{23}.\] Hence, \(\Gamma_{S_{4}}\) is non-hypoenergetic, hyperenergetic but is Q-hyperenergetic and L-hyperenergetic. If \(G\cong SL(2,3)\) then from Proposition 4.3.13 of [14], we have \(E(\Gamma_{SL(2,3)})=16+8\sqrt{7}\) and \(LE(\Gamma_{SL(2,3)}))=\frac{552}{11}\). Now, \(|v(\Gamma_{SL(2,3)}))|=22\) so \(E(K_{|v(\Gamma_{SL(2,3)})|})=LE^{+}(K_{|v(\Gamma_{SL(2,3)})|})=LE(K_{|v(\Gamma_{ SL(2,3)})|})=42\). Here, \(\Gamma_{SL(2,3)}=K_{3,2,4,4}\) so using Theorem 1.6(b) we get \[\text{Q-spec}(\Gamma_{SL(2,3)})=\left\{(20)^{3},(18)^{14},(14)^{3},\left(\frac {54+\sqrt{420}}{2}\right)^{1},\left(\frac{54-\sqrt{420}}{2}\right)^{1}\right\}.\] It follows that \(\Gamma_{SL(2,3)}\) is not Q-integral. We have \(|e(\Gamma_{SL(2,3)})|=204\) and so \(\frac{2|e(\Gamma_{SL(2,3)})|}{|v(\Gamma_{SL(2,3)}|}=\frac{204}{11}\). Therefore, \(\left|20-\frac{204}{11}\right|=\frac{16}{11}\), \(\left|18-\frac{204}{11}\right|=\frac{6}{11}\), \(\left|14-\frac{204}{11}\right|=\frac{50}{11}\), \(\left|\frac{54+\sqrt{420}}{2}-\frac{204}{11}\right|=\frac{93}{11}+\frac{\sqrt{4 20}}{2}\) and \(\left|\frac{54-\sqrt{420}}{2}-\frac{204}{11}\right|=-\frac{93}{22}+\frac{\sqrt{4 20}}{2}\). Thus, \[LE^{+}(\Gamma_{SL(2,3)})=3\times\frac{16}{11}+14\times\frac{6}{11}+3\times\frac {50}{11}+\frac{93}{11}+\frac{\sqrt{420}}{2}-\frac{93}{22}+\frac{\sqrt{420}}{2}= \frac{282}{11}+\sqrt{420}.\] Hence, \(\Gamma_{SL(2,3)}\)) is non-hypoenergetic, non-hyperenergetic but is Q-hyperenergetic and L-hyperenergetic. If \(G\cong Sz(2)\) then, by Theorem 4.2, we have \(\Gamma_{G}\) is not Q-integral. Also, Theorem 4.3 gives that \(\Gamma_{G}\) is non-hypoenergetic, non-hypoenergetic, not Q-hyperenergetic but is L-hyperenergetic. **Theorem 6.4**.: _Let \(G\) be a finite group and the commuting graph of \(G\) is toroidal. Then_ * \(\Gamma_{G}\) _is Q-integral if_ \(G\cong D_{14}\) _or_ \(A_{4}\times\mathbb{Z}_{2}\)_, otherwise not Q-integral._ * \(\Gamma_{G}\) _is non-hypoenergetic, non-hyperenergetic but is Q-hyperenergetic and L-hyperenergetic._ Proof.: If commuting graph of \(G\) is toroidal then, by [9, Theorem 3.3]\(G\cong D_{14},D_{16},QD_{16},QD_{16},\mathbb{Z}_{7}\rtimes\mathbb{Z}_{3},D_{6} \times\mathbb{Z}_{3},A_{4}\times\mathbb{Z}_{2}\). By Theorems 2.2, 2.6 and 2.13, \(\Gamma_{G}\) is Q-integral if \(G\cong D_{14}\) and not Q-integral if \(G\cong D_{16},Q_{16}\) or \(QD_{16}\). If \(G\cong D_{14},D_{16}\), then from Theorem 2.4, \(\Gamma_{G}\) is non-hypoenergetic, non-hyperenergetic and L-hyperenergetic. If \(G\cong Q_{16}\), then from Theorem 2.14, \(\Gamma_{G}\) is non-hypoenergetic, non-hypereergetic but is Q-hyperenergetic and L-hyperenergetic. 
If \(G\cong QD_{16}\), then from Theorem 2.7, \(\Gamma_{G}\) is non-hypoenergetic, non-hyperenergetic but is Q-hyperenergetic and L-hyperenergetic. If \(G\cong\mathbb{Z}_{7}\rtimes\mathbb{Z}_{3}\) then, by Theorem 5.16, we have \[\text{Q-spec}(\Gamma_{G})=\left\{(14)^{5},(18)^{7},(16)^{6},\left(22+2\sqrt{37}\right)^{1},\left(22-2\sqrt{37}\right)^{1}\right\}.\] Thus \(\Gamma_{G}\) is not Q-integral. By Theorem 5.15, we also have \(E(\Gamma_{G})=12+4\sqrt{30}\) and \(LE(\Gamma_{G})=\frac{308}{5}\) and from Theorem 5.16, we have \(LE^{+}(\Gamma_{G})=\frac{136}{5}+4\sqrt{37}\). Now, \(|v(\Gamma_{G})|=20\) so \(E(K_{|v(\Gamma_{G})|})=LE^{+}(K_{|v(\Gamma_{G})|})=LE(K_{|v(\Gamma_{G})|})=38\). Hence, \(\Gamma_{G}\) is non-hypoenergetic, non-hyperenergetic but is Q-hyperenergetic and L-hyperenergetic. If \(G\cong D_{6}\times\mathbb{Z}_{3}\), then from Proposition 4.3.14 of [14], we have \(E(\Gamma_{G})=6+6\sqrt{7}\) and \(LE(\Gamma_{G})=\frac{594}{15}\). Now, \(|v(\Gamma_{G})|=15\) so \(E(K_{|v(\Gamma_{G})|})=LE^{+}(K_{|v(\Gamma_{G})|})=LE(K_{|v(\Gamma_{G})|})=28\). Here, \(\Gamma_{G}=K_{3.3,1.6}\) so using Theorem 1.6(b) we get \[\text{Q-spec}(\Gamma_{G})=\left\{(12)^{6},(9)^{7},\left(\frac{27+\sqrt{297}}{2}\right)^{1},\left(\frac{27-\sqrt{297}}{2}\right)^{1}\right\}.\] It follows that \(\Gamma_{G}\) is not Q-integral. We have \(|e(\Gamma_{G})|=81\) and so \(\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}=\frac{162}{15}\). Therefore, \(\left|12-\frac{162}{15}\right|=\frac{18}{15}\), \(\left|9-\frac{162}{15}\right|=\frac{27}{15}\), \(\left|\frac{27+\sqrt{297}}{2}-\frac{162}{15}\right|=\frac{81}{30}+\frac{\sqrt{297}}{2}\) and \(\left|\frac{27-\sqrt{297}}{2}-\frac{162}{15}\right|=-\frac{81}{30}+\frac{\sqrt{297}}{2}\). Thus, \[LE^{+}(\Gamma_{G})=6\times\frac{18}{15}+7\times\frac{27}{15}+\frac{81}{30}+\frac{\sqrt{297}}{2}-\frac{81}{30}+\frac{\sqrt{297}}{2}=\frac{99}{5}+3\sqrt{33}.\] Hence, \(\Gamma_{G}\) is non-hypoenergetic, non-hyperenergetic but is Q-hyperenergetic and L-hyperenergetic. If \(G\cong A_{4}\times\mathbb{Z}_{2}\), then from Proposition 4.3.14 of [14], we have \(E(\Gamma_{G})=12+4\sqrt{33}\) and \(LE(\Gamma_{G})=\frac{544}{11}\). Now, \(|v(\Gamma_{G})|=22\) so \(E(K_{|v(\Gamma_{G})|})=LE^{+}(K_{|v(\Gamma_{G})|})=LE(K_{|v(\Gamma_{G})|})=42\). Here, \(\Gamma_{G}=K_{4.4,1.6}\) so using Theorem 1.6(b) we get \[\text{Q-spec}(\Gamma_{G})=\{(18)^{12},(16)^{5},(14)^{3},(36)^{1},(10)^{1}\}.\] Clearly, \(\Gamma_{G}\) is Q-integral. We have \(|e(\Gamma_{G})|=192\) and so \(\frac{2|e(\Gamma_{G})|}{|v(\Gamma_{G})|}=\frac{192}{11}\). Therefore, \(\left|18-\frac{192}{11}\right|=\frac{6}{11}\), \(\left|16-\frac{192}{11}\right|=\frac{16}{11}\), \(\left|14-\frac{192}{11}\right|=\frac{38}{11}\), \(\left|36-\frac{192}{11}\right|=\frac{204}{11}\) and \(\left|10-\frac{192}{11}\right|=\frac{82}{11}\). Thus, \[LE^{+}(\Gamma_{G})=12\times\frac{6}{11}+5\times\frac{16}{11}+3\times\frac{38}{11}+\frac{204}{11}+\frac{82}{11}=\frac{552}{11}.\] Hence, \(\Gamma_{G}\) is non-hypoenergetic, non-hyperenergetic but is Q-hyperenergetic and L-hyperenergetic. If the non-commuting graph of \(G\) is planar then, by [1, Proposition 2.3], \(G\cong D_{6},D_{8}\) or \(Q_{8}\). Therefore, we have the following theorem. **Theorem 6.5**.: _Let \(G\) be a finite group whose non-commuting graph is planar. Then_
* \(\Gamma_{G}\) _is non-hypoenergetic, non-hyperenergetic and not Q-hyperenergetic._
* \(\Gamma_{G}\) _is not L-hyperenergetic but Q-integral if_ \(G\not\cong D_{6}\)_._
## Acknowledgements The authors would like to thank Mr. Nabin Pokhrel and Mr.
Kallol Ray for their help in drawing Figures and computing perfect squares in Table 1. Ms. M. Sharma expresses gratitude to DST for the INSPIRE fellowship. ## Funding No funding was received by the authors. ## Data Availability No data was used in the preparation of this manuscript. ## Conflict of interest The authors declare that they have no conflict of interest.
2309.03793
Negative thermal expansion coefficient and amorphization in defective 4H-SiC
Silicon Carbide (SiC) is a wide bandgap semiconductor material recently being used in replacement of traditional semiconductors for high-voltage power device applications. Radiation environments induce defects through displacement damage in the lattice that can saturate over periods of high energy particle exposure at various concentrations. Defects are characterized by the formation of vacancies, interstitials and Frenkel pairs. Using molecular dynamics software we calculate thermal expansion coefficient (TEC) and specific heat capacity at constant volume ($c_v$) values over a temperature range for varying defect concentrations in single crystal 4H-SiC. At a discovered critical defect density, amorphous defect clusters form in the lattice, triggering macroscopic negative thermal expansion across the entire temperature range. Exponential $c_v$ loss is observed as defect density increases until the isothermal process becomes completely adiabatic at an identified critical Frenkel pair concentration. Providing insight into the degradation of SiC from displacement damage effects can ultimately assist the development of radiation-hardened electronics.
Christopher Allen Grome
2023-09-07T15:44:26Z
http://arxiv.org/abs/2309.03793v1
# Negative thermal expansion coefficient and amorphization in defective 4H-SiC ###### Abstract Silicon Carbide (SiC) is a wide bandgap semiconductor material recently being used in replacement of traditional semiconductors for high-voltage power device applications. Radiation environments induce defects through displacement damage in the lattice that can saturate over periods of high energy particle exposure at various concentrations; these include the formation of vacancies, interstitials and Frenkel pairs. Using molecular dynamics software we calculate thermal expansion coefficient (TEC) and specific heat capacity at constant volume (\(c_{v}\)) values over a temperature range for varying defect concentrations in single crystal 4H-SiC. At a discovered critical defect density, amorphous defect clusters form in the lattice, triggering macroscopic negative thermal expansion across the entire temperature range. Exponential \(c_{v}\) loss is observed as defect density increases until the isothermal process becomes completely adiabatic at an identified critical Frenkel pair concentration. Providing insight into the degradation of SiC from displacement damage effects can ultimately assist the development of radiation-hardened electronics. ## 1 Introduction Silicon Carbide (SiC) is a wide bandgap semiconductor material recently being used in replacement of traditional semiconductors for high-voltage power device applications [1, 2, 3]. Radiation environments that exist terrestrially and in outer space pose a noticeable risk to electronics which can lead to cumulative degradation effects within the material known as displacement damage (DD) [4, 5, 6]. DD characterizes the defects in the lattice that can accumulate over periods of high energy particle exposure, which include the formation of vacancies, interstitials and Frenkel pairs. Although the recombination rates for point defects in SiC are recorded to be between 46.29% and 62.16% for a given radiation strike [7], 4H-SiC has been observed to undergo amorphous transformations when exposed to high enough ion fluxes [8]. Experimental Raman spectra analysis has shown that SiC exposed to neutron doses of 0.11 displacements per atom (dpa) saturates defects at an average distance of 0.6 nm, yielding defect concentrations at considerable percentages [9]. Displacement damage effects have been known to cause a variety of device malfunctions. Effects on a device's majority carrier density, carrier mobility, carrier lifetimes and other electronic properties have been well documented [10, 11, 12, 13, 14]. However, the effect of irradiation-induced point defects on the macroscopic thermal and mechanical properties of 4H-SiC is not very clear. To answer questions that are difficult to observe on-line during experiments, a theoretical study at a fundamental atomic level is employed. Molecular Dynamics (MD) is a very appropriate simulation approach for understanding material processes as it provides a high-fidelity capability for studying atomic-level events. MD has also been most notably employed for addressing a number of key problems in semiconductor thermal [15, 16], mechanical [17, 18] and electrical [19] properties. In this paper, artificially produced point defects (vacancy, interstitial and Frenkel pairs) are stochastically generated in bulk 4H-SiC.
The thermal expansion coefficient (TEC) and specific heat capacity (\(c_{v}\)) are investigated over a temperature range of 200K to 1200K. The effects of defect type and concentration are quantified and an advancement in the understanding of the thermal mechanics of defected systems is made. Studying the TEC is important for semiconductor materials. Compatible TECs are required between substrates and at bonded material interfaces to minimize thermal stress and optimize device performance [20]. Variation in the TEC can result in incompatible thermal stresses that can cause permanent damage in semiconductor devices [21, 22]. Specific heat capacity is an imperative parameter for conductance and thermal management of semiconductor materials [23]. Deviations in the specific heat can lead to premature changes in the temperature of power devices, consequently affecting all temperature-dependent electronic parameters and leading to performance degradation. Additionally, specific heat has been shown to affect the susceptibility of devices to radiation effects. Simulation work in [24, 25, 26] shows that heat capacity critically affects thermal modulation in power devices during single event effects (SEE). Changes to this parameter leave power electronics more prone to reaching their sublimation point during high voltage stress or radiation strikes, resulting in permanent degradation or catastrophic failure. The goal and motivation behind this research is to provide insight into the degradation of the material in extreme environments to ultimately assist the development of radiation-hardened electronics with applications in nuclear reactor monitoring, aerospace, and deep space exploration. ## 2 Computational Methods Modeling the displacement damage defects on single crystal 4H-SiC and their effect on the material's thermal and mechanical properties requires the use of a toolkit and a simulator. Displacement damage and non-ionizing energy loss mechanisms are inherently atomistic-scale events. To comprehensively map the physical movements of atoms and observe the evolution of the system after DD defects are imposed, a molecular dynamics (MD) simulator was employed.
Figure 1: Modeling of 4H-SiC supercells a) pure system b) vacancy defects in red c) interstitial defects in blue d) Frenkel pairs
A compatible toolkit was developed to provide the MD software with atomic models of defected 4H-SiC. The MD simulations carried out in this work are performed with the software LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator). The Modified Embedded Atom Method (MEAM) potential is the first semi-empirical interatomic potential formalism that demonstrated the possibility that a single formalism can be applied to a wide range of structured elements [27]. The MEAM potential proposed by Kang _et al._ is used in this work to describe inter-atomic and point defect interactions. For modeling the body interactions, LAMMPS uses the notation Si-C. The MEAM library and SiC parameter files used for MD are taken from [28, 29] and can be found in the Appendix. Using the .dat file generated from the modeling toolkit, LAMMPS can compute thermal simulations of interest. ### 4H-SiC Models The supercell of pure 4H-SiC is generated with dimensions 8a \(\times\) 8b \(\times\) 8c and consists of 8192 atoms, developed using the known lattice constants and dimensional properties given in Table 1 and the stacking sequence ABCB.
This structure has a space group 186 (P 63 m c) and point group 6mm (\(C_{6V}\)) with one 6-fold rotation, six mirror planes and no inversion [30, 31]. It has been documented that defects from displacement damage are inherently stochastic in nature [11, 13, 17]. Ergo, our modeling techniques introduce point defects into the system randomly and uniformly. For vacancy defects atoms are randomly removed, interstitial defects randomly add atoms to the system and Frenkel pairs randomly add and remove atoms in the system. In this work, the defects are introduced separately and uniformly at concentrations of 2%, 4%, 6%, 8% and 10% (as percentages of the total atom count) to investigate the role the defect type and density have on the material's properties. The defected models used in this work with their respective atom counts are listed in Table 2 and are visualized in Figure 1.
\begin{table} \begin{tabular}{||c c||} \hline Lattice Constant & Dimension \\ \hline \hline a & 3.079 Å \\ b & 3.079 \(\sqrt{3}\) Å \\ c & 10.053 Å \\ \(\alpha\) & 90\({}^{\circ}\) \\ \(\beta\) & 90\({}^{\circ}\) \\ \(\gamma\) & 120\({}^{\circ}\) \\ \hline \end{tabular} \end{table} Table 1: Lattice constants for 4H-SiC unit cell
### Thermal Simulation Modeling The energy data of these samples is calculated with LAMMPS to study the effect on the thermal properties. In these simulations, we initially relax the system by applying a conjugate gradient minimizer to dissipate residual stress. Following this, we assign random velocity distributions to the atoms to represent 1K. We then apply a Nosé-Hoover thermostat to raise the system to the target temperature over a period of 5ns with a time step of 1 fs. The pressure damping coefficients in the NPT ensemble simulations were tailored for each temperature simulation to converge to the experimental and literature data. A breakdown of the damping coefficient parameters is provided in Table 3. Periodic boundary conditions are applied along the [111] direction and the supercell's orthogonal sides are bounded by the planes with normal directions along the [111], [\(1\,\overline{1}\,0\)] and [\(1\,1\,\overline{2}\)] directions. For structure analysis, Open Visualization Tool (OVITO) is employed. ## 3 Thermal Simulation Results A number of papers have reported on the TEC of SiC. Work in [32] showed anisotropic dependence, where the a-axis TEC was larger than the TEC measured on the c-axis. However, in a more recent study using both laser interferometry and dilatometry techniques, Nakabayashi et al. show there is an isotropic relationship. A review in [33] recorded that more than 30 published thermal expansion measurements had no reported anisotropy for various silicon carbide materials.
\begin{table} \begin{tabular}{||c c c c c c||} \hline Defect Density: & 2\% & 4\% & 6\% & 8\% & 10\% \\ \hline \hline Vacancy Models (\# of atoms) & 8028 & 7864 & 7701 & 7537 & 7373 \\ Interstitial Models (\# of atoms) & 8356 & 8520 & 8683 & 8847 & 9011 \\ Frenkel Pair Models (\# of atoms) & 8192 & 8192 & 8192 & 8192 & 8192 \\ \hline \end{tabular} \end{table} Table 2: Atomic counts for defected models
\begin{table} \begin{tabular}{||c c||} \hline Temperature (K) & Pdamp (fs) \\ \hline \hline 200 & 320.0 \\ 300 & 245.0 \\ 400 & 199.3 \\ 600 & 285.9 \\ 800 & 250.0 \\ 1000 & 490.0 \\ 1200 & 396.0 \\ \hline \end{tabular} \end{table} Table 3: Pressure damping coefficients (Pdamp) for TEC convergence at different temperatures
Our directional dependence study in Figure 2 shows that our MD simulations are also isotropic.
The TEC values between the c-axis (\(\alpha_{33}\)) direction and the a-axis (\(\alpha_{11}\)) direction were exactly the same to six decimal places. The linear thermal expansion coefficients were taken along the \(\langle\)11-20\(\rangle\) direction and were dynamically compared at both low and high temperatures to literature. Using the ensemble bond length at equilibrium, we calculate the temperature dependent linear thermal expansion coefficient (\(\alpha\)) along the a-axis from Equation 1. \[\alpha=\left(\frac{d\ln(L)}{dT}\right)_{p} \tag{1}\] The low temperature simulations were run every 25K from 200K to 400K and were compared to an adopted analytical TEC polynomial from [34] that fit the experimental data from 123K to 473K in their work. The polynomial for the TEC along the a-axis is \[\alpha_{11}=-2.0404+1.9374\times 10^{-8}\cdot T+1.1385\times 10^{-11}\cdot T^{2}\ (K^{-1}) \tag{2}\] and is plotted in comparison to the results from our MD in Figure 3a. As illustrated, our MD shows very good agreement with the literature results, which attributes fidelity to our simulation. At higher temperatures, our MD values are compared to individual experimental values every 200K from 400K to 1200K in Figure 3b. Our MD simulation's agreement with analytical and experimental results indicates the second nearest neighbor distance dominates the LTEC behavior due to the MEAM potential's inherent nearest-neighbor model. The results for the pure structure show positive values and a square-root relationship to temperature. As temperature increases the potential energy decreases, which represents the stiffening and softening of the state of the bond.
Figure 2: Directional dependence of TECs calculated along a and c axes
The kinetic energy of the system is increased and the bond lengths between the atoms expand. At high temperatures, our simulation continued to show good agreement with literature and allows for defect analysis. Specific heat capacity for SiC has also been well studied and is very useful for quantifying internal energy changes within a system. Work in [33] shows that the change in 4H-SiC \(c_{v}\) as temperature increases is due to more phonon modes being made available. As these modes become available and occupied, the internal energy increases at a faster rate. Ergo, the lattice structure needs more energy to increase the temperature by a degree in higher temperature environments. An analytical 4H-SiC \(c_{v}\) model was developed in [33] from experimental measurements of the specific heat up to its sublimation temperature and is used to validate our MD results at T=300K. \[c_{v}=\left(\frac{dE}{dT}\right)_{V} \tag{3}\] The TEC MD simulations give us the internal energy data needed to compute \(c_{v}\). Using Equation 3, our MD at T = 300K gives us a \(c_{v}\) of 2.568 J \(\cdot\) K\({}^{-1}\cdot\) cm\({}^{-3}\). Using the polynomial developed in [33] at T = 300K, the analytical \(c_{v}\) is 2.628 J \(\cdot\) K\({}^{-1}\cdot\) cm\({}^{-3}\), showing good agreement with our simulation. Using our simulation value as a baseline, the analysis explored will be looking at the \(c_{v}\) dependency on defect density at T = 300K. ### TEC of Defected 4H-SiC Linear thermal expansion coefficient curves for 4H-SiC supercells with vacancies, interstitials and Frenkel pairs at varied concentrations are calculated at 7 different temperatures ranging from 200 to 1200K. Using the bond length at equilibrium, we calculate TEC along the a-axis using Equation 1.
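Both quantities are simple post-processing steps on the MD output. The following is a minimal sketch of how Equations 1 and 3 might be evaluated from tabulated equilibrium lattice parameters and internal energies; the arrays and the supercell volume are placeholders, not the actual LAMMPS data from this work.

```python
import numpy as np

# Hypothetical MD output: simulated temperatures, equilibrium a-axis lattice
# parameter L (angstrom) and total internal energy E (eV) per supercell.
T = np.array([200.0, 300.0, 400.0, 600.0, 800.0, 1000.0, 1200.0])
L = 3.079 * (1.0 + 4.0e-6 * (T - 200.0))   # placeholder expansion data
E = -51800.0 + 1.37 * (T - 200.0)          # placeholder energy data

# Equation 1: linear TEC along the a-axis, alpha = d ln(L)/dT,
# evaluated with centered finite differences.
alpha_11 = np.gradient(np.log(L), T)       # units of 1/K

# Equation 3: c_v = dE/dT, converted to J K^-1 cm^-3 with an assumed
# 8x8x8 supercell volume of roughly 8.4e-20 cm^3.
eV_to_J = 1.602176634e-19
supercell_volume_cm3 = 8.4e-20
c_v = np.gradient(E, T) * eV_to_J / supercell_volume_cm3

for t, a, c in zip(T, alpha_11, c_v):
    print(f"T = {t:6.0f} K   alpha_11 = {a:.3e} 1/K   c_v = {c:.3f} J/(K cm^3)")
```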
Figure 3: a) Linear Thermal Expansion Coefficient for low temperatures comparing MD simulations to analytical polynomial b) LTEC for high temperatures compared to experimental data
We consider defect concentrations from 2 to 10% and observe the LTEC defect density dependence. Due to the stochastic algorithm used to employ the defects, we model 5 iterations of each defect concentration. Observations on the effect of vacancy defects are first discussed. The \(\alpha_{11}\) for all vacancy defect densities (\(\rho_{vd}\)) are obtained and compared to the baseline pure SiC. The results presented in Figure 4a suggest a number of important features in vacancy defected 4H-SiC: (1) \(\alpha_{11}\) is not highly vacancy density dependent until \(\rho_{vd}>6\%\); (2) for \(\rho_{vd}\leq 6\%\) the square root temperature dependence is followed until higher temperatures T \(>\) 600K, for 800K\(<\)T\(<\) 1200K a decline in \(\alpha_{11}\) is observed; (3) for \(\rho_{vd}=8\%\) a negative \(\alpha_{11}\) is observed during T \(<\) 400K, then showing \(\alpha_{11}\) temperature independence for T \(\geq\) 400K; (5) there exists a critical vacancy defect value \(8\%<\rho_{vd}<\) 10% that triggers a complete thermal contraction in the material. The TEC data at various interstitial defect densities \(\rho_{id}\) provided in Figure 4b suggests several important features in interstitial defected 4H-SiC: (1) \(\alpha_{11}\) is not highly interstitial density dependent until \(\rho_{id}>6\%\); (2) for \(\rho_{id}<6\%\) the systems under-expand the baseline case; (3) for \(\rho_{id}\geq 6\%\) the systems over-expand the baseline case; (4) for \(\rho_{id}<8\%\), increases in temperature lead to closer agreement with the pure SiC \(\alpha_{11}\); (5) interstitials trigger an inherent over-expansion phenomenon in the system that increases in magnitude with respect to \(\rho_{id}\). Lastly, observations on the effect of Frenkel pairs are discussed. Figure 4c suggests: (1) the largest TEC deviations per defect density are caused by Frenkel pairs; (2) increasing \(\rho_{fp}\) exclusively under-expands the baseline case; (3) square root temperature dependence is seen in \(\rho_{fp}\leq 4\%\), with a uniform dip at 600K in the remaining defected cases; (4) for \(T\leq 400K\) negative thermal expansion (NTE) is observed in all but the \(\rho_{fp}=2\%\) case; (5) exclusive NTE is observed for the \(\rho_{fp}=10\%\) case. ### \(C_{v}\) of Defected 4H-SiC To quantify the effect defects have on the energy required to raise the temperature of the material, the specific heat capacity at constant volume is calculated in comparison to the baseline case at 300K. Figure 6 suggests some interesting characteristics: (1) \(c_{v}\) decreases in an exponential fashion with increased defect density; (2) Frenkel pairs have the highest decreasing rate amongst the defect types, dropping 63% by \(\rho_{fp}=6\%\); (3) vacancies show the least rapid decrease in energy modulation, dropping 21.2% of the baseline when \(\rho_{vd}=10\%\); (4) Frenkel pairs have a critical defect value between 6% and 8% that causes the system to not require energy to raise the temperature of the material. The specific heat capacity for \(\rho_{fp}\geq 8\%\) suggests an amorphous effect compromises the material's thermal stability. Extrapolating, there exist critical concentrations for all defect types that will compromise the enthalpy, yielding no material stability at moderate temperatures.
Figure 4: TEC of (a) vacancy, (b) interstitial and (c) Frenkel pair defected 4H-SiC at various concentrations in comparison to the non-defected material
Figure 5: (a) Vacancy, (c) interstitial and (e) Frenkel pair defect density dependence of the change in the system's potential energy per atom during thermal expansion. Comparison of the magnitude of deviation in potential energy (PE) per atom from the baseline case for (b) vacancy, (d) interstitial and (f) Frenkel pair structures
Figure 6: Specific heat capacity dependency on defect type and defect density at 300K.
It is acknowledged that reaching a defect concentration for interstitials and vacancies alone to achieve that level of criticality would require such extreme environment exposure that it would not be within the scope of electronic applications. Overall, the data suggests that defected systems do not behave like typical lattice solids. As defects increase in concentration, less energy is required to raise the temperature of the material. This is a very important parameter in radiation hardening, as defected systems with significantly impacted heat capacity will be much more susceptible to reaching melting/sublimation temperatures and premature burnout. ### Dynamics of Defected 4H-SiC The dependency of \(\alpha_{11}\) on defect density can be explained by exploring the potential energy (PE) of the system during thermal expansion. As temperature is introduced into the statically equilibrated pure 4H-SiC structure, the potential energy increases, but since PE is inherently a negative value the magnitude decreases. Note that the change in the potential energy during thermal expansion is the equivalent of the kinetic energy added into the system. The PE of the system is calculated by summing the bond energies within the system, so when adding and removing atoms in the system by introducing defects, changes in PE are anticipated. In order to quantify the effect the defects have on the material, the system energy data is normalized by calculating the PE per atom. The relationship of the potential energy per atom during thermal expansion for defected systems is displayed in Figures 5a, 5c and 5e. To elucidate the relationship between defect density and PE deviation from the baseline, the magnitude of the difference of the defected potential energies from the pure 4H-SiC is taken in Figures 5b, 5d and 5f. Figure 5 provides some additional features: (1) the magnitude of PE deviation for Frenkel pair concentrations is an order of magnitude higher than for the other defect types; (2) the rate of PE deviation and TEC deviation increases at an increasing rate with respect to \(\rho_{vd}\), while the rate of increase for the other defect types is uniform; (3) deviation in PE from the baseline is a function of temperature; (4) Figure 5e shows that for \(T\leq 1000K\) the PE per atom for \(\rho_{fp}\geq 8\%\) is negative during thermal expansion, showing the largest magnitude at T = 600K. The potential energies are calculated by the pair and bond energies between atoms, which directly affect the enthalpy (H) of the system. In turn, these potentials provide internal energy contributions to the Gibbs free energy (G) equation in Equation 4, which describes the thermodynamic process to reach system equilibration as a function of internal energy (U), pressure (p), temperature (T), volume (V) and entropy (S) after point defect formation. \[\Delta G=\Delta H-T\Delta S=\Delta U+p\Delta V-T\Delta S \tag{4}\] As the enthalpy changes within the system, the volumetric response does as well.
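For illustration, Equation 4 can be evaluated directly once the changes in internal energy, volume and entropy have been extracted from the simulations; the numerical values below are placeholders rather than quantities from this work.

```python
def delta_gibbs(delta_U, p, delta_V, T, delta_S):
    """Equation 4: dG = dU + p*dV - T*dS (consistent SI units assumed)."""
    return delta_U + p * delta_V - T * delta_S

# Placeholder values purely for illustration (J, Pa, m^3, K, J/K).
dG = delta_gibbs(delta_U=1.2e-19, p=1.0e5, delta_V=-3.0e-30, T=300.0, delta_S=2.0e-22)
print(f"delta G = {dG:.3e} J")
```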
The deviation observed in \(\alpha_{11}\) with respect to defect density is caused by the first law of thermodynamics. At constant temperature and constant pressure, the thermodynamic potential represented by the Gibbs free energy changes, causing variation in the volumetric expansion and directly affecting \(\alpha_{11}\). When \(\rho_{vd}\geq 8\%\) and \(\rho_{fp}\geq 6\%\), the change in energy per atom surpasses a threshold in the thermodynamic potential, causing a contracting effect on the system and yielding negative thermal expansion (NTE). Rather than expanding with increased temperatures, these defected systems expand with decreased temperatures. The increase of deviation in potential energy per atom suggests that increased vacancy concentrations change the anharmonicity associated with the interatomic distances. The vibrational properties in the \(\rho_{vd}\geq 8\%\) cases are quite different. The interatomic potentials yield transverse motion perpendicular to the directions of the atomic chains, creating a shortening phenomenon. The transverse phonon amplitudes outweigh the expansion effects of the longitudinal modes, resulting in systematic NTE. Additionally, the nonlinear rate of increased PE and TEC deviation suggests that the defects are interacting with each other at higher concentrations to trigger more rapid thermal degeneration of the material. In the positive thermal expansion (PTE) cases, the interatomic potential is not disrupted enough to have the contracting effect. The anharmonicity leads to an increase in average interatomic distances as higher vibrational states become more populated as temperature rises. The atom chains move in longitudinal vibrations in the directions of the bonds, causing a lengthening effect during increased temperature.
Figure 7: a) Potential Energy contour of 10% vacancy defected model at 1K b) Potential Energy contour of 10% vacancy defected model at 800K
Figure 8: a) Potential Energy contour of 10% Frenkel pair defected model at 1K and b) at 200K
A PE contour on the [010] plane of the \(\rho_{vd}=10\%\) model is visualized using OVITO in its initial state at 1K (Figure 7a) and at 800K, the point of most negative TEC, in Figure 7b. The structure initially shows a stable lattice, but shows the formation of localized vacancy defect clusters that interact with each other, triggering shortening effects within the system. As vacancy defects increase in concentration, the localized cluster interactions cause macroscopic decreases to the TEC at an increasing rate. A PE contour on the [010] plane of the \(\rho_{fp}=10\%\) model is visualized using OVITO in its initial state at 1K (Figure 8a) and at 200K, the point of most negative TEC, in Figure 8b. The potential energy scale within the system remains during thermal equilibration. However, the Frenkel pair defects interact with each other at this concentration to completely overwhelm the material by 200K. Where there are localized defect clusters at 1K, the material appears to be amorphous in regions of the system, hardly keeping its lattice structure. The material at this concentration is not behaving like a solid lattice and suggests the material is prematurely changing states of matter. The results from the specific heat capacity study indicate that the energy required to raise the temperature of the system is so low that the material sublimates. ## 4 Conclusion We have shown that the baseline structures have thermal properties in agreement with documented experimental and analytical values in the literature.
Observing defected 4H-SiC systems at various concentrations, Frenkel pairs show the highest magnitude difference per defect density in TEC and specific heat capacity in comparison with the pure structures. The variation in thermal expansion comes from the defects' effect on the interatomic potential of the system. When defects are introduced, the bonds and anharmonicity in the system change, resulting in changes in the internal energy. Changes in the internal energy consequently create deviation in the volume response via thermodynamic processes. We also document that defects at larger concentrations can cause NTE, which can cause severe incompatibility at material interfaces in electronic devices, prompting degradation in the electrical performance. Though interstitials have the smallest effect on the thermal expansion, they have a much larger impact on the internal energy of the system. Specific heat simulations show interstitials drop the energy needed to raise the temperature of the system over twice as much as the effect from the vacancies. The defect types showed decreases in \(c_{v}\) at a decreasing rate, suggesting the defects are interacting with each other to cause more rapid degeneration of the material. At high Frenkel pair concentrations, the specific heat capacity reaches zero, corresponding to an amorphous effect in the material and premature sublimation. This suggests the isothermal processes shift to adiabatic at this critical \(\rho_{fp}\). Overall, the effects from displacement damage significantly alter the thermal properties of 4H-SiC as defect concentration increases. Alterations in the TEC and the specific heat can cause thermal stresses at material interfaces and quicker heating in power devices that can lead to permanent degradation or premature burnouts.
2309.12846
Estimation of redshift and associated uncertainty of Fermi/LAT extra-galactic sources with Deep Learning
With the advancement of technology, machine learning-based analytical methods have pervaded nearly every discipline in modern studies. Particularly, a number of methods have been employed to estimate the redshift of gamma-ray loud active galactic nuclei (AGN), which are a class of supermassive black hole systems known for their intense multi-wavelength emissions and violent variability. Determining the redshifts of AGNs is essential for understanding their distances, which, in turn, sheds light on our current understanding of the structure of the nearby universe. However, the task involves a number of challenges such as the need for meticulous follow-up observations across multiple wavelengths and astronomical facilities. In this study, we employ a simple yet effective deep learning model with a single hidden layer having $64$ neurons and a dropout of 0.25 in the hidden layer, on a sample of AGNs with known redshifts from the latest AGN catalog, 4LAC-DR3, obtained from Fermi-LAT. We utilized their spectral, spatial, and temporal properties to robustly predict the redshifts of AGNs as well quantify their associated uncertainties, by modifying the model using two different variational inference methods. We achieve a correlation coefficient of 0.784 on the test set from the frequentist model and 0.777 and 0.778 from both the variants of variational inference, and, when used to make predictions on the samples with unknown redshifts, we achieve mean predictions of 0.421, 0.415 and 0.393, with standard deviations of 0.258, 0.246 and 0.207 from the models, respectively.
Sarvesh Gharat, Abhimanyu Borthakur, Gopal Bhatta
2023-09-22T13:15:59Z
http://arxiv.org/abs/2309.12846v3
Estimation of redshift and associated uncertainty of Fermi/LAT extra-galactic sources with Deep Learning ###### Abstract With the advancement of technology, machine learning-based analytical methods have pervaded nearly every discipline in modern studies. Particularly, a number of methods have been employed to estimate the redshift of gamma-ray loud active galactic nuclei (AGN), which are a class of supermassive black hole systems known for their intense multi-wavelength emissions and violent variability. Determining the redshifts of AGNs is essential for understanding their distances, which, in turn, sheds light on our current understanding of the structure of the nearby universe. However, the task involves a number of challenges such as the need for meticulous follow-up observations across multiple wavelengths and astronomical facilities. In this study, we employ a simple yet effective deep learning model with a single hidden layer having 64 neurons and a dropout of 0.25 in the hidden layer, on a sample of AGNs with known redshifts from the latest AGN catalog, 4LAC-DR3, obtained from Fermi-LAT. We utilized their spectral, spatial, and temporal properties to robustly predict the redshifts of AGNs as well quantify their associated uncertainties, by modifying the model using two different variational inference methods. We achieve a correlation coefficient of 0.784 on the test set from the frequentist model and 0.777 and 0.778 from both the variants of variational inference, and, when used to make predictions on the samples with unknown redshifts, we achieve mean predictions of 0.421, 0.415 and 0.393, with standard deviations of 0.258, 0.246 and 0.207 from the models, respectively. keywords: galaxies: active; distances and redshifts - gamma-rays: galaxies - gamma-rays: general - methods: statistical ## 1 Introduction Redshift, denoted as "z", is a measure of the displacement of spectral lines towards longer wavelengths in the electromagnetic spectrum. This phenomenon arises due to the expansion of the universe, stretching the wavelength of light emitted by distant celestial objects. Redshift estimation plays a fundamental role in understanding the properties of these objects, including their distance, cosmological evolution, and the nature of the universe itself. In the realm of astrophysics, redshift estimation traditionally relies on spectroscopic measurements, where the light emitted by celestial objects is dispersed into its constituent wavelengths, revealing characteristic absorption or emission features. However, spectroscopic observations are often constrained by limited observational time, expensive resources, and the technical limitations of spectrographs. Consequently, obtaining spectroscopic redshift measurements for a large number of objects, as required by comprehensive surveys, becomes challenging and impractical. The Fermi Gamma-ray Space Telescope (Fermi-LAT) has revolutionized the study of high-energy gamma-ray sources and contributed significantly to our understanding of the universe. The Fermi-LAT observatory observes celestial objects in gamma-ray wavelengths. However, efficiently extracting redshift information solely from gamma-ray observations poses a challenge as these observations are devoid of any spectral line, besides that of the 511 keV feature Skinner (2010). 
Therefore, the sole viable approach to gauge the distance involves linking the gamma-ray emitter with a recognized source that exhibits absorption or emission lines in other wavelengths, thereby enabling the calculation of redshift. The majority of discrete sources detected by Fermi/LAT are blazars, which consist of flat-spectrum radio quasars (FSRQs) exhibiting distinct optical emission lines over a broad-band continuum, and BL Lacs (BLLs), characterized by weak or absent emission line signatures (see Bhatta & Dhital, 2020, and references therein). This indicates that while it may be relatively easier to estimate the redshifts of FSRQs, the redshift evaluation for BL Lacs is a complex and often computationally expensive task as it necessitates extensive optical spectroscopic observations along with comprehensive multi-wavelength observations involving diverse astronomical facilities. To address these challenges, astronomers have turned to ML and DL techniques Dainotti et al. (2021); Narendra et al. (2022); Coronado-Blazquez (2023), which have demonstrated remarkable success. The study done by Dainotti et al. (2021) is one of the initial works in estimating the redshift of \(\gamma\)-ray loud AGNs. The authors make use of an ensemble-based approach that combines standard regression algorithms such as Random Forest, XGBoost, Big LASSO, and Bayes GLM to estimate the redshift of the corresponding input target. The authors make use of a 10-fold cross-validation technique iterated 10 times to report a correlation coefficient (r) ranging from 0.704 to 0.718. Moreover, they also reported a root-mean-squared error (RMSE) ranging from 0.432 to 0.438.
\begin{table} \begin{tabular}{c c c c} \hline \hline & Known Redshift & Unknown Redshift & Total \\ \hline BLL & 738 & 433 & 1171 \\ \hline BCU & 59 & 459 & 518 \\ \hline FSRQ & 390 & 0 & 390 \\ \hline RDG & 26 & 4 & 30 \\ \hline NLSY1 & 5 & 0 & 5 \\ \hline AGN & 3 & 0 & 3 \\ \hline CSS & 3 & 0 & 3 \\ \hline Total & 1224 & 896 & 2120 \\ \hline \hline \end{tabular} \end{table} Table 1: Classwise distribution of the data considered for this study
\begin{table} \begin{tabular}{c c c c} \hline \hline Model & Hidden Layer & Output Layer & Estimator \\ \hline Frequentist & Dense (64 neurons) & Dense (1 neuron) & - \\ Variational Inference & Dense (64 neurons) & DenseFlipout (1 neuron) & Flipout \\ Variational Inference & Dense (64 neurons) & DenseReparameterization (1 neuron) & Reparameterization \\ \hline \hline \end{tabular} \end{table} Table 2: Neural Network Architectures: Dropout of 0.25 between the hidden and output layers is common for each model
Figure 1: Plots for Epochs vs Loss (MAE) and RMSE for Variational Inference (Flipout Estimator)
Narendra et al. (2022) is an advancement of Dainotti et al. (2021). The authors employed a similar ensemble-based technique as observed in Dainotti et al. (2021); however, the only difference besides an increase in the data points and the feature vector is the choice of machine learning models. The authors report an RMSE value of 0.212 when the sample size is 111 and 0.458 when the sample size is 1112. As RMSE is inversely proportional to the number of samples used during evaluation, it cannot be considered the best evaluation metric to compare different algorithms unless the sample size is the same across the algorithms.
Also, the authors report a correlation coefficient of r \(\approx\) 0.74 in both of the aforementioned cases. In Coronado-Blazquez (2023), the author makes use of the 4LAC DR3 catalog, which is an updated version of the data used in Dainotti et al. (2021) and Narendra et al. (2022), with multiple additional features and a significant increase in the number of data points. To optimally use both the numerical as well as the categorical features, the author relies on the CatBoost algorithm, which is a boosted decision tree-based algorithm capable of dealing with categorical data. The author employs a 5-fold cross-validation technique to make effective use of the limited data. The reported "RMSE" and "r" values in this study are 0.46 and 0.71 respectively. Similar to Dainotti et al. (2021) and Narendra et al. (2022), the author also experiments with an ensembled approach combining eight different algorithms; however, the performance of the CatBoost model is reported to be significantly better than what was observed in the ensembled algorithm. Considering the limited number of studies conducted on this topic, none of which account for the uncertainty of the predicted redshifts, in this manuscript we introduce an algorithm that employs a multi-layer perceptron with a single hidden layer as the foundational model which, when modified using variational inference, allows us to not only quantify uncertainty but also augment our results.
\begin{table} \begin{tabular}{l c c c c c} \hline True Value & Estimator & 68.2\% CI & 95.4\% CI & 99.7\% CI & Variance \\ \hline 0.1860 & Reparameterized & 0.1498–0.245 & 0.1022–0.2926 & 0.0546–0.3402 & 0.002 \\ & Flipout & 0.1539–0.2533 & 0.1041–0.3031 & 0.0543–0.3529 & 0.002 \\ \hline 0.2974 & Reparameterized & 0.3138–0.4124 & 0.2645–0.4617 & 0.2152–0.511 & 0.002 \\ & Flipout & 0.3173–0.4177 & 0.2671–0.4679 & 0.2169–0.5181 & 0.002 \\ \hline 0.4470 & Reparameterized & 0.2148–0.3588 & 0.1428–0.4308 & 0.0708–0.5028 & 0.005 \\ & Flipout & 0.3563–0.4699 & 0.2955–0.5267 & 0.2427–0.5835 & 0.003 \\ \hline 1.014 & Reparameterized & 0.7964–1.0154 & 0.6869–1.1249 & 0.5774–1.2344 & 0.011 \\ & Flipout & 0.7924–0.979 & 0.6991–1.0723 & 0.6058–1.1656 & 0.008 \\ \hline \end{tabular} \end{table} Table 4: Assessing redshift predictions using variational inference: Summary of True Values, Estimators, Confidence Intervals, and Variance for a random set of samples
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Known Redshift samples} & \multicolumn{3}{c}{Unknown Redshift samples} \\ \cline{2-7} & Mean Prediction & Range & \(\sigma\) & Mean Prediction & Range & \(\sigma\) \\ \hline Frequentist Model & 0.559 & 0.04 - 1.99 & 0.372 & 0.455 & 0.07 - 1.77 & 0.258 \\ Variational Inference (Flipout Estimator) & 0.581 & 0.027 - 2.11 & 0.382 & 0.415 & 0.0251 – 1.71 & 0.246 \\ Variational Inference (Reparameterization Estimator) & 0.526 & 0.0004 - 1.82 & 0.332 & 0.393 & 0.0089 – 1.47 & 0.207 \\ \hline \end{tabular} \end{table} Table 5: Redshift Prediction Summary Statistics
Figure 2: Plots for Epochs vs Loss (MAE) and RMSE for Variational Inference (Reparameterization Estimator)
## 2 Methodology ### Data Collection and Processing Since its launch in 2008, the Fermi Gamma-Ray Space Telescope's onboard instrument called the LAT has been continuously monitoring the high-energy sky (Atwood et al., 2009). In this study, we utilize the Fermi fourth catalog of active galactic nuclei (AGNs) data release 3 (4LAC-DR3; Ajello et al. (2022); Ajello et al.
(2022)). The catalog comprises 3407 individual sources, of which 1806 sources have known redshifts. Each source is characterized by a set of 41 different features with randomly missing values reported in this catalog. Following Coronado-Blazquez (2023), we shortlist a set of 24 features for our study. Some of the features such as "SED_class", "Highest_energy" and "Unc_LP_beta" have a number of missing values. After sufficient experimentation with different imputing techniques, feature removal, and data removal, we proceed with the removal of the data points with missing values for the "Highest_energy" and "Unc_LP_beta" features. On the other hand, the missing values for the "SED_class" are imputed using the mode estimation technique, or most frequent categorical value imputation Lin and Tsai (2020). To carry out the imputation process, we make use of sklearn's Pedregosa et al. (2011) SimpleImputer with appropriate arguments like setting strategy to most_frequent. This leaves us with 1224 data points (For detailed data distribution refer to Table 1) for our study, with 90% of the data used for training and 10% of the data used for validation and testing purposes, equally divided among each other.
Figure 3: Scatter relation between the true value and the predicted mean value by different models (Red diagonal represents a perfect prediction)
Figure 4: Comparison between Predicted Mean Redshift and True Redshift using Histograms. Subplots show the distribution of the redshift values for both the known and predicted redshifts, disaggregated by the “CLASS” feature. Here, we represent only those classes with more than 50 samples.
It is crucial to note that the concept of the _validation data split_ refers to the division of data used for evaluating and refining a deep learning model during its training process. This division serves as a means to optimize the model's performance and make necessary adjustments. By subjecting the trained model to the validation set, we gain valuable insights into its ability to generalize on unseen data. The model's performance on the validation set can be regarded as a reliable indicator of its performance on entirely new data at each training epoch. This evaluation helps in identifying potential issues, such as overfitting, which can significantly affect the model's effectiveness in real-world applications. It allows us to make unbiased estimations of critical hyperparameters, such as the number of neurons in the hidden layer or the dropout rate, essential for optimizing the model's performance. The collected data consists of a number of numerical and categorical features. To deal with the categorical data, we convert them to an integer-valued array using sklearn's ordinal encoder. Next, all the numeric data is normalized using the StandardScaler provided by sklearn (Pedregosa et al., 2011; Buitinck et al., 2013). Prior to standard scaling, all numeric features except "Frac_Variability", "GLAT", "GLON", "LP_Index", "LP_beta", "PL_Index", "Unc_Flux1000" and "Unc_PL_Index" undergo log transformations. After pre-processing we are left with a total of 1224 samples with known redshift values and 896 samples with unknown redshifts. Please refer to Table 1 for the class-wise distribution of the samples. The feature engineering, data engineering, and proposed algorithms are implemented using Python 3.7. The pandas library Wes McKinney (2010) is used to read the dataframes from the files and store them, and once the input features are identified, we store them using numpy Harris et al.
(2020) arrays, in order to feed them into our TensorFlow Abadi et al. (2015) models. ### Model Architecture and Uncertainty Quantification In this study, we propose a multi-layer perceptron Murtagh (1991); Noriega (2005); Baum (1988) with a single hidden layer having 64 neurons. A multi-layer perceptron, often abbreviated as MLP, is a feed-forward neural network with at least 3 layers including the standard input, hidden and output layer. Every layer has multiple nodes/neurons in it, which along with the number of hidden layers define the complexity of the model. Though there are many standard techniques to define the number of neurons in every hidden layer, in this study, due to the simplicity of our model, we arrive at the value of 64 after sufficient experimentation. These MLPs are fully connected, implying that every node in layer "i" connects to each node in the subsequent layer "j" through a weight value denoted as \(w_{ij}\). The learning process is facilitated by adjusting the values of these weights as the data is processed, guided by the error between the MLP output and the target value. Further, to avoid overfitting, we introduce a dropout Srivastava et al. (2014); Cai et al. (2019); Srinivas and Babu (2016) of 0.25 in the hidden layer. This ensures that, during training, at any point in time, a neuron will be inactive with a probability of 0.25. This prevents the network from relying too heavily on specific neurons and encourages more robust, generalized learning.
Figure 5: Comparison between Known Redshift Samples and Predictions on Unknown Redshift samples using Histograms. Subplots show the distribution of the redshift values for the predictions made on the unknown redshift samples, disaggregated by the “CLASS” feature. Here, we represent only those classes with more than 50 samples.
Figure 6: In order to evaluate the uncertainty for variational inference using the flipout estimator (left column) and the reparameterization estimator (right column), redshift samples were evaluated 1000 times, and the resulting distribution for some of the known values is shown here. The distribution was fitted with a Gaussian probability density function (PDF), and the values corresponding to those within \(1\sigma\), between \(1\sigma\) and \(2\sigma\), and between \(2\sigma\) and \(3\sigma\) from the mean were color-coded as cyan, green, and red, respectively.
Next, to ensure non-linearity within the model we apply ReLU - a widely used activation function Fukushima (1975). In the output layer, we utilize the softplus activation function Dubey et al. (2022), which is just a smooth continuous version of ReLU. Placing an activation function at the end of each layer ensures that the layer's output undergoes a non-linear transformation before being passed to the next layer. This is crucial in enabling the algorithm to learn and capture non-linear dependencies between the input and the output. For the loss function, we employ the "Mean Absolute Error" (MAE) Hudson (2022). This baseline model treats its parameters as point estimates and hence we refer to it as the "frequentist" model. Moreover, to account for uncertainty, we employ the method of variational inference to modify our frequentist model using two different estimators, as discussed below.
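Before turning to those estimators, a minimal sketch of the baseline ("frequentist") architecture just described is given below, assuming TensorFlow/Keras 2.x; the 24-dimensional input follows the feature set of Section 2.1, and the Adam optimizer with a learning rate of \(10^{-3}\) follows Section 2.3.

```python
import tensorflow as tf

def build_frequentist_model(n_features=24):
    """Single hidden layer of 64 ReLU units, dropout of 0.25, softplus output."""
    inputs = tf.keras.Input(shape=(n_features,))
    x = tf.keras.layers.Dense(64, activation="relu")(inputs)
    x = tf.keras.layers.Dropout(0.25)(x)
    outputs = tf.keras.layers.Dense(1, activation="softplus")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="mae",  # Mean Absolute Error
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

model = build_frequentist_model()
model.summary()
```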
A summary of the architectures for the three models is listed in Table 2. Variational inference is a technique that aims to approximate the true but often intractable posterior distribution of the model's parameters (weights and biases) given the observed data Shridhar et al. (2019); Jospin et al. (2022). Instead of directly calculating the posterior, which is either challenging or impossible in complex models, variational inference introduces an approximating distribution (usually a known and tractable distribution). To achieve this, a prior distribution is assigned to the model's parameters, representing our initial beliefs about their values. As data is observed, the prior is updated using Bayes' rule to obtain the posterior distribution. However, directly calculating the posterior is intractable for many models, especially neural networks. Thus, an optimization problem is formulated: we seek the closest approximating posterior distribution (in terms of the Kullback-Leibler (KL) divergence) that can be efficiently computed Bishop (2006). Both the prior and the approximating distributions are chosen as the Normal distribution, due to its desirable properties, like being a conjugate prior to itself. Unlike traditional neural networks that rely on point estimates, variational inference provides a more meaningful measure of uncertainty and captures the complexity of the posterior distribution through this probabilistic approach. We use TensorFlow Probability Dillon et al. (2017) and Keras Chollet et al. (2015) to implement the proposed models. There are multiple methods to implement variational inference using TensorFlow Probability; however, we proceed with the DenseFlipout and DenseReparameterization layers. In both of these methods, the layers implement the Bayesian variational inference counterpart to a Dense layer by drawing the parameter values from distributions. An important difference between both these layers is that the flipout estimator uses roughly twice as many floating point operations as the reparameterization estimator. (Refer to Wen et al. (2018) and Kingma & Welling (2013) for more information on both of these layers). To quantify uncertainty, each sample is evaluated 1000 times and the uncertainty is captured using the variance of the predictions. The resulting mean from the 1000 iterations is considered as the prediction of the Bayesian model. Due to the Bayesian nature of the variational inference algorithms, the output prediction at every iteration is an independent and identically distributed Gaussian sample. Having set the output predictions to be normally distributed for a fixed data point, we then calculate the mean and standard deviation of the predictions for each sample. As evident from the theory of Gaussian distributions, we then make use of the standard 3 sigma rule to come up with a possible range of redshifts containing the true value of the redshift with an associated confidence level. Although this rule comments on the confidence levels being 68.2, 95.4, and 99.7 percent for 1, 2 and 3 standard deviations from the mean, respectively, it is easy to generalize it for any range of values depending upon the allowed tolerance. ### Training and Validation Considering the computational requirement to train the algorithm, we make use of Google Colaboratory, a cloud-based Jupyter environment, for model training. An important aspect of any Machine or Deep Learning algorithm is its reproducibility.
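A minimal sketch of the two variational variants and of the repeated-evaluation procedure described above is given below. It assumes TensorFlow Probability with the default variational priors and posteriors of its layers, and again a 24-dimensional input; it is an illustration of the approach rather than the exact training code.

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

def build_bayesian_model(n_features=24, estimator="flipout"):
    """Replace the output layer of the baseline MLP with a variational layer (Table 2)."""
    var_dense = (tfp.layers.DenseFlipout if estimator == "flipout"
                 else tfp.layers.DenseReparameterization)
    inputs = tf.keras.Input(shape=(n_features,))
    x = tf.keras.layers.Dense(64, activation="relu")(inputs)
    x = tf.keras.layers.Dropout(0.25)(x)
    outputs = var_dense(1, activation="softplus")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mae")
    return model

def predict_with_uncertainty(model, x, n_samples=1000):
    """Evaluate every input n_samples times; the variational layer re-samples
    its weights on each forward pass, so the spread reflects model uncertainty."""
    draws = np.stack([model(x, training=False).numpy().ravel()
                      for _ in range(n_samples)], axis=0)
    mean, std = draws.mean(axis=0), draws.std(axis=0)
    # 3-sigma (~99.7%) interval around the mean prediction.
    return mean, std, (mean - 3.0 * std, mean + 3.0 * std)

# Usage with dummy inputs standing in for the scaled 24-feature test matrix.
bayes_model = build_bayesian_model(estimator="flipout")
x_dummy = np.random.rand(4, 24).astype("float32")
mean, std, (lo, hi) = predict_with_uncertainty(bayes_model, x_dummy, n_samples=100)
```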
To ensure this, we train our algorithms on a fixed random seed over a maximum of 2500 epochs, and include the data splits pertaining to the training, validation and testing sets in our GitHub repository. To reduce the computational overhead and avoid overfitting, we introduce early stopping Caruana et al. (2001) with a validation patience of 100, and as a result, the proposed variational inference models stop after 1170th and 390th epoch respectively as shown in Figure 1 and 2 respectively. As evident from Figure 2, during the initial 100 epochs, the rate of decrease in "loss" and "RMSE" for both the training and validation data points is high. However, at later stages, it tends to saturate. This indicates that there's a very high probability of having no further decrease in the loss. Having said this, the use of early stopping ensures that the algorithm stops its training once the rate of decrease in the validation loss tends to zero. This helps in avoiding unnecessary computations. Also, in Figure 1, we observe that at later stages there's a decrease in training loss, on the other hand, the validation loss tends to saturate and even increases in further epochs. This behavior results in overfitting of the algorithm, if not stopped at the correct time, and the introduction of early stopping ensures the same. To optimize the algorithm, we make use of "Adam" Kingma & Ba (2014) which is one of the widely used optimizers in the Deep Learning community with a learning rate of \(10^{-3}\). One of the primary reasons for its popularity is that it incorporates momentum (for which we use the default values defined in TensorFlow) and is a variant of the AdaGrad optimizer, which facilitates quicker convergence. ## 3 Results and Discussion Blazars emitting \(\gamma\)-rays with known redshifts significantly contribute to our understanding of several fundamental aspects of cosmic phenomena. Determining their redshifts aids in constraining the nature of the Extragalactic Background Light (e.g., Acciari et al., 2019; Dwek & Krennrich, 2013; Ackermann et al., 2012). Additionally, these blazars shed light on the structures of intergalactic magnetic fields (e.g., Aharonian et al., 2023; Finke et al., 2015; Tavecchio et al., 2010) and the universe's star formation history (Fermi-LAT Collaboration et al., 2018; Rojas-Bravo & Araya, 2016; Ackermann et al., 2012). Also, by computing the luminosity function, we can estimate the evolution of blazars over cosmic time (Chiang et al., 1995; Ajello et al., 2012). This, in turn, can lead to the constraining of fundamental cosmological parameters (Dominguez et al., 2019; Zeng & Yan, 2019). The study contributes by providing an algorithm that rigorously estimates the possible range of redshifts with an associated confidence. To assess the effectiveness of our model, we conducted evaluations using entirely new and unseen data, referred to as the test data. Since our study focuses on a regression problem, we utilized the "Root Mean Squared Error" (RMSE) as one of our evaluation metrics. The Root Mean Squared Error (RMSE) calculates the square root of the average of the squared differences between predicted values and actual values in the test data. A lower RMSE generally indicates better model performance, as it signifies smaller prediction errors. However, the RMSE value is influenced by the number of samples. Therefore, we also utilized the "correlation coefficient" to evaluate our model. 
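Both evaluation metrics are straightforward to compute from the test-set predictions; a short sketch follows, assuming numpy and scipy and using placeholder values in place of the actual test data.

```python
import numpy as np
from scipy.stats import pearsonr

def evaluate(y_true, y_pred):
    """Return the RMSE and the Pearson correlation coefficient r."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    r, _ = pearsonr(y_true, y_pred)
    return rmse, r

# Placeholder redshift values purely for illustration.
rmse, r = evaluate([0.19, 0.30, 0.45, 1.01], [0.21, 0.36, 0.41, 0.90])
print(f"RMSE = {rmse:.3f}, r = {r:.3f}")
```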
The correlation coefficient measures the strength and direction of the linear relationship between two variables. A higher correlation coefficient indicates a better alignment between the predicted and actual values, demonstrating the model's ability to capture the underlying patterns in the data. Table 3 clearly shows that our proposed algorithm yields improved results when compared to existing studies, with a maximal increase in the correlation coefficient of around 0.07. Additionally, Table 4 presents a comparison between the actual redshift values and the predicted range of redshifts at fixed confidence levels for randomly selected data points from the test dataset. The table clearly demonstrates that in the majority of cases, the true redshift value falls within the interval associated with a confidence level of 95.4%. Although Table 4 focuses on specific confidence levels, the range can easily be calculated for other confidence levels based on a real multiple of the standard deviation. Figure 3 presents a scatter plot that showcases the relationship between predicted and true redshifts obtained from various models. While it is evident that the predicted redshifts tend to be slightly lower than the actual values in many instances (a trend also observed in Coronado-Blazquez (2023), albeit with more scattered points), the incorporation of uncertainty and confidence levels addresses this issue. By utilizing a 3-sigma interval around the mean with a confidence level of 99.7%, the majority of true values fall within this range - an analysis reveals that, among all samples with a known redshift, the true value falls within the 99.7% confidence interval for 63% of the samples using each method of variational inference. This enables astronomers to make informed decisions regarding the reliability of the algorithm's predictions, considering the desired confidence level and interval width at any given point. Figure 4 provides similar insights. Additionally, the figure highlights the algorithm's limitation in regressing lower redshifts. However, due to the associated uncertainty and the range of predictions provided by Variational Inference, the lower redshifts are accounted for within the predicted range. This aspect of our proposed algorithm ensures that the true value is captured with a sufficiently high probability, depending on the allowed confidence level. As illustrated in Figure 5, the predictions made on the samples with an unknown redshift by the frequentist model, the Flipout estimator model, and the reparameterization estimator model follow distributions similar to those of the predictions made on the known-redshift samples, with mean values of 0.455, 0.415, and 0.393, standard deviations of 0.258, 0.246, and 0.207, and redshift values ranging from 0.07-1.77, 0.0251-1.71, and 0.0089-1.47, respectively (Table 5). Figure 6 displays histograms corresponding to the data presented in Table 4. As evident from the figures, the predicted set of values for every redshift follows a Gaussian distribution, which confirms the agreement of the implemented algorithm with the theory and hence allows us to efficiently estimate the uncertainty associated with the range of predictions. Also, as seen in Table 1 and Figure 5, the sources with predicted redshifts are mostly BL Lacs and BCUs. These results are plausible because BL Lacs are strong gamma-ray emitters with weak or no emission lines, which makes estimating their redshifts very difficult.
Similarly, the BCUs are unclassified sources whose classification is challenging, as the optical spectra or MWL observations required for a robust classification are not available. However, several studies based on machine learning predict that the majority of these sources are likely to be BL Lacs (see e.g., Agarwal 2023; Kang et al. 2019). ## 4 Conclusion This study introduces a straightforward yet highly effective algorithm for redshift estimation using solely Gamma-Ray observations. The proposed algorithm shows improvements over existing methods, achieving significantly low RMSE values of 0.415, 0.406, and 0.438 in its frequentist, variational inference (Flipout), and variational inference (reparameterization) variants, respectively. To further validate our results, we also employ the correlation coefficient as a complementary metric. Remarkably, we observe a substantial improvement in the correlation coefficient, with values increasing from 0.74 to 0.784, 0.777, and 0.778 for the respective algorithms, thus demonstrating the advantage of our proposed method. In addition to robust redshift regression, our algorithm addresses the associated uncertainty by providing an estimated range of potential redshift values based on the desired confidence level. Notably, for the highest confidence level (99.7%), the predictions of our algorithm encompass the true redshifts for the majority of the samples. This uncertainty quantification feature adds significant value to the algorithm's predictions and helps users make informed decisions based on their desired confidence level. Furthermore, we extend the application of our algorithm to predict unknown redshifts in the 4LAC-DR3 catalog, utilizing variational inference. This allows us to provide corresponding uncertainties alongside the predicted redshifts, enhancing the reliability and applicability of our algorithm in real-world scenarios. ## Acknowledgement We thank the anonymous referee for a careful and thorough review of this paper, which helped us improve the quality of the work. ## Data Availability The data utilized in this paper can be accessed by the public through the Fermi Science Support Center (FSSC) of NASA's Goddard Space Flight Center. Furthermore, we have made both the code and the resulting data openly available on our GitHub repository ([https://github.com/abhimanyu911/redshift-regression-with-uncertainty.git](https://github.com/abhimanyu911/redshift-regression-with-uncertainty.git)) for public access.
2303.17965
Measurement-device-independent continuous variable quantum key distribution protocol operation in optical transport networks
A theoretical analysis of the noise impact caused by spontaneous Raman scattering, four-wave mixing, and linear channel crosstalk on measurement-device-independent continuous variable quantum key distribution systems is conducted numerically. The analysis considers symmetry and asymmetry of the system paths, as well as possible channel allocation schemes, for a quantum channel located in the C- and O-bands. Mathematical models for the MDI CV-QKD system and a description of the contributing noises are provided. The secure key generation rate is estimated to establish features of protocol operation when integrated with existing DWDM systems, in the context of its implementation into telecommunication networks.
Irina Vorontsova, Roman Goncharov, Sergey Kynev, Fedor Kiselev, Vladimir Egorov
2023-03-31T11:12:09Z
http://arxiv.org/abs/2303.17965v1
Measurement-device-independent continuous variable quantum key distribution protocol operation in optical transport networks ###### Abstract A theoretical analysis of the noise impact caused by spontaneous Raman scattering, four-wave mixing, and linear channel crosstalk on measurement-device-independent continuous variable quantum key distribution systems is conducted numerically. The analysis considers symmetry and asymmetry of the system paths, as well as possible channel allocation schemes, for a quantum channel located in the C- and O-bands. Mathematical models for the MDI CV-QKD system and a description of the contributing noises are provided. The secure key generation rate is estimated to establish features of protocol operation when integrated with existing DWDM systems, in the context of its implementation into telecommunication networks. device-independence, quantum key distribution, continuous variables. \({}^{1}\) ITMO University, Kronverkskiy, 49, St. Petersburg, 197101, Russia \({}^{2}\) SMARTS-Quanttelecom LLC, 6th Vasilyevskogo Ostrova Line, 59, St. Petersburg, 199178, Russia [email protected] **UDC 530.145:535.12:681.7:53.082.5** **PACS 03.67.-a, 42.50.-p** ## 1 Introduction Quantum key distribution (QKD) is one of the most rapidly advancing fields of quantum technologies [1, 2]. Its main idea is the opportunity to distribute a cryptographically secure key between two or more authenticated users connected to each other through quantum and information channels. Guaranteed by the principles of quantum mechanics [3], the security of QKD against attacks from an eavesdropper ensures the safety of the transmitted data from all kinds of hacking and known attacks. One option to classify QKD protocols is based on them being discrete-variable (DV) or continuous-variable (CV) [4]. Additionally, among many other QKD protocol classifications, there is the one distinguishing between protocols in terms of their device-dependence or (semi-)device-independence [5]. The intersection of these two criteria gives rise to a new class of protocols, namely the measurement-device-independent (MDI) CV-QKD protocol. Device-independence is of particular practical importance, for it eliminates many side-channel attacks, though it requires a careful theoretical analysis. Not only does this work discuss the latter, but it also, for the first time, combines this analysis with the task of simultaneous propagation of information and quantum signals in a single optical fiber [6]. The effects considered as channel noise sources are spontaneous Raman scattering, four-wave mixing, and linear channel crosstalk. A possible realization scheme is discussed, as well as the allocation of classical channels on the standard DWDM grid. The security analysis is carried out numerically, employing the known theoretical security bounds to estimate the performance of the addressed QKD system. The results are of practical importance and should be considered when integrating QKD with existing telecommunication networks. ## 2 Measurement-device-independent QKD Let us discuss the main principles underlying MDI QKD protocol operation [5, 7]. The essence of the approach lies in the fact that no assumptions are made about the detectors involved in the protocol, such that they can even be controlled by an eavesdropper (Eve). In a typical single-photon MDI QKD protocol, two legitimate users (Alice and Bob) send quantum signals to an untrusted central relay, often referred to as Charlie.
A Bell state measurement is then performed; both signals interfere at a 50:50 beam splitter (BS). Next, the output signals go through a polarizing beam splitter (PBS) to be projected onto either the horizontal (H) or vertical (V) polarization state. The measurement is pronounced successful if two of the four involved detectors click. ### Continuous-variable MDI QKD protocol Similarly to the conventional MDI QKD, the continuous variable (CV) version of the protocol [8, 9] again implies two senders and an untrusted relay performing the measurements, which are then used during the legitimate users' post-processing to generate the secure key. The two known approaches to a general protocol description, namely the "prepare-and-measure" (PM) and "entanglement-based" (EB) scenarios, apply to the case of MDI CV-QKD as well. Since these scenarios are equivalent in terms of their mathematical description and effectiveness, we will consider the more practically convenient PM version of the protocol. Gaussian modulation [4, 10] (GG02 protocol) is considered, so Alice and Bob operate with coherent states with a two-dimensional Gaussian distribution. They first generate coherent states \(|x_{\rm A}+ip_{\rm A}\rangle\) and \(|x_{\rm B}+ip_{\rm B}\rangle\) with the quadratures \(x\) and \(p\) featuring variance \(V_{\rm A(B)}-1\) (in shot noise units (SNU)) and then send their states to Charlie via quantum channels. Next, Alice's and Bob's modes interfere at the beam splitter, while Charlie measures the C and D modes' quadratures on homodyne detectors and announces the resulting state \(\{X_{\rm C},\,P_{\rm D}\}\) publicly. Only Bob changes his state, according to \(X_{\rm B}=x_{\rm B}+kX_{C},P_{\rm B}=p_{\rm B}-kP_{\rm D}\) (with \(k\) standing for the gain associated with channel losses), whereas Alice's state remains unchanged. Finally, standard procedures are utilized for parameter estimation, information reconciliation, and privacy amplification. Since the security proofs of the CV-QKD EB and PM scenarios against collective attacks are equivalent [11, 12], we shall now switch to the well-known covariance matrix formalism. Assuming that Eve has access to the relay, the quantum channels, and even Bob's state displacement operation in the EB scheme, the further security analysis of the MDI CV-QKD protocol can be seen as a special case of a typical one-way CV-QKD protocol [4, 10]. Then the secure key fraction can be estimated in accordance with the Devetak-Winter bound [13, 14]: \[r=\beta I(X_{\rm A},\,P_{\rm A}:X_{\rm B},\,P_{\rm B})-\chi(X_{\rm B},\,P_{\rm B}:E), \tag{1}\] where \(0\leq\beta\leq 1\) is the reconciliation efficiency (assumed to be ideal in the further numerical simulations), \(I\) is the mutual information between Alice and Bob, \(\chi(X_{\rm B},P_{\rm B}:E)=S(\widehat{\rho}_{E})-S(\widehat{\rho}_{E}|X_{\rm B},P_{\rm B})\) is the Holevo bound, and \(S(\widehat{\rho}_{E})\) denotes the von Neumann entropy of the quantum state \(\widehat{\rho}_{E}\). The upper bound \(\chi(X_{\rm B},\,P_{\rm B}:{\rm A}_{1},{\rm B}^{\prime}_{1})\) is determined using only the corresponding covariance matrix.
Supposing the system is under two independent entangling cloner attacks [4], the covariance matrix takes the form \[\Xi=\left(\begin{array}{cc}V_{\rm A}I_{2}&\sqrt{T(V_{\rm A}^{2}-1)}\,\sigma_{z}\\ \sqrt{T(V_{\rm A}^{2}-1)}\,\sigma_{z}&\left[T(V_{\rm A}-1)+1+T\xi^{\prime}\right]I_{2}\end{array}\right), \tag{2}\] with \[T=\frac{\eta_{\rm A}}{2}g^{2}, \tag{3}\] \[\xi^{\prime}=1+\frac{1}{\eta_{\rm A}}\left[\eta_{\rm B}(\Xi_{\rm B}-1)+\eta_{\rm A}\Xi_{\rm A}\right]+\frac{1}{\eta_{\rm A}}\left(\frac{\sqrt{2}}{g}\sqrt{V_{\rm B}-1}-\sqrt{\eta_{\rm B}}\sqrt{V_{\rm B}+1}\right)^{2}, \tag{4}\] \[\Xi_{\rm A}=\frac{1-\eta_{\rm A}}{\eta_{\rm A}}+\xi_{\rm A},\quad\Xi_{\rm B}=\frac{1-\eta_{\rm B}}{\eta_{\rm B}}+\xi_{\rm B}, \tag{5}\] \[\eta_{\rm A}=10^{-\alpha L_{\rm AC}/10},\quad\eta_{\rm B}=10^{-\alpha L_{\rm BC}/10}, \tag{6}\] where \(\eta_{\rm A}\) (\(\eta_{\rm B}\)) is the channel (Alice-Charlie or Bob-Charlie) transmittance, \(\xi_{\rm A}\) (\(\xi_{\rm B}\)) is the excess noise, \(g\) is the offset factor, \(I_{2}\) is the identity matrix, and \(\sigma_{z}\) is the Pauli \(z\)-matrix. To minimize the excess noise, the offset factor is set as \[g=\sqrt{\frac{2}{\eta_{\rm B}}}\sqrt{\frac{V_{\rm B}-1}{V_{\rm B}+1}}. \tag{7}\] Then, the excess noise is expressed as \[\xi^{\prime}=\xi_{\rm A}+\frac{1}{\eta_{\rm A}}[\eta_{\rm B}(\xi_{\rm B}-2)+2], \tag{8}\] where the excess noise on Alice's (Bob's) side \(\xi_{\rm A(B)}\) contains the corresponding channel noise converted to SNU. Alice's and Bob's variances are considered equal, \(V_{\rm A}=V_{\rm B}=40\), in the simulations. ## 3 Channel Noise Sources and their Mathematical Description Naturally, losses are inevitable when it comes to signal propagation in any medium, be it fiber-optical communication lines or free space. Regarding QKD, three effects are primarily addressed in terms of noise: the spontaneous Raman scattering (SpRS), the four-wave mixing (FWM) nonlinearity, and the linear channel crosstalk (LCXT). Below, we briefly summarize their physical nature and the corresponding mathematical description in order to estimate the negative contribution they make to the performance of the MDI CV-QKD system under consideration. ### Spontaneous Raman Scattering Firstly, the main contributor to the overall channel noise in the case of QKD integrated with DWDM systems is the SpRS noise [15, 16]. Its impact is minor for classical networks, though the contribution becomes substantial for joint QKD and DWDM systems [15, 16]. The origin of the SpRS is different for the cases of co- and counter-propagation of signals. Thus, two sub-types are usually distinguished: forward (for co-propagating signals) and backward (for counter-propagating signals) SpRS noise.
In the context of a simultaneous QKD session and information transmission within a single optical fiber, the mathematical representation of their contribution is given by [6, 17]: \[P_{\rm ram,f}=P_{\rm out}L\sum_{c=1}^{N_{\rm ch}}\rho(\lambda_{\rm c},\lambda_{\rm q})\Delta\lambda, \tag{9}\] and \[P_{\rm ram,b}=P_{\rm out}\frac{\sinh(\xi L)}{\xi}\sum_{c=1}^{N_{\rm ch}}\rho(\lambda_{\rm c},\lambda_{\rm q})\Delta\lambda, \tag{10}\] where \(P_{\rm out}\) denotes the output power for a single channel, \(L\) is the length of the optical fiber, \(N_{\rm ch}\) is the number of classical channels present in a DWDM system, \(\rho(\lambda_{\rm c},\lambda_{\rm q})\) describes the normalized scattering cross-section for the wavelengths of the classical (\(\lambda_{\rm c}\)) and quantum (\(\lambda_{\rm q}\)) channels, and \(\Delta\lambda\) is the bandwidth of the quantum channel filtering system. For the MDI CV-QKD realization considered here, both forward and backward SpRS occur, in different system paths. A detailed description of the configuration will be provided in the following section. The output power values appear in the formulas instead of the input ones, so as to meet the bit error rate (BER) requirements of a DWDM system directly. Thus, the value of \(P_{\rm out}\) can be obtained as follows: \[P_{\rm out}\ ({\rm dBm})=R_{\rm x}\ ({\rm dBm})+IL({\rm dBm}), \tag{11}\] where \(R_{\rm x}\) is the sensitivity of a receiver and \(IL\) denotes the insertion losses of the system. ### Four-wave Mixing The next channel noise source to consider is FWM. It is a third-order nonlinear process whose consequence is the creation of photons at new frequencies as a result of the interaction between the initial ones [18]. These new frequencies might coincide with that of the quantum channel [19], thus contributing to the overall noise in the band of the quantum channel. To formulate the mathematical model for the FWM noise contribution, let us consider three pump channels with the frequencies \(f_{i}\), \(f_{j}\), and \(f_{k}\). Then, the value of the resulting FWM noise peak power \(P_{ijk}\) featuring the frequency \(f_{i}+f_{j}-f_{k}\) can be expressed as [6]: \[P_{ijk}=\eta\gamma^{2}D^{2}p^{2}e^{-\xi L}\frac{(1-e^{-\xi L})^{2}}{9\xi^{2}}P_{s}P_{l}P_{h}, \tag{12}\] where the phase-matching efficiency for the FWM \(\eta\) and the parameter \(\Delta\beta\) are defined as \[\eta=\frac{\xi^{2}}{\xi^{2}+\Delta\beta^{2}}\left[1+\frac{4e^{-\xi L}\sin^{2}\left(\Delta\beta L/2\right)}{(1-e^{-\xi L})^{2}}\right], \tag{13}\] and \[\Delta\beta=\frac{2\pi\lambda^{2}}{c}|f_{i}-f_{k}||f_{j}-f_{k}|\cdot\left[D_{c}+\frac{dD_{c}}{d\lambda}\left(\frac{\lambda^{2}}{c}\right)(|f_{i}-f_{k}|+|f_{j}-f_{k}|)\right], \tag{14}\] correspondingly. In the above equations, \(L\) is the transmission distance of the interacting light fields in the optical fiber, \(D\) denotes the degeneracy factor (\(D=6\) for non-degenerate and \(D=3\) for degenerate FWM), \(P_{i(j,k)}\) and \(f_{i(j,k)}\) are the input power and optical frequency of the interacting fields, respectively, \(\gamma\) stands for the third-order nonlinear coefficient, \(\xi\) is the loss coefficient, and \(D_{c}\) and \(dD_{c}/d\lambda\) are the dispersion coefficient of an optical fiber and its slope, respectively, with \(\lambda\) standing for the wavelength of the FWM radiation. Finally, performing the summation of all the powers of the resulting FWM signals with frequencies coinciding with the frequency of the quantum channel, one obtains \[P_{\rm FWM}=\sum P_{ijk},\quad f_{i}+f_{j}-f_{k}=f_{q}. \tag{15}\]
### Linear Channel Crosstalk It is due to the imperfections of the demultiplexers that any practically implemented DWDM system suffers LCXT losses [20]. Since information signals are orders of magnitude more powerful than quantum ones, the insufficient isolation might cause considerable LCXT noise, which can be estimated in the following way: \[P_{\rm LCXT}=P_{\rm out}\ ({\rm dBm})-{\rm ISOL}\ ({\rm dB}). \tag{16}\] Once the power values of the effects contributing to the overall channel noise are calculated, they need to be converted into a photon detection probability. To do so, the following formula can be used: \[p_{\rm ram,f(b)/FWM/LCXT}=\frac{P_{\rm ram,f(b)/FWM/LCXT}}{hc/\lambda_{q}}\Delta t\eta_{D}\eta_{B}, \tag{17}\] where \(\eta_{D}\) denotes the detector efficiency, \(\eta_{B}=10^{-0.1IL}\) is the transmission coefficient associated with the insertion losses of the detection system, \(h\) is the Planck constant, and \(c\) is the speed of light. ## 4 MDI CV-QKD scheme and channel allocation Here, a possible realization scheme of the MDI CV-QKD protocol will be addressed to analyze its potential for creating telecommunication optical transport networks integrated with DWDM systems. The maximal achievable distance, employed to characterize such systems, denotes the fiber length up to which the secure key generation rate remains non-zero. The realization of MDI CV-QKD addressed in this work is shown in Figure 1. Here, quantum signals are sent to the untrusted central relay to be homodyned there, whereas the information is transferred from Alice to Bob directly. This means that the quantum and information signals co-propagate in Alice's path (i.e., there is forward SpRS noise in this path) and counter-propagate in Bob's path, so the SpRS noise there is of the backward type. The performance of the MDI CV-QKD realization was then numerically analyzed in terms of the possible channel allocation schemes and the asymmetry coefficient between Alice's \(L_{a}\) and Bob's \(L_{b}\) paths, \(R_{\rm asym}=L_{a}/L_{b}\). Similarly to the work [21], four allocation schemes for the quantum channel located in the C-band and O-band of the communication window were considered. The criterion and a complete explanation of this choice are provided in detail in the works [22, 23]. The final choice of the configurations considered in the further numerical simulations is given in Table 1. The parameters of the DWDM system are the following: \(\xi=0.18\) dB/km, \(\Delta\lambda=15\) GHz, \(N_{ch}=10\) or \(40\), \(R_{x}=-32\) dBm, and IL = 8 dB. As for the asymmetry coefficient \(R_{\rm asym}\), three different relations are addressed here: a symmetric (i.e., \(L_{a}/L_{b}=1\)) and two asymmetric realizations (\(L_{a}/L_{b}=3/2\) and \(L_{a}/L_{b}=2/1\)). ## 5 Results and discussion Using the mathematical models for the MDI CV-QKD secure key generation rate and the SpRS, FWM, and LCXT noises, the realization of the system depicted in Figure 1 was numerically simulated. The results are presented in Figure 2. It is a known fact that for MDI CV-QKD systems the secure key generation rate decreases dramatically as the system approaches its symmetric configuration (\(L_{a}=L_{b}\)), with the best result corresponding to the situation when one of the paths equals zero [24]. The results obtained confirm this conclusion, as the maximal achievable distance increases together with the asymmetry of the system's paths.
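As an illustration of the quantities entering these simulations, the following minimal sketch evaluates the equivalent one-way channel parameters of Eqs. (3)-(8) and the SpRS/LCXT noise bookkeeping of Eqs. (9)-(11), (16), and (17) for one asymmetric configuration. The Raman cross-section, demultiplexer isolation, detector efficiency, detection window, and excess-noise values are illustrative placeholders, since they are not quoted explicitly above.

```python
import numpy as np

# DWDM / detection parameters quoted in Sec. 4, plus placeholders (flagged below)
alpha = 0.18                 # fiber attenuation, dB/km
V_A = V_B = 40.0             # modulation variances, SNU
R_x_dbm, IL = -32.0, 8.0     # receiver sensitivity (dBm) and insertion losses (dB)
P_out = 10 ** ((R_x_dbm + IL) / 10) * 1e-3   # Eq. (11), output power in W
N_ch = 10                    # number of classical channels
dlam = 0.12                  # filter bandwidth in nm (~15 GHz near 1550 nm)
rho = 3e-9                   # Raman cross-section, (km*nm)^-1  -- placeholder
ISOL = 30.0                  # demultiplexer isolation, dB      -- placeholder
eta_D, dt = 0.6, 1e-9        # detector efficiency, detection window -- placeholders
h, c = 6.62607015e-34, 2.998e8
xi_lin = alpha * np.log(10) / 10             # attenuation in 1/km for Eq. (10)

def channel_params(L_AC, L_BC, xi_A=0.01, xi_B=0.01):
    """Equivalent one-way channel parameters, Eqs. (3)-(8); excess noises are placeholders."""
    eta_A = 10 ** (-alpha * L_AC / 10)       # Eq. (6)
    eta_B = 10 ** (-alpha * L_BC / 10)
    g = np.sqrt(2 / eta_B) * np.sqrt((V_B - 1) / (V_B + 1))   # Eq. (7)
    T = eta_A * g**2 / 2                     # Eq. (3)
    xi_prime = xi_A + (eta_B * (xi_B - 2) + 2) / eta_A        # Eq. (8)
    return T, xi_prime

def noise_photon_probs(L, lam_q=1536.61e-9):
    """SpRS and LCXT powers (Eqs. 9, 10, 16) converted to photon probabilities (Eq. 17)."""
    # rho is treated as channel-independent, so the channel sum reduces to N_ch * rho
    P_f = P_out * L * N_ch * rho * dlam
    P_b = P_out * np.sinh(xi_lin * L) / xi_lin * N_ch * rho * dlam
    P_x = 10 ** ((R_x_dbm + IL - ISOL) / 10) * 1e-3
    eta_B_det = 10 ** (-0.1 * IL)
    to_prob = lambda P: P / (h * c / lam_q) * dt * eta_D * eta_B_det
    return {name: to_prob(P) for name, P in
            {"SpRS_fwd": P_f, "SpRS_bwd": P_b, "LCXT": P_x}.items()}

T, xi_p = channel_params(L_AC=4.0, L_BC=2.0)     # a 2:1 asymmetric example
print(f"T = {T:.4f}, xi' = {xi_p:.4f} SNU")
print(noise_photon_probs(L=4.0))
```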
Interestingly, for the case of MDI CV-QKD, configurations for which the quantum channel is located in the C-band appeared to be more efficient in terms of their maximal achievable distances. In most other cases (including MDI QKD, see [22]), configurations with an O-band-located quantum channel are more beneficial, as FWM noise does not contribute to the overall losses. Although the overall contribution of the channel noises is smaller for such configurations here as well, the fiber attenuation at the quantum channel wavelength of 1310 nm outweighs this advantage significantly. It can also be seen that the maximum achievable distances do not exceed 6 km; thus, secure key distribution over long distances is not possible here. Still, such realizations can be utilized for short-distance communication. \begin{table} \begin{tabular}{|c|c|c|} \hline Configuration & Number of & Quantum channel \\ & channels & wavelength, nm \\ \hline Configuration 1 & 10 & 1536.61 \\ \hline Configuration 2 & 10 & 1310 \\ \hline Configuration 3 & 40 & 1537.40 \\ \hline Configuration 4 & 40 & 1310 \\ \hline \end{tabular} \end{table} Table 1: Description of the optimal configurations chosen for numerical simulations Figure 1: Schematic illustration of the MDI CV-QKD protocol realization with DWDM: information is transmitted from Alice to Bob via DWDM information channels; the quantum channel co-propagates with the information channels in Alice’s path and counter-propagates with them in Bob’s path, which means that forward SpRS noise is induced in the Alice-Charlie path and backward SpRS noise in the Bob-Charlie path. Regarding the number of information channels (10 and 40 in this work), a larger number of information channels leads to a decrease in the maximal achievable distances. This is a natural observation, as the more information channels there are in the system, the greater the value of the overall channel losses. The decrease is substantial for the C-band configurations, whereas it is quite small in the case of the O-band. Figure 2: The dependence of the secure key generation rate on the optical fiber length for MDI CV-QKD corresponding to the scheme in Figure 1. ## 6 Conclusion In this work, the MDI CV-QKD protocol was addressed. The noise influence caused by the SpRS, FWM, and LCXT effects on the performance of the MDI CV-QKD system was analyzed theoretically and simulated numerically for a proposed practical realization scheme, in terms of channel allocation and the paths' asymmetry coefficient. Increasing the number of information channels naturally leads to a decrease in the maximal achievable distances. In addition, the allocation of a wavelength of 1310 nm for the quantum channel results in a shortening of the maximal distance values for MDI CV-QKD, despite the fact that the overall channel noise is smaller for such configurations. The dominant contribution comes from the fiber attenuation, which is larger at the wavelength of 1310 nm. It was confirmed that the more asymmetric the paths of the MDI CV-QKD scheme are, the more efficient the system is. Moreover, MDI CV-QKD realizations feature significantly shorter maximal achievable distances, which do not exceed several kilometers, and can therefore be utilized for short-distance information transmission only. The results obtained can be used for the practical implementation of MDI CV-QKD systems so as to obtain optimal results. ## Acknowledgements The work was done by the Leading Research Center "National Center for Quantum Internet" of ITMO University by the order of JSCo Russian Railways.
2309.04942
Inertial self-propelled particles in anisotropic environments
Self-propelled particles in anisotropic environments can exhibit a motility that depends on their orientation. This dependence is relevant for a plethora of living organisms but difficult to study in controlled environments. Here, we present a macroscopic system of self-propelled vibrated granular particles on a striated substrate that displays orientation-dependent motility. An extension of the active Brownian motion model involving orientation-dependent motility and inertial effects reproduces and explains our experimental observations. The model can be applied to general $n$-fold symmetric anisotropy and can be helpful for predictive optimization of the dynamics of active matter in complex environments.
Alexander R. Sprenger, Christian Scholz, Anton Ldov, Raphael Wittkowski, Hartmut Löwen
2023-09-10T06:14:38Z
http://arxiv.org/abs/2309.04942v1
# Inertial self-propelled particles in anisotropic environments ###### Abstract Self-propelled particles in anisotropic environments can exhibit a motility that depends on their orientation. This dependence is relevant for a plethora of living organisms but difficult to study in controlled environments. Here, we present a macroscopic system of self-propelled vibrated granular particles on a striated substrate that displays orientation-dependent motility. An extension of the active Brownian motion model involving orientation-dependent motility and inertial effects reproduces and explains our experimental observations. The model can be applied to general \(n\)-fold symmetric anisotropy and can be helpful for predictive optimization of the dynamics of active matter in complex environments. ## I Introduction The survival of organisms in complex environments essentially depends on their fitness and strategy to react and adapt to external conditions. In particular, a realistic environment is never isotropic but typically anisotropic, i.e., its traversability depends on the direction of motion [1]. Anisotropy can be caused on various scales by many different means: by an external force arising from gravity [2; 3], viscosity [4], light [5], and chemical gradients [6], electromagnetic fields [7], through steric confinement by channels, veins, and anisotropic porous media [8; 9], or by motion in a liquid-crystalline [10; 11; 12; 13; 14] or crystalline [15; 16; 17] medium. Anisotropic environments can have a pronounced impact on the motion of self-propelled particles. These "active" particles convert energy from their environment into directed motion and comprise both living organisms and artificial inanimate objects, like activated colloids [18; 19; 20], granules [21; 22; 23; 24; 25], and robots [26; 27; 28]. Standard models of self-propelled particles [29] assume that the propulsion force is isotropic in the sense that it always points into the direction of the particle orientation with a constant self-propulsion speed even in an inhomogeneous environment [30; 31; 32; 33; 34; 35; 36]. In anisotropic environments, a dependence of the self-propulsion speed of the particle on its orientation is frequently observed, i.e., some biological organisms react to their environment in a sense that the propulsion force depends on their orientation relative to the environment. For instance, microorganisms can move faster towards light sources [37] or in the direction of food sources [38]. Additionally, flying animals such as bees and birds control their flying speed by relative changes of their environment, which in turn leads to anisotropic flying velocities within structured environments [39; 40; 41]. Similarly, anisotropic movement is also observed for smaller insects like ants in guiding structures [42; 43]. Those macroscopic self-propelled particles in low-friction environments (e.g., such as flying insects) where the effect of anisotropy is most prominent, are also governed by inertial effects [44]. This poses a challenging problem because inertia introduces correlations that can persist for longer times [45; 46; 47; 48; 49; 50; 51; 52; 53; 54]. In this communication, we present an experimental realization of a self-propelled granular particle on an anisotropically structured substrate, which exhibits orientation-dependent motility. 
We observe pronounced anisotropy in the motion of the particle, which is well explained analytically by an extension of the active Brownian motion model with inertia and orientation-dependent motility. The orientation-dependence can be written in terms of a Fourier series, which allows a general solution for anisotropic motility that can be applied to our experiments. Our findings establish a class of active matter models useful for anisotropic environments and shed light on the potential self-propulsion strategies of organisms in such anisotropic settings. The analytical results of our model can be particularly useful for predictive optimization of control parameters of artificial active agents, such as robots [26; 27; 28], to better explore anisotropic environments [55]. ## II Results ### Experimental observation of anisotropic self-propulsion Macroscopic active matter with orientation-dependent motility can be realized using self-propelled 3D-printed agents called vibrobots (see Fig. 1a) on structured substrates. These particles are excited by vertical vibrations generated by a rectangular acrylic baseplate attached to an electromagnetic shaker. The particles stand on slightly tilted legs, which causes the particles to hop forward. These legs are all tilted equally along the orientation (or symmetry axis) of the particle. The baseplate is covered with a lenticular plastic sheet on top, which is the source of the anisotropic motility. The experimental setup is depicted in Fig. 1b. An illustration with a side view of the particle resting on such a grooved surface is shown in Fig. 1c. The vibration frequency is set to \(f=80\,\mathrm{Hz}\). In this frequency range, the plate vibration is sufficiently uniform [47]. Three different peak acceleration amplitudes \(A=1.28\,g\), \(1.44\,g\), and \(1.60\,g\) are investigated, which vary the motility and motion properties of the vibrobot. For this choice of \(f\) and \(A\), the vibration is strong enough to ensure stable vibrobot motion, but not so strong that the particles fall over. We find pronounced anisotropy in the motion of the particle and observe a modulation of the velocity parallel but also perpendicular to the orientation of the grooves, as well as an increased activity with increasing excitation amplitude. The motion of the particles is illustrated in Supplementary videos 1 - 6, where we show a montage of all measured trajectories for each excitation amplitude as well as for parallel and perpendicular initial orientation, respectively. From the trajectories, the anisotropy is already visible by the naked eye, in particular when comparing parallel and perpendicular starting orientations. This anisotropy is best illustrated when displaying all recorded trajectories (integrated and smoothed) and distinguishing parallel and perpendicular initial orientations, as shown in Fig. 1d, e. For particles starting parallel to the grooves, we observe that the peak of the density (which is linked to the starting position of the particles) is broad along and narrow perpendicular to the starting orientation, since the particles tend to move faster parallel to the grooves and therefore propagate further before they reorient. In the case of perpendicular starting orientation, the density spreads more around the peak, since particles reorient near the starting position. Hence the persistence length depends on the orientation of the particle.
Figure 1: Description of experimental system and observations. **a** Vibrationally driven self-propelled particle (vibrobot) manufactured by 3D printing. The white cross indicates the particle orientation. Scale bar represents \(1\,\mathrm{cm}\). **b** Experimental setup: Rectangular acrylic baseplate attached to an electromagnetic shaker. The size (width \(\times\) length) of the top-mounted plate equals \(30\,\mathrm{cm}\)\(\times\)\(30\,\mathrm{cm}\). **c** Cross-section of the anisotropic substrate (lenticular foil) with particle to scale. **d, e** Trajectory density for vibrobots starting parallel (**d**) and perpendicular (**e**) to grooves with an excitation amplitude \(A=1.28\,\mathrm{g}\). **f-h** Sketch of the two velocity contributions. The particle moves with increased velocity \(\mathrm{v}_{\parallel}\) when aligned along the grooves (**f**). When orientated diagonally, the particle moves with average velocity \(\mathrm{v}_{\parallel}\) along its orientation while simultaneously experiencing active propulsion \(\mathrm{v}_{\perp}\) perpendicular to it (**g**). The particle moves with decreased velocity \(\mathrm{v}_{\parallel}\) when perpendicularly aligned to the grooves (**h**). **i-k** Three representative trajectories with an excitation amplitude \(A=1.60\,\mathrm{g}\). The persistence length is noticeably shorter for perpendicularly aligned particles than for parallel-aligned particles. Length ratios and velocity contributions are not to scale. Surprisingly, from individual particle trajectories, we also identify a driving-force component perpendicular to the orientation, whenever a particle is not moving exactly parallel or perpendicular to the grooves. The anisotropic self-propulsion is caused by the grooved surface of the vibrating plate. Our conjecture is that this is due to the strong dependence of the particle speed on the relative inclination angle between legs and surface [56]. When resting on the vibrating plate, the legs are bent along the orientation of the particle. This deformation stores elastic energy. Then, after detaching from the base, the energy is released and the vibrobot jumps forward. When the particle is oriented perpendicular to the grooves, the legs face an elliptical half-cylinder and the relative inclination angle of the legs is decreased (see Fig. 1c). As a result, the legs will bend less compared to the case where the particle is oriented along the grooves. If the particle is diagonally aligned with the grooves, the legs will not bend along the orientation and the particle experiences a force perpendicular to its orientation. This in fact results in propulsion perpendicular to the orientation of the particle. In Fig. 1f-h, we illustrate the two velocity contributions for three different orientations of the particle. As described in the literature, we also observe orientational fluctuations, caused by the sensitivity of the driving mechanism to the microscopic surface roughness, and inertial delay effects due to the mass of the particles [47; 57]. When vibrobots are excited above a certain amplitude threshold, they begin to tumble [58]. As a result, they randomly reorient while moving and eventually change the direction of their path. Figure 1i-k shows three representative trajectories with different initial orientations. Clearly, the particle does not show a deterministic motion, apart from short-time correlations due to initial orientation and inertia.
The particle rather undergoes an anisotropic two dimensional random walk with a certain persistence length. Due to the simplicity of our particles, compared to living active matter, our experiment allows us to investigate kinetic properties of particles with orientation-dependent motility, which can be useful for optimization of motion and search strategies of active matter in general. This requires an analytical description of the motion that captures the essential properties of the particle and must be applicable to general cases of anisotropic motility. ### Langevin dynamics model Finding an analytical description for macroscopic self-propelled systems can be challenging due to the complex interaction of particles and environment. Here, we model those interactions with an effective driving force and thereby introduce a minimal model, where the interplay of orientation-dependent motility, inertia, and fluctuations, is treated in terms of a generalized active Langevin dynamics model. Our model reproduces the experimental observations quantitatively despite its complex anisotropic nature. We assume that the particle has non-negligible mass \(M\) and moment of inertia \(J\). The motion of such an underdamped particle is in general characterized by the translational center-of-mass velocity \(\dot{\mathbf{r}}(t)\) with the center-of-mass position \(\mathbf{r}(t)\) and the time variable \(t\) as well as by the angular velocity \(\dot{\phi}(t)\) and the angle of orientation \(\phi(t)\), which denotes the angle between the orientation vector \(\mathbf{\hat{n}}=(\cos\phi,\sin\phi)\) and the positive \(x\)-axis. By taking the above considerations into account, the translational and rotational motion of the particle is governed by the force balance between inertial, frictional, self-propulsive driving, and random forces and torques \[M\,\ddot{\mathbf{r}}(t)+\gamma_{\mathrm{t}}\,\dot{\mathbf{r}}(t) =\gamma_{\mathrm{t}}\,\mathbf{v}\big{(}\phi(t)\big{)}+\sqrt{2D_{\mathrm{t}}} \,\gamma_{\mathrm{t}}\,\mathbf{\xi}(t), \tag{1}\] \[J\,\ddot{\phi}(t)+\gamma_{\mathrm{r}}\,\dot{\phi}(t)=\gamma_{ \mathrm{r}}\,\omega+\sqrt{2D_{\mathrm{r}}}\,\gamma_{\mathrm{r}}\,\eta(t). \tag{2}\] Here, \(\gamma_{\mathrm{t}}\) and \(\gamma_{\mathrm{r}}\) denote the translational and rotational friction coefficients, respectively. To take translational and rotational diffusion into account, the Langevin equations contain independent Gaussian white noise terms \(\mathbf{\xi}(t)\) and \(\eta(t)\), with zero means \(\langle\mathbf{\xi}(t)\rangle=\mathbf{0}\) and \(\langle\eta(t)\rangle=0\) and delta-correlated variances \(\langle\xi_{i}(t_{1})\xi_{j}(t_{2})\rangle=\delta_{ij}\,\delta(t_{1}-t_{2})\) and \(\langle\eta(t_{1})\eta(t_{2})\rangle=\delta(t_{1}-t_{2})\), where \(i,j\in\{x,y\}\). Therein, \(D_{\mathrm{t}}\) and \(D_{\mathrm{r}}\) are the translational and rotational short-time diffusion coefficients of the particle, respectively. The brackets \(\langle\ldots\rangle\) denote the noise average in the stationary state (meaning after losing correlation with initial conditions [52]) and \(\delta_{ij}\) is the Kronecker delta. Most importantly, \(\mathbf{v}(\phi)\) denotes an arbitrary orientation-dependent motility which accounts for the interaction between the particle and environment. 
For mathematical convenience, we represent \(\mathbf{v}(\phi)\) as a Fourier series \[\mathbf{v}(\phi)=\sum_{\begin{subarray}{c}k=-\infty\\ k\neq 0\end{subarray}}^{\infty}\mathbf{c}_{k}\exp(\mathrm{i}k\phi), \tag{3}\] where \(\mathbf{c}_{k}\) is the Fourier-coefficient vector of the mode \(k\), and \(\mathrm{i}\) denotes the imaginary unit. This representation lets us solve the model for any type of orientation-dependence and then apply the results to our experimental system. In particular, this description can be used for different experimental realizations ranging from anisotropic illuminated Janus particles, triangular microparticles in traveling ultrasound waves, and the motion of living insects in guiding structures to the specific setup studied in this communication [59; 60]. In general, for a given propulsion velocity \(\mathbf{v}(\phi)\), these Fourier coefficients can be calculated as \(\mathbf{c}_{k}=\int_{-\pi}^{\pi}(\mathbf{v}(\phi)/(2\pi))\exp(-\mathrm{i}k \phi)\,\mathrm{d}\phi\) (thus we have after complex conjugation \(\mathbf{c}_{k}^{*}=\mathbf{c}_{-k}\)). The seminal case of isotropic propulsion is recovered for the two non-zero coefficients \(\mathbf{c}_{\pm 1}=v(1,\mp\mathrm{i})/2\). Note that we exclude the mode \(k=0\) in Eq. (3), which would correspond to a drift velocity induced by a constant external force (e.g., gravity) not measured in the experiment. Moreover, as typical 3D-printed particles are not perfectly symmetrical, they tend to perform circular motions on long time scales. To capture this behaviour, we assume a systematic torque which acts on the particle and leads to an angular speed \(\omega\). In contrast to \(\mathbf{v}(\phi)\), we measured no orientational dependency in the angular speed which could in principle be caused by the anisotropic substrate. Concluding, our theoretical model depends on a number of parameters: the angular velocity \(\omega\), the rotational diffusion coefficient \(D_{\mathrm{r}}\), the rotational friction time \(\tau_{\mathrm{r}}=J/\gamma_{\mathrm{r}}\), the set of Fourier coefficients \(\{\mathbf{c}_{k}\}\) describing the anisotropic motility, the translational diffusion coefficient \(D_{\mathrm{t}}\) and the translational friction time \(\tau_{\mathrm{t}}=M/\gamma_{\mathrm{t}}\). In the context of the experimental observations, we assume that the vibrobot is moving with an orientation-dependent velocity \[\mathbf{v}(\phi)=\left(\mathrm{v}_{\parallel}+\delta\mathrm{v}_{\parallel} \cos(2\phi)\right)\mathbf{\hat{n}}(\phi)-\delta\mathrm{v}_{\perp}\sin(2\phi) \mathbf{\hat{n}}_{\perp}(\phi), \tag{4}\] where \(\mathbf{\hat{n}}(\phi)=(\cos\phi,\sin\phi)\) is pointing parallel and \(\mathbf{\hat{n}}_{\perp}(\phi)=(-\sin\phi,\cos\phi)\) is pointing perpendicular to the particle's orientation. The sine and cosine terms in Eq. (4) reflect the orientation dependence of the particle velocity and the symmetry of the system. This adds the parallel speed \(\mathrm{v}_{\parallel}\), the parallel speed anisotropy \(\delta\mathrm{v}_{\parallel}\), and the perpendicular speed anisotropy \(\delta\mathrm{v}_{\perp}\), leading to a total of 8 independent parameters. The four non-zero Fourier coefficients of Eq. (4) read \(\mathbf{c}_{\pm 1}=\mathrm{v}_{\parallel}(1,\mp\mathrm{i})/2+(\delta\mathrm{v}_{ \parallel}+\delta\mathrm{v}_{\perp})(1,\pm\mathrm{i})/4\) and \(\mathbf{c}_{\pm 3}=(\delta\mathrm{v}_{\parallel}-\delta\mathrm{v}_{\perp})(1, \mp\mathrm{i})/4\). These parameters are determined from analytic fits to the experimental results. 
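As a rough illustration of how Eqs. (1), (2), and (4) can be integrated numerically, the following Euler-Maruyama sketch propagates a single inertial particle with the two-fold anisotropic motility of Eq. (4); all parameter values are placeholders and not the fitted values of Tab. 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# model parameters (illustrative placeholders)
tau_t, tau_r = 0.1, 0.05        # translational / rotational friction times, s
D_t, D_r = 1e-5, 0.5            # diffusion coefficients, m^2/s and 1/s
omega0 = 0.5                    # systematic angular speed, 1/s
v_par, dv_par, dv_perp = 0.05, 0.02, 0.01   # motility parameters of Eq. (4), m/s

def v_active(phi):
    """Orientation-dependent motility of Eq. (4)."""
    n = np.array([np.cos(phi), np.sin(phi)])
    n_perp = np.array([-np.sin(phi), np.cos(phi)])
    return (v_par + dv_par * np.cos(2 * phi)) * n - dv_perp * np.sin(2 * phi) * n_perp

def simulate(t_max=20.0, dt=1e-3, phi0=0.0):
    """Euler-Maruyama integration of the underdamped Langevin equations (1)-(2)."""
    n_steps = int(t_max / dt)
    r = np.zeros((n_steps + 1, 2))
    v = np.zeros(2)
    phi, omega = phi0, 0.0
    for i in range(n_steps):
        v += (v_active(phi) - v) / tau_t * dt \
             + np.sqrt(2 * D_t) / tau_t * rng.normal(size=2) * np.sqrt(dt)
        omega += (omega0 - omega) / tau_r * dt \
                 + np.sqrt(2 * D_r) / tau_r * rng.normal() * np.sqrt(dt)
        r[i + 1] = r[i] + v * dt
        phi += omega * dt
    return r

traj_parallel = simulate(phi0=0.0)        # started along the grooves
traj_perp = simulate(phi0=np.pi / 2)      # started perpendicular to the grooves
```

Averaging many such trajectories with fixed initial orientation reproduces, in spirit, the conditioned observables discussed in the following paragraphs.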
We use temporal correlation functions, like the orientational correlation function \(C(t)=\langle\mathbf{\hat{n}}(t)\cdot\mathbf{\hat{n}}(0)\rangle\) and the velocity correlation function \(Z(t)=\langle\mathbf{\dot{r}}(t)\cdot\mathbf{\dot{r}}(0)\rangle\), to determine the relevant timescales and diffusion coefficients. Further stationary observables, like the mean translational velocity \(\mathbf{v}_{0}=\langle\mathbf{\dot{r}}(0)\rangle\) and the mean angular velocity \(\langle\dot{\phi}(0)\rangle\), are used to estimate all motility parameters. More information on the parameter estimation can be found in the Methods section and the parameter values are listed in Tab. 1. In the following, we compare the experimental data with analytic predictions derived from the theoretical model and discuss the anisotropy found in several observables. ### Comparison between analytical results and experiment As described above, the mean self-propulsion strongly depends on the relative orientation of the particle with respect to the groove direction. The model describes this via two orthogonal velocity components. In Fig. 2, we separately show the mean velocity along the body axis \(\mathrm{v}_{\parallel}=\mathbf{v}_{0}\cdot\mathbf{\hat{n}}\) and perpendicular to it \(\mathrm{v}_{\perp}=\mathbf{v}_{0}\cdot\mathbf{\hat{n}}_{\perp}\) as functions of the orientation \(\phi\). The parallel contribution \(\mathrm{v}_{\parallel}\) in Fig. 2a shows considerably greater propulsion along the grooves than perpendicular to them. For the perpendicular contribution (see Fig. 2b) we find the assumed \(\sin(2\phi)\)-modulation (see Eq. (4)), which has an alignment effect on the overall velocity direction in favor of the groove direction. Overall, we measure increased activity for larger excitation amplitudes while the degree of anisotropy remains almost the same for all three measurements. From the theoretical side, the mean instantaneous velocity \(\mathbf{v}_{0}=\langle\mathbf{\dot{r}}(0)\rangle\) at a specific orientation \(\phi_{0}\) can be computed in general as \[\mathbf{v}_{0}=\frac{\tau_{\mathrm{r}}}{\tau_{\mathrm{t}}}\sum_{\begin{subarray}{c}k=-\infty\\ k\neq 0\end{subarray}}^{\infty}\mathbf{c}_{k}e^{\mathrm{S}_{k}}\mathrm{S}_{k}^{-\Omega_{k}^{+}}\Gamma(\Omega_{k}^{+},0,\mathrm{S}_{k})e^{\mathrm{i}k\phi_{0}}, \tag{5}\] with the dimensionless coefficients \(\mathrm{S}_{k}=D_{\mathrm{r}}\tau_{\mathrm{r}}k^{2}\), \(\Omega_{k}^{+}=D_{\mathrm{r}}\tau_{\mathrm{r}}k^{2}+\mathrm{i}\omega\tau_{\mathrm{r}}k+\tau_{\mathrm{r}}/\tau_{\mathrm{t}}\), and the generalized incomplete gamma function \(\Gamma(s,x_{1},x_{2})=\int_{x_{1}}^{x_{2}}t^{s-1}e^{-t}\,\mathrm{d}t\). The analytic result is plotted in Fig. 2 and yields good agreement with the experimental data. In contrast to overdamped motion, where the particle's mean velocity is simply equal to the internal self-propulsion velocity, here the particle moves on average with a smaller velocity due to inertial delay effects, i.e., \(|\mathbf{v}_{0}(\phi)|\leq|\mathbf{v}(\phi)|\). Further, the faster varying contributions (i.e., the higher Fourier modes) of the propulsion are more affected by these inertial delay effects, resulting in a more isotropic mean velocity for increasing mass \(M\). Figure 2: Orientation dependence of stationary velocity.
**a** Stationary parallel velocity \(\mathrm{v}_{\parallel}\) and **b** stationary perpendicular velocity \(\mathrm{v}_{\perp}\) plotted as a function of the orientation angle \(\phi\) for three different excitation amplitudes \(\mathrm{A}=1.28\) g (upper row), \(\mathrm{A}=1.44\) g (middle row), and \(1.60\) g (lower row). Solid dark blue and dashed red curves show the experimental data and analytical results, respectively. Blue experimental error intervals represent the standard error of the mean. Conversely, the anisotropy is restored for increasing moment of inertia: \(\lim_{J\to\infty}\mathbf{v}_{0}(\phi)=\mathbf{v}(\phi)\). A suitable quantifier for the presence of inertial effects is the delay function \(d(t)=\langle\dot{\mathbf{r}}(t)\cdot\mathbf{\hat{n}}(0)\rangle-\langle\dot{\mathbf{r}}(0)\cdot\mathbf{\hat{n}}(t)\rangle\)[47, 48, 54]. This function quantifies the average difference between the projection of the velocity at time \(t\) onto the initial orientation and the projection of the initial velocity onto the orientation at time \(t\). In overdamped systems, this function is zero at all times. Here, we find that this function is significantly different from zero, in particular for large excitation amplitudes \(A\) (see the Methods section). The standard delay function can be generalized to resolve anisotropy in the system by conditioning the average on a specific initial orientation \(\phi_{0}\) at time \(t=0\). In Fig. 3 we plot the anisotropic delay function \(d_{\phi_{0}}(t)\) both as a function of \(\phi_{0}\) for given \(t\) and as a function of \(t\) for given \(\phi_{0}\). We compare the experimental data with simulations which follow Eqs. (1) and (2) and are initialized similarly to the experiments. The delay function is a highly fluctuating quantity, making the experimental data difficult to interpret. The simulated data suggests an isotropic delay for short times and a larger delay along the grooves as time proceeds, mimicking the modulation of the self-propulsion velocity. The simulated data always fits within the standard error of the experimental data. Figure 3: Anisotropic delay function. **a** The anisotropic delay function \(d_{\phi_{0}}(t)\) plotted as a function of the initial orientation \(\phi_{0}\) after fixed times \(t=0.1\,\mathrm{s}\), \(t=0.4\,\mathrm{s}\). Solid blue and dashed red curves show the experimental and simulated data, respectively. **b** The anisotropic delay function \(d_{\phi_{0}}(t)\) plotted as a function of time \(t\) for parallel \(\phi_{0}=0\) (cyan), diagonal \(\phi_{0}=\pi/4\) (green), and perpendicular \(\phi_{0}=\pi/2\) (yellow) orientations, each. Both for excitation amplitude \(A=1.28\,\mathrm{g}\) (upper row), \(A=1.44\,\mathrm{g}\) (middle row) and \(A=1.60\,\mathrm{g}\) (lower row). Solid and dashed curves correspond to the experimental and simulated data (using the parameter values given in Tab. 1), respectively. Figure 4: Mean displacement. Comparison between model and measurement with excitation amplitude \(A=1.28\,\mathrm{g}\) (upper row), \(A=1.44\,\mathrm{g}\) (middle row), and \(A=1.60\,\mathrm{g}\) (lower row). **a** The anisotropic motion of the particle is visualized by plotting the mean displacement \(\langle\Delta\mathbf{r}(\phi_{0})\rangle\) for \(\phi_{0}\in[0,2\pi)\) and fixed times \(t=0.2\,\mathrm{s}\), \(t=0.6\,\mathrm{s}\) and \(t=1.0\,\mathrm{s}\). Solid blue and dashed red curves show the experimental data and analytical results, respectively. Light blue area expresses the standard error of the mean.
**b** The absolute mean displacement \(|\langle\Delta\mathbf{r}(t)\rangle|\) is plotted as a function of time \(t\) for initial orientations \(\phi_{0}=0\) (cyan) and \(\phi_{0}=\pi/2\) (yellow). Solid colored curves represent the experimental data and dashed colored curves the analytic results. In addition, dashed black curves depict simulation data for a particle in confinement. Black dots correspond to the experimental values for the fixed times of Fig. 4a. Theoretical predictions and simulations use the parameters given in Tab. 1. For stochastic processes, it is common to analyze the first and second moments of the motion, i.e., the mean and mean square displacement. In anisotropic systems, these quantities will strongly depend on the initial orientation of a particle. In Fig. 4, we compare the experimental mean displacement \(\langle\Delta\mathbf{r}(t)\rangle\) conditioned at different initial orientations \(\phi_{0}\) with that resulting from our theoretical model. To demonstrate the effect of the orientation-dependent motility, we show the mean displacement as a function of the initial orientation \(\phi_{0}\) after fixed times \(t\), forming elliptic-like shapes in the \(xy\)-plane (see Fig. 4a). In Fig. 4b, we plot the absolute mean displacement \(|\langle\Delta\mathbf{r}(t)\rangle|\) as a function of time \(t\) for particles which are initially orientated along the grooves (blue) and for those starting perpendicular to the grooves (red). The experimental data agree with the theoretical results for short times, where the particle moves linearly in time with \(\langle\Delta\mathbf{r}(t)\rangle=\mathbf{v}_{0}t+\mathcal{O}(t^{2})\). For longer times, confinement effects play an increasing role. Since recordings are stopped once a particle hits the boundary, events where the particle reorients beforehand dominate the statistics. As a consequence, the measured mean displacement decreases for times larger than the mean first-passage time of hitting the boundary. We perform simulations with absorbing boundaries and find an excellent agreement for all experimentally accessible time scales (indicated by the black dashed curves in Fig. 4b). Without confinement, the theoretical mean displacement saturates to an anisotropic persistence length \(\mathbf{L}_{\mathrm{p}}=\lim_{t\to\infty}\langle\Delta\mathbf{r}(t)\rangle\) for long times \[\mathbf{L}_{\mathrm{p}}=\mathbf{v}_{0}\tau_{\mathrm{t}}+\sum_{\begin{subarray}{c}k=-\infty\\ k\neq 0\end{subarray}}^{\infty}\mathbf{c}_{k}\,\tau_{k}\,e^{\mathrm{i}k\phi_{0}}, \tag{6}\] with the persistence time of mode \(k\) \[\tau_{k}=\tau_{\mathrm{r}}e^{\mathrm{S}_{k}}\mathrm{S}_{k}^{-\Omega_{k}}\Gamma(\Omega_{k},0,\mathrm{S}_{k}) \tag{7}\] and \(\Omega_{k}=D_{\mathrm{r}}\tau_{\mathrm{r}}k^{2}+\mathrm{i}\omega\tau_{\mathrm{r}}k\).
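The following sketch shows how the per-mode persistence times of Eq. (7), the mean stationary velocity of Eq. (5), and the persistence length of Eq. (6) can be evaluated for the motility of Eq. (4) using the generalized incomplete gamma function (here via mpmath); the parameter values are illustrative, not those of Tab. 1.

```python
import numpy as np
import mpmath as mp

tau_t, tau_r = 0.1, 0.05      # friction times, s (placeholders)
D_r, omega = 0.5, 0.5         # rotational diffusion (1/s) and angular speed (1/s)
v_par, dv_par, dv_perp = 0.05, 0.02, 0.01   # motility parameters of Eq. (4), m/s

def c_k(k):
    """Non-zero Fourier coefficients of the motility in Eq. (4)."""
    s = np.sign(k)
    if abs(k) == 1:
        return (np.array([v_par, -1j * s * v_par]) / 2
                + np.array([dv_par + dv_perp, 1j * s * (dv_par + dv_perp)]) / 4)
    if abs(k) == 3:
        return np.array([dv_par - dv_perp, -1j * s * (dv_par - dv_perp)]) / 4
    return np.zeros(2, dtype=complex)

def tau_k(k):
    """Persistence time of mode k, Eq. (7), via the generalized incomplete gamma."""
    S = D_r * tau_r * k**2
    Om = S + 1j * omega * tau_r * k
    return complex(tau_r * mp.exp(S) * mp.power(S, -Om) * mp.gammainc(Om, 0, S))

def v0(phi0, modes=(-3, -1, 1, 3)):
    """Mean stationary velocity at orientation phi0, Eq. (5)."""
    out = np.zeros(2, dtype=complex)
    for k in modes:
        S = D_r * tau_r * k**2
        Om_p = S + 1j * omega * tau_r * k + tau_r / tau_t
        factor = complex(mp.exp(S) * mp.power(S, -Om_p) * mp.gammainc(Om_p, 0, S))
        out += (tau_r / tau_t) * c_k(k) * factor * np.exp(1j * k * phi0)
    return out.real

def persistence_length(phi0, modes=(-3, -1, 1, 3)):
    """Anisotropic persistence length, Eq. (6)."""
    L = v0(phi0).astype(complex) * tau_t
    for k in modes:
        L += c_k(k) * tau_k(k) * np.exp(1j * k * phi0)
    return L.real

for phi0 in (0.0, np.pi / 2):
    print(f"phi0 = {phi0:.2f}: v0 = {v0(phi0)}, L_p = {persistence_length(phi0)}")
```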
For vanishing angular speed \(\omega=0\), we find the following asymptotic behavior for small and large \(J\), respectively: \[\tau_{k}\sim\begin{cases}\frac{1}{D_{\mathrm{r}}k^{2}}\left(1+\frac{D_{\mathrm{r}}k^{2}}{\gamma_{\mathrm{r}}}J\right),&\text{for }J\to 0,\\ \frac{1}{k}\sqrt{\frac{\pi}{2D_{\mathrm{r}}\gamma_{\mathrm{r}}}\sqrt{J}},&\text{for }J\to\infty.\end{cases} \tag{8}\] Note that for large \(J\) the contribution of higher modes decays only linearly instead of quadratically, demonstrating the relevance of the moment of inertia as an important control parameter. Last, we address the mean-square displacement, which is most commonly investigated for passive and active Brownian motion. In Fig. 5, we compare the experimentally determined mean-square displacement with the corresponding theoretical result. For short times, the particle is moving ballistically, as \(\langle\Delta\mathbf{r}^{2}(t)\rangle=\langle\mathbf{\dot{r}}^{2}(0)\rangle\,t^{2}+\mathcal{O}(t^{3})\) (see Fig. 5a). For larger times, the particle transitions towards a diffusive regime \(\langle\Delta\mathbf{r}^{2}(t)\rangle\sim 4D_{\mathrm{L}}t\), which is characterized by the long-time diffusion coefficient \[D_{\mathrm{L}}=D_{\mathrm{t}}+\sum_{k=1}^{\infty}|\mathbf{c}_{k}|^{2}\,\mathrm{Re}\{\tau_{k}\}. \tag{9}\] Similar to the mean displacement, the mean-square displacement is affected by the confinement for long times, which hinders the particle from reaching a diffusive state. Figure 5: Mean square displacement. Comparison between model and measurement with excitation amplitude \(A=1.28\,\mathrm{g}\) (upper row), \(A=1.44\,\mathrm{g}\) (middle row), and \(A=1.60\,\mathrm{g}\) (lower row). **a** The total mean-square displacement as a function of time \(t\) (double logarithmic scaling). Open blue circles and dashed red curves show the experimental data and analytical results, respectively. **b** The mean-square displacement along the \(x\)-axis (cyan) and \(y\)-axis (yellow) as functions of time \(t\). Solid colored curves and dashed colored curves show the experimental data and analytical results, respectively. Light colored areas represent the standard error of the mean. Dashed black curves show simulation data for a particle in confinement. Theoretical predictions correspond to the parameters given in Tab. 1. In Fig. 5b, we show the mean-square displacement parallel and perpendicular to the grooves, comparing experiment, theory, and simulation. The mean-square displacement is non-monotonic in time due to the confinement. At longer times, the particle needs to reorient before hitting the wall. The non-monotonic behavior results from the persistence of the particle and therefore is not observed for passive particles. The particle makes larger displacements along the grooves than perpendicular to them. In the absence of confinement, this anisotropy can persist even in the long-time limit characterized by the long-time diffusion matrix \[\left(\mathbf{D}_{\mathrm{L}}\right)_{ij}=D_{\mathrm{t}}\delta_{ij}+\sum_{k=1}^{\infty}\left(\mathrm{c}_{k,i}\mathrm{c}_{-k,j}+\mathrm{c}_{-k,i}\mathrm{c}_{k,j}\right)\mathrm{Re}\{\tau_{k}\}, \tag{10}\] for \(i,j\in\{x,y\}\).
The eigenvalues of this matrix are given as \(D_{\pm}=D_{\mathrm{L}}\pm\Delta D_{\mathrm{L}}\), with the long-time anisotropy \[\Delta D_{\mathrm{L}}=\Big{(}\sum_{k,l=1}^{\infty}\left(|\mathbf{ c}_{k}\cdot\mathbf{c}_{l}|^{2}+|\mathbf{c}_{k}\cdot\mathbf{c}_{-l}|^{2}-| \mathbf{c}_{k}|^{2}|\mathbf{c}_{l}|^{2}\right)\\ \times\mathrm{Re}\{\tau_{k}\}\mathrm{Re}\{\tau_{l}\}\Big{)}^{1/2}, \tag{11}\] which describes the long-time diffusion along the principal axes of maximal and minimal diffusion, respectively. The existence of a long-time anisotropy \(\Delta D_{\mathrm{L}}\neq 0\) will depend in general on the specific form of \(\mathbf{v}(\phi)\). ## IV Discussion Anisotropic motility has a strong impact on the motion of active particles both on short and long time scales. Our experiments demonstrate this explicitly for short and intermediate times and implicitly for long time-scales through simulations. Anisotropy persists for long times in the mean and mean-square displacement. We derived an analytical description that explains this behavior in terms of the Fourier series of the anisotropic driving term. The Fourier modes of the motility are linked to different time scales that add up and have an effect on the stationary mean velocity, persistence length and long-time diffusion. Specifically, these quantities are mostly affected by the low-order Fourier coefficients. Our theoretical results predict that the degree of anisotropy is not only set by the orientation-dependent motility itself but depends non-trivially on all time scales \(1/D_{\mathrm{r}}\), \(1/|\omega|\), \(\tau_{\mathrm{t}}\), and \(\tau_{\mathrm{r}}\) of the model. In Fig. 6, we depict the anisotropy of the stationary mean velocity, persistence length, and long-time diffusion for different values of the moment of inertia \(J\) and two exemplary orientation-dependent motilities \(\mathbf{v}(\phi)=v(1+\cos(n\phi))\mathbf{\hat{n}}(\phi)\) with 2-fold symmetry (\(n=2\)) and 3-fold symmetry (\(n=3\)). In general, the mass and the moment of inertia have contrary effects on the anisotropy for short and intermediate times. For increasing mass, the dynamics of the particle involves stronger delay effects, smoothing the trajectories of the particle and effectively decreasing the anisotropy. On the other hand, increasing the moment of inertia leads to more resistance to reorientation and subsequently to higher persistence. The stationary parallel velocity in Fig. 6a,b shows an increasing degree of anisotropy (being the ratio of outermost points to the innermost points on these curves) for increasing moment of inertia \(J\). For the persistence length (see Fig. 6c,d), the degree of anisotropy remains fairly invariant with increasing \(J\) but overall we find a large persistence length (recalling Eq. (8)). Note that the mean displacement and thus the persistence length inherit the symmetry of the driving velocity \(\mathbf{v}(\phi)\). This symmetry is in general lost for long times, since the long-time diffusion can either follow a 2-fold symmetric modulation or behaves fully isotropic in every direction (see Fig. 6e,f). In fact, for motilities with higher rotational symmetry than two-fold, the long-time diffusion is always isotropic. Thus, we like to stress that even a system showing isotropic diffusion can hide anisotropic dynamics on shorter time scales. As an outlook, we want to highlight the intriguing possibilities that arise from combining position- and orientation-dependent motility. 
This opens up avenues to explore migration in gradients of anisotropy. Our experimental system relies on pre-molded lenticular sheets to create the anisotropic substrate. However, by employing a larger 3D printer or an engraving tool, more complex substrates could be generated, for instance, to introduce gradients in anisotropy. For proof of principle, we assume that the particle exhibits 2-fold symmetric motility \(\mathbf{v}(\mathbf{r},\phi)=\left(v+\delta v(\mathbf{r})\cos(2\phi)\right)\hat{\mathbf{n}}(\phi)\), where the anisotropy \(\delta v(\mathbf{r})\) increases along the direction of \(\mathbf{\hat{s}}\). Specifically, we employ a logistic function to describe the spatial dependency, \(\delta v(\mathbf{r})=v/(1+e^{-\kappa\mathbf{r}\cdot\hat{\mathbf{s}}})\), with \(\kappa\) representing the growth rate. This function yields maximum anisotropy (\(\delta v(\mathbf{r})=v\)) for \(\mathbf{r}\cdot\hat{\mathbf{s}}\to\infty\), and isotropic motility (\(\delta v(\mathbf{r})=0\)) for \(\mathbf{r}\cdot\hat{\mathbf{s}}\to-\infty\), with a symmetric slope around the origin (see Fig. 7b). For simplicity, we consider only the overdamped case (\(m=J=0\)) and examine the mean position along the gradient \(\left\langle\mathbf{r}(t)\cdot\hat{\mathbf{s}}\right\rangle\) for particles initially starting in the origin (which corresponds to the inflection point of \(\delta v(\mathbf{r})\)). In Fig. 7a, we present the mean position for different growth rates \(\kappa\) and gradient directions \(\hat{\mathbf{s}}\). We observe opposing behavior depending on whether the gradient is aligned with \(\hat{\mathbf{x}}\) or \(\hat{\mathbf{y}}\), i.e., positive displacement when \(\hat{\mathbf{s}}=\hat{\mathbf{x}}\) and negative displacement for \(\hat{\mathbf{s}}=\hat{\mathbf{y}}\). Thus, the particle exhibits motion parallel or antiparallel to the gradient, towards regions where its motility increases. Specifically, for \(\hat{\mathbf{s}}=\hat{\mathbf{x}}\) the particle moves towards more anisotropic regions, whereas for \(\hat{\mathbf{s}}=\hat{\mathbf{y}}\) it moves towards more isotropic regions. Since the rotational dynamics is independent of the particle position, there is always an equal probability of moving up or down the gradient. However, when particles move towards regions of higher motility, they experience simultaneous acceleration within the persistence time. Consequently, the persistence length is always greater in the direction of increasing motility compared to the opposite direction, indicating a displacement for long times towards regions of higher motility. We would like to stress that this result is not contradictory to previous studies in spatial motility fields, where the stationary positional probability shows accumulation in regions of low motility [61]. Here, we are considering a gradient in an unbounded space. Thus, we are not reaching stationarity within finite simulation time. Our model could be useful to predictively optimize driving parameters for the navigation of active matter in anisotropic environments [62, 63, 64, 65], for instance robotic systems. In particular, the persistence length is an important control parameter that strongly impacts collective phenomena, like motility-induced phase separation [66, 67, 68]. Swarms of self-propelled particles moving with an orientation-dependent motility would be an interesting topic for future research, for which our model provides a baseline [69, 70, 71, 72]. 
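A minimal simulation sketch of this proof-of-principle setup is given below. It assumes that the overdamped limit of the model reduces to the standard active Brownian form \(\dot{\mathbf{r}}=\mathbf{v}(\mathbf{r},\phi)+\sqrt{2D_{\mathrm{t}}}\,\boldsymbol{\xi}\), \(\dot{\phi}=\omega+\sqrt{2D_{\mathrm{r}}}\,\eta\) (our assumption here, not a quotation of Eqs. (1) and (2)), uses the gradient motility defined above, and employs placeholder parameter values rather than those of Fig. 7.

```python
import numpy as np

def mean_drift_along_gradient(s_hat, kappa, v=1.0, D_t=0.05, D_r=1.0, omega=0.0,
                              dt=0.1, n_steps=2000, n_real=20000, seed=0):
    """Overdamped active Brownian dynamics with the gradient motility of the text:
    v(r, phi) = (v + dv(r) cos 2phi) n_hat(phi),  dv(r) = v / (1 + exp(-kappa r.s_hat)).
    Assumed equations of motion (overdamped limit):
        dr/dt   = v(r, phi) + sqrt(2 D_t) xi(t)
        dphi/dt = omega      + sqrt(2 D_r) eta(t)
    Returns the final mean position along the gradient, <r . s_hat>."""
    rng = np.random.default_rng(seed)
    s_hat = np.asarray(s_hat, dtype=float)
    r = np.zeros((n_real, 2))                       # all particles start at the origin
    phi = rng.uniform(-np.pi, np.pi, n_real)        # random initial orientations
    for _ in range(n_steps):
        dv = v / (1.0 + np.exp(-kappa * (r @ s_hat)))
        speed = v + dv * np.cos(2 * phi)
        n_hat = np.stack([np.cos(phi), np.sin(phi)], axis=1)
        r += speed[:, None] * n_hat * dt \
             + np.sqrt(2 * D_t * dt) * rng.standard_normal((n_real, 2))
        phi += omega * dt + np.sqrt(2 * D_r * dt) * rng.standard_normal(n_real)
    return float(np.mean(r @ s_hat))

# gradient along x (text reports positive drift) vs. along y (negative drift)
print(mean_drift_along_gradient([1, 0], kappa=0.5))
print(mean_drift_along_gradient([0, 1], kappa=0.5))
```

The two calls mimic the gradient directions \(\hat{\mathbf{s}}=\hat{\mathbf{x}}\) and \(\hat{\mathbf{s}}=\hat{\mathbf{y}}\), for which the discussion above reports positive and negative long-time displacement, respectively.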
## Methods ### Particle fabrication The particle used in this work has been manufactured by 3D-printing using a stereolithographic acrylic based photopolymer 3D printer (Formlabs Form 2, using Grey V3 material, identical to Ref. [47]). Figure 1a shows an image of the particle. It consists of a cylindrical core (diameter 9 mm, height 4 mm) and a cap (diameter 15 mm, height 2 mm). Seven tilted cylindrical legs (diameter 0.8 mm, inclination angle 4 degrees) are attached to the cap in a regular heptagon around the bottom cylinder. The legs are tilted parallel to each other, defining the orientation of the particle. The length of the legs is chosen such that the bottom of the particle is lifted by 1 mm above the surface. The particle is marked with a sticker from which the orientation can be determined using computational image processing. The particle's mass is about \(m=0.76\,\mathrm{g}\). From the particle's mass and shape, its moment of inertia is computed to be \(J=1.64\times 10^{-8}\,\mathrm{kg}\ \mathrm{m}^{2}\), assuming homogeneous density. Figure 7: Mean displacement of a particle in an anisotropy gradient. **a** Mean position in the gradient direction, \(\left\langle\mathbf{r}(t)\cdot\hat{\mathbf{s}}\right\rangle\) for different reduced growth rates \(\kappa v/D_{t}\). The upper plane displays simulated data for the gradient direction \(\hat{\mathbf{s}}=\hat{\mathbf{x}}\), while the lower plane shows the data for \(\hat{\mathbf{s}}=\hat{\mathbf{y}}\). **b** The anisotropy \(\delta v(\mathbf{r})\) of the orientation-dependent motility is plotted as a function of position \(\mathbf{r}\cdot\hat{\mathbf{s}}\), using the same reduced growth rates as in **a**. The plot is sideways to align with **a**. ### Experimental setup and analysis Particle motion is excited by vertical vibrations of a rectangular acrylic baseplate (side length 300 mm, thickness 15 mm) with a lenticular plastic sheet on top, attached to an electromagnetic shaker (Tira TV 51140). The sheet's surface consists of equally spaced elliptical half-cylinders with a density of 0.787 mm\({}^{-1}\) (20 lines per inch) and a groove depth of 0.315 mm. An illustration and a cross-section of the particle resting on such a grooved surface are shown in Fig. 1c. Lenticular sheets of this kind are typically used in digital printing or displays to create images with the illusion of depth. Here, we use the sheet to induce an anisotropic driving of the particle parallel and perpendicular to the lines, since the speed of the particle is very sensitive to the contact angle of the legs to the surface. Note that the width and height of the grooves are chosen such that the particle legs cannot be significantly trapped (see Fig. 1c), in order to prevent the particle from simply sliding along the grooves. The tilt of the plate is adjusted with an accuracy of \(0.01^{\circ}\) to minimize gravitational drift. The vibration frequency is set to \(f=80\,\mathrm{Hz}\) and three different peak acceleration amplitudes \(A=1.28\,g\), \(1.44\,g\) and \(1.60\,g\) are studied. A mid-to-high-speed camera system (Allied Vision Mako-U130B) operating at 150 frames per second is used to record the experiment with a spatial resolution of \(1024\times 1024\) pixels. The particle location and orientation are determined and tracked using standard image recognition methods (Hough transform and morphological image region analysis) to a spatial accuracy of about \(\pm 3\times 10^{-5}\,\mathrm{m}\) and an orientational accuracy of \(\pm 0.74^{\circ}\) [47]. 
Multiple single trajectories are recorded for each amplitude, until 20 min of data are acquired per recording. For half of the recorded time the particle starts parallel to the grooves and for the other half it starts perpendicular to them. Events involving particle-border collisions mark a trajectory's termination and are subsequently discarded, resulting in trajectories of various lengths. The velocity was calculated from the displacement of successive positions of the particle as \(\mathbf{v}(t)=\left(\mathbf{r}(t+\Delta t)-\mathbf{r}(t)\right)/\Delta t\), where \(\Delta t=1/150\,\mathrm{s}\) is the time between two frames. The time steps are not fully equidistant between recorded frames; therefore, the experimental data were linearly interpolated to obtain equidistant points. Experimental means with respect to a specific initial orientation \(\phi_{0}\) were calculated by averaging in the interval \([\phi_{0}-\delta\phi,\phi_{0}+\delta\phi]\). We chose \(\delta\phi=10^{\circ}\) and modified the theoretical results accordingly by \(\exp(\mathrm{i}k\phi)\rightarrow\exp(\mathrm{i}k\phi)\sin(k\delta\phi)/(k\delta\phi)\). We took advantage of the rotational and inflection symmetries of the experiment (by rotating some trajectories by 180 degrees) to increase the angular statistics for the mean displacement. ### Analytic results Both the translational velocity \(\dot{\mathbf{r}}(t)\) and the angular velocity \(\dot{\phi}(t)\) undergo a simple stochastic process for which a general solution is easily obtained (see Eqs. (1) and (2)). Several dynamical correlation functions as well as low-order moments can consequently be calculated using standard methods of stochastic calculus [73]. The orientational correlation function \(C(t)=\langle\mathbf{\hat{n}}(t)\cdot\mathbf{\hat{n}}(0)\rangle\) displays a double exponential decay \[C(t)=\cos(\omega t)e^{-D_{\mathrm{r}}\left(t-\tau_{\mathrm{r}}\left(1-e^{-t/\tau_{\mathrm{r}}}\right)\right)}, \tag{12}\] (as previously discussed in Refs. [45, 46, 47]). The velocity correlation function \(Z(t)=\langle\dot{\mathbf{r}}(t)\cdot\dot{\mathbf{r}}(0)\rangle\) is given as \[Z(t)=2\frac{D_{\mathrm{t}}}{\tau_{\mathrm{t}}}e^{-t/\tau_{\mathrm{t}}}+2\sum_{k=1}^{\infty}\lvert\mathbf{c}_{k}\rvert^{2}\,\mathrm{Re}\{V_{k}^{+}(t)\}, \tag{13}\] where the Fourier-coefficient vectors are determined by the orientation-dependent motility, as \(\mathbf{c}_{k}=\int_{-\pi}^{\pi}\mathbf{v}(\phi)\exp(-\mathrm{i}k\phi)/(2\pi)\,\mathrm{d}\phi\) (see Eq. (3)), and \[V_{k}^{\pm}(t)=\frac{\tau_{\mathrm{r}}}{\tau_{\mathrm{t}}}\frac{\mathrm{e}^{\mathrm{S}_{k}}}{2}\bigg{(}\pm\mathrm{S}_{k}^{-\Omega_{k}^{+}}\Gamma\left(\Omega_{k}^{+},0,\mathrm{S}_{k}e^{-t/\tau_{\mathrm{r}}}\right)e^{t/\tau_{\mathrm{t}}} \tag{14}\] \[+\left(\mathrm{S}_{k}^{-\Omega_{k}^{+}}\Gamma\left(\Omega_{k}^{+},0,\mathrm{S}_{k}\right)+\mathrm{S}_{k}^{-\Omega_{k}^{-}}\Gamma\left(\Omega_{k}^{-},0,\mathrm{S}_{k}\right)\right)e^{-t/\tau_{\mathrm{t}}}\bigg{)},\] with \(\Omega_{k}^{\pm}=D_{\mathrm{r}}\tau_{\mathrm{r}}k^{2}\pm(\mathrm{i}\omega\tau_{\mathrm{r}}k+\tau_{\mathrm{r}}/\tau_{\mathrm{t}})\) and \(\mathrm{S}_{k}=D_{\mathrm{r}}\tau_{\mathrm{r}}k^{2}\). The real part is denoted by \(\mathrm{Re}\{\dots\}\) and the generalized incomplete gamma function is \(\Gamma(s,x_{1},x_{2})=\int_{x_{1}}^{x_{2}}\!t^{s-1}e^{-t}\,\mathrm{d}t\). 
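The following is a direct, illustrative transcription of Eqs. (13) and (14) into Python (a sketch of ours, not the paper's code). The dictionary `c_sq` mapping the mode index \(k\) to \(|\mathbf{c}_{k}|^{2}\) and all numerical values are placeholders, chosen such that \(\mathrm{Re}\,\Omega_{k}^{-}>0\), where the integral representation of the incomplete gamma function converges directly.

```python
from mpmath import mpf, exp, power, gammainc, re

def V_k(t, k, sign, D_r, tau_t, tau_r, omega=0.0):
    """V_k^{+/-}(t) of Eq. (14); sign = +1 or -1."""
    S = mpf(D_r) * tau_r * k**2
    Om_p = S + (1j * omega * tau_r * k + mpf(tau_r) / tau_t)
    Om_m = S - (1j * omega * tau_r * k + mpf(tau_r) / tau_t)
    pref = (mpf(tau_r) / tau_t) * exp(S) / 2
    term1 = sign * power(S, -Om_p) * gammainc(Om_p, 0, S * exp(-t / tau_r)) * exp(t / tau_t)
    term2 = (power(S, -Om_p) * gammainc(Om_p, 0, S)
             + power(S, -Om_m) * gammainc(Om_m, 0, S)) * exp(-t / tau_t)
    return pref * (term1 + term2)

def Z(t, c_sq, D_t, D_r, tau_t, tau_r, omega=0.0):
    """Velocity correlation function of Eq. (13); c_sq[k] = |c_k|^2."""
    out = 2 * mpf(D_t) / tau_t * exp(-t / tau_t)
    for k, ck2 in c_sq.items():
        out += 2 * ck2 * re(V_k(t, k, +1, D_r, tau_t, tau_r, omega))
    return out

# placeholder parameters (chosen so that Re{Omega_k^-} > 0) and one dominant mode
print(Z(mpf('0.1'), {1: 3000.0}, D_t=0.1, D_r=5.0, tau_t=2.0, tau_r=0.5))
```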
The delay function measuring the difference between the direction of the velocity and the current orientation, \(d(t)=\langle\dot{\mathbf{r}}(t)\cdot\mathbf{\hat{n}}(0)\rangle-\langle\dot{\mathbf{r}}(0)\cdot\mathbf{\hat{n}}(t)\rangle\), is given by \[d(t)=\mathrm{Re}\big{\{}\big{(}\mathrm{c}_{1,x}+\mathrm{c}_{1,x}^{\ast}+\mathrm{i}(\mathrm{c}_{1,y}-\mathrm{c}_{1,y}^{\ast})\big{)}V_{1}^{-}(t)\big{\}}, \tag{15}\] which coincides with the result for isotropic self-propulsion [47] (due to the projection onto the orientation). Next, we give the mean displacement \(\langle\Delta\mathbf{r}(t)\rangle=\langle\mathbf{r}(t)-\mathbf{r}_{0}\rangle\) under the condition that initially the position \(\mathbf{r}_{0}\) and the orientation \(\phi_{0}\) are prescribed, \[\langle\Delta\mathbf{r}(t)\rangle=\mathbf{v}_{0}\tau_{\mathrm{t}}(1-e^{-t/\tau_{\mathrm{t}}})+\sum_{\begin{subarray}{c}k=-\infty\\ k\neq 0\end{subarray}}^{\infty}\!\mathbf{c}_{k}R_{k}(t)e^{\mathrm{i}k\phi_{0}}, \tag{16}\] with the stationary velocity \(\mathbf{v}_{0}\) (see Eq. (5)), \[R_{k}(t)= \tau_{\mathrm{r}}e^{\mathrm{S}_{k}}\bigg{(}\mathrm{S}_{k}^{-\Omega_{k}}\Gamma\left(\Omega_{k},\mathrm{S}_{k}e^{-t/\tau_{\mathrm{r}}},\mathrm{S}_{k}\right) \tag{17}\] \[-\mathrm{S}_{k}^{-\Omega_{k}^{-}}\Gamma\left(\Omega_{k}^{-},\mathrm{S}_{k}e^{-t/\tau_{\mathrm{r}}},\mathrm{S}_{k}\right)e^{-t/\tau_{\mathrm{t}}}\bigg{)},\] and \(\Omega_{k}=D_{\mathrm{r}}\tau_{\mathrm{r}}k^{2}+\mathrm{i}\omega\tau_{\mathrm{r}}k\). Lastly, we provide the result for the mean-square displacement \(\langle\Delta\mathbf{r}^{2}(t)\rangle=\langle(\mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle\), which can be expressed as \[\langle\Delta\mathbf{r}^{2}(t)\rangle=4D_{\mathrm{L}}t+2\big{(}Z(t)-Z(0)\big{)}\tau_{\mathrm{r}}^{2}-4F(t)\tau_{\mathrm{r}}^{2} \tag{18}\] with the long-time diffusion coefficient \(D_{\mathrm{L}}\) (see Eq. (9)), the velocity correlation function \(Z(t)\) (see Eq. (13)) and \[F(t)= \sum_{k=1}^{\infty}\lvert\mathbf{c}_{k}\rvert^{2}\,\mathrm{Re}\left\{\frac{e^{\mathrm{S}_{k}}}{\Omega_{k}^{2}}\Bigg{(}{}_{2}F_{2}\Bigg{[}\begin{matrix}\Omega_{k},&\Omega_{k}\\ \Omega_{k}+1,&\Omega_{k}+1\end{matrix};-\mathrm{S}_{k}\Bigg{]}\right. \tag{19}\] \[\left.-{}_{2}F_{2}\Bigg{[}\begin{matrix}\Omega_{k},&\Omega_{k}\\ \Omega_{k}+1,&\Omega_{k}+1\end{matrix};-\mathrm{S}_{k}e^{-t/\tau_{\mathrm{r}}}\Bigg{]}e^{-\Omega_{k}t/\tau_{\mathrm{r}}}\Bigg{)}\right\}\!,\] where \({}_{2}F_{2}\) denotes the generalized hypergeometric function. Last, we remark that in the overdamped limit, i.e. \(m\to 0\) and \(J\to 0\), we recover the results for orientation-dependent motility in overdamped systems [7] and, similarly, for an isotropic self-propulsion \(\mathbf{v}(\phi)=v_{0}\mathbf{\hat{n}}(\phi)\) we obtain the expressions of Ref. [52]. ### Parameter estimation The underdamped active Brownian motion model depends on eight independent parameters. All parameters were obtained using the MATLAB standard optimizer fminsearch (Nelder-Mead optimization of a function of several variables on an unbounded domain). Our cost function consists of five terms covering different parameters. Each term is constructed as follows: the absolute deviation between the experimental mean and the analytical expectation is weighted with the standard error of the mean and then averaged over time or orientation. This procedure takes into account the experimental uncertainty [74]. At the same time, the value of our cost function quantifies the fit itself. 
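As a schematic illustration of this fitting strategy (a sketch of ours, not the original MATLAB implementation), the code below fits \(D_{\mathrm{r}}\) and \(\tau_{\mathrm{r}}\) to an orientational correlation function using Eq. (12), a SEM-weighted mean absolute deviation as cost, and SciPy's Nelder-Mead optimizer; the data are synthetic stand-ins for the measurement.

```python
import numpy as np
from scipy.optimize import minimize

def C_model(t, D_r, tau_r, omega=0.0):
    """Orientational correlation function, Eq. (12)."""
    return np.cos(omega * t) * np.exp(-D_r * (t - tau_r * (1.0 - np.exp(-t / tau_r))))

def fit_orientational_correlation(t, C_exp, sem, x0=(1.0, 0.1)):
    """Nelder-Mead fit of (D_r, tau_r); the cost is the SEM-weighted mean absolute
    deviation between model and data, mirroring the construction described above."""
    def cost(p):
        D_r, tau_r = p
        if D_r <= 0 or tau_r <= 0:
            return np.inf
        return np.mean(np.abs(C_model(t, D_r, tau_r) - C_exp) / sem)
    res = minimize(cost, x0, method="Nelder-Mead")
    return res.x, res.fun   # fitted parameters and cost in units of the standard error

# synthetic test data in place of the real measurement
t = np.linspace(0.0, 5.0, 200)
rng = np.random.default_rng(1)
sem = np.full_like(t, 0.02)
C_exp = C_model(t, D_r=0.8, tau_r=0.06) + sem * rng.standard_normal(t.size)
print(fit_orientational_correlation(t, C_exp, sem))
```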
We call a fit sufficiently representative of the experimental mean if the mean deviation between experimental mean and analytical expectation is no greater than one standard error. We use this definition to determine an error interval for our optimal parameters. The orientational correlation function \(C(t)\) (see Eq. (12)) is used to determine the rotational diffusion constant \(D_{\mathrm{r}}\) and the rotational friction time \(\tau_{\mathrm{r}}\). In addition, we use the mean stationary angular velocity \(\langle\dot{\phi}(0)\rangle=\omega\) to determine the angular speed \(\omega\). Further, we use the velocity correlation function \(Z(t)\) (see Eq. (13)) to extract values for the translational friction time \(\tau_{\mathrm{t}}\) and the translational short-time diffusion coefficient \(D_{\mathrm{t}}\). Lastly, we use the mean stationary velocity \(\mathbf{v}_{0}\) (see Eq. (5)), which is projected parallel (\(\mathrm{v}_{\parallel}=\mathbf{v}_{0}\cdot\mathbf{\hat{n}}\)) and perpendicular (\(\mathrm{v}_{\perp}=\mathbf{v}_{0}\cdot\mathbf{\hat{n}}_{\perp}\)) to the body axis, to determine all the motility parameters \(\mathrm{v}_{\parallel}\), \(\delta\mathrm{v}_{\parallel}\), and \(\delta\mathrm{v}_{\perp}\). In Fig. 8a-d, the analytic fitting curves to the experimental data are shown and the resulting set of parameters is listed in Tab. 1. Figure 8: Determination of model parameters. **a** Orientational correlation function \(C(t)\), **b** velocity correlation function \(Z(t)\), **c** stationary parallel velocity \(\mathrm{v}_{\parallel}\), **d** stationary perpendicular velocity \(\mathrm{v}_{\perp}\). Solid dark blue and dashed red curves show the experimental data and analytical results, respectively. Experimental error intervals represent the standard error of the mean. The parameter values are listed in Tab. 1. **e** Time-dependence of the delay function \(d(t)\) validating the parameters on an independent quantity. The different vibration amplitudes are A = 1.28 g (upper row), A = 1.44 g (middle row), and A = 1.60 g (lower row). For vibrobots, the delay function \(d(t)\) (see Eq. (15)) proved to be a sensitive measure for the quality of the determined parameter set [47]. Figure 8e shows good agreement between theory and experiment for all three measurements. Note that \(\omega\) and \(D_{\mathrm{t}}\) are not significantly different from zero for our particles. However, they are included in the model, since they can be relevant for different experimental realizations in the literature [23, 47, 75]. For the inertial time scales \(\tau_{\mathrm{r}}\) and \(\tau_{\mathrm{t}}\) we see an increase for increasing \(A\), and \(\tau_{\mathrm{t}}\) only becomes significant for \(A=1.44\,g\) and \(A=1.60\,g\). This is likely caused by the reduction of friction at larger \(A\). ### Simulation Numerical data for a self-propelled particle with orientation-dependent motility enclosed by absorbing boundaries are included in Figs. 3, 4b, and 5b. Equations (1) and (2) were discretized to perform Langevin dynamics simulations using a first-order finite-difference discretization. For these simulations, we chose a time step size of \(\Delta t=10^{-2}\) s and performed \(10^{5}\) realizations for Figs. 4b and 5b and 2000 realizations for Fig. 3 to calculate the respective ensemble averages. 
Half of the trajectories started at \(x_{0}=0\,\mathrm{mm}\), \(y_{0}=-100\,\mathrm{mm}\), and \(\phi_{0}=\pi/2\) and the other half at \(x_{0}=100\,\mathrm{mm}\), \(y_{0}=0\,\mathrm{mm}\) and \(\phi_{0}=\pi\) (modelling the initialisation in the experiment). The rectangular absorbing boundary was set at \(\{(x,y)|(x=\pm 130\,\mathrm{mm},\,y\in[-130\,\mathrm{mm},\,130\,\mathrm{mm}])\vee(x\in[-130\,\mathrm{mm},\,130\,\mathrm{mm}],\,y=\pm 130\,\mathrm{mm})\}\). Figure 7 presents simulation data for a self-propelled particle with additional positional dependency in its motility \(\mathbf{v}(\mathbf{r},\phi)\). In this case, we used a time-step size of \(\Delta t=10^{-1}\) s and performed \(10^{5}\) realizations to compute the ensemble averages. Each trajectory started from the origin \((x_{0},y_{0})=(0,0)\) with a random orientation. ### Orientation-dependent friction and torque In our underdamped active Brownian particle model, the influence of an anisotropic environment is effectively described by an orientation-dependent motility. However, in more general cases, self-propelled particles may undergo more complicated dynamics in anisotropic environments, resulting in orientational dependencies in various model parameters beyond motility. Here, we provide supplementary information on why we chose not to model anisotropic motion using an anisotropic friction matrix or orientation-dependent torque. While using an anisotropic friction matrix may seem like an intuitive approach to describe anisotropic motion, it is not suitable for our experimental particles. Most of the time, our particles do not have direct contact with the substrate, and energy dissipation only occurs during collisions. Only the effective angle between the legs and the plate differs in the perpendicular and parallel directions, causing anisotropic self-propulsion. Experimental evidence supporting this conjecture is presented in Fig. 9a, showing the initial increase of the velocity, which reaches an intermediate state after approximately 0.5 s regardless of the initial orientation. From this we conclude an isotropic translational damping time and thus isotropic friction. Furthermore, particles may experience anisotropic torques. Surprisingly, measuring the angular velocity for given orientations suggests no such torques in our experiment (see Fig. 9b). The slight modulation observed in the angular velocity, which does not align with the overall two-fold symmetry of the environment, can be attributed to initialization bias. Simulation results assuming isotropic torque, indicated by the dashed curves in Fig. 9b, are consistent with the experiment. \begin{table} \begin{tabular}{l l c c c} \hline \hline A & (g) & 1.28 & 1.44 & 1.60 \\ \hline \(\omega^{*}\) & (s\({}^{-1}\)) & 0.09 \({}^{+0.76}_{-0.76}\) & 0.12 \({}^{+0.88}_{-0.99}\) & 0.11 \({}^{+0.89}_{-1.11}\) \\ \(D_{\mathrm{r}}\) & (s\({}^{-1}\)) & 0.39 \({}^{+0.05}_{-0.04}\) & 0.80 \({}^{+0.07}_{-0.10}\) & 1.18 \({}^{+0.11}_{-0.13}\) \\ \(\tau_{\mathrm{r}}\) & (s) & 0.05 \({}^{+0.02}_{-0.02}\) & 0.06 \({}^{+0.03}_{-0.01}\) & 0.07 \({}^{+0.03}_{-0.02}\) \\ \(\mathrm{v}_{\parallel}\) & (mm s\({}^{-1}\)) & 57.5 \({}^{+4.5}_{-4.1}\) & 73.2 \({}^{+4.9}_{-4.4}\) & 85.0 \({}^{+5.3}_{-4.8}\) \\ \(\delta\mathrm{v}_{\parallel}\) & (mm s\({}^{-1}\)) & 9.2 \({}^{+7.0}_{-6.8}\) & 8.5 \({}^{+7.5}_{-7.5}\) & 15.7 \({}^{+8.6}_{-8.6}\) \\ \(\delta\mathrm{v}_{\perp}\) & (mm s\({}^{-1}\)) & 15.6 \({}^{+5.7}_{-5.5}\) & 19.3 \({}^{+7.1}_{-7.0}\) & 23.8 \({}^{+9.4}_{-9.1}\) \\ \(D_{\mathrm{t}}\)\({}^{*}\) & (mm\({}^{2}\) s\({}^{-1}\)) & 27.89 \({}^{+31.85}_{-27.89}\) & 36.23 \({}^{+44.38}_{-36.23}\) & 59.41 \({}^{+40.59}_{-59.41}\) \\ \(\tau_{\mathrm{t}}\) & (s) & 0.07 \({}^{+0.10}_{-0.07}\) & 0.10 \({}^{+0.09}_{-0.06}\) & 0.13 \({}^{+0.06}_{-0.06}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Model parameters obtained from analytical fits to measurements in Fig. 8. Lower and upper 95% confidence bounds are displayed behind each value. Parameters marked by \({}^{*}\) are not significantly different from zero. Figure 9: Initial increase of the velocity and orientation-dependence of the angular velocity. **a** Normalized initial velocity \(\langle|\dot{\mathbf{r}}(t)|\rangle/\langle|\dot{\mathbf{r}}(0.5\,\mathrm{s})|\rangle\) of a particle starting from rest (solid curves) for initial orientations \(\phi_{0}=0\) (cyan) and \(\phi_{0}=\pi/2\) (yellow). The difference between horizontal and vertical starting orientation is small and within the variation observed in simulations (dashed lines). This suggests that friction in the parallel and perpendicular direction is not significantly different. **b** Mean angular velocity \(\langle\dot{\phi}\rangle\) as a function of particle orientation \(\phi\) (solid curves). The slight periodicity is an artifact of the initial conditions and can be reproduced by simulating with equal conditions (dashed curves). Both are shown for excitation amplitudes \(A=1.28\,\mathrm{g}\) (upper row), \(A=1.44\,\mathrm{g}\) (middle row), and \(A=1.60\,\mathrm{g}\) (lower row). ## Data availability Illustrative videos of the experiments are available as Supplementary Movies 1-6. Raw data supporting the results of this work are available at [https://doi.org/10.5281/zenodo.7220326](https://doi.org/10.5281/zenodo.7220326). ## Code availability All custom simulation and analysis code used to derive the results presented herein is available at [https://doi.org/10.5281/zenodo.7220326](https://doi.org/10.5281/zenodo.7220326). ## Acknowledgments C.S., R.W., and H.L. are funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - SCHO 1700/1-1; 283183152 (WI 4170/3-2); LO 418/23-1. ## Author contributions C.S. and H.L. designed the research. C.S. designed the experimental setup. C.S. and A.L. carried out the experiments. A.S. and C.S. analyzed the measurements. A.S. developed the theoretical and numerical results. All authors discussed the results and wrote the manuscript. ## Competing interests The authors declare no competing interests. ## Additional information Correspondence and requests for materials should be addressed to A.S. (email: [email protected]) or to C.S. (email: [email protected]).
2306.17605
A co-preLie structure from chronological loop erasure in graph walks
We show that the chronological removal of cycles from a walk on a graph, known as Lawler's loop-erasing procedure, generates a preLie co-algebra on the vector space spanned by the walks. In addition, we prove that the tensor and symmetric algebras of graph walks are graded Hopf algebras, provide their antipodes explicitly and recover the preLie co-algebra from a brace coalgebra on the tensor algebra of graph walks. Finally we exhibit sub-Hopf algebras associated to particular types of walks.
Loïc Foissy, Pierre-Louis Giscard, Cécile Mammez
2023-06-30T12:26:23Z
http://arxiv.org/abs/2306.17605v1
# A co-preLie structure from chronological loop erasure in graph walks ###### Abstract We show that the chronological removal of cycles from a walk on a graph, known as Lawler's loop-erasing procedure, generates a preLie co-algebra on the vector space spanned by the walks. In addition, we prove that the tensor and symmetric algebras of graph walks are graded Hopf algebras, provide their antipodes explicitly and recover the preLie co-algebra from a brace coalgebra on the tensor algebra of graph walks. Finally we exhibit sub-Hopf algebras associated to particular types of walks. keywords: Graphs, walks, cycles, coproduct, co-preLie co-algebra, Hopf algebra ## Introduction Graphs and walks are ubiquitous objects in combinatorics, discrete mathematics and beyond: they appear throughout linear algebra, differential calculus and have found widespread applications in physics, engineering and biology. Yet, while graph theory is extensively developed, less attention has been devoted to the walks themselves, a walk being a contiguous succession of directed edges on a graph. In particular, the algebraic structures associated to walks have not, to the best of our knowledge, been fully explored; we may here refer the reader to quivers, path algebras and hike monoids [4]. The goal of the present work is to exhibit a co-preLie structure naturally associated to walks on graphs (simple graphs, multi-graphs, digraphs and hypergraphs). The structure arises from a simple procedure, now known as _Lawler's loop erasing_ [6], first conceived in the context of percolation theory to randomly generate simple paths (walks where all vertices are distinct) from a sample of random walks. The procedure consists of a chronological removal of cycles (called loops in Lawler's original work) as one walks along the graph: consider for instance the complete graph \(K_{4}\) on \(4\) vertices and label these vertices with integers \(1\) through \(4\). Walking along the path \(1\to 2\to 1\to 3\to 4\to 3\to 1\to 3\) on the graph and removing cycles whenever they appear, we are left with the simple path \(1\to 3\) after having successively 'erased' the cycles \(1\to 2\to 1\), then \(3\to 4\to 3\) and finally \(1\to 3\to 1\). Note how \(1\to 3\to 1\) does not appear contiguously in the original walk. Once terminated, Lawler's loop-erasing has eliminated a set of cycles, all of whose internal vertices are distinct, leaving a possibly trivial walk-skeleton behind. If the initial walk was itself a cycle, this skeleton is the empty walk on the initial vertex (also called length-0 walk) and otherwise it is a self-avoiding path. Remark that because the loop-removal occurs in a chronological fashion, Lawler's process is strongly non-Markovian: complete knowledge of all the past steps of a walk is required to decide the current and future erased sections at any point of the walk. We show below that this intuitive process is naturally associated with a co-preLie coproduct. In addition, slightly relaxing the chronological constraints by allowing simultaneous erasures under some compatibility conditions leads to Hopf algebra structures on the tensor and symmetric algebras of graph walks. The article is organized as follows. In §1 we begin with basic notations and definitions concerning walks, graphs and Lawler's loop erasing procedure. In §2 we describe the chronological structure that walks acquire from Lawler's process and use this structure to define the admissible cuts of a walk. 
We show in particular that this notion is well defined in the sense that, in spite of the strong chronological constraints created by Lawler's process, cutting out admissible cuts does not alter the admissibility of the other cuts. This leads in §3 to the definition of a co-product on walks which we show to be co-preLie. Then, in §4.2, considering a wider set of simultaneously admissible cuts, called extended admissible cuts, we construct a co-associative co-product on the tensor and symmetric algebras generated by the vector space of walks on a graph. We then prove an explicit formula for the antipode maps in the so-obtained Hopf algebras. In §5 we construct a brace coalgebra and a codendriform bialgebra on the tensor algebra generated by graph walks and use these to recover the preLie structure as a corollary of the Hopf algebra of the preceding section. Finally, in §6 we exhibit Hopf subalgebras associated to certain types of walks: the cacti, towers and corollas. In a subsequent work inspired by previous combinatorial results [5], we will show that Lawler's process is also naturally associated with a non-associative permutative product, known as nesting [5], which satisfies the Livernet compatibility condition [7] with the co-preLie co-product defined here. This will provide the very first concrete example of the NAP-co-preLie operad in a 'living' context. This construction appears to be of paramount importance given the pervasive use of graph walks in mathematics and mathematical physics. In particular, we will show that this leads to a useful bridge between formal sums over infinite families of walks and branched continued fractions. ## 1 Notations and definitions ### Notations for graphs and rooted walks While we begin by recalling standard definitions for graphs, we introduce somewhat less common concepts for walks, of which we advise the reader to take special notice. A _graph_ \(G=(V,E)\) is a countable set of vertices \(V\) and a countable set \(E\) of distinct paired vertices, called edges, denoted \(\{i,j\}\), \(i,j\in V\). A _digraph_ \(G=(V,E)\) is a finite set of vertices \(V\) and a finite set \(E\subseteq V^{2}\) of _directed edges_ (or _arcs_), denoted \((i,j)\) for the arc from \(i\) to \(j\). A _directed multigraph_ (or _multidigraph_) is defined the same way as a digraph, except that \(E\) is a multiset. An edge of \(E\) is then denoted \((i,j)_{k}\), the integer \(k\) specifying which edge from \(i\) to \(j\) we consider. In the present work we always assume that \(G\) is non-empty. A _rooted walk_, or rooted path, of length \(\ell\) from vertex \(i\) to vertex \(j\) on a directed multigraph \(G\) is a contiguous sequence of \(\ell\) arcs starting from \(i\) and ending in \(j\), e.g. \(\omega=(i,i_{1})_{k_{1}}(i_{1},i_{2})_{k_{2}}\cdots(i_{\ell-1},j)_{k_{\ell}}\) (a sequence of arcs is said to be contiguous if each arc but the first one starts where the previous one ended). The rooted walk \(\omega\) is _open_ if \(i\neq j\) and _closed_ otherwise, in which case it is also called a _rooted cycle_. Since we only consider rooted walks in this work, we shorten this terminology to _walks_. On digraphs we may unambiguously represent walks simply as ordered sequences of vertices \(\omega=w_{0}w_{1}\cdots w_{\ell-1}w_{\ell}\). The walk \(\omega=w_{0}\) of length \(0\) is called the _trivial_ walk on vertex \(w_{0}\); it is both open and closed. The set of all walks on a graph \(G\) is denoted \(\mathcal{W}(G)\). Consider a walk \(\omega=w_{0}\ldots w_{\ell}\). 
A _subwalk_ of a walk \(\omega=w_{0}\cdots w_{\ell}\) is any walk \(w_{k}\cdots w_{k^{\prime}}\) where \(0\leq k\leq k^{\prime}\leq\ell\). If \(k\neq k^{\prime}\) and \(w_{k}=w_{k^{\prime}}\), we designate by \(\omega^{k,k^{\prime}}:=w_{k}w_{k+1}\ldots w_{k^{\prime}}\) the _closed subwalk_ of \(\omega\) with root \(w_{k}\). In a complementary way, we define the _remainder_ section \(\omega_{k,k^{\prime}}:=w_{0}\ldots w_{k}w_{k^{\prime}+1}\ldots w_{\ell}\) to be what remains of \(\omega\) after removal of the section \(\omega^{k,k^{\prime}}\). Note that, for convenience, we write \(\omega^{l,l^{\prime}}_{k,k^{\prime}}\) for \((\omega_{k,k^{\prime}})^{l,l^{\prime}}\), the section \(w_{l}\ldots w_{l^{\prime}}\) erased from the remainder \(\omega_{k,k^{\prime}}=w_{0}\ldots w_{k}w_{k^{\prime}+1}\ldots w_{\ell}\). This means in particular that in \(\omega_{k,k^{\prime}}^{l,l^{\prime}}\), the integers \(k,k^{\prime},l\) and \(l^{\prime}\) all refer to indices from \(\omega\). A rooted walk in which all vertices are distinct is said to be a _simple path_ or _self-avoiding walk_. The set of all such walks on a digraph \(G\) is denoted \(\mathrm{SAW}(G)\). Similarly, a rooted cycle \((i_{0},i_{1})_{k_{1}}(i_{1},i_{2})_{k_{2}}\cdots(i_{\ell-1},i_{0})_{k_{\ell}}\) of non-zero length for which all vertices \(i_{t}\) are distinct is said to be a _simple cycle_ or _self-avoiding polygon_. Note that a self-loop \((i,i)_{k}\) is considered a rooted simple cycle of length one. The set of all simple cycles on \(G\) is \(\mathrm{SAP}(G)\). For \(G\) any (directed multi)graph, to ease the notation, we also denote by \(\mathcal{W}(G)\) the \(\mathbb{K}\)-vector space spanned by all walks on \(G\), \(\mathbb{K}\) being a field of characteristic \(0\). For a walk \(\omega\in\mathcal{W}(G)\), we designate by \(V(\omega)\) the support of \(\omega\), that is the set of _distinct_ vertices visited by \(\omega\); and by \(E(\omega)\) the _multiset_ of directed edges visited by \(\omega\). ### Definitions for loop-erasure As stated in the introduction, Lawler's loop-erasing procedure consists in erasing all cycles from a walk \(\omega\) in the _chronological_ order in which they appear. Formally, it is a selection-quotient process which transforms a walk into its self-avoiding skeleton. To construct the algebraic structures associated with Lawler's procedure we must not only consider its end product but also what it produces during its intermediary stages and what it removes from the walk, in its original context: **Definition 1** (Loop-erased sections).: Let \(G\) be a digraph and consider \(\omega=w_{0}\ldots w_{\ell}\in\mathcal{W}(G)\). The set \(\mathrm{LES}(\omega)\) of loop-erased sections is the set of all _closed subwalks_ of \(\omega\) _erased_ by Lawler's procedure. **Example 1**.: On the complete graph \(K_{5}\) on \(5\) vertices (including self-loops), consider the walk \(\omega=12324522\). In the corresponding illustration (not reproduced here), the integers in boxes in the middle of the edges give the edges' order of traversal, while the vertices are labeled by the black integers next to them. 
The simple cycles erased by Lawler's procedure are \(\omega^{1,3}=232\), \(\omega^{3,6}=2452\) and \(\omega^{6,7}=22\) and the set of erased closed subwalks of \(\omega\) is therefore \[\mathrm{LES}(12324522)= \{\omega^{1,3},\omega^{3,6},\omega^{1,6},\omega^{6,7},\omega^{3,7},\omega^{1,7}\}\] \[= \{232,2452,232452,22,24522,2324522\}.\] **Remark 1**.: The requirement that the closed subwalks of \(\mathrm{LES}(\omega)\) be constructed solely from _erased_ sections is crucial. For example, in \(\omega=1232341\), the section \(\omega^{2,4}=323\) is a closed subwalk of \(\omega\) which does not belong to \(\mathrm{LES}(\omega)\): the occurrence of vertex \(3\) at step \(2\) is erased together with the section \(\omega^{1,3}=232\), so that when the walk returns to vertex \(3\) at step \(4\) no cycle is closed and \(323\) is never erased as a whole by Lawler's procedure. **Definition 2** (Loop-erased walks).: Let \(G\) be a digraph and \(\omega=w_{0}\cdots w_{\ell}\in\mathcal{W}(G)\) of length \(\ell\). 
For \(0\leq k\leq\ell\), we designate by \(\operatorname{LEW}_{k}(\omega)\), called the loop-erased walk of \(\omega\) at step \(k\), what is left of \(\omega\) after its first \(k\) steps while performing Lawler's procedure. By the definitions of \(\operatorname{LES}(\omega)\) and \(\operatorname{LEW}_{k}(\omega)\) we obtain what was remarked above, namely that loop-erased sections may not straddle over one-another, a consequence of their step-by-step erasure in chronological order: **Lemma 1**.: _Let \(G\) be a digraph and \(\omega=w_{0}\ldots w_{\ell}\in\mathcal{W}(G)\). Then \(\omega^{k,k^{\prime}}\in\operatorname{LES}(\omega)\) if and only if there does not exist a pair of integers \(0\leq l<k<l^{\prime}<k^{\prime}\leq\ell\) with \(w_{k}=w_{k^{\prime}}\neq w_{l}=w_{l^{\prime}}\) and \(\omega^{l,l^{\prime}}\in\operatorname{LES}(\omega)\)._ Before we prove the lemma, we remark that the notion of loop-erased walks allows for an alternative but equivalent definition of that of loop-erased section: **Remark 2** (A recursive procedure for constructing \(\operatorname{LES}(\omega)\)).: Let \(G\) be a digraph and consider \(\omega=w_{0}\ldots w_{\ell}\in\mathcal{W}(G)\). The set \(\operatorname{LES}(\omega)\) of loop-erased sections of \(\omega=w_{0}\cdots w_{\ell}\) is constructed recursively as follows. Initialize with \(\operatorname{LES}(\omega)=\emptyset\). Then for \(k\in\{1,\ldots,\ell-1\}\), if \(w_{k+1}\in V(\operatorname{LEW}_{k}(\omega))\), denote by \(k^{\prime}\) the greatest integer such that \(0\leq k^{\prime}\leq k\) and \(w_{k^{\prime}}=w_{k+1}\). If \(k^{\prime}\) exists, then: 1. add the closed walk \(\omega^{k^{\prime},k+1}=w_{k^{\prime}}\ldots w_{k+1}\) to \(\operatorname{LES}(\omega)\); 2. if there exists \(\omega^{k^{\prime\prime},k^{\prime}}\in\operatorname{LES}(\omega)\), add the closed walk \(\omega^{k^{\prime\prime},k+1}=w_{k^{\prime\prime}}\ldots w_{k+1}\) to \(\operatorname{LES}(\omega)\) as well. While equivalent to Definition 1, the above formulation is more formal in flavor and recursive in nature, thus better suited to algorithm designs and easier to wield in proofs. Proof of Lemma 1.: Assuming that \(\omega^{k,k^{\prime}},\omega^{l,l^{\prime}}\in\operatorname{LES}(\omega)\), suppose that both sections nonetheless straddle over one-another. We may assume without loss of generality that \(k<l<k^{\prime}<l^{\prime}\). In particular, there is no earlier step \(m<k\) with \(w_{m}=w_{l}\) since otherwise we would effectively be in the straddling situation where \(l<k\). Then at step \(l^{\prime}-1\geq k^{\prime}\) of the walk, vertex \(w_{l}\notin V(\operatorname{LEW}_{l^{\prime}-1}(\omega))\) since at this point \(\omega^{k,k^{\prime}}\) has already been erased and so, by Remark 2, \(\omega^{l,l^{\prime}}\notin\operatorname{LES}(\omega)\), a contradiction. ## 2 The chronological structure of walks From a walk \(\omega\), Lawler's process, once terminated, produces a set of erased simple cycles and one self-avoiding skeleton (possibly trivial). It is therefore natural to seek a co-product which to the walk \(\omega\) would associate a sum over erased sections \(\omega^{k,k^{\prime}}\) and associated remainders \(\omega_{k,k^{\prime}}\), so that \(\omega\) could be obtained back from these through grafting of the former onto the latter. The 'grafting' product appropriate to that end, known as _nesting_, was first identified thanks to purely combinatorial considerations [5] and is permutative non-associative, reflecting the chronological constraints of Lawler's process. 
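The recursive construction of Remark 2 translates directly into a short program. The following Python sketch (a minimal illustration, not part of the paper) performs Lawler's loop erasure while recording the index pairs of the erased closed sections, and reproduces the set \(\mathrm{LES}(12324522)\) of Example 1; vertices are single characters here, but any hashable labels would work with a list of vertices instead of a string.

```python
def lawler(walk):
    """Lawler's chronological loop erasure (Definition 1 / Remark 2).
    Returns (skeleton, les): the self-avoiding skeleton as a list of vertices and
    the set of loop-erased sections as index pairs (k, k') into `walk`."""
    lew = []        # current loop-erased walk, a list of (distinct) vertices
    les = set()     # erased closed sections, stored as index pairs
    for k1, v in enumerate(walk):            # k1 plays the role of k+1 in Remark 2
        if v in lew:
            # k': last occurrence of v in the prefix w_0 ... w_{k1-1}
            k_prime = max(i for i in range(k1) if walk[i] == v)
            new = {(k_prime, k1)}
            # rule 2 of Remark 2: extend sections that end exactly at k'
            new |= {(kpp, k1) for (kpp, kp) in les if kp == k_prime}
            les |= new
            # erase the closed loop from the running loop-erased walk
            lew = lew[: lew.index(v) + 1]
        else:
            lew.append(v)
    return lew, les

walk = "12324522"                       # the walk of Example 1
skeleton, les = lawler(walk)
print("skeleton:", "".join(skeleton))   # -> 12
print(sorted(walk[k:kp + 1] for k, kp in les))
# -> ['22', '232', '232452', '2324522', '2452', '24522'], matching Example 1
```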
It is difficult, however, to maintain any form of compatibility with nesting via such an indiscriminate procedure as cutting out all loop-erased sections: not all pairs \((\omega^{k,k^{\prime}},\,\omega_{k,k^{\prime}})\) can be consistently grafted back to form the original walk; and when grafting is possible, it may be so in more than one way. These problems arise from certain towers and all corollas, respectively. Consider first an instance of the former, the walk \(\omega=1233231\), which is a tower in the sense that the self-loop 33 is attached 'on top of' the cycle 232, itself attached to the 'base' triangle 1231. Here \(\omega^{2,3}=33\) is a valid loop-erased section of \(\omega\), yet it can be grafted back onto \(\omega_{2,3}\) in two distinct ways: one producing \(\omega\) and the other yielding the walk \(\omega^{\prime}=1232331\). Remark how in \(\omega^{\prime}\), the self-loop 33 occurs one level below its original location in \(\omega\) since it is now attached directly to the 'base' triangle 1231. Algebraically such instances correspond to cases where the nesting product fails to be associative. Second, for the issue with corollas, i.e. bouquets of closed walks with the same root, consider e.g. \[\omega=12131.\] Here both \(121\), \(131\in\mathrm{LES}(\omega)\); yet cutting e.g. \(\omega^{0,2}=121\) and grafting it back onto \(\omega_{0,2}=131\) either gives back the walk \(\omega=12131\) or the completely different one \(\omega^{\prime}=13121\). Algebraically, these instances translate into cases where the nesting product fails to be commutative. ### Admissible cuts To resolve the difficulties mentioned above, which become extensive when taken together in arbitrarily long walks, we must refine the set of loop-erased sections that can be cut out of the original walk by the co-product. Here, as earlier, the major hurdle is due to the chronological constraints inherent to Lawler's process. Because of this, special attention must be paid to erased sections that appear within longer erased sections, the latter providing the temporal context of the former: **Definition 3** (Temporal context of an erased section): _Let \(G\) be a digraph, \(\omega\in\mathcal{W}(G)\) and \(\omega^{k,k^{\prime}}\in\mathrm{LES}(\omega)\). We denote \(\mathrm{LES}(\omega)_{k,k^{\prime}}^{<}\subset\mathrm{LES}(\omega)\) the subset of loop-erased sections \(\omega^{l,l^{\prime}}\) which strictly include \(\omega^{k,k^{\prime}}\) as subwalk, i.e. \(l\leq k<k^{\prime}<l^{\prime}\). Because we require \(k^{\prime}<l^{\prime}\) strictly, \(\mathrm{LES}(\omega)_{k,k^{\prime}}^{<}\) may be empty. Otherwise, we denote \(\omega_{k,k^{\prime}}^{\min}\) the smallest element of \(\mathrm{LES}(\omega)_{k,k^{\prime}}^{<}\) for inclusion._ By construction, if \(\omega_{k,k^{\prime}}^{\min}\) exists, it is the tightest erased section which comprises \(\omega^{k,k^{\prime}}\) entirely. It provides the relevant temporal context for \(\omega^{k,k^{\prime}}\) since anything outside of \(\omega_{k,k^{\prime}}^{\min}\) creates no further chronological constraints on \(\omega^{k,k^{\prime}}\) beyond those on \(\omega_{k,k^{\prime}}^{\min}\). This is because vertices appearing in the loop-erased walk at the start of \(\omega_{k,k^{\prime}}^{\min}\) cannot appear again inside of it by Lemma 1, so are necessarily avoided by \(\omega^{k,k^{\prime}}\). 
Hence, any additional constraint that Lawler's process imposes on \(\omega^{k,k^{\prime}}\) as compared to \(\omega_{k,k^{\prime}}^{\min}\) arises solely from within \(\omega_{k,k^{\prime}}^{\min}\). **Example 2**: _Let \(\omega=12324522\) be the walk of Example 1 and consider its loop-erased section \(\omega^{1,3}\). Since \(\mathrm{LES}(12324522)=\{\omega^{1,3},\omega^{3,6},\omega^{1,6},\omega^{6,7},\omega^{3,7},\omega^{1,7}\}\), then \(\mathrm{LES}(\omega)_{1,3}^{<}=\{\omega^{1,6},\omega^{1,7}\}\). Indeed, both sections \(\omega^{1,6}\) and \(\omega^{1,7}\) strictly contain \(\omega^{1,3}\). Furthermore, the smallest of these by inclusion is \(\omega_{1,3}^{\min}=\omega^{1,6}\), i.e. \(\omega^{1,6}\) is the shortest loop-erased section strictly containing \(\omega^{1,3}\). By contrast, there is no loop-erased section strictly containing \(\omega^{3,7}\in\mathrm{LES}(\omega)\), that is \(\mathrm{LES}(\omega)_{3,7}^{<}=\emptyset\) and \(\omega_{3,7}^{\min}\) does not exist._ We can now control the loop-erased sections that a co-product may extract by admitting only those cuts which are corollas within their relevant temporal context and only if those cuts are contiguous subwalks including the last petals of the corolla: **Definition 4** (Admissible cuts): _Let \(G\) be a digraph and \(\omega=w_{0}\ldots w_{\ell}\in\mathcal{W}(G)\). A non-empty loop-erased section \(\omega^{k,k^{\prime}}:=w_{k}w_{k+1}\ldots w_{k^{\prime}}\in\mathrm{LES}(\omega)\) is an admissible cut of \(\omega\) when \(\omega^{k,k^{\prime}}\neq\omega\) and either \(\omega^{l,l^{\prime}}:=\omega_{k,k^{\prime}}^{\min}\) does not exist or \(w_{k}\) does not appear in \(w_{k^{\prime}+1}\cdots w_{l^{\prime}}\). The set of admissible cuts of \(\omega\) is denoted \(\mathrm{AdC}(\omega)\)._ **Remark 3**: _The condition that for \(\omega^{k,k^{\prime}}\in\mathrm{LES}(\omega)\), \(\omega^{l,l^{\prime}}:=\omega_{k,k^{\prime}}^{\min}\) either does not exist or \(w_{l}\) does not appear in \(w_{k^{\prime}+1}\cdots w_{l^{\prime}}\) implies that admissible cuts can only be made right to left in the walk, that is from the latest to the earliest, in reverse chronological order._ **Example 3**.: In the complete graph \(K_{5}\), consider the walk \[\omega=12324345.\] The loop-erased sections \(\omega^{1,3}=232\in\operatorname{LES}(\omega)\) and \(\omega^{4,6}=434\in\operatorname{LES}(\omega)\) are both admissible cuts of \(\omega\). By contrast, \(\omega^{2,5}=3243\notin\operatorname{LES}(\omega)\) and so is not an admissible cut. **Example 4**.: In the walk \[\omega=12131\] the subwalk \(\omega^{2,4}\in\operatorname{LES}(\omega)\) is an admissible cut of \(\omega\), while \(\omega^{0,2}=121\in\operatorname{LES}(\omega)\) is not admissible because vertex \(1\) is visited again by \(\omega^{\min}_{0,2}\) after completion of \(\omega^{0,2}\). The notion of admissible cut is well defined because the property of being admissible does not depend on the order in which admissible cuts are considered and removed from the original walk. In particular, if a loop-erased section is an admissible cut of an admissible cut of a walk or of its remainder, then it is an admissible cut of that walk and vice-versa. This is significant because it indicates that, in spite of the strong chronological constraints created by Lawler's process, cutting out admissible cuts does not alter the other cuts' relevant temporal context and thence their admissibility: **Proposition 2**.: _Let \(G\) be a digraph and \(\omega\in\mathcal{W}(G)\)._ **Case 1**.: _If_ \(k<k^{\prime}<l<l^{\prime}\) _or_ \(l<l^{\prime}<k<k^{\prime}\) _then,_ \[\omega^{k,k^{\prime}}\in\operatorname{AdC}(\omega)\text{ and }\,\omega^{l,l^{\prime}}\in\operatorname{AdC}(\omega_{k,k^{\prime}})\iff\omega^{l,l^{\prime}}\in\operatorname{AdC}(\omega)\text{ and }\,\omega^{k,k^{\prime}}\in\operatorname{AdC}(\omega_{l,l^{\prime}}).\] **Case 2**.: _If_ \(k<l<l^{\prime}\leq k^{\prime}\) _then,_ \[\omega^{k,k^{\prime}}\in\operatorname{AdC}(\omega)\text{ and }\,\omega^{l,l^{\prime}}\in\operatorname{AdC}(\omega^{k,k^{\prime}})\iff\omega^{l,l^{\prime}}\in\operatorname{AdC}(\omega)\text{ and }\,\omega^{k,k^{\prime}}_{l,l^{\prime}}\in\operatorname{AdC}(\omega_{l,l^{\prime}}).\] Proof.: **Case 1.** We assume \(k<k^{\prime}<l<l^{\prime}\) without loss of generality; pictorially this is the situation where \[\omega=w_{0}\dots w_{k}\dots w_{k^{\prime}}\dots w_{l}\dots w_{l^{\prime}}\dots w_{\ell}.\] Suppose that \(\omega^{k,k^{\prime}}\in\operatorname{AdC}(\omega)\) and \(\omega^{l,l^{\prime}}\in\operatorname{AdC}(\omega_{k,k^{\prime}})\); we first establish that \(\omega^{l,l^{\prime}}\in\operatorname{AdC}(\omega)\). Given that \(\omega^{k,k^{\prime}}\in\operatorname{LES}(\omega)\), a closed subwalk is erased from \(\omega\) if and only if it is either erased from inside of the \(\omega^{k,k^{\prime}}\) section or from outside of it, i.e. from \(\omega_{k,k^{\prime}}\). This is because, by Lemma 1, erased sections cannot straddle over one-another owing to their step-by-step erasure in chronological order. Here, \(\omega^{l,l^{\prime}}\in\operatorname{AdC}(\omega_{k,k^{\prime}})\), that is, \(\omega^{l,l^{\prime}}\) is a closed subwalk erased from the remainder \(\omega_{k,k^{\prime}}\), and by the above equivalence it is therefore also erased from \(\omega\). This indicates that \(\omega^{l,l^{\prime}}\in\operatorname{LES}(\omega)\). To show that \(\omega^{l,l^{\prime}}\) is an admissible loop-erased section of \(\omega\), consider the set \(\operatorname{LES}(\omega)^{<}_{l,l^{\prime}}\) of loop-erased sections of \(\omega\) which strictly include \(\omega^{l,l^{\prime}}\) as subwalk. If \(\omega^{\min}_{l,l^{\prime}}\) does not exist, then \(\omega^{l,l^{\prime}}\) is admissible, \(\omega^{l,l^{\prime}}\in\operatorname{AdC}(\omega)\). Suppose instead that \(\omega^{m,m^{\prime}}:=\omega^{\min}_{l,l^{\prime}}\) exists and recall that \(\omega^{l,l^{\prime}}\in\operatorname{AdC}(\omega_{k,k^{\prime}})\). This implies one of the two following possibilities: 1. \((\omega_{k,k^{\prime}})_{l,l^{\prime}}^{\min}\) does not exist; then, 1. either \(m\in\{1,\ldots,k\}\cup\{k^{\prime},\ldots,l\}\) and thus \(\omega_{k,k^{\prime}}^{m,m^{\prime}}\in\mathrm{LES}(\omega_{k,k^{\prime}})_{l,l^{\prime}}^{<}\neq\emptyset\) so its minimum exists, a contradiction; 2. or \(m\in\{k+1,\ldots,k^{\prime}-1\}\) but then \(\omega^{k,k^{\prime}}\in\mathrm{AdC}(\omega)\Rightarrow\omega^{m,m^{\prime}}\notin\mathrm{LES}(\omega)\), a contradiction. 2. \(\omega^{n,n^{\prime}}:=(\omega_{k,k^{\prime}})_{l,l^{\prime}}^{\min}\) exists, \(n\leq l<l^{\prime}<n^{\prime}\), and 1. 
if \(k<n<k^{\prime}\) then \(\omega^{k,k^{\prime}}\in\mathrm{AdC}(\omega)\Rightarrow\omega^{n,n^{\prime}}\notin\mathrm{LES}(\omega_{k,k^{\prime}})\), a contradiction; 2. if \(n\geq k^{\prime}\) then \(\omega_{l,l^{\prime}}^{\min}=(\omega_{k,k^{\prime}})_{l,l^{\prime}}^{\min}\) so \(\omega^{l,l^{\prime}}\in\mathrm{AdC}(\omega_{k,k^{\prime}})\Rightarrow\omega^{l,l^{\prime}}\in\mathrm{AdC}(\omega)\); 3. if \(n\leq k\) then \(\omega^{n,n^{\prime}}\in\mathrm{LES}(\omega)_{l,l^{\prime}}^{<}\), i.e. either \(\omega^{m,m^{\prime}}\) is a subwalk of \(\omega^{n,n^{\prime}}\) or the two are the same, and in both cases \(\omega^{l,l^{\prime}}\in\mathrm{AdC}(\omega_{k,k^{\prime}})\Rightarrow\omega^{l,l^{\prime}}\in\mathrm{AdC}(\omega)\). This shows that \(\omega^{l,l^{\prime}}\in\mathrm{AdC}(\omega)\). Second, we establish that \(\omega^{k,k^{\prime}}\in\mathrm{AdC}(\omega_{l,l^{\prime}})\). Since \(\omega^{k,k^{\prime}}\in\mathrm{AdC}(\omega)\) and given that \(k^{\prime}<l\) implies that \(\omega^{k,k^{\prime}}\) is an erased closed subwalk from outside of \(\omega^{l,l^{\prime}}\), then \(\omega^{k,k^{\prime}}\in\mathrm{LES}(\omega_{l,l^{\prime}})\). Supposing that \(\omega_{k,k^{\prime}}^{\min}\) does not exist, then \((\omega_{l,l^{\prime}})_{k,k^{\prime}}^{\min}\) does not exist either and \(\omega^{k,k^{\prime}}\in\mathrm{AdC}(\omega_{l,l^{\prime}})\). Now suppose instead that \(\omega_{k,k^{\prime}}^{\min}\) exists; then \((\omega_{l,l^{\prime}})_{k,k^{\prime}}^{\min}\) is either identical to \(\omega_{k,k^{\prime}}^{\min}\) or is a subwalk of it. In both situations, \(\omega^{k,k^{\prime}}\in\mathrm{AdC}(\omega)\) entails that vertex \(w_{k}=w_{k^{\prime}}\) is not visited again after step \(k^{\prime}\) in \(\omega_{k,k^{\prime}}^{\min}\) and its subwalks, hence \(\omega^{k,k^{\prime}}\in\mathrm{AdC}(\omega_{l,l^{\prime}})\). Conversely, assuming that \(\omega^{l,l^{\prime}}\in\mathrm{AdC}(\omega)\) and \(\omega^{k,k^{\prime}}\in\mathrm{AdC}(\omega_{l,l^{\prime}})\) and proceeding as above we obtain that \(\omega^{k,k^{\prime}}\in\mathrm{AdC}(\omega)\) and \(\omega^{l,l^{\prime}}\in\mathrm{AdC}(\omega_{k,k^{\prime}})\), which proves Case 1 of the Proposition.
**Case 2.** \(k<l<l^{\prime}\leq k^{\prime}\); pictorially this is the situation where \[\omega=w_{0}\ldots w_{k}\ldots w_{l}\ldots w_{l^{\prime}}\ldots w_{k^{\prime}}\ldots w_{\ell}.\] Suppose that \(\omega^{l,l^{\prime}}\in\operatorname{AdC}(\omega)\) and \(\omega^{k,k^{\prime}}_{l,l^{\prime}}\in\operatorname{AdC}(\omega_{l,l^{\prime}})\), and consider \((\omega^{k,k^{\prime}})^{\min}_{l,l^{\prime}}\): 1. if \(l^{\prime}<k^{\prime}\) then \(\omega^{k,k^{\prime}}\in\operatorname{LES}(\omega^{k,k^{\prime}})^{<}_{l,l^{\prime}}\) and so \((\omega^{k,k^{\prime}})^{\min}_{l,l^{\prime}}=\omega^{\min}_{l,l^{\prime}}\); 2. if \(l^{\prime}=k^{\prime}\) then \(\operatorname{LES}(\omega^{k,k^{\prime}})^{<}_{l,l^{\prime}}\) is empty and \((\omega^{k,k^{\prime}})^{\min}_{l,l^{\prime}}\) does not exist. Therefore in both situations \(\omega^{l,l^{\prime}}\) is an admissible cut of \(\omega^{k,k^{\prime}}\). The converse results, namely proving that \(\omega^{l,l^{\prime}}\in\operatorname{AdC}(\omega)\) and \(\omega^{k,k^{\prime}}_{l,l^{\prime}}\in\operatorname{AdC}(\omega_{l,l^{\prime}})\) while assuming \(\omega^{k,k^{\prime}}\in\operatorname{AdC}(\omega)\) and \(\omega^{l,l^{\prime}}\in\operatorname{AdC}(\omega^{k,k^{\prime}})\), are obtained completely similarly, yielding Case 2 of the Proposition.

### All walks are totally-ordered temporal trees

Lemma 1 and Proposition 2 strongly suggest that any walk on any graph is chronologically equivalent to a tree where the root node is the self-avoiding skeleton of the walk and each non-root node stands for a simple cycle, see Theorem 4 below. In that tree, Lawler's procedure erases nodes from the leaves down to the root and operates on the branches from left to right (or more precisely along the direction given to time). That is, time totally orders the walk's tree structure. Formally, this translates into a total order on the set of admissible cuts of a walk:

**Definition 5** (Time-ordering of the admissible cuts).: Let \(G\) be a digraph and \(\omega\in\mathcal{W}(G)\). Assuming that \(\operatorname{AdC}(\omega)\neq\emptyset\), we define the relation \(\leqslant_{\bigodot}\) on \(\operatorname{AdC}(\omega)\) which orders the admissible cuts chronologically: \(\omega^{k,k^{\prime}}\leqslant_{\bigodot}\omega^{l,l^{\prime}}\) whenever Lawler's process completes the erasure of \(\omega^{k,k^{\prime}}\) no later than that of \(\omega^{l,l^{\prime}}\), a cut contained in another being erased before it.

**Theorem 4** (All walks are temporal-trees).: _Let \(G\) be a digraph and \(\omega\in\mathcal{W}(G)\).
Then \(\omega\) has the temporal-structure of a tree \(t(\omega)\) whose nodes are totally ordered by \(\leqslant_{\bigodot}\) according to a depth-first order._

Proof.: To establish the theorem, we first map walks to cacti, then cacti to trees:

**Definition 6** (Cactus).: Let \(G\) be a digraph. A walk \(\omega=w_{0}\ldots w_{\ell}\in\mathcal{W}(G)\) is a cactus if and only if for any \(0\leq k<k^{\prime}\leq\ell\), \(w_{k}=w_{k^{\prime}}\iff\omega^{k,k^{\prime}}\in\operatorname{LES}(\omega)\). We denote by \(\mathcal{C}\) the set of all cacti on the complete graph \(K_{\mathbb{N}}\) with \(V(K_{\mathbb{N}})=\mathbb{N}\) and by \(\operatorname{Cact}(G)\) the vector space spanned by the cacti on \(G\).

In a cactus all repeated vertices delimit valid loop-erased sections, which means that there cannot be patterns such as \(\omega=1\underline{2}1\underline{2}1\) as \(212\notin\operatorname{LES}(\omega)\). Intuitively a cactus is therefore a 'disentangled' walk, where every instance of a repeated vertex is the root of a simple cycle erased by Lawler's procedure. We may always map walks to cacti by defining, \[C:\,\mathcal{W}(G)\to\mathcal{C},\qquad\omega=w_{0}\cdots w_{\ell}\mapsto\kappa:=C(\omega)=c_{0}\cdots c_{\ell}, \tag{1}\] where \(\kappa\) is the cactus defined as follows: \(c_{0}=w_{0}\) and for any \(k\in\{0,...,\ell-1\}\), if \(\operatorname{LEW}_{k+1}(\omega)=\operatorname{LEW}_{k}(\omega)w_{k+1}\) then \(c_{k+1}=\max(V(c_{0}\ldots c_{k}))+1\); else \(c_{k+1}=c_{l}\) where \(l=\max(i\in\{0,\ldots,k\},\ w_{i}=w_{k+1})\). In words, considering the loop-erased walk \(\operatorname{LEW}_{k}(\omega)\) at step \(k\), if vertex \(w_{k+1}\) reached at step \(k+1\) is distinct from those of \(\operatorname{LEW}_{k}(\omega)\) then \(c_{k+1}\) is a vertex carrying a fresh integer label, namely the largest label used so far plus one (an expedient ensuring that we map distinct labels to distinct labels). If instead vertex \(w_{k+1}\) was visited at some step \(l\) prior to step \(k+1\) in the loop-erased walk, that is \(w_{l}\cdots w_{k+1}\) closes an erased section, then \(c_{k+1}\) is given the same label as \(c_{l}\). For example the walk \(\omega=12121\) becomes \(\kappa:=C(\omega)=12131\). Because the new labels are not labels of nodes of \(G\), it may be that \(\kappa\) is not a valid walk on \(G\) but at least it is a walk on the complete graph \(K_{\mathbb{N}}\), see §6.2 below. This is sufficient for our purpose: by definition \(\omega\) and \(\kappa:=C(\omega)\) have the same length, \(\kappa\) is a cactus, and \(\omega\) and \(\kappa\) share the same temporal structure, \[\omega^{k,k^{\prime}}\in\operatorname{AdC}(\omega)\iff\kappa^{k,k^{\prime}}\in\operatorname{AdC}\bigl{(}\kappa\bigr{)}. \tag{2}\] Since any two simple cycles in cacti share at most one vertex (their roots), we define a tree \(t(\omega)\) from \(\kappa\) by drawing a tree-node \(t\) for every simple cycle \(\sigma\) of \(\kappa\). Two nodes \(t\) and \(t^{\prime}\) of the tree are connected if and only if the corresponding simple cycles \(\sigma\) and \(\sigma^{\prime}\) share a vertex in \(\kappa\). Finally we add a root node representing the (possibly trivial) self-avoiding skeleton of \(\omega\). The time-order \(\leqslant_{\bigodot}\) now totally orders the nodes of \(t(\omega)\): for \(t,t^{\prime}\) two nodes of \(t(\omega)\) corresponding to simple cycles \(\sigma\) and \(\sigma^{\prime}\) of \(\kappa\), \(t\leqslant_{\bigodot}t^{\prime}\iff\sigma\) is erased prior to \(\sigma^{\prime}\) in \(\kappa\).
This builds a reverse depth-first order on the nodes of \(t(\omega)\) with the top-left leaf of the tree corresponding to the first erased simple cycle, hence the smallest as per \(\leqslant_{\bigodot}\).

**Example 7**.: As an example, consider the walk \(\omega=12332331\). Then \(\kappa:=C(\omega)=12332441\) and the tree \(\tau:=t(\omega)\) consists of a root node \(\operatorname{Root}\) whose only child is a node \(d\), the node \(d\) having two children \(b\) and \(c\) (in this order), and \(b\) having a single child \(a\). The simple cycles of \(\kappa\) are \(1241\), corresponding to node \(d\) of the tree; \(232\) (node \(b\) of the tree); \(33\) (node \(a\)); and \(44\) (node \(c\)). The root node of \(t(\omega)\) stands for the trivial walk '\(1\)' on vertex \(1\), which is the self-avoiding skeleton of \(\omega\). The time-order on the tree nodes is \(a\leqslant_{\bigodot}b\leqslant_{\bigodot}c\leqslant_{\bigodot}d\leqslant_{\bigodot}\operatorname{Root}\).

Although the tree \(t(\omega)\) depends on the walk \(\omega\), a universal tree can be constructed for all walks of a given digraph \(G\). Considering only the trees \(t(\omega)\) obtained from walks with no repeated sections produces a finite number of structurally distinct trees from all walks on \(G\). These trees can be ordered partially by inclusion and the resulting poset always admits a unique maximum. This maximum tree is of paramount importance to \(G\): it is one of the few invariants of its hike monoid [3; 4], and dictates the shape of all branched continued fractions counting walks on \(G\) [5].

## 3 The co-preLie co-algebra of walks

### Co-product

With the notion of admissible cut, we may now formally define the co-product associated to Lawler's process, by mapping a walk to a sum over all its admissible cuts tensored with their remainders:

**Definition 7** (Co-product): _Let \(G\) be a digraph. The co-product associated to Lawler's process is the linear map \(\Delta_{\mathrm{CP}}\) defined by, \[\Delta_{\mathrm{CP}}:\left\{\begin{array}{rcl}\mathcal{W}(G)&\to&\mathcal{W}(G)\otimes\mathcal{W}(G)\\ \omega&\mapsto&\Delta_{\mathrm{CP}}(\omega)=\sum_{\omega^{c}\in\mathrm{AdC}(\omega)}\omega_{c}\otimes\omega^{c}.\end{array}\right.\]_

An essential property of this co-product is that a walk is primitive for it if and only if it is a simple path or a simple cycle.

**Proposition 5**.: _Let \(G\) be a digraph and \(\omega\in\mathcal{W}(G)\). Then,_ \[\Delta_{\mathrm{CP}}(\omega)=0\iff\omega\in\mathrm{SAW}(G)\cup\mathrm{SAP}(G).\]

Proof.: Let \(\omega=w_{0}\ldots w_{\ell}\in\mathcal{W}(G)\). If \(\omega\in\mathrm{SAW}(G)\) then it has no cycles and \(\mathrm{AdC}(\omega)=\emptyset\), so \(\Delta_{\mathrm{CP}}(\omega)=0\). If \(\omega\in\mathrm{SAP}(G)\), then \(\mathrm{LES}(\omega)=\{\omega\}\) and \(\mathrm{AdC}(\omega)=\emptyset\) since the walk is not an admissible cut of itself, therefore \(\Delta_{\mathrm{CP}}(\omega)=0\). Now suppose that \(\omega\notin\mathrm{SAW}(G)\cup\mathrm{SAP}(G)\). Then \(\omega\) has at least one simple cycle and we may consider the last such cycle \(\omega^{k,k^{\prime}}\in\mathrm{LES}(\omega)\) erased from \(\omega\) by Lawler's process. This simple cycle cannot be \(\omega\) itself since \(\omega\notin\mathrm{SAP}(G)\). Furthermore, after step \(k^{\prime}\) no vertex of \(\mathrm{LEW}_{k^{\prime}}(\omega)\) is visited again in \(w_{k^{\prime}+1}\cdots w_{\ell}\), as otherwise \(\omega^{k,k^{\prime}}\) would not be the last erased simple cycle. This simply indicates that the last erased simple cycle is not within a wider erased section, by virtue of being the last to be removed. Thus \(\omega^{\min}_{k,k^{\prime}}\) does not exist, \(\omega^{k,k^{\prime}}\in\mathrm{AdC}(\omega)\), and \(\Delta_{\mathrm{CP}}(\omega)\neq 0\).
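The objects introduced so far are entirely algorithmic, so it can help to compute them mechanically. The following Python sketch is a brute-force illustration rather than part of the formal development: all function names are ours, and the routine `les` encodes our reading of Example 2, namely that the loop-erased sections of a walk are the simple cycles erased by Lawler's process together with chains of consecutive erased cycles sharing their root. With that caveat, it implements Lawler's loop erasure, the admissible cuts of Definition 4, the terms of the co-product \(\Delta_{\mathrm{CP}}\) of Definition 7, and the cactus map \(C\) of Eq. (1); running it reproduces the cactus of Example 7 and the four tensors of Example 8 below.

```python
def loop_erase(w):
    """Lawler's process on a walk w given as a string of single-character vertex
    labels.  Returns (loop-erased walk, erased sections), the erased sections being
    index pairs (k, k') with w[k] == w[k'], listed in chronological erasure order."""
    stack, erased = [], []            # stack = loop-erased walk, with latest indices
    for j, v in enumerate(w):
        pos = next((p for p, (u, _) in enumerate(stack) if u == v), None)
        if pos is None:
            stack.append((v, j))                     # the walk extends the LEW
        else:
            erased.append((stack[pos][1], j))        # a simple cycle is erased
            stack = stack[:pos] + [(v, j)]           # keep the vertex, refresh its index
    return "".join(u for u, _ in stack), erased

def les(w):
    """Loop-erased sections: erased simple cycles and chains of consecutive ones."""
    _, prim = loop_erase(w)
    secs, changed = set(prim), True
    while changed:
        changed = False
        for (a, b) in list(secs):
            for (c, d) in prim:
                if c == b and (a, d) not in secs:
                    secs.add((a, d)); changed = True
    return secs

def adc(w):
    """Admissible cuts of Definition 4, returned as index pairs (k, k')."""
    secs, cuts = les(w), []
    for (k, kp) in sorted(secs):
        if (k, kp) == (0, len(w) - 1):               # the whole walk is excluded
            continue
        sup = [s for s in secs if s[0] <= k and kp <= s[1] and s != (k, kp)]
        if not sup:
            cuts.append((k, kp)); continue
        l, lp = min(sup, key=lambda s: s[1] - s[0])  # shortest strictly containing section
        if w[k] not in w[kp + 1:lp + 1]:             # w_k must not reappear up to step l'
            cuts.append((k, kp))
    return cuts

def delta_cp(w):
    """Terms of the co-product of Definition 7, as (remainder, cut) pairs."""
    return [(w[:k + 1] + w[kp + 1:], w[k:kp + 1]) for (k, kp) in adc(w)]

def cactus(w):
    """The cactus map C of Eq. (1); fresh labels are successive integers."""
    c, lew = [int(w[0])], [w[0]]
    for k in range(len(w) - 1):
        v = w[k + 1]
        if v not in lew:                             # LEW_{k+1} = LEW_k w_{k+1}
            lew.append(v); c.append(max(c) + 1)
        else:                                        # w_{k+1} closes an erased section
            c.append(c[max(i for i in range(k + 1) if w[i] == v)])
            lew = lew[:lew.index(v) + 1]
    return "".join(map(str, c))

print(cactus("12332331"))       # -> 12332441, the cactus of Example 7
print(delta_cp("1233234441"))   # -> the four (remainder, cut) pairs of Example 8,
                                #    with cuts 2332, 33, 444 and 44
```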
**Example 8**.: Consider the walk \(\omega=1233234441\), then \[\Delta_{\mathrm{CP}}(1233234441)=123323441\otimes 44+12332341\otimes 444+123234441\otimes 33+1234441\otimes 2332.\]

### The co-preLie property

Having established the definition of the co-product associated to the Lawler process and identified its primitive walks, we now turn to the co-algebraic structure it gives to the walk vector space \(\mathcal{W}(G)\). Recall that:

**Definition 8**.: A co-preLie co-algebra is a couple \((\mathcal{V},\Delta)\) where \(\mathcal{V}\) is a vector space and \(\Delta:\mathcal{V}\to\mathcal{V}\otimes\mathcal{V}\) is a linear map such that for any \(v\in\mathcal{V}\) the following relation is satisfied \[(\Delta\otimes\mathrm{Id}-\mathrm{Id}\otimes\Delta)\circ\Delta(v)=(\mathrm{Id}\otimes\tau)\circ(\Delta\otimes\mathrm{Id}-\mathrm{Id}\otimes\Delta)\circ\Delta(v)\] where \(\mathrm{Id}\) is the identity map and \(\tau\) is the twisting linear map, \(\tau:\mathcal{V}\otimes\mathcal{V}\to\mathcal{V}\otimes\mathcal{V}\), \(\tau(u\otimes v)=v\otimes u\).

**Theorem 6**.: _The vector space \(\mathcal{W}(G)\), equipped with the coproduct \(\Delta_{\mathrm{CP}}\), is a co-preLie (but not co-unital) co-algebra._

We present two proofs of this result. The first, given immediately below, is a direct approach based on the properties of admissible cuts. The second proof, presented in §5, obtains the theorem as a corollary of the Hopf structure on the tensor algebra generated by \(\mathcal{W}(G)\) via a brace coalgebra construction.

Proof.: Let \(\omega=w_{0}\ldots w_{\ell}\in\mathcal{W}(G)\). We begin by evaluating \((\Delta_{\mathrm{CP}}\otimes\mathrm{Id})\circ\Delta_{\mathrm{CP}}(\omega)\) explicitly. To that end consider an admissible cut \(\omega^{k,k^{\prime}}\in\mathrm{AdC}(\omega)\), assuming that \(\omega_{k,k^{\prime}}\) is neither self-avoiding nor a simple cycle as this leads to a \(0\) result. Then, \((\Delta_{\mathrm{CP}}\otimes\mathrm{Id})(\omega_{k,k^{\prime}}\otimes\omega^{k,k^{\prime}})\) yields a sum over cuts that fall into four distinct cases, depending on the second cut's coordinates \(l\), \(l^{\prime}\) relative to \(k\), \(k^{\prime}\): 1. \(l<l^{\prime}<k<k^{\prime}\), i.e. \(\omega=w_{0}\cdots w_{l}\cdots w_{l^{\prime}}\cdots w_{k}\cdots w_{k^{\prime}}\cdots w_{\ell}\), this gets cut as \(\omega_{k,k^{\prime};l,l^{\prime}}\otimes\omega^{l,l^{\prime}}\otimes\omega^{k,k^{\prime}}\), 2. \(k<k^{\prime}<l<l^{\prime}\), i.e. \(\omega=w_{0}\cdots w_{k}\cdots w_{k^{\prime}}\cdots w_{l}\cdots w_{l^{\prime}}\cdots w_{\ell}\), this gets cut as \(\omega_{k,k^{\prime};l,l^{\prime}}\otimes\omega^{l,l^{\prime}}\otimes\omega^{k,k^{\prime}}\), 3. \(l<k<k^{\prime}<l^{\prime}\), i.e. \(\omega=w_{0}\cdots w_{l}\cdots w_{k}\cdots w_{k^{\prime}}\cdots w_{l^{\prime}}\cdots w_{\ell}\), this gets cut as \(\omega_{l,l^{\prime}}\otimes\omega^{l,l^{\prime}}_{k,k^{\prime}}\otimes\omega^{k,k^{\prime}}\), 4. \(l<l^{\prime}=k<k^{\prime}\), i.e. \(\omega=w_{0}\cdots w_{l}\cdots w_{l^{\prime}=k}\cdots w_{k^{\prime}}\cdots w_{\ell}\), this gets cut as \(\omega_{l,k^{\prime}}\otimes\omega^{l,k}_{k,k^{\prime}}\otimes\omega^{k,k^{\prime}}\).
Remark that \(\omega^{l,l^{\prime}}\) is not an admissible cut of \(\omega\) because \(w_{l^{\prime}}=w_{k^{\prime}}\) occurs after step \(l^{\prime}\) in \(\omega^{\min}_{l,l^{\prime}}\). By Case 1 of Proposition 2, if an admissible cut falls into situation 1) above, another one will be admissible as per situation 2). Thus, \[(\Delta_{\mathrm{CP}}\otimes\mathrm{Id})\circ\Delta_{\mathrm{CP}}(\omega)= \sum_{\begin{subarray}{c}c\in\mathrm{AdC}(\omega)\\ \omega_{k,k^{\prime}}\not\in\mathrm{SAW}(G)\end{subarray}}\sum_{ \begin{subarray}{c}c^{\prime}\in\mathrm{AdC}(\omega_{k,k^{\prime}})\\ \omega_{k,k^{\prime}}\not\in\mathrm{SAW}(G)\end{subarray}}\begin{cases} \omega_{k,k^{\prime},l,l^{\prime}}\otimes\omega^{l,l^{\prime}}\otimes\omega^{ k,k^{\prime}}&l<l^{\prime}<k<k^{\prime}\\ \omega_{k,k^{\prime},l^{\prime}}\otimes\omega^{l,l^{\prime}}\otimes\omega^{k,k^ {\prime}}&k<k^{\prime}<l<l^{\prime},\\ \omega_{l,l^{\prime}}\otimes\omega^{k,k^{\prime}}_{k,k^{\prime}}\otimes\omega^{ k,k^{\prime}}&l<k<k^{\prime}<l^{\prime},\\ \omega_{l,k^{\prime}}\otimes\omega^{l,k}_{k,k^{\prime}}\otimes\omega^{k,k^{ \prime}}&l<l^{\prime}=k<k^{\prime},\end{cases}\] where we used \(c:=\omega^{k,k^{\prime}}\) and \(c^{\prime}:=\omega^{l,l^{\prime}}\) to alleviate the notation. Now we turn to \((\mathrm{Id}\otimes\Delta_{\mathrm{CP}})\circ\Delta_{\mathrm{CP}}(\omega)\). Let again \(\omega^{k,k^{\prime}}\in\mathrm{AdC}(\omega)\) be an admissible cut of \(\omega\) which we assume not to be a simple cycle as this would lead to a \(0\) result. Then, \((\mathrm{Id}\otimes\Delta_{\mathrm{CP}})\circ(\omega_{k,k^{\prime}}\otimes \omega^{k,k^{\prime}})\) yields a sum over cuts that fall into two distinct cases, depending on the second cut's coordinates \(l\), \(l^{\prime}\) inside of \(\omega_{k,k^{\prime}}\): 1. \(k<l<l^{\prime}<k^{\prime}\), that is \(\omega=w_{0}\cdots w_{k}\cdots w_{l}\cdots w_{l^{\prime}}\cdots w_{k^{\prime}} \cdots w_{k}\), which gets cut as \(\omega_{k,k^{\prime}}\otimes\omega^{k,k^{\prime}}_{l,l^{\prime}}\otimes\omega^ {l,l^{\prime}}\), 2. \(k<l<l^{\prime}=k^{\prime}\), that is \(\omega=w_{1}\cdots w_{k}\cdots w_{l}\cdots w_{l^{\prime}=k^{\prime}}\cdots w_{m}\), which gets cut as \(\omega_{k,k^{\prime}}\otimes\omega^{k,k^{\prime}}_{l,k^{\prime}}\otimes\omega^ {l,k^{\prime}}\). Here we do not need to consider the case \(k=l<l^{\prime}\leq k^{\prime}\). Indeed, either \(k=l<l^{\prime}=k^{\prime}\) then \(\omega^{l,l^{\prime}}=\omega^{k,k^{\prime}}\) meaning we cut \(\omega^{k,k^{\prime}}\) out of itself, which is not admissible; or \(k=l<l^{\prime}<k^{\prime}\) but then, \(\omega^{l,l^{\prime}}\) is not admissible because \(w_{l^{\prime}}=w_{k^{\prime}}\) is visited again after step \(l^{\prime}\). Rather, in that situation it is \(\omega^{l^{\prime},k^{\prime}}\) that is admissible and falls into case 2) above. Thus, \[(\mathrm{Id}\otimes\Delta_{\mathrm{CP}})\circ\Delta_{\mathrm{CP}}(\omega)=\sum_{ \begin{subarray}{c}c\in\mathrm{AdC}(\omega)\\ \omega^{k,k^{\prime}}\not\in\mathrm{SAP}(\omega)\end{subarray}}\sum_{c^{\prime }\in\mathrm{AdC}(\omega^{k,k^{\prime}})}\begin{cases}\omega_{k,k^{\prime}} \otimes\omega_{l,l^{\prime}}^{k,k^{\prime}}\otimes\omega^{l,l^{\prime}}&k<l< l^{\prime}<k^{\prime},\\ \omega_{k,k^{\prime}}\otimes\omega_{l,k^{\prime}}^{k,k^{\prime}}\otimes \omega^{l,k^{\prime}}&k<l<l^{\prime}=k^{\prime},\end{cases}\] where we used \(c:=\omega^{k,k^{\prime}}\) and \(c^{\prime}:=\omega^{l,l^{\prime}}\) to alleviate the notation. 
By Case 2 of Proposition 2, gathering everything, we obtain \[(\Delta_{\mathrm{CP}}\otimes\mathrm{Id}-\mathrm{Id}\otimes\Delta_{\mathrm{CP}} )\circ\Delta_{\mathrm{CP}}(\omega)=\sum_{\begin{subarray}{c}c\in\mathrm{AdC}( \omega)\\ \omega_{k,l^{\prime}}\not\in\mathrm{SAP}(G)\end{subarray}}\sum_{ \begin{subarray}{c}c^{\prime}\in\mathrm{AdC}(\omega_{k,k^{\prime}})\\ l<l^{\prime}<k<l^{\prime}\\ k<l^{\prime}<l<l^{\prime}\end{subarray}}\omega_{k,k^{\prime};l,l^{\prime}} \otimes\omega^{l,l^{\prime}}\otimes\omega^{k,k^{\prime}}.\] Remark how \(k,k^{\prime}\) and \(l,l^{\prime}\) now play completely symmetric roles in the above so that the co-prelie relation holds for all walks \(\omega\in\mathcal{W}(G)\), \[(\Delta_{\mathrm{CP}}\otimes\mathrm{Id}-\mathrm{Id}\otimes\Delta_{\mathrm{CP}} )\circ\Delta_{\mathrm{CP}}(\omega)=(\mathrm{Id}\otimes\tau)\circ(\Delta_{ \mathrm{CP}}\otimes\mathrm{Id}-\mathrm{Id}\otimes\Delta_{\mathrm{CP}})\circ \Delta_{\mathrm{CP}}(\omega).\] This indicates, perhaps suprisingly, that Lawler's intuitive chronological removal of the simple cycles from walks naturally endows their vector space with a sophisticated co-preLie structure. **Example 9**.: Consider again the walk \(\omega=1233234441\) of Example 8. Then, \[\begin{split}\left(\Delta_{\mathrm{CP}}\otimes\mathrm{Id}\right) \circ\Delta_{\mathrm{CP}}(1233234441)=&\ 12332341\otimes 44\otimes 4 4+12323441\otimes 2332\otimes 44\\ &+1232341\otimes 33\otimes 444+12341\otimes 2332\otimes 444\\ &+12323441\otimes 44\otimes 33+1232341\otimes 444\otimes 33+123444 1\otimes 232\otimes 33\\ &+123441\otimes 44\otimes 2332+12341\otimes 44\otimes 2332,\\ \left(\mathrm{Id}\otimes\Delta_{\mathrm{CP}}\right)\circ\Delta_{ \mathrm{CP}}(1233234441)=&\ 12332341\otimes 44\otimes 4+1234441\otimes 232\otimes 33.\end{split}\] From this, reordering the terms for convenience, we obtain \[\begin{split}\left(\Delta_{\mathrm{CP}}\otimes\mathrm{Id}- \mathrm{Id}\otimes\Delta_{\mathrm{CP}}\right)\circ\Delta_{\mathrm{CP}}(12332344 41)=&\ 12323441\otimes 33\otimes 44+12323441\otimes 44\otimes 33\\ &+123441\otimes 2332\otimes 44+123441\otimes 44\otimes 2332\\ &+1232341\otimes 33\otimes 444++1232341\otimes 44\otimes 33\\ &+12341\otimes 2332\otimes 44+12341\otimes 44\otimes 2332,\end{split}\] which is invariant under the action of \(\mathrm{Id}\otimes\tau\) as dictated by Theorem 6. ## 4 Hopf structures on the tensor and symmetric algebras of walks As in the case of trees due to Connes and Kreimer in [1] or the case of decorated trees due to Foissy [2], for any walk, we can extend the notion of admissible cut to allow for multiple simultaneous cuts, which we call _extended_ admissible cuts. Thanks to this construction, a dual of Oudon and Guin's own [8], we get a co-product compatible with the tensor and symmetric algebra structures generated by walks on \(G\). ### Extended admissible cuts We begin by defining the notion of extended admissible cuts of a walk, then show that they turn the tensor and symmetric algebras of walks into Hopf algebras. Finally, we present three special families of walks, ladders, corollas and cacti, and explain how we can make them into Hopf algebras. **Definition 9**.: Let \(G\) be a finite connected non-empty graph and \(\mathbb{K}\) a field of characteristic \(0\). 1. We define \(\mathcal{T}\langle\mathcal{W}(G)\rangle\) as the tensor algebra generated by \(\mathcal{W}(G)\). 
To alleviate the notation, for walks \(\omega_{1},\ldots,\omega_{p}\in\mathcal{W}(G)\), the tensor \(\omega_{1}\otimes\cdots\otimes\omega_{p}\) will be denoted by \(\omega_{1}\,|\,\ldots\,|\,\omega_{p}\). Such elements of \(\mathcal{T}\langle\mathcal{W}(G)\rangle\) are called forests. 2. Let \(\omega=w_{0}\ldots w_{\ell}\) be a walk in \(G\). In keeping we common terminology for Hopf algebras we call _degree_\(\deg(\omega)\) the length of the walk \(\omega\). We recall that the tensor algebra \(\mathcal{T}\langle\mathcal{W}(G)\rangle\) is equipped with the concatenation product \({}^{\bullet}\), \[{}^{\bullet}:\left\{\begin{array}{rcl}\mathcal{T}\langle\mathcal{W}(G) \rangle\otimes\mathcal{T}\langle\mathcal{W}(G)\rangle&\longrightarrow& \mathcal{T}\langle\mathcal{W}(G)\rangle\\ \omega_{1}\,|\,\ldots\,|\,\omega_{m}\otimes\omega_{1}^{\prime}\,|\,\ldots\, |\,\omega_{n}^{\prime}&\longmapsto&\omega_{1}\,|\,\ldots\,|\,\omega_{m}\,|\, \omega_{1}^{\prime}\,|\,\ldots\,|\,\omega_{n}^{\prime},\end{array}\right.\] and the degree \(\deg(\omega_{1}|\ldots|\omega_{m})=\sum_{i=1}^{m}\deg(\omega_{i})\) is the sum of the involved walks' degrees. By construction, \((\mathcal{T}\langle\mathcal{W}(G)\rangle,{}^{\bullet})\) is an unital associative algebra with unit the empty forest (), identified with \(\mathbf{1}\in\mathbb{K}\), written in bold font so as to distinguish it from a vertex label '1'. **Definition 10** (Extended admissible cut).: Let \(G\) be a digraph and \(\omega\in\mathcal{W}(G)\). An _extended admissible cut_ of \(\omega\) is the tensor product of \(n\in\mathbb{N}\backslash\{0\}\) consecutive admissible cuts \(\omega^{k_{i},k_{i}^{\prime}}\in\mathrm{AdC}(\omega)\) which are _non-overlapping_ in \(\omega\), that is \(k_{1}<k_{1}^{\prime}<k_{2}<k_{2}^{\prime}<\cdots<k_{n}<k_{n}^{\prime}\). We write, \[\omega^{k_{1},k_{1}^{\prime};\ldots;k_{n},k_{n}^{\prime}}:=\omega^{k_{1},k_{1 }^{\prime}}\,|\,\ldots\,|\,\omega^{k_{n},k_{n}^{\prime}}\in\mathcal{T}\langle \mathcal{W}(G)\rangle.\] The set of extended admissible cuts of \(\omega\) is denoted \(E\mathrm{AdC}(\omega)\). Observe that \(\mathrm{AdC}(\omega)\subset E\mathrm{AdC}(\omega)\). To alleviate the notation whenever possible we designate an extended admissible cut by a single letter, e.g. \(\omega^{c}\in E\mathrm{AdC}(\omega)\) and might then simply write that \(c\) is an extended admissible cut of \(\omega\). **Example 10**.: Consider the walk \(\omega=123324441\), then \[E\mathrm{AdC}(\omega)=\{33,44,444,2332,33\,|\,44,33\,|\,444,2332\,|\,44,2332\, |\,444\}.\] **Remark 4**.: The notion of extended admissible cut generalizes straightforwardly from \(\mathcal{W}(G)\) to \(\mathcal{T}\langle\mathcal{W}(G)\rangle\). Consider \(\omega_{1}\,|\,\ldots\,|\,\omega_{m}\in\mathcal{T}\langle\mathcal{W}(G)\rangle\) with \(\omega_{i}\in\mathcal{W}(G)\), then an extended admissible cut of this is an element \(\omega^{c_{i}}\,|\,\ldots\,|\,\omega^{c_{m}}\in\mathcal{T}\langle\mathcal{W}(G)\rangle\) such that all \(\omega^{c_{i}}\in E\mathrm{AdC}(\omega_{i})\). As stated Definition 10 the admissible cuts constituting an extended admissible cut \(\omega^{c}\) of a walk \(\omega=w_{0}\cdots w_{\ell}\) are non-overlapping, i.e. \(\omega^{c}:=\omega^{k_{1},k_{1}^{\prime};\ldots;k_{n},k_{n}^{\prime}}\in E \mathrm{AdC}(\omega)\) satisfies \(0\leq k_{1}<k_{1}^{\prime}<k_{2}<k_{2}^{\prime}<\cdots<k_{n}<k_{n}^{\prime}\leq\ell\). 
We can therefore meaningfully denote \[\omega_{c}:=\omega_{k_{1},k_{1}^{\prime};\ldots;k_{n},k_{n}^{\prime}}=w_{0} \ldots w_{k_{1}}w_{k_{1}^{\prime}+1}\ldots w_{k_{2}}w_{k_{2}^{\prime}+1}\ldots w _{k_{n}}w_{k_{n}^{\prime}+1}\ldots w_{\ell},\] for what remains of \(\omega\) after erasure of all \(\omega^{k_{i},k_{i}^{\prime}}\). Since admissible cuts are closed subwalks of a walk, \(\omega_{c}\) is still a walk. Together with the non-overlapping condition this implies that, for any \(1\leq i\leq n\), \[\omega^{k_{1},k_{1}^{\prime};\ldots;k_{n},k_{n}^{\prime}}\in E\mathrm{AdC}( \omega)\Rightarrow\omega^{k_{i},k_{i}^{\prime}}\in\mathrm{AdC}(\omega_{k_{1},k_ {1}^{\prime};\ldots;k_{i-1},k_{i-1}^{\prime};k_{i+1},k_{i+1}^{\prime};\ldots;k_{ n},k_{n}^{\prime}}). \tag{3}\] Extended admissible cuts are 'well behaved' in the sense that such cuts and their remainders satisfy an analog of Proposition 2 for admissible cuts: **Proposition 7**.: _Let \(G\) a digraph, \(\omega\in\mathcal{W}(G)\), \(\omega^{c}\in E\mathrm{AdC}(\omega)\). Then,_ \[\omega^{c^{\prime}}\in\mathrm{AdC}(\omega_{c})\Rightarrow\omega^{c^{\prime}}\in \mathrm{AdC}(\omega)\] _which also implies \(\omega^{c^{\prime}}\in E\mathrm{AdC}(\omega_{c})\Rightarrow\omega^{c^{\prime}} \in E\mathrm{AdC}(\omega)\). Furthermore,_ \[\omega^{c^{\prime}}\in E\mathrm{AdC}(\omega^{c})\Rightarrow\omega^{c^{\prime}} \in E\mathrm{AdC}(\omega)\] Proof.: Let \(\omega^{c}:=\omega^{k_{1},k_{1}^{\prime};\ldots;k_{n},k_{n}^{\prime}}\) be an extended admissible cut of \(\omega\). By virtue of Proposition 2, an admissible cut of the remainder of an admissible cut of a walk is an admissible cut of that walk, \[\omega^{k,k^{\prime}}\in\mathrm{AdC}(\omega),\;\omega^{l,l^{\prime}}\in \mathrm{AdC}(\omega_{k,k^{\prime}})\Rightarrow\omega^{l,l^{\prime}}\in\mathrm{ AdC}(\omega).\] In addition \(\omega^{k_{1},k_{1}^{\prime}}_{k_{2},k_{2}^{\prime};\ldots;k_{n},k_{n}^{ \prime}}\in\mathrm{AdC}(\omega_{k_{2},k_{2}^{\prime};\ldots;k_{n},k_{n}^{ \prime}})\) by virtue of the fact that extended admissible cuts only comprise non-overlapping admissible cuts. Therefore we get \[\omega^{l,l^{\prime}}\in\mathrm{AdC}(\omega_{k_{1},k_{1}^{\prime};\ldots;k_{n},k_{n}^{\prime}})\Rightarrow\omega^{l,l^{\prime}}\in\mathrm{AdC}(\omega_{k_{2},k_{2}^{\prime};\ldots;k_{n},k_{n}^{\prime}}).\] Iterating this observation leads to the first claim for \(\omega^{c^{\prime}}:=\omega^{l,l^{\prime}}\). Note that since all cuts are non-overlapping we could have chosen to remove the \(k_{i},k_{i}^{\prime}\) cuts in any order in the iteration. The result for extended admissible cuts is now immediate since such cuts comprise only non-overlapping admissible cuts to each of which we apply the result just proven. For the second claim, observe that by Case 2 of Proposition 2, \(\omega^{c}\in\mathrm{AdC}(\omega)\) and \(\omega^{c^{\prime}}\in\mathrm{AdC}(\omega^{c})\) implies \(\omega^{c^{\prime}}\in\mathrm{AdC}(\omega)\). The result for extended admissible cuts follows once more from the observation that such cuts comprise only non-overlapping admissible cuts, each of which behaves as dictated by Case 2 of Proposition 2. ### Hopf algebra on walks We may now define a co-product on \(\mathcal{T}\langle\mathcal{W}(G)\rangle\) by relying on extended admissible cuts and their remainders: **Definition 11** (Extended co-product).: Let \(G\) be a digraph. 
Consider the morphism of algebras \(\Delta_{\mathrm{H}}\) defined by: \[\Delta_{\mathrm{H}}:\left\{\begin{array}{rcl}\mathcal{T}\langle\mathcal{W}(G )\rangle&\longrightarrow&\mathcal{T}\langle\mathcal{W}(G)\rangle\otimes \mathcal{T}\langle\mathcal{W}(G)\rangle\\ \omega&\longmapsto&\Delta_{\mathrm{H}}(\omega)=\mathbf{1}\otimes\omega+\omega \otimes\mathbf{1}+\sum_{c\in E\mathrm{AdC}(\omega)}\omega_{c}\otimes\omega^{ c},\end{array}\right.\] where the sum runs over all extended admissible cuts \(\omega^{c}\) of \(\omega\). **Theorem 8**.: _Let \(G\) a digraph and consider the triple \(\mathcal{H}_{\mathcal{T}}:=(\mathcal{T}\langle\mathcal{W}(G)\rangle,\mathbf{\cdot},\Delta_{\mathrm{H}})\). Equipped with the map \(\deg\), it defines a graded connected Hopf algebra._ Proof.: Observe first that \(\deg\) is a graduation by direct calculation. Second, to prove the theorem we must establish that \(\Delta_{\mathrm{H}}\) is coassociative. Since \(\Delta_{\mathrm{H}}\) is a morphism of algebras it is sufficient to show that for any walk \(\omega\in\mathcal{W}(G)\), \[(\Delta_{\mathrm{H}}\otimes\mathrm{Id})\circ\Delta_{\mathrm{H}}(\omega)=( \mathrm{Id}\otimes\Delta_{\mathrm{H}})\circ\Delta_{\mathrm{H}}(\omega).\] Let \(\omega\) be a walk in \(G\). Then \[(\Delta_{\mathrm{H}}\otimes\mathrm{Id})\circ\Delta_{\mathrm{H}}(\omega)=\omega \otimes\mathbf{1}\otimes\mathbf{1}+\mathbf{1}\otimes\omega\otimes\mathbf{1}+ \mathbf{1}\otimes\mathbf{1}\otimes\omega+\sum_{c\in E\mathrm{AdC}(\omega)} \omega_{c}\otimes\omega^{c}\otimes\mathbf{1}\\ +\sum_{c\in E\mathrm{AdC}(\omega)}\omega_{c}\otimes\mathbf{1} \otimes\omega^{c}+\sum_{c\in E\mathrm{AdC}(\omega)}\mathbf{1}\otimes\omega_{c }\otimes\omega^{c}+\sum_{c\in E\mathrm{AdC}(\omega)}\sum_{c^{\prime}\in E \mathrm{AdC}(\omega_{c})}(\omega_{c})_{c^{\prime}}\otimes(\omega_{c})^{c^{ \prime}}\otimes\omega^{c},\] Similarly, \[(\mathrm{Id}\otimes\Delta_{\mathrm{H}})\circ\Delta_{\mathrm{H}}( \omega)=\omega\otimes\mathbf{1}\otimes\mathbf{1}+\mathbf{1}\otimes\omega \otimes\mathbf{1}+\mathbf{1}\otimes\mathbf{1}\otimes\omega+\sum_{c\in E\mathrm{ AdC}(\omega)}\mathbf{1}\otimes\omega_{c}\otimes\omega^{c}\] \[+\sum_{c\in E\mathrm{AdC}(\omega)}\omega_{c}\otimes\mathbf{1} \otimes\omega^{c}+\sum_{c\in E\mathrm{AdC}(\omega)}\omega_{c}\otimes\omega^{c} \otimes\mathbf{1}+\sum_{c\in E\mathrm{AdC}(\omega)}\sum_{c^{\prime}\in E \mathrm{AdC}(\omega^{c})}\omega_{c}\otimes(\omega^{c})_{c^{\prime}}\otimes( \omega^{c})^{c^{\prime}},\] So the theorem follows if we prove that \[\sum_{c\in E\mathrm{AdC}(\omega)}\sum_{c^{\prime}\in E\mathrm{AdC}(\omega_{c}) }(\omega_{c})_{c^{\prime}}\otimes(\omega_{c})^{c^{\prime}}\otimes\omega^{c}= \sum_{c\in E\mathrm{AdC}(\omega)}\sum_{c^{\prime}\in E\mathrm{AdC}(\omega^{c}) }\omega_{c}\otimes(\omega^{c})_{c^{\prime}}\otimes(\omega^{c})^{c^{\prime}}. \tag{4}\] Consider first terms from the left-hand side of the above, i.e. of the form \[(\omega_{c})_{c^{\prime}}\otimes(\omega_{c})^{c^{\prime}}\otimes\omega^{c} \tag{5}\] with \(c\in E\mathrm{AdC}(\omega)\) and \(c^{\prime}\in E\mathrm{AdC}(\omega_{c})\). Since \(c^{\prime}\in E\mathrm{AdC}(\omega_{c})\) and \(c\in E\mathrm{AdC}(\omega)\) then \(c^{\prime}\in E\mathrm{AdC}(\omega)\) by the first result of Proposition 7. Furthermore \(c^{\prime}\in E\mathrm{AdC}(\omega_{c})\) implies that cuts \(c\) and \(c^{\prime}\) are vertex-disjoint since \(c^{\prime}\) is cut-out of the remainder of \(c\). 
Then \(k:=c\cup c^{\prime}\) is an extended admissible cut of \(\omega\) and, by construction of \(k\), \(c\) is an extended admissible cut of \(k\). Hence any term of the form given by Eq. (5) is also of the form \[\omega_{k}\otimes(\omega^{k})_{c}\otimes\omega^{c}\] with \(\omega^{k}\in E\mathrm{AdC}(\omega)\) and \(\omega^{c}\in E\mathrm{AdC}(\omega^{k})\). This implies that the LHS of Eq. (4) is comprised in its RHS. Observe that this result is not true for admissible cuts, indeed we used that \(k:=c\cup c^{\prime}\) is the union of two non-overlapping cuts and so while \(k\in E\mathrm{AdC}(\omega)\), we have \(k\notin\mathrm{AdC}(\omega)\). This explains why \(\Delta_{\mathrm{CP}}\) fails to be coassociative. Second, consider terms from the RHS of Eq. (4), \[\omega_{c}\otimes(\omega^{c})_{c^{\prime}}\otimes(\omega^{c})^{c^{\prime}}, \tag{6}\] with \(c\in E\mathrm{AdC}(\omega)\) and \(c^{\prime}\in E\mathrm{AdC}(\omega^{c})\). Since \(c\in E\mathrm{AdC}(\omega)\) and \(c^{\prime}\in E\mathrm{AdC}(\omega^{c})\) then \(c^{\prime}\in EAdC(\omega)\) by the second result of Proposition 7. Since \(c^{\prime}\in E\mathrm{AdC}(\omega^{c})\), \(c^{\prime}\) is entirely included within cut \(c\) and we can define \(l:=c\backslash c^{\prime}\), \(c=l\cup c^{\prime}\) to be the extended admissible cut \(\omega^{l}\in E\mathrm{AdC}(\omega_{c^{\prime}})\) which cuts out \(c\) from the remainder \(\omega_{c^{\prime}}\). By construction \((\omega_{c^{\prime}})^{l}=(\omega^{c})_{c^{\prime}}\) and \(\omega_{l}=\omega_{c,c^{\prime}}\). Consequently, any term of the form given by Eq. (6) is also of the form \[(\omega_{c^{\prime}})_{l}\otimes(\omega_{c^{\prime}})^{l}\otimes\omega^{c^{ \prime}}\] with \(\omega^{c^{\prime}}\in E\mathrm{AdC}(\omega)\) and \(\omega^{l}\in E\mathrm{AdC}(\omega_{c^{\prime}})\). This implies that the RHS of Eq. (4) is comprised in its LHS. Remark that this statement is still true had we allowed only for admissible cuts. This is because if \(c^{\prime}\in\mathrm{AdC}(\omega)\) and \(c\in\mathrm{AdC}(\omega_{c^{\prime}})\) then \(l:=c\backslash c^{\prime}\) is an admissible cut of \(\mathrm{AdC}(\omega_{c^{\prime}})\) by Proposition 2. This indicates that all terms generated by \(\left(\mathrm{Id}\otimes\Delta_{\mathrm{CP}}\right)\circ\Delta_{\mathrm{CP}}\) can be found in those generated by \(\left(\Delta_{\mathrm{CP}}\otimes\mathrm{Id}\right)\circ\Delta_{\mathrm{CP}}\), see e.g. Example 9. The equality of Eq. (4) is proven and \(\Delta_{\mathrm{H}}\) is coassociative. **Example 11**.: Let \(\omega=1233234441\) be the walk of Examples 8, 9 and 10. 
Then \[\Delta_{\mathrm{H}}(\omega) =\mathbf{1}\otimes\omega+\omega\otimes\mathbf{1}+123323441\otimes 44+12332341\otimes 44+12323441\otimes 3 3+1234441\otimes 2332\] \[+12323441\otimes 33\,|\,44+1232341\otimes 33\,|\,44+123441 \otimes 2332\,|\,44+12341\otimes 2332\,|\,444.\] Omitting all terms involving \(\mathbf{1}\) for the sake of concision and because they trivially satisfy the theorem as shown in its proof, we have \[(\Delta_{\mathrm{H}}\otimes\mathrm{Id})\circ\Delta_{\mathrm{H}}( \omega)=\] \[\quad 12332341\otimes 44\otimes 44+12323441\otimes 33\otimes 44+123441 \otimes 2332\otimes 44+1232341\otimes 33\,|\,44\otimes 44\] \[\quad\quad\quad\quad+12341\otimes 2332\,|\,44\otimes 44\] \[\quad\quad\quad+1232341\otimes 33\otimes 444+12341\otimes 2332\otimes 4 44\] \[\quad\quad\quad+12323441\otimes 44\otimes 33+1232341\otimes 444 \otimes 33+1234441\otimes 232\otimes 33+123441\otimes 232\,|\,44\otimes 33\] \[\quad\quad\quad\quad+12341\otimes 232\,|\,444\otimes 33\] \[\quad\quad\quad+12341\otimes 444\otimes 2332+123441\otimes 44 \otimes 2332\] \[\quad\quad\quad+12341\otimes 232\,|\,44\otimes 33\,|\,44+123441 \otimes 232\otimes 33\,|\,44+1232341\otimes 44\otimes 33\,|\,44\] \[\quad\quad\quad+12341\otimes 232\otimes 33\,|\,444\] \[\quad\quad\quad+12341\otimes 44\otimes 2332\,|\,44.\] The presentation has been organized for the sake of readability: each line above represents terms steming from the same term found in \(\Delta_{\mathrm{H}}(\omega)\), while an additional indentation denotes a continuing line. Similarly, \[(\mathrm{Id}\otimes\Delta_{\mathrm{H}})\circ\Delta_{\mathrm{H}}( \omega)=12332341\otimes 44\otimes 44\] \[\quad+1234441\otimes 232\otimes 33\] \[\quad+12323441\otimes 44\otimes 33+12323441\otimes 33\otimes 44\] \[\quad+1232341\otimes 33\otimes 444+1232341\otimes 444\otimes 33+123 2341\otimes 33\,|\,44\otimes 44+1232341\otimes 44\otimes 33\,|\,44\] \[\quad+123441\otimes 2332\,|\,44+123441\otimes 232\,|\,44\otimes 33+12 3441\otimes 232\otimes 33\,|\,44+123441\otimes 44\otimes 2332\] \[\quad\quad\quad+12341\otimes 232\,|\,444\otimes 33+12341\otimes 2332 \,|\,44\otimes 44+12341\otimes 232\,|\,44\otimes 33\,|\,44+12341\otimes 2332 \otimes 444\] \[\quad\quad\quad+12341\otimes 444\otimes 2332+12341\otimes 232\otimes 33 \,|\,444+12341\otimes 2332\,|\,44.\] A _close_ examination of both results reveals their equality as predicted by Theorem 8. **Proposition 9**.: _Let \(G\) be a finite connected non-empty graph. We denote by \(\mathcal{I}\) the vector space spanned by the elements \(\omega_{1}|\dots|\omega_{n}-\omega_{\sigma(1)}|\dots|\omega_{\sigma(n)}\) where \(\omega_{1}|\dots|\omega_{n}\in\mathcal{T}\langle\mathcal{W}(G)\rangle\) and \(\sigma\) is a permutation. Then, \(\mathcal{I}\) is a Hopf bi-ideal of \(\mathcal{T}\langle\mathcal{W}(G)\rangle\)._ Proof.: By direct calculation, \(\mathcal{I}\) is an ideal. For the sake of brevity we denote by \(E\mathrm{AdC}_{+}(\omega)\) the set \(E\mathrm{AdC}(\omega)\cup\{\mathbf{1},\omega\}\), \(\omega\in\mathcal{W}(G)\). Let \(\omega_{1}|\dots|\omega_{n}\in\mathcal{T}\langle\mathcal{W}(G)\rangle\) and \(\sigma\) be a permutation on \(n\) elements. 
Then, \[\Delta_{\mathrm{H}}(\omega_{1}|\dots|\omega_{n}-\omega_{\sigma(1) }|\dots|\omega_{\sigma(n)})\] \[=\sum_{c_{i}\in E\mathrm{AdC}_{+}(\omega_{i})}(\omega_{c_{1}}| \dots|\omega_{c_{n}})\otimes(\omega^{c_{1}}|\dots|\omega^{c_{n}})-\sum_{c_{i} \in E\mathrm{AdC}_{+}(\omega_{i})}(\omega_{c_{\sigma}(1)}|\dots|\omega_{c_{ \sigma}(n)})\otimes(\omega^{c_{\sigma}(1)}|\dots|\omega^{c_{\sigma}(n)})\] \[=\sum_{c_{i}\in E\mathrm{AdC}_{+}(\omega_{i})}(\omega_{c_{1}}| \dots|\omega_{c_{n}})\otimes(\omega^{c_{1}}|\dots|\omega^{c_{n}})-\sum_{c_{i} \in E\mathrm{AdC}_{+}(\omega_{i})}(\omega_{c_{1}}|\dots|\omega_{c_{n}})\otimes( \omega^{c_{\sigma}(1)}|\dots|\omega^{c_{\sigma}(n)})\] \[\quad+\sum_{c_{i}\in E\mathrm{AdC}_{+}(\omega_{i})}(\omega_{c_{1}}| \dots|\omega_{c_{n}})\otimes(\omega^{c_{\sigma}(1)}|\dots|\omega^{c_{\sigma}(n)})- \sum_{c_{i}\in E\mathrm{AdC}_{+}(\omega_{i})}(\omega_{c_{\sigma}(1)}|\dots| \omega_{c_{\sigma}(n)})\otimes(\omega^{c_{\sigma}(1)}|\dots|\omega^{c_{\sigma}(n)})\] This shows that \[\Delta_{\mathrm{H}}(\omega_{1}|\dots|\omega_{n}-\omega_{\sigma(1)}|\dots|\omega _{\sigma(n)})\in\mathcal{T}\langle\mathcal{W}(G)\rangle\otimes\mathcal{I}+ \mathcal{I}\otimes\mathcal{T}\langle\mathcal{W}(G)\rangle,\] that is \(\mathcal{I}\) is a co-ideal. Let \(G\) be a digraph. We define \[\mathcal{S}\langle\mathcal{W}(G)\rangle:=\frac{\mathcal{T}\langle\mathcal{W}(G) \rangle}{\mathcal{I}}.\] In the vector space \(\mathcal{S}\langle\mathcal{W}(G)\rangle\) the concatenation product \(\bullet\) becomes the disjoint-union product \(\square\). It follows from Theorem 8 and Proposition 9 that: **Corollary 10**.: _Let \(G\) be a digraph. Then \(\mathcal{H}_{\mathcal{S}}:=(\mathcal{S}\langle\mathcal{W}(G)\rangle,\square, \Delta_{\mathrm{H}})\) is a Hopf algebra._ ### Antipode The existence of antipode maps in \(\mathcal{H}_{\mathcal{T}}\) and \(\mathcal{H}_{\mathcal{S}}\) is guaranteed by the fact that these are graded connected bialgebras. In this section we construct the antipodes explicitly, relying on the total order on decorated trees introduced by Foissy [2], which can be used in the present context thanks to Theorem 4. **Definition 12**.: Let \(\omega\) be a walk, \(\mathrm{AdC}(\omega)\) be its set of admissible cuts which we assume to be not empty. Let \(1\leq n\leq|\mathrm{AdC}(\omega)|\) be a positive integer, \(c_{i}\in\mathrm{AdC}(\omega)\) a collection of \(n\) totally ordered, distinct, non-overlapping admissible cuts of \(\omega\) with \(c_{1}\leqslant_{\bigodot}\cdots\leqslant_{\bigodot}c_{n}\). Let \(e:=c_{1}\,|\,\ldots\,|\,c_{n}\in E\mathrm{AdC}(\omega)\), we may also conveniently use the notation \(|e|:=n\). We associate to \(e\) a tensor \(T_{e}\) and a disjoint union \(S_{e}\) as follows, \[T_{e}:=\omega_{c_{1},\ldots,c_{n}}\otimes(\omega_{c_{1},\ldots,c_{n-1}})^{c_{ n}}\otimes\cdots\otimes(\omega_{c_{1},\ldots,c_{i-1}})^{c_{i}}\otimes\cdots \otimes(\omega_{c_{1}})^{c_{2}}\otimes\omega^{c_{1}},\] and \[S_{e}:=\omega_{c_{1},\ldots,c_{n}}\,\square\,(\omega_{c_{1},\ldots,c_{n-1}})^{ c_{n}}\,\square\,\cdots\,\square\,(\omega_{c_{1},\ldots,c_{i-1}})^{c_{i}}\, \square\,\cdots\,\square\,(\omega_{c_{1}})^{c_{2}}\,\square\,\omega^{c_{1}}.\] **Example 12**.: Consider again the walk of Example 6, \[\omega=12333222456657=\] and three of its admissible cuts \(c_{1}=\omega^{2,4}\), \(c_{2}=\omega^{3,4}\) and \(c_{3}=\omega^{10,11}\). 
Since \(\omega^{3,4}\leqslant_{\bigodot}\omega^{2,4}\leqslant_{\bigodot}\omega^{10,11}\), for \(e:=c_{1}\,|\,c_{2}\,|\,c_{3}\in E\mathrm{AdC}(\omega)\), \(|e|=3\), **Theorem 11**.: _Let \(G\) be a digraph and \(\omega\in\mathcal{W}(G)\). Then, in \(\mathcal{T}\langle\mathcal{W}(G)\rangle\), the antipode \(S(\omega)\) calculated on \(\omega\) is,_ \[S(\omega)=-\omega-\sum_{e\in E\mathrm{AdC}(\omega)}(-1)^{|e|}T_{e}=-\omega- \sum_{n=1}^{|\mathrm{AdC}(\omega)|}\sum_{\begin{subarray}{c}c_{1}\leqslant_{ \bigodot}\cdots\leqslant_{\bigodot}c_{n}\\ c_{i}\in\mathrm{AdC}(\omega)\end{subarray}}(-1)^{n}\,T_{c_{1}|\ldots|c_{n}}\] _where \(|\mathrm{AdC}(\omega)|\) designates the cardinality of \(\mathrm{AdC}(\omega)\)._ **Corollary 12**.: _Let \(G\) be a digraph and \(\omega\in\mathcal{W}(G)\). Then, in \(\mathcal{S}\langle\mathcal{W}(G)\rangle\), the antipode \(S(\omega)\) calculated on \(\omega\) is,_ \[S(\omega)=-\omega-\sum_{e\in E\mathrm{AdC}(\omega)}(-1)^{|e|}S_{e}=-\omega- \sum_{n=1}^{|\mathrm{AdC}(\omega)|}\sum_{\begin{subarray}{c}c_{1}\leqslant_{ \bigodot}\cdots\leqslant_{\bigodot}c_{n}\\ c_{i}\in\mathrm{AdC}(\omega)\end{subarray}}(-1)^{n}\,S_{c_{1}|\ldots|c_{n}}\,,\] _where \(|\mathrm{AdC}(\omega)|\) designates the cardinality of \(\mathrm{AdC}(\omega)\)._ Proof of Theorem 11.: We prove the theorem by induction on the cardinality of \(\operatorname{AdC}(\omega)\), using the relation \(\varepsilon=\bullet\circ(\operatorname{Id}\otimes S)\circ\Delta_{\operatorname{H}}\) where \(\varepsilon\) is the counity of the Hopf algebra \(\mathcal{T}\langle\mathcal{W}(G)\rangle\), and the algebra antimorphism relation \(S(\omega|\omega^{\prime})=S(\omega)S(\omega^{\prime})\) for \(\omega,\omega^{\prime}\in\mathcal{W}(G)\). Firstly, if \(\operatorname{AdC}(\omega)=\emptyset\) then \(\omega\in\operatorname{SAW}(G)\cup\operatorname{SAP}(G)\) and therefore \(S(\omega)=-\omega\). Secondly, if \(\operatorname{AdC}(\omega)=\{\omega^{k,k^{\prime}}\}\) then by Proposition 2, \(\omega_{k,k^{\prime}}\in\operatorname{SAW}(G)\cup\operatorname{SAP}(G)\) and \[\Delta_{\operatorname{H}}(\omega)=\omega\otimes\mathbf{1}+\mathbf{1}\otimes \omega+\omega_{k,k^{\prime}}\otimes\omega^{k,k^{\prime}}.\] Consequently, \[S(\omega)=-\omega+\omega_{k,k^{\prime}}\otimes\omega^{k,k^{\prime}},\] as claimed by the theorem. Thirdly, we assume that there exists and integer \(n\in\mathbb{N}\) such that the theorem is satisfied by any walk \(\omega^{\prime}\in\mathcal{W}(G)\) with \(|\operatorname{AdC}(\omega^{\prime})|\leq n\). Consider \(\omega\in\mathcal{W}(G)\) a walk with \(|\operatorname{AdC}(\omega)|=n+1\). 
Then, \[S(\omega)=-\omega-\sum_{\begin{subarray}{c}k_{1}<k^{\prime}_{1}<\cdots<k_{n}<k^{\prime}_{n}\\ \omega^{k_{i},k^{\prime}_{i}}\in\operatorname{AdC}(\omega)\end{subarray}}\omega_{k_{1},k^{\prime}_{1};\ldots;k_{n},k^{\prime}_{n}}\,\bullet\,S(\omega^{k_{1},k^{\prime}_{1}}\,\bullet\,\ldots\,\bullet\,\omega^{k_{n},k^{\prime}_{n}})\] \[=-\omega-\sum_{\begin{subarray}{c}k_{1}<k^{\prime}_{1}<\cdots<k_{n}<k^{\prime}_{n}\\ \omega^{k_{i},k^{\prime}_{i}}\in\operatorname{AdC}(\omega)\end{subarray}}\omega_{k_{1},k^{\prime}_{1};\ldots;k_{n},k^{\prime}_{n}}\,\bullet\,S(\omega^{k_{n},k^{\prime}_{n}})\,\bullet\,\ldots\,\bullet\,S(\omega^{k_{1},k^{\prime}_{1}}).\] Thanks to Proposition 2, \[\bigcup_{i=1}^{n}\operatorname{AdC}(\omega^{k_{i},k^{\prime}_{i}})\subset\operatorname{AdC}(\omega),\] and as a consequence, \(\forall i\in\{1,\ldots,n\}\), \(|\operatorname{AdC}(\omega^{k_{i},k^{\prime}_{i}})|\leq n\) and by the induction hypothesis the theorem holds true for all \(\omega^{k_{i},k^{\prime}_{i}}\). In particular, since any collection of \(m\) admissible cuts of any \(\omega^{k_{i},k^{\prime}_{i}}\) is totally ordered by \(\leqslant_{\bigodot}\), expanding each \(S(\omega^{k_{i},k^{\prime}_{i}})\) with the induction hypothesis and collecting the resulting terms yields the claimed expression for \(S(\omega)\).

## 5 Brace coalgebra and codendriform bialgebra on walks

### Brace coalgebra

We show in this section that by paying attention to the number of admissible cuts appearing simultaneously in extended admissible cuts, we may endow \(\mathcal{T}\langle\mathcal{W}(G)\rangle\) with a brace coalgebra structure from which the preLie co-structure on \(\mathcal{W}(G)\) is recovered. We begin by recalling the necessary definitions pertaining to brace coalgebras.

**Definition 13** (\(B_{\infty}\)-algebra): _Let \(\mathcal{V}\) be a vector space, \(\mathcal{T}\langle\mathcal{V}\rangle\) the tensor algebra generated by \(\mathcal{V}\) and let \(\pi\) be the canonical projection from \(\mathcal{T}\langle\mathcal{V}\rangle\) to \(\mathcal{V}\). A \(B_{\infty}\)-algebra is a family \((\mathcal{V},(\langle-,-\rangle_{k,l})_{k,l\geq 0})\) where \(\mathcal{V}\) is a vector space and for any \(k,l\geq 0\), \(\langle-,-\rangle_{k,l}:\mathcal{V}^{\otimes k}\otimes\mathcal{V}^{\otimes l}\longrightarrow\mathcal{V}\) such that:_
* \(\langle-,-\rangle_{k,0}=\langle-,-\rangle_{0,k}=0\) _if_ \(k\neq 1\) _and_ \(\langle-,-\rangle_{1,0}=\langle-,-\rangle_{0,1}=\mathrm{Id}_{\mathcal{V}}\)_._
* _The unique coalgebra morphism_ \(m:\mathcal{T}\langle\mathcal{V}\rangle\otimes\mathcal{T}\langle\mathcal{V}\rangle\longrightarrow\mathcal{T}\langle\mathcal{V}\rangle\) _defined by_ \(\pi\circ m_{\mathcal{V}^{\otimes k}\otimes\mathcal{V}^{\otimes l}}=\langle-,-\rangle_{k,l}\) _is associative._

_Then, equipped with the deconcatenation coproduct \(\Delta_{\mathrm{dec}}(v_{1}\ldots v_{n}):=\sum_{i=0}^{n}v_{1}\ldots v_{i}\otimes v_{i+1}\ldots v_{n}\), \((\mathcal{T}\langle\mathcal{V}\rangle,m,\Delta_{\mathrm{dec}})\) is a Hopf algebra._

A _brace algebra_ is a \(B_{\infty}\)-algebra such that \(\langle-,-\rangle_{k,l}=0\) if \(k\geq 2\). If \(\mathcal{V}\) is a brace algebra, then for any \(u=x_{1}\ldots x_{k}\in\mathcal{V}^{\otimes k}\) and \(v\in\mathcal{T}\langle\mathcal{V}\rangle_{+}\), \[m(u\otimes v)=\sum_{v\,=\,v_{0}\ldots v_{2k}}v_{0}\langle x_{1},v_{1}\rangle v_{2}\ldots\langle x_{k},v_{2k-1}\rangle v_{2k},\] where the \(v_{i}\) may be empty.
Dually, a locally finite brace coalgebra is a family \((\mathcal{V},(\delta_{n})_{n\geq 1})\) where \(\mathcal{V}\) is a vector space and for any \(n\), \(\delta_{n}:\mathcal{V}\longrightarrow\mathcal{V}\otimes\mathcal{V}^{\otimes n}\) such that \(\delta_{1}=\mathrm{Id}_{\mathcal{V}}\); for any \(v\in\mathcal{V}\), there exists \(N(v)\in\mathbb{N}\) such that if \(n\geq N(v)\), then \(\delta_{n}(v)=0\); and the algebra morphism defined by \[\Delta:\left\{\begin{array}{rcl}\mathcal{T}\langle\mathcal{V}\rangle&\longrightarrow&\mathcal{T}\langle\mathcal{V}\rangle\otimes\mathcal{T}\langle\mathcal{V}\rangle\\ v&\longmapsto&\Delta(v)=1\otimes v+v\otimes 1+\sum_{n\geq 1}\ \underbrace{\delta_{n}(v)}_{\in\mathcal{V}\otimes\mathcal{V}^{\otimes n}\subseteq\mathcal{T}\langle\mathcal{V}\rangle\otimes\mathcal{T}\langle\mathcal{V}\rangle},\end{array}\right. \tag{7}\] is coassociative. Then, \((\mathcal{T}\langle\mathcal{V}\rangle,\bullet,\Delta)\) is a Hopf algebra, \(\bullet\) denoting the concatenation product.

**Proposition 13**.: _Let \((\mathcal{V},(\delta_{n})_{n\geq 1})\) be a locally finite brace coalgebra. Then \((\mathcal{V},\delta_{1})\) is a preLie coalgebra._

Proof.: For any \(v_{1},\ldots,v_{n}\in\mathcal{V}\), \[(\pi\otimes\pi)\circ\Delta(v_{1}\ldots v_{n})=\begin{cases}\delta_{1}(v_{1}),&\text{if }n=1,\\ v_{1}\otimes v_{2}+v_{2}\otimes v_{1},&\text{if }n=2,\\ 0,&\text{otherwise}.\end{cases}\] Therefore, for any \(v\in\mathcal{V}\), \[(\pi\otimes\pi\otimes\pi)\circ(\Delta\otimes\text{Id})\circ\Delta(v)=(\pi\otimes\pi\otimes\text{Id})\circ(\Delta\otimes\text{Id})\circ\delta_{1}(v)=(\delta_{1}\otimes\text{Id})\circ\delta_{1}(v),\] \[(\pi\otimes\pi\otimes\pi)\circ(\text{Id}\otimes\Delta)\circ\Delta(v)=\sum_{n=1}^{\infty}(\text{Id}\otimes\pi\otimes\pi)\circ(\text{Id}\otimes\Delta)\circ\delta_{n}(v)=(\text{Id}\otimes\delta_{1})\circ\delta_{1}(v)+(\text{Id}\otimes\text{Id}\otimes\text{Id}+\text{Id}\otimes\tau)\circ\delta_{2}(v).\] As a consequence, by the coassociativity of \(\Delta\), \[(\delta_{1}\otimes\text{Id})\circ\delta_{1}-(\text{Id}\otimes\delta_{1})\circ\delta_{1}=(\text{Id}\otimes\text{Id}\otimes\text{Id}+\text{Id}\otimes\tau)\circ\delta_{2},\] and it follows that \[(\delta_{1}\otimes\text{Id})\circ\delta_{1}-(\text{Id}\otimes\delta_{1})\circ\delta_{1}=(\text{Id}\otimes\tau)\circ\bigl((\delta_{1}\otimes\text{Id})\circ\delta_{1}-(\text{Id}\otimes\delta_{1})\circ\delta_{1}\bigr),\] so \((\mathcal{V},\delta_{1})\) is a preLie coalgebra.

In the case of interest here, namely that of \(\mathcal{W}(G)\), define for \(\omega\in\mathcal{W}(G)\), \[\delta_{n}(\omega):=\sum_{c\in E_{n}\mathrm{AdC}(\omega)}\omega_{c}\otimes\omega^{c},\] with \(c\) an extended admissible cut _involving exactly \(n\) admissible cuts_, i.e. \(c\in E_{n}\mathrm{AdC}(\omega)\iff\omega^{c}=\omega^{k_{1},k_{1}^{\prime}}|\cdots|\,\omega^{k_{n},k_{n}^{\prime}}\), \(\omega^{k_{i},k_{i}^{\prime}}\in\mathrm{AdC}(\omega)\). By Theorem 8, the coproduct defined as in Eq. (7) with the above definition for the \(\delta_{n}\), namely \(\Delta_{\mathrm{H}}\), is coassociative. Proposition 13 then implies that \((\mathcal{W}(G),\delta_{1})\), \(\delta_{1}\equiv\Delta_{\mathrm{CP}}\), is a preLie coalgebra.
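To make the graded pieces \(\delta_{n}\) concrete, one can sort extended admissible cuts by their number of components. The following continuation of the computational sketch of §3 (function names are ours; it assumes the `adc` helper from that sketch is in scope, so it is not standalone) lists \(E_{n}\mathrm{AdC}(\omega)\) for the walk of Example 10; the \(n=1\) families are exactly the cuts appearing in \(\Delta_{\mathrm{CP}}\), in line with \(\delta_{1}\equiv\Delta_{\mathrm{CP}}\).

```python
from itertools import combinations

# Continuation of the sketch from Section 3: assumes adc() defined there is in scope.
def extended_adc(w):
    """Extended admissible cuts of Definition 10, grouped by their number n of
    components; a family of admissible cuts is kept only when non-overlapping,
    i.e. k_1 < k_1' < k_2 < k_2' < ... < k_n < k_n'."""
    cuts, by_n = sorted(adc(w)), {}
    for n in range(1, len(cuts) + 1):
        for combo in combinations(cuts, n):
            flat = [i for pair in combo for i in pair]
            if all(a < b for a, b in zip(flat, flat[1:])):
                by_n.setdefault(n, []).append(combo)
    return by_n

w = "123324441"                                     # the walk of Example 10
for n, combos in extended_adc(w).items():
    print(n, [" | ".join(w[k:kp + 1] for (k, kp) in c) for c in combos])
# 1 ['2332', '33', '444', '44']                        <- the delta_1 (= Delta_CP) cuts
# 2 ['2332 | 444', '2332 | 44', '33 | 444', '33 | 44'] <- the delta_2 families of Example 10
```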
In other terms, Theorem 6 may be seen as a corollary of Theorem 8. ### Codendriform bialgebra The brace co-structure on \(\mathcal{W}(G)\) now implies that \(\mathcal{T}(\mathcal{W}(G))\) is a codendriform bialgebra, a dual of the results of [9]. Denoting by \(E\text{AdC}_{+}(\omega)=E\text{AdC}(\omega)\cup\{\mathbf{1},\omega\}\) the set of extended admissible cuts of \(\omega\) augmented by the empty cut and the total cut, recall that for any \(n\geq 1\) walks \(\omega_{1},\cdots,\omega_{n}\in\mathcal{W}(G)\), \[\Delta_{\text{H}}(w_{1}\mid\ldots\mid w_{n})=\sum_{c_{i}\in E\text{AdC}_{+}( \omega_{i})}(\omega_{1})_{c_{1}}\mid\ldots\mid(\omega_{n})_{c_{n}}\otimes \omega_{1}^{c_{1}}\mid\ldots\mid\omega_{n}^{c_{n}}.\] Now define, for any nonempty word \(\omega_{1}\mid\ldots\mid\omega_{n}\in\mathcal{T}(\mathcal{W}(G))\), the maps \[\Delta_{\prec}(\omega_{1}\mid\ldots\mid\omega_{n}):=\sum_{\begin{subarray}{c}c_ {i}\in E\text{AdC}_{+}(\omega_{i}),\\ (\omega_{1})_{c_{1}}\not=\mathbf{1}\end{subarray}}(\omega_{1})_{c_{1}}\mid \ldots\mid(\omega_{n})_{c_{n}}\otimes\omega_{1}^{c_{1}}\mid\ldots\mid\omega_{ n}^{c_{n}},\] \[\Delta_{\succ}(w_{1}\mid\ldots\mid w_{n}):=\sum_{\begin{subarray}{c}c_{i}\in E \text{AdC}_{+}(w_{i}),\\ (\omega_{1})_{c_{1}}=\mathbf{1}\end{subarray}}(\omega_{1})_{c_{1}}\mid\ldots \mid(\omega_{n})_{c_{n}}\otimes\omega_{1}^{c_{1}}\mid\ldots\mid\omega_{n}^{c_{n}}.\] **Proposition 14**.: \((\mathcal{T}(\mathcal{W}(G)),\Delta_{\prec},\Delta_{\succ})\) _is a codendriform bialgebra. Furthermore, for any \(x\in\mathcal{T}(\mathcal{W}(G))\) with no constant term and any \(y\in\mathcal{T}(\mathcal{W}(G))\), \(\Delta_{\prec}(x\mid y)=\Delta_{\prec}(x)\mid\Delta_{\mathrm{H}}(y)\), and \(\Delta_{\succ}(x\mid y)=\Delta_{\succ}(x)\mid\Delta_{\mathrm{H}}(y)\)._ Proof.: By the coassociativity of \(\Delta_{\mathrm{H}}\), for any \(\omega\in\mathcal{W}(G)\), \((\Delta_{\mathrm{H}}\otimes\mathrm{Id})\circ\Delta_{\mathrm{H}}(\omega)=( \mathrm{Id}\otimes\Delta_{\mathrm{H}})\circ\Delta_{\mathrm{H}}(\omega)\), that is \[\sum_{c\in E\mathrm{AdC}_{+}(\omega)}\sum_{c^{\prime}\in E\mathrm{AdC}_{+}( \omega_{c})}(\omega_{c})_{c^{\prime}}\otimes(\omega_{c})^{c^{\prime}}\otimes \omega^{c}=\sum_{c\in E\mathrm{AdC}_{+}(\omega)}\sum_{c^{\prime}\in E\mathrm{ AdC}_{+}(\omega^{c})}\omega_{c}\otimes(\omega^{c})_{c^{\prime}}\otimes(\omega^{c}) ^{c^{\prime}}.\] Thus, there exists a set \(E\mathrm{AdC}_{+}^{(2)}(\omega)\) such that the above may be put in the form \[\sum_{c\in E\mathrm{AdC}_{+}^{(2)}(\omega)}\omega_{c}\otimes\omega^{c(1)} \otimes\omega^{c(2)}.\] Then, using this notation, \[(\Delta_{\mathrm{H}}\otimes\mathrm{Id})\circ\Delta_{\succ}(\omega _{1}|\ldots|\omega_{n})=(\mathrm{Id}\otimes\Delta_{\succ})\circ\Delta_{ \succ}(\omega_{1}|\ldots|\omega_{n})\] \[=\sum_{\begin{subarray}{c}c_{i}\in E\mathrm{AdC}_{+}^{(2)}(w_{i} ),\\ (\omega_{1})_{c_{1}}=\omega_{1}^{c_{1}(1)}=\mathbf{1}\end{subarray}}(\omega_{1} )_{c_{1}}\mid\ldots\mid(\omega_{n})_{c_{1}}\otimes\omega_{1}^{c_{1}(1)}\mid \ldots\mid\omega_{n}^{c_{n}(1)}\otimes\omega_{1}^{c_{1}(2)}\mid\ldots\mid \omega_{n}^{c_{n}(2)},\] \[(\Delta_{\succ}\otimes\mathrm{Id})\circ\Delta_{\prec}(\omega_{1} |\ldots|\omega_{n})=(\mathrm{Id}\otimes\Delta_{\prec})\circ\Delta_{\succ}( \omega_{1}|\ldots|\omega_{n})\] \[=\sum_{\begin{subarray}{c}c_{i}\in E\mathrm{AdC}_{+}^{(2)}(w_{i} ),\\ (\omega_{1})_{c_{1}}=\mathbf{1},\,\omega_{1}^{c_{1}(1)}\neq\mathbf{1}\end{subarray}}( \omega_{1})_{c_{1}}\mid\ldots\mid(\omega_{n})_{c_{1}}\otimes\omega_{1}^{c_{1}( 
1)}\mid\ldots\mid\omega_{n}^{c_{n}(1)}\otimes\omega_{1}^{c_{1}(2)}\mid\ldots \mid\omega_{n}^{c_{n}(2)},\] \[(\Delta_{\prec}\otimes\mathrm{Id})\circ\Delta_{\prec}(\omega_{1} |\ldots|\omega_{n})=(\mathrm{Id}\otimes\Delta_{\mathrm{H}})\circ\Delta_{ \prec}(\omega_{1}|\ldots|\omega_{n})\] \[=\sum_{\begin{subarray}{c}c_{i}\in E\mathrm{AdC}_{+}^{(2)}(w_{i} ),\\ (\omega_{1})_{c_{1}}\neq\mathbf{1}\end{subarray}}(\omega_{1})_{c_{1}}\mid\ldots \mid(\omega_{n})_{c_{1}}\otimes\omega_{1}^{c_{1}(1)}\mid\ldots\mid\omega_{n}^ {c_{n}(1)}\otimes\omega_{1}^{c_{1}(2)}\mid\ldots\mid\omega_{n}^{c_{n}(2)}.\] ## 6 Cacti, towers and corollas Recall from Definition 6 that a cactus is a kind of "disentangled" walk resembling a self-avoiding skeleton on which bouquets of towers are attached. Given that by the proof of Theorem 4 all walks are chronologically equivalent to cacti, it seems intuitive that bouquets and towers are basic building blocks of walks and ought to be associated to sub-algebras of the walk algebras. In this section we formalize this observation by showing first that cacti, towers and corollas (a special type of bouquets) give rise to sub-Hopf algebras of the tensor and symmetric algebras of all walks; and secondly that the mapping from walks to cacti effected by the map \(C\) defined in the proof of Theorem 4 generates Hopf algebra morphisms. In a later work, using the permutative non-associative product nesting and the NAP-copreLie operad it forms with \(\Delta_{\mathrm{CP}}\), we will formalize and exploit algebraically the construction of walks from bouquets and towers based on Lawler's process. ### Hopf subalgebras associated to cacti, towers and corollas **Definition 14** (Tower).: Let \(G\) be a digraph. A tower with root \(r_{1}\) and of height \(n\in\mathbb{N}\backslash\{0\}\) is a closed walk made of a collection \(\mathrm{Cycl}_{1}\),..., \(\mathrm{Cycl}_{n}\) of _simple cycles_ with roots \(r_{1}\),..., \(r_{n}\), respectively, and such that: 1. \(V(\operatorname{Cycl}_{k})\cap V(\operatorname{Cycl}_{k+1})=\{r_{k+1}\}\) for any \(k\in\{1,\ldots,n-1\}\), 2. \(V(\operatorname{Cycl}_{k})\cap V(\operatorname{Cycl}_{l})=\emptyset\) whenever \(|k-l|>1\). The vector space spanned by the towers of \(G\) is denoted by \(\operatorname{Tow}(G)\). The space \(\mathcal{T}\langle\operatorname{Tow}(G)\rangle\) (respectively \(\mathcal{S}\langle\operatorname{Tow}(G)\rangle\)) is the tensor algebra (respectively the symmetric algebra) generated by \(\operatorname{Tow}(G)\). **Definition 15** (Corolla).: Let \(G\) be a digraph. A corolla of root \(r\) in \(G\) is a closed walk made of \(n\in\mathbb{N}\backslash\{0\}\) simple cycles \(\operatorname{Cycl}_{1}\),..., \(\operatorname{Cycl}_{n}\), all with a common root \(r\). Corollas are bouquets of simple cycles. The vector space spanned by all corollas (respectively corollas of root \(r\)) of \(G\) is \(\operatorname{Cor}(G)\) (respectively \(\operatorname{Cor}_{r}(G)\)). The space \(\mathcal{T}\langle\operatorname{Cor}(G)\rangle\) (respectively \(\mathcal{S}\langle\operatorname{Cor}(G)\rangle\)) is the tensor algebra (respectively the symmetric algebra) generated by \(\operatorname{Cor}(G)\). We define the spaces \(\mathcal{T}\langle\operatorname{Cor}_{r}(G)\rangle\) and \(\mathcal{S}\langle\operatorname{Cor}_{r}(G)\rangle\) similarly from \(\operatorname{Cor}_{r}(G)\). 
**Example 14**.: The walk \(123454321\in\operatorname{Tow}(G)\) is a tower, while walks \(111\in\operatorname{Cor}_{1}(G)\) and \(123412451\in\operatorname{Cor}_{1}(G)\) are corollas with root \(1\). **Proposition 15**.: _Let \(G\) be a digraph and \(r\in V(G)\). Then,_ 1. \((\mathcal{T}\langle\operatorname{Tow}(G)\rangle,\raisebox{-1.0pt}{\scalebox{1.5}{$\bullet$}},\Delta_{\mathrm{H}})\)_,_ \((\mathcal{T}\langle\operatorname{Cor}_{r}(G)\rangle,\raisebox{-1.0pt}{\scalebox{1.5}{$\bullet$}},\Delta_{\mathrm{H}})\)_,_ \((\mathcal{T}\langle\operatorname{Cor}(G)\rangle,\raisebox{-1.0pt}{\scalebox{1.5}{$\bullet$}},\Delta_{\mathrm{H}})\) _and_ \((\mathcal{T}\langle\operatorname{Cact}(G)\rangle,\raisebox{-1.0pt}{\scalebox{1.5}{$\bullet$}},\Delta_{\mathrm{H}})\) _are Hopf subalgebras of_ \((\mathcal{T}\langle\mathcal{W}(G)\rangle,\raisebox{-1.0pt}{\scalebox{1.5}{$\bullet$}},\Delta_{\mathrm{H}})\)_._ 2. \((\mathcal{S}\langle\operatorname{Tow}(G)\rangle,\square,\Delta_{\mathrm{H}})\)_,_ \((\mathcal{S}\langle\operatorname{Cor}_{r}(G)\rangle,\square,\Delta_{\mathrm{H }})\)_,_ \((\mathcal{S}\langle\operatorname{Cor}(G)\rangle,\square,\Delta_{\mathrm{H}})\) _and_ \((\mathcal{S}\langle\operatorname{Cact}(G)\rangle,\square,\Delta_{\mathrm{H}})\) _are Hopf subalgebras of_ \((\mathcal{S}\langle\mathcal{W}(G)\rangle,\square,\Delta_{\mathrm{H}})\)_._ Proof.: Firstly, the claims regarding \((\mathcal{T}\langle\operatorname{Tow}(G)\rangle,\raisebox{-1.0pt}{\scalebox{1.5}{$\bullet$}},\Delta_{\mathrm{H}})\) and \((\mathcal{S}\langle\operatorname{Tow}(G)\rangle,\square,\Delta_{\mathrm{H}})\) are shown by direct calculation. Secondly, let \(\omega\) be a corolla with root \(r\in V(G)\) comprising \(n\in\mathbb{N}\backslash\{0\}\) simple cycles \(\operatorname{Cycl}_{1\leq k\leq n}\). Let \(v\in V(G)\) be a vertex other than the root \(r\) visited by \(\omega\). Since \(\operatorname{Cycl}_{1}\),..., \(\operatorname{Cycl}_{n}\) are simple cycles, if \(v\) is visited several times by \(\omega\) then two instances of \(v\) cannot be found within a unique simple cycle. But by using Remark 2 equivalent to Definition 4 for the loop- erased sections, any subwalk \(\omega^{l,l^{\prime}}=w_{l}\cdots w_{l^{\prime}}\) with \(w_{l}=w_{l^{\prime}}=v\) is not an admissible cut of \(\omega\), as it is not a valid loop-erased section of \(\omega\). Then all the admissible cuts of \(\omega\) take place at the root \(r\), which implies the claims for \((\mathcal{T}\langle\operatorname{Cor}(G)\rangle,\raisebox{-1.0pt}{\scalebox{1.5}{$\bullet$}},\Delta_{\mathrm{H}})\), \((\mathcal{S}\langle\operatorname{Cor}_{i}(G)\rangle,\square,\Delta_{\mathrm{H }})\) and \((\mathcal{S}\langle\operatorname{Cor}(G)\rangle,\square,\Delta_{\mathrm{H}})\). Thirdly, the claims about \((\mathcal{T}\langle\operatorname{Cact}(G)\rangle,\raisebox{-1.0pt}{\scalebox{1.5}{$\bullet$}},\Delta_{\mathrm{H}})\) and \((\mathcal{S}\langle\operatorname{Cact}(G)\rangle,\square,\Delta_{\mathrm{H}})\) both follow from Proposition 7 and the fact that an admissible cut of a walk \(\omega\) is, by definition, a loop-erased section of \(\omega\). **Remark 5**.: Since Proposition 15 establishes Hopf algebra structures on the tensor algebras generated by towers, corollas and cacti, the constructions of SS5 extend to these walks as well. That is, there are brace coalgebras and codendriform bialgebras on towers, corollas and cacti and these are sub coalgebras of the structures of SS5 on all walks. 
### The cactus map generates Hopf algebra morphisms We now show that the map \(C\) defined in Eq. (1) which sends a walk \(\omega\) to a cactus generates Hopf algebra morphisms. Recall that, by definition, \(C(\omega)\) is a cactus in the complete graph \(K_{\mathbb{N}}\) with \(V(K_{\mathbb{N}})=\mathbb{N}\). Let \(\mathcal{I}_{\mathbb{N}}\) be the set of the injective maps \(\mathbb{N}\to\mathbb{N}\). For \(f\in\mathcal{I}_{\mathbb{N}}\) and \(\omega=w_{0}\cdots w_{\ell}\in\mathcal{W}(K_{\mathbb{N}})\), we denote by \(f(\omega)\in\mathcal{W}(K_{\mathbb{N}})\) the walk defined by \(f(\omega):=f(w_{0})\ldots f(w_{\ell})\). **Definition 16**.: Let \(\mathcal{J}_{1}\) and \(\mathcal{J}_{2}\) be the vector spaces defined by: \[\mathcal{J}_{1} :=\operatorname{Span}\bigl{(}\omega_{1}\,|\,\ldots\,|\,\omega_{ n}-f_{1}(\omega_{1})\,|\,\ldots\,|\,f_{n}(\omega_{n});\ n\in\mathbb{N}\backslash\{0\},\omega_{i}\in \operatorname{Cact}(K_{\mathbb{N}}),f_{i}\in\mathcal{I}_{\mathbb{N}}\bigr{)},\] \[\mathcal{J}_{2} :=\operatorname{Span}\bigl{(}\omega_{1}\square\,\ldots\,\square \,\omega_{n}-f_{1}(\omega_{1})\,\square\,\ldots\,\square\,f_{n}(\omega_{n}); \ n\in\mathbb{N}\backslash\{0\},\omega_{i}\in\operatorname{Cact}(K_{\mathbb{N }}),f_{i}\in\mathcal{I}_{\mathbb{N}}\bigr{)}.\] **Proposition 16**.: _The vector space \(\mathcal{J}_{1}\) (respectively \(\mathcal{J}_{2}\)) is a Hopf biideal of \(\mathcal{T}\langle\operatorname{Cact}(K_{\mathbb{N}})\rangle\) (respectively \(\mathcal{S}\langle\operatorname{Cact}(K_{\mathbb{N}})\rangle\))._ Proof.: We prove the result for \(\mathcal{J}_{1}\). The reasoning for \(\mathcal{J}_{2}\) is entirely similar. Let \(\omega=w_{0}\cdots w_{\ell}\in\mathcal{W}(G)\), then for any injective map \(f\in\mathcal{I}_{\mathbb{N}}\), the length of \(f(\omega)\) is still \(\ell\) and we have the relation Eq. (2), that is \[\omega^{k,k^{\prime}}\in\operatorname{AdC}(\omega)\iff f(\omega)^{k,k^{\prime }}\in\operatorname{AdC}(f(\omega)). \tag{8}\] Therefore, if \(\omega\in\operatorname{Cact}(f)\), \(f(\omega)\) is also a cactus. Now let \(\alpha:=\omega_{1}|\ldots|\omega_{n}-f_{1}(\omega_{1})|\ldots|f_{n}(\omega_{n})\) be a generator of \(\mathcal{J}_{1}\) and \(\beta:=\tau_{1}|\ldots|\tau_{m}\in\mathcal{T}\langle\operatorname{Cact}(K_{ \mathbb{N}})\rangle\), \[\alpha\,\bullet\,\beta =\omega_{1}\,|\,\ldots\,|\,\omega_{n}\,|\,\tau_{1}\,|\,\ldots\,| \,\tau_{m}-f_{1}(\omega_{1})\,|\,\ldots\,|\,f_{n}(\omega_{n})\,|\,\tau_{1}\,| \,\ldots\,|\,\tau_{m}\] \[=\omega_{1}\,|\,\ldots\,|\,\omega_{n}\,|\,\tau_{1}\,|\,\ldots\,| \,\tau_{m}-f_{1}(\omega_{1})\,|\,\ldots\,|\,f_{n}(\omega_{n})\,|\,\mathrm{Id}( \tau_{1})\,|\,\ldots\,|\,\mathrm{Id}(\tau_{m}).\] So we obtain \(\alpha\,\bullet\,\beta\in\mathcal{J}_{1}\) and similarly \(\beta\,\bullet\,\alpha\in\mathcal{J}_{1}\). As a consequence, \(\mathcal{J}\) is an ideal. Let \(\omega=w_{0}\ldots w_{\ell}\in\operatorname{Cact}(G)\) and \(f\in\mathcal{I}_{\mathbb{N}}\). 
By injectivity of \(f\) for \(c\in E\mathrm{AdC}(\omega)\) with \(\omega^{c}:=\omega^{k_{1},k^{\prime}_{1};\ldots;k_{n}k^{\prime}_{n}}\), we have \[f(\omega)_{c}=f(\omega)_{k_{1},k^{\prime}_{1};\ldots;k_{n}k^{\prime}_{n}}=f( \omega_{k_{1},k^{\prime}_{1};\ldots;k_{n}k^{\prime}_{n}})=f(\omega_{c}).\] Therefore \[\Delta_{\mathrm{H}}(\omega-f(\omega)) =(\omega-f(\omega))\otimes\mathbf{1}+\mathbf{1}\otimes(\omega-f (\omega))+\sum_{c\in E\mathrm{AdC}(\omega)}\big{\{}\omega_{c}\otimes\omega^{c} -f(\omega)_{c}\otimes f(\omega)^{c}\big{\}},\] \[=(\omega-f(\omega))\otimes\mathbf{1}+\mathbf{1}\otimes(\omega-f (\omega))+\sum_{c\in E\mathrm{AdC}(\omega)}\big{\{}\omega_{c}\otimes\omega^{c} -\omega_{c}\otimes f(\omega)^{c}\big{\}}\] \[\phantom{=}+\sum_{c\in E\mathrm{AdC}(\omega)}\big{\{}\omega_{c} \otimes f(\omega)^{c}-f(\omega_{c})\otimes f(\omega)^{c}\big{\}}.\] This shows that \(\Delta_{\mathrm{H}}(\omega-f(\omega))\in\mathcal{T}\langle\operatorname{Cact}(K _{\mathbb{N}})\rangle\otimes\mathcal{J}_{1}+\mathcal{J}_{1}\otimes\mathcal{T} \langle\operatorname{Cact}(K_{\mathbb{N}})\rangle\). Since furthermore \(\Delta_{\mathrm{H}}\) is an algebra morphism, we conclude that \(\mathcal{J}_{1}\) is a coideal. Finally, by Eq. (2), Theorem 11 and the fact the antipode is an algebra antimorphism, we get \(S(\mathcal{J}_{1})\subset\mathcal{J}_{1}\). **Remark 6**.: The elements of \(\mathcal{T}\langle\operatorname{Cact}\rangle(K_{\mathbb{N}})/\mathcal{J}_{1}\) and \(\mathcal{S}\langle\operatorname{Cact}\rangle(K_{\mathbb{N}})/\mathcal{J}_{2}\) can be seen as cacti where the node labels have been forgotten since the node labels are defined modulo the action of \(\mathcal{I}_{\mathbb{N}}\). These Hopf algebras can thus legitimately be called the tensor and symmetric Hopf algebras of unlabeled cacti, respectively. By direct calculation, **Proposition 17**.: _The degree map \(\deg\) makes \(\mathcal{T}\langle\mathrm{Cact}(K_{\mathbb{N}})\rangle/\mathcal{J}_{1}\) and \(\mathcal{S}\langle\mathrm{Cact}(K_{\mathbb{N}})\rangle/\mathcal{J}_{2}\) into graded Hopf algebras._ **Theorem 18**.: _Let \(G\) be a digraph. Let \(\Phi_{1}:\mathcal{T}\langle\mathcal{W}(G)\rangle\to\mathcal{T}\langle\mathrm{ Cact}(K_{\mathbb{N}})\rangle/\mathcal{J}_{1}\) and \(\Phi_{2}:\mathcal{S}\langle\mathcal{W}(G)\rangle\to\mathcal{T}\langle \mathrm{Cact}(K_{\mathbb{N}})\rangle/\mathcal{J}_{2}\) be the two algebra morphisms such that \(\Phi_{i}(\omega)\) is the unlabeled cactus obtained from \(C(\omega)\) by forgetting all its node labels. Then \(\Phi_{1}\) and \(\Phi_{2}\) are Hopf algebra morphisms._ Proof.: By definition, the cardinalities of \(V(\omega)\) and \(V(C(\omega))\) are equal, \(C(\omega)\) is a cactus and Eq. (2) holds. By the definition of \(\Delta_{\mathrm{H}}\) and the formulas of the antipode given in Theorem 11 and Corollary 12, we prove the theorem. ## 7 Acknowledgements C. Mammez and P.-L. Giscard are supported by the ANR Alcohol project ANR-19-CE40-0006. In addition, C. Mammez aknowledges support from Labex CEMPI, ANR-11-LABX-0007-01. P.-L. Giscard also received funding from ANR Magica project ANR-20-CE29-0007.
2309.07293
GAN-based Algorithm for Efficient Image Inpainting
The global pandemic caused by the spread of COVID-19 has posed new challenges for facial recognition, as people have started to wear masks. Under such conditions, the authors consider utilizing machine learning for image inpainting to tackle the problem, by completing the part of the face that is originally covered by a mask. In particular, the autoencoder has great potential for retaining the important, general features of an image, which complements the generative power of the generative adversarial network (GAN). The authors implement a combination of the two models, the context encoder, explain how it combines the strengths of both, and train it on 50,000 images of influencers' faces, yielding a solid result that still leaves room for improvement. Furthermore, the authors discuss some shortcomings of the model and possible improvements, as well as areas of study for future investigation from an applicative perspective and directions to further enhance and refine the model.
Zhengyang Han, Zehao Jiang, Yuan Ju
2023-09-13T20:28:54Z
http://arxiv.org/abs/2309.07293v1
# GAN-based Algorithm for Efficient Image Inpainting ###### Abstract The global pandemic caused by the spread of COVID-19 has posed new challenges for facial recognition, as people have started to wear masks. Under such conditions, the authors consider utilizing machine learning for image inpainting to tackle the problem, by completing the part of the face that is originally covered by a mask. In particular, the autoencoder has great potential for retaining the important, general features of an image, which complements the generative power of the generative adversarial network (GAN). The authors implement a combination of the two models, the context encoder, explain how it combines the strengths of both, and train it on 50,000 images of influencers' faces, yielding a solid result that still leaves room for improvement. Furthermore, the authors discuss some shortcomings of the model and possible improvements, as well as areas of study for future investigation from an applicative perspective and directions to further enhance and refine the model. Image inpainting, Generative Adversarial Network (GAN), autoencoder. ## 1 Introduction The Generative Adversarial Network (GAN) was first introduced by the American computer scientist Ian Goodfellow in 2014 and has since become a popular generative method in the area of machine learning [1]. A GAN requires two separately trained models: a discriminator that distinguishes generated data from the real data set, and a generator that produces data similar to the real data in order to fool the trained discriminator. This model is widely used to generate images of different styles, to enhance image resolution, for image blending, and more. It is also capable of image inpainting, i.e., completing a partially masked or blurred image. In fact, as a generative method, GAN has demonstrated solid performance on this specific task. Image inpainting is the process of filling a damaged image, in which a particular region is either blacked out or covered, with content that is coherent with the rest of the image. In particular, facial inpainting is the image inpainting task in which the damaged region is part of a human face. During the COVID era, when larger parts of the population adopted masks to prevent the spread of disease, face recognition faced new challenges. Wearing masks poses new difficulties for otherwise mature face recognition systems, as masks cover facial features and details that are important for a computer to recognize a person. Essentially, photos of people wearing masks behave in the same way as damaged or flawed face photos, and one solution is face inpainting, i.e., reconstructing the original face without losing too much detail. As such, scientists and machine learning engineers are exploring options to utilize GANs for masked face recognition. Dating back to 2017, a group of computer vision researchers proposed a generative model for face completion [1]. This group constructed a GAN model whose discriminator relies on multiple loss functions, accounting for a reconstruction loss, an adversarial loss, and a semantic loss, and reported results that reflect the performance both quantitatively and qualitatively. In 2020, a group of Chinese scientists, inspired by the effectiveness of recurrent neural networks (RNN) in low-level image tasks and the generative power of GANs, proposed a recurrent generative adversarial network for face inpainting [2].
This group constructed a recurrent GAN in which the generator consists of two convolutional neural networks (CNNs) and an RNN, which effectively utilizes possible relationships between the different captured features. According to the authors, their model outperformed other models, including GIICA and GLCIC, in completing facial images. During the same year, a group of Korean computer engineers proposed a novel strategy to tackle the task [3]. They trained two separate discriminators: one checks the inpainted part in detail, the other assures global coherency. The generator was then trained to simultaneously fool both discriminators. Their model was also reported to deliver reliable results on masked faces and damaged images in comparison to other state-of-the-art image inpainting methods. Around the same time, another group of Chinese scientists considered the possibility of adding a Coherent Semantic Attention (CSA) layer to the regular GAN structure [4]. This layer emphasizes the correlation between the generated portion of the image and the context of the image by searching through the rest of the image when training the generator. These scientists also introduced the concept of a consistency loss in addition to the regular loss function, which allows the model to adjust its CSA layer. Their model is, in fact, also effective at generating specific missing regions in image inpainting tasks. Similar approaches based on contextual attention have also been endorsed by researchers working on generative image inpainting [5]. The context encoder, on the other hand, is a model consisting of an autoencoder-based generator and an adversarial discriminator that achieves image inpainting [6]. The model works in a similar way to a GAN, except that its generator is a denoising autoencoder consisting of multiple encoder and decoder layers. This autoencoder is trained like other autoencoders, in an unsupervised way. Each encoder layer preserves more general and more abstract features, and when the decoding layers attempt to reconstruct the image, the generator forgets about the masks or flaws and generates the image based only on the learnt features. Hence, this model is able to fill in damaged images while learning to fool the adversarial discriminator. ## 2 Our Method and Model ### Overview In this paper, the authors aim to implement a baseline GAN model consisting of a generic discriminator and a generator built from layers of encoders and decoders. The authors then examine the possible shortcomings of the model and attempt to refine the resolution, clarity, and convergence speed of the model. The authors also attempt to enhance the model from an applicative perspective to achieve face inpainting. The authors construct a context encoder based on the general structure introduced by Pathak and apply it, not to street views, but to facial images. The authors then examine the performance of the entire model through different lenses and discuss possible further improvements in the algorithm as well as possible applications and future research directions. The framework is shown in Figure 1. ### Discriminator This adversarial discriminator is designed to distinguish the real image data provided by the dataset from the image data produced by the generator of this network, as in most GANs.
To enhance the quality of the generated images and widen the range of areas to which the model can be applied, the authors altered the model from a generic GAN to a context encoder. Figure 1: Framework of our utilized method. This model is designed to deal with images of size 512\(\times\)512. The discriminator, instead of 5 Keras dense layers, consists of 6 sub-discriminator units fully connected with each other; each unit consists of 2 convolution layers, with kernel sizes of 3\(\times\)3 and 2\(\times\)2, respectively. Each unit, except for the last one, also contains a max pooling layer at the end to reduce variance and preserve dominant features as opposed to minor details. All hidden layers use the exponential linear unit (ELU) as the activation function, while the output layer uses a sigmoid activation function. ### Generator This generator is designed to learn to generate images close to the original ones provided by the dataset, with sufficient coherence to challenge the discriminator's ability to distinguish between real and generated images, as in a normal GAN. However, this generator is not a usual convolutional neural network, but rather an autoencoder consisting of an encoder and a decoder. This particular structure lets the encoder process the original incomplete or damaged images and encode only their abstract or general features when it reaches an information bottleneck. Hence, as the decoder reconstructs, it forgets about the missing part and generates the image coherently with the rest of the image. The encoder has a similar structure to the discriminator, with 6 sub-encoder units, each with 2 convolution layers and, except for the last unit, followed by a max pooling layer to reduce variance and other influences from rotations or shifts. Then, the three channels, each representing one RGB color, are reshaped and stored as a latent vector. The encoder is followed by a decoder consisting of 6 sub-decoder units. The sizes of the decoder layers are the same as those of the encoder layers but in reverse order, since the aim is to reconstruct the image. Each sub-decoder unit consists of 2 convolution layers, with kernel sizes of 3\(\times\)3 and 2\(\times\)2, respectively. No pooling layer is necessary for the decoder, and the activation function is ELU. ### Loss The loss for this model consists of two different losses, the reconstruction loss and the adversarial loss, as shown in Eq. (1). The reconstruction loss of the autoencoder generator is an L2 loss that captures the features of the missing or damaged region as well as the context of the image. The adversarial loss, on the other hand, is based on the negative log-likelihood obtained when the discriminator attempts to distinguish the images. When training the generator, the authors therefore use a total loss combining the adversarial loss from the discriminator and the reconstruction loss. This total loss preserves both the quality of the reconstructed image, through the adversarial loss, and the context of the image, through the reconstruction loss. For each epoch, the overall loss is the mean squared error (MSE) over all training or testing images. \[L_{total}=L_{reconstruction}+L_{adversarial} \tag{1}\]
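A minimal Keras sketch of the architecture and combined loss described above is given below. The two-convolution units (3\(\times\)3 and 2\(\times\)2 kernels), ELU activations, max pooling in all but the last unit, the sigmoid outputs, and the total loss of Eq. (1) follow the text; the filter counts, the upsampling layers in the decoder, and the flatten-plus-dense head of the discriminator are illustrative assumptions that the paper does not specify.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_unit(x, filters, pool=True):
    # one sub-unit: two convolution layers (3x3 and 2x2 kernels) with ELU,
    # followed by max pooling in all but the last unit
    x = layers.Conv2D(filters, 3, padding="same", activation="elu")(x)
    x = layers.Conv2D(filters, 2, padding="same", activation="elu")(x)
    if pool:
        x = layers.MaxPooling2D(2)(x)
    return x

def build_generator(img_size=128, filters=(32, 64, 128, 128, 256, 256)):
    # autoencoder generator: 6 encoder units, then 6 decoder units in reverse order
    inp = layers.Input((img_size, img_size, 3))          # masked / damaged RGB image
    x = inp
    for i, f in enumerate(filters):
        x = conv_unit(x, f, pool=(i < len(filters) - 1))
    # (the paper reshapes the bottleneck into a latent vector; kept spatial here for brevity)
    for i, f in enumerate(reversed(filters)):
        if i > 0:
            x = layers.UpSampling2D(2)(x)                # undo one pooling step
        x = layers.Conv2D(f, 3, padding="same", activation="elu")(x)
        x = layers.Conv2D(f, 2, padding="same", activation="elu")(x)
    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)   # reconstruction
    return Model(inp, out, name="generator")

def build_discriminator(img_size=128, filters=(32, 64, 128, 128, 256, 256)):
    inp = layers.Input((img_size, img_size, 3))
    x = inp
    for i, f in enumerate(filters):
        x = conv_unit(x, f, pool=(i < len(filters) - 1))
    x = layers.Flatten()(x)
    out = layers.Dense(1, activation="sigmoid")(x)       # real vs. generated
    return Model(inp, out, name="discriminator")

bce = tf.keras.losses.BinaryCrossentropy()
mse = tf.keras.losses.MeanSquaredError()

def generator_total_loss(real_img, fake_img, disc_on_fake):
    # Eq. (1): L_total = L_reconstruction + L_adversarial
    l_rec = mse(real_img, fake_img)                          # L2 reconstruction term
    l_adv = bce(tf.ones_like(disc_on_fake), disc_on_fake)    # fool the discriminator
    return l_rec + l_adv
```

Training then alternates between a discriminator update on real and generated batches and a generator update on this total loss, using the Adam optimizer at the learning rate given below.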
### Implementation details With the selected dataset loaded and normalized properly, the authors train the model on a local computer with the downloaded celeb-A data. The minimum batch size is set to 8 by default and the model is trained for 20 epochs by default. The learning rate is set to 1e-4. Further investigation of this learning rate and of possible overfitting or underfitting is done later. The authors use the Adam optimizer to allow the model to converge during training, and first train the adversarial discriminator and then the generator. See the results section for the results of the trained model. ## 3 Experiments ### Dataset description The dataset used for training and testing the model is the Influencer Human Real Face Data Set provided by Set Pretty Face, with resolution 128x128. The model is also capable of learning from other types of image datasets, including but not limited to street views, mountain views, animated characters, vehicles, etc. The number of images chosen for the whole training and testing set is about 50000, split at a ratio of 0.9:0.11 into training and testing. ### Comparison results The learning curve is shown in Figure 2. After training for 20 epochs (each with 704 steps), i.e., 14080 iterations with a batch size of 64, our method achieves good convergence within a training time of 10 hours, 8 minutes, and 58 seconds. The total loss decreases by a large amount over the first 1000 iterations, the rate of decrease gradually becomes low, and once the number of iterations exceeds 10000, the value of the total loss oscillates around 70. The results are shown in Figure 3, and they indicate solid potential of our model for image inpainting. Figure 2: Training loss curve during training. Figure 3: Sample testing output. ### Discussion Our model leaves space for improvements. Here are some issues the authors encountered during the project and possible improvements that they consider implementing in future research and investigation. One issue, experienced both in the original model by Pathak and in our model, is the extensive time taken to train the model: it takes approximately 10 hours to complete 20 epochs with a batch size of 64. While this seems acceptable on this time scale, considering that the authors are training on images of size 128\(\times\)128, training on datasets with larger image sizes, such as 512\(\times\)512, may take much longer, possibly scaling exponentially. Hence, improvements can be made in shortening the training time. It is possible that the model requires less training time once given a proper batch size. Using a GPU instead of a CPU should also allow the model to converge faster. Most importantly, however, the authors propose a possible improvement: adding batch normalization layers. Batch normalization layers allow the output of a hidden layer to be normalized before being passed to the next layer. This ensures that all features in the output of a layer are on the same scale [7, 8]. Hence, when hidden layers attempt to capture important features, batch normalization layers are expected to avoid large updates on one weight and small updates on another at the same time, and thus the model converges faster. Furthermore, the authors state that image inpainting can be a potential solution for recognizing a masked human face. Consider the possibility of first implementing a CNN-based object detector, or an object removal network specifically trained to detect masks. This object detector should return the exact pixels where the mask is located in the image, and those returned pixels can then be either blurred or blacked out, as sketched below.
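As an illustration of this proposed pipeline, the sketch below assumes a hypothetical mask detector that returns a boolean pixel mask; both the detector and the trained generator are placeholders rather than components implemented in this paper.

```python
import numpy as np

def blackout_masked_pixels(image, mask):
    """Zero out the pixels reported by a (hypothetical) mask detector.

    image: float array of shape (H, W, 3) with values in [0, 1]
    mask:  boolean array of shape (H, W), True where a face mask was detected
    """
    damaged = image.copy()
    damaged[mask] = 0.0
    return damaged

# usage sketch (detector and generator are assumed to exist elsewhere):
# mask     = mask_detector.predict(photo)            # e.g. a MobileNet-based detector
# damaged  = blackout_masked_pixels(photo, mask)
# restored = generator.predict(damaged[None])[0]     # context encoder fills the region
```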
In 2022, the researcher Yuan Zhou implemented such a face detector relying on MobileNet, which the authors believe can be further explored and refined [9]. Our context encoder then completes the partially blurred or blacked-out image. Hence, our model can potentially be applied to, for example, face ID checking, where, with the context encoder, the face recognition discriminator only needs to check the reconstructed, mask-free images of people. Depending on the training dataset, the model could also be applied to remove the effects of rain or fog for photographers or even for surveillance cameras. As discussed, scientists are considering the possibility of using a recurrent GAN to tackle image completion tasks. Unlike regular neural networks, an RNN retains some memory of the sequential features of the data. This characteristic may allow RNNs to outperform other models in generating images or data where the surroundings or context are crucial. The authors hence suggest that further investigation could focus on how to use an RNN rather than a regular CNN in both the generator and the discriminator, and on how such a new model, a recurrent GAN, performs in comparison to the regular GAN or the context encoder in the area of image inpainting. Apart from the RGAN, the DRGAN, a disentangled representation learning GAN, can also be a possible direction for face inpainting [10]. In 2017, researchers already started to investigate the possibility of using a DRGAN to tackle a serious problem in face recognition, namely that most images captured in real life feature different poses. The DRGAN allows the model to learn and adjust for variations of people's faces and bodies. The combination of the DRGAN and the context encoder may have the potential to solve complex face inpainting problems and is worth further consideration. ## 4 Conclusion In this paper, the authors aim to exploit the effectiveness of GAN models for the image inpainting task. The authors design a model that consists of a generator and a discriminator. Our context encoder model takes approximately 21 hours to complete the default 20 epochs of training on a local computer. The training and testing are done with a subset of 2500 images of the entire 50,000-image Influencer Human Real Face Dataset. The visualization results validate that the GAN model can handle the image inpainting task with satisfying performance. Furthermore, the authors also list the limitations and future work for other researchers to design much better GAN-based image inpainting models.
2307.16632
Hybrid scale-free skin effect in non-Hermitian systems: A transfer matrix approach
Surpassing the individual characteristics of the non-Hermitian skin effect (NHSE) and the scale-free (SF) effect observed recently, we systematically exploit the exponential decay behavior of bulk eigenstates via the transfer matrix approach in non-Hermitian systems. We concentrate on one-dimensional (1D) finite-size non-Hermitian systems with 2*2 transfer matrices in either the absence or presence of the boundary impurity. We analytically unveil that the unidirectional SF effect emerges with the singular transfer matrices, while the hybrid scale-free skin (SFS) effect appears with the nonsingular transfer matrices even when an open boundary condition (OBC) is imposed. The unidirectional SF effect exceeds the scope of the SF effect in previous works, while the hybrid SFS effect is an interesting interplay between the skin effect and the SF effect in finite-size systems. Our results reveal that the skin effect under the OBC prevails when it coexists with the SF effect as the system approaches the thermodynamic limit in the presence of the hybrid SFS effect. Our approach paves the way for rigorous and unified explorations of the skin and SF effects in both Hermitian and non-Hermitian systems with generic boundary conditions.
Yongxu Fu, Yi Zhang
2023-07-31T13:09:11Z
http://arxiv.org/abs/2307.16632v2
# Hybrid skin-scale-free effect in non-Hermitian systems: A transfer matrix approach ###### Abstract Surpassing the individual characteristics of the non-Hermitian skin effect (NHSE) and the scale-free localization (SFL) observed lately, we systematically exploit the exponential decay behavior of bulk eigenstates via the transfer matrix approach in non-Hermitian systems. We concentrate on one-dimensional (1D) finite-size non-Hermitian systems with \(2\times 2\) transfer matrices in the presence of boundary impurity. We analytically unveil that the unidirectional pure scale-free (UPSF) effect emerges with singular transfer matrices, while the hybrid skin-scale-free (HSSF) effect emerges with nonsingular transfer matrices even when an open boundary condition (OBC) is imposed. The UPSF effect exceeds the scope of the SFL in previous works, while the HSSF effect is an intriguing interplay between the finite-size NHSE and the SFL. Our results reveal that the NHSE under OBC prevails when blended with the SFL as the system approaches the thermodynamic limit. Our approach paves an avenue to rigorously explore the finite-size display of the NHSE and SFL in both Hermitian and non-Hermitian systems with generic boundary conditions. ## I Introduction Exceeding the requirement of Hermitian operators for physical observables in quantum mechanics, non-Hermitian physics has broadened widely in the past few years [1; 2; 3; 4], encompassing basic energy band theory [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23], higher-order topological phases [24; 25; 26; 27; 28; 29; 30; 31; 32], the unique exceptional points (EPs) of non-Hermitian systems [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46], and other subjects in the scope of condensed matter physics. The exploration of the NHSE in 1D non-Hermitian systems, which supports the accumulation of an extensive number of nominal bulk eigenstates under OBC at the single-particle level, is a milestone in the research on non-Hermitian systems [9; 10; 11; 12; 13; 14; 15; 16; 17]. The non-Bloch band theory, built on the concept of the generalized Brillouin zone (GBZ), fully explains the NHSE [9; 11; 16], and, more recently, similar counterparts have paved the way towards higher dimensions [47; 48; 49; 50]. A typical characteristic of the NHSE eigenstates is their exponential decay in the bulk region. The exponential factor, corresponding to the relevant point on the GBZ, promotes the wave vector of the traditional Bloch band theory from a real value to a complex value, a core element of the non-Bloch band theory. Strictly speaking, however, the complexity of non-Hermitian systems does not constrain the localization behavior of bulk eigenstates to that of the NHSE. The SFL, which refers to the existence of eigenstates with a size-dependent localization length in 1D non-Hermitian systems, has been explored lately [51; 52; 53; 54; 55; 56; 57]. Nevertheless, the scale-free behavior is model-dependent, appearing as a seemingly accidental phenomenon closely related to impurities [58; 59; 60; 61; 62; 63; 64; 65], and it lacks a unified perspective in non-Hermitian systems. Furthermore, the emergence of an interplay between the NHSE and the SFL remains an unsurveyed subject in the field of non-Hermitian physics.
The transfer matrix approach has been a powerful tool for elucidating tight-binding models for decades [66; 67; 68; 69; 70; 71; 72]; it has a terser and more compact form than directly solving the eigenvalue problem, and it has also been applied to non-Hermitian systems in recent years [73; 74; 75]. In this paper, we utilize the transfer matrix approach to establish a unified depiction of the scale-free effect as well as the interplay between the NHSE and the SFL. Without loss of generality, we concentrate on 1D finite-size non-Hermitian systems with \(2\times 2\) transfer matrices, such as the Hatano-Nelson (HN) model [76; 77; 78] and the non-Hermitian Su-Schrieffer-Heeger (NH-SSH) model [5; 9] with boundary impurity. We unveil that the UPSF effect emerges with singular transfer matrices, while the HSSF effect appears with nonsingular transfer matrices, which shows the existence of bulk eigenstates possessing both NHSE and scale-free exponential decay factors. The UPSF effect must accompany the boundary impurity, while the HSSF effect may exist even under OBC. The localization length of a UPSF mode is generally quasi-linearly dependent on the system size, which is the generalized definition of the SFL in this paper; the results in previous works [53; 54; 55; 56] are equivalent to specific cases with linear dependence. The HSSF effect implies that the NHSE under OBC blends with the emergent SFL at finite sizes yet prevails over the competition in the thermodynamic limit, revealing the NHSE's dominance. Once we turn on the boundary impurity, the HSSF effect displays a finite-size dependence. The rest of this paper is organized as follows. In Sec. II, we establish the formalism for probing UPSF and HSSF effects from the transfer matrix perspective, corresponding to the cases with singular and nonsingular transfer matrices, respectively. In Secs. III.1 and IV.1, we analytically solve the eigenstates as well as the energy spectrum with the emergence of the pure scale-free (PSF) effect for the HN model and the NH-SSH model with boundary impurity. The HSSF effect is expounded for the HN model with boundary impurity in Sec. III.2 and for the NH-SSH model under OBC in Sec. IV.2. We give a conclusion and further discussions in Sec. V. ## II Probing UPSF and HSSF effect via transfer matrix approach ### Review of transfer matrix approach for non-Hermitian tight-binding models Commonly, the real-space tight-binding Hamiltonian of a 1D non-interacting non-Hermitian system reads \[\mathcal{H} =\sum_{n}\sum_{l=-R}^{R}\sum_{\mu,\nu=1}^{q}t_{l,\mu\nu}c_{n,\mu} ^{\dagger}c_{n+l,\nu}\] \[=\sum_{n}\sum_{l=-R}^{R}c_{n}^{\dagger}\mathfrak{t}_{l}c_{n+l}, \tag{1}\] where \(c_{n,\mu}^{\dagger}\) (\(c_{n,\mu}\)) is a creation (annihilation) operator with the index of internal degrees of freedom \(\mu\) in the \(n\)-th unit cell, \(c_{n}^{\dagger}\) (\(c_{n}\)) is a row (column) vector containing the \(q\) operators \(c_{n,\mu}^{\dagger}\) (\(c_{n,\mu}\)), and \(t_{l,\mu\nu}\) (\(\mathfrak{t}_{l}\)) is a hopping amplitude (matrix) to the \(l\)-th nearest unit cell. Following the formalism established in Refs. [72; 73; 66], we bundle at least \(R\) adjacent unit cells into a supercell, such that Eq.
(1) reduces to a nearest-neighbor tight-binding Hamiltonian \[\mathcal{H}=\sum_{n=1}^{N-1}\left[\mathbf{c}_{n}^{\dagger}\mathbf{J}_{L}\mathbf{c}_{n+1}+ \mathbf{c}_{n}^{\dagger}\mathbf{M}\mathbf{c}_{n}+\mathbf{c}_{n+1}^{\dagger}\mathbf{J}_{R}^{\dagger }\mathbf{c}_{n}\right], \tag{2}\] with creation (annihilation) operator denoted as \(\mathbf{c}_{n}^{\dagger}\) (\(\mathbf{c}_{n}\)) and totally \(N\) supercells. There are \(\mathcal{N}\geq qR\) internal degrees of freedom in each supercell, and \(\mathbf{J}_{L,R}\) and \(\mathbf{M}\) are hopping matrices and onsite matrix, respectively. Without loss of generality, we require \(\mathbf{J}_{R}=\mathbf{J}_{L}\equiv\mathbf{J}\) with \(\mathbf{J}^{2}=0\) and impose non-Hermiticity merely on \(\mathbf{M}\), i.e., \(\mathbf{M}\neq\mathbf{M}^{\dagger}\)[79]. Consequently, the tight-binding Hamiltonian further reduces to \[\mathcal{H}=\sum_{n=1}^{N-1}\left[\mathbf{c}_{n}^{\dagger}\mathbf{J}\mathbf{c}_{n+1}+\mathbf{c }_{n}^{\dagger}\mathbf{M}\mathbf{c}_{n}+\mathbf{c}_{n+1}^{\dagger}\mathbf{J}^{\dagger}\mathbf{c}_{ n}\right]. \tag{3}\] Given an arbitrary single-particle state \[\left|\Psi\right\rangle=\sum_{n=1}^{N}\Psi_{n}\mathbf{c}_{n}^{\dagger}\left|0 \right\rangle, \tag{4}\] with \(\Psi_{n}\in\mathbb{C}^{\mathcal{N}}\), the single-particle Schrodinger equation \(\mathcal{H}\left|\Psi\right\rangle=\varepsilon\left|\Psi\right\rangle\) reduces to the recursion relation \[\mathbf{J}\Psi_{n+1}+\mathbf{J}^{\dagger}\Psi_{n-1}=\left(\varepsilon \mathbbm{1}-\mathbf{M}\right)\Psi_{n}. \tag{5}\] Next, we define \(\mathcal{G}=\left(\varepsilon\mathbbm{1}-\mathbf{M}\right)^{-1}\) as the onsite Green's function, which is nonsingular except when \(\varepsilon\) is an eigenvalue of \(\mathbf{M}\). Performing the reduced singular value decomposition (SVD) [80; 81], we obtain \[\mathbf{J}=V\Xi W^{\dagger}, \tag{6}\] where \(\Xi=\operatorname{diag}\left\{\xi_{1},\ldots,\xi_{r}\right\}\) is a diagonal matrix of singular value \(\xi_{i}\in\mathbb{R}^{+},i=1,2,\ldots,r\) with \(r=\operatorname{rank}\left(\mathbf{J}\right)\), and \(V\) (\(W^{\dagger}\)) is the matrix comprised of \(r\) orthonormal bases \(v_{i}\) (\(w_{i}^{\dagger}\)) in the column (row) space of \(\mathbf{J}\), that is \[V^{\dagger}V=W^{\dagger}W=1,\qquad V^{\dagger}W=0. \tag{7}\] We focus on the \(r=1\) case for simplicity; as a result, \(\Xi\equiv\xi\in\mathbb{R}^{+}\) and \(\left\{V\equiv v,W\equiv w\right\}\) constitutes a set of orthonormal basis of \(\mathbb{C}^{2}\). In this basis, we expand \(\Psi_{n}\) and \(\mathcal{G}\) as \[\Psi_{n}=\alpha_{n}v+\beta_{n}w,\quad\alpha_{n}=v^{\dagger}\Psi_{n},\quad \beta_{n}=w^{\dagger}\Psi_{n}, \tag{8}\] and \[\mathcal{G}_{AB}=B^{\dagger}\mathcal{G}A\in\mathbb{C},\qquad A,B\in\left\{v,w \right\}, \tag{9}\] respectively. Combining Eqs. (5)-(9), we obtain the propagating relation in the bulk \[\Phi_{n+1}=T\Phi_{n},\qquad\Phi_{n}\equiv\begin{pmatrix}\beta_{n}\\ \alpha_{n-1}\end{pmatrix}, \tag{10}\] where \(T\) is the \(2\times 2\) transfer matrix [73] \[T=\frac{1}{\xi\mathcal{G}_{vw}}\begin{pmatrix}1&-\xi\mathcal{G}_{ww}\\ \xi\mathcal{G}_{vv}&\xi^{2}\left(\mathcal{G}_{vw}\mathcal{G}_{wv}-\mathcal{G}_{ vv}\mathcal{G}_{ww}\right)\end{pmatrix}. 
\tag{11}\] We define the trace and determinant of \(T\) as \[\Delta=\operatorname{tr}\left(T\right),\qquad\Gamma=\det\left(T\right)\equiv \frac{\mathcal{G}_{wv}}{\mathcal{G}_{vw}}, \tag{12}\] which are rational functions of energy \(\varepsilon\) in general, and for simplicity, we suppress the explicit dependence on \(\varepsilon\) hereafter. If \(T\) is singular (\(\Gamma=0\)), \(T^{n}=\Delta^{n-1}T\); if \(T\) is nonsingular (\(\Gamma\neq 0\)) [73], \[T^{n}=\Gamma^{n/2}\left[\frac{U_{n-1}(z)}{\sqrt{\Gamma}}T-U_{n-2}(z)\mathbbm{1} \right], \tag{13}\] where \[U_{n}(z)=\frac{\sin\left(\left(n+1\right)\phi\right)}{\sin\phi}, \tag{14}\] is the Chebyshev polynomials of second kind [82; 83] and \[z\equiv z(\varepsilon)=\frac{\Delta}{2\sqrt{\Gamma}}=\cos\phi\in\mathbb{C}. \tag{15}\] After reviewing the transfer matrix approach for non-Hermitian tight-binding models, we shall utilize it for probing UPSF and HSSF effect. ### Transfer matrix approach for 1D tight-binding models with boundary impurity Without loss of generality, we consider the boundary impurity \[\mathbf{c}_{N}^{\dagger}\kappa_{L}\mathbf{c}_{1}+\mathbf{c}_{1}^{\dagger}\kappa_{R}\mathbf{c}_{ N}, \tag{16}\] with \(\kappa_{L}=\gamma_{L}\mathbf{J}\), \(\kappa_{R}=\gamma_{R}\mathbf{J}^{\dagger}\), \(\gamma_{L},\gamma_{R}\geq 0\) on top of the Hamiltonian Eq. (3), connecting the first and the last supercells [84]. Note that the periodic boundary condition (PBC), as well as the Bloch band theory, is reproduced in the \(\gamma_{L}=\gamma_{R}=1\) case, while the OBC in the \(\gamma_{L}=\gamma_{R}=0\) case. After some algebra, we obtain the equations on the boundaries (Appendix A) \[\Phi_{1}=K_{L}T\Phi_{N},\qquad\Phi_{2}=TK_{R}\Phi_{1}, \tag{17}\] where \(K_{L}=\operatorname{diag}\left\{1/\gamma_{L},1\right\}\) and \(K_{R}=\operatorname{diag}\left\{1,\gamma_{R}\right\}\). Together with Eq. (10), we illustrate the full propagating relation of \(\Phi_{n}\) with the boundary impurity in Fig. 1(a). Since \(\Phi_{N}=T^{N-2}\Phi_{2}\), Eq. (17) gives \[\Phi_{1}=K_{L}T^{N}K_{R}\Phi_{1}. \tag{18}\] Defining \(\varphi=K_{R}\Phi_{1}\) and \(K=K_{R}K_{L}=\operatorname{diag}\left\{1/\gamma_{L},\gamma_{R}\right\}\), we final obtain the compact boundary equation \[\varphi=KT^{N}\varphi. \tag{19}\] It implies that the legitimate \(\varphi\) is the eigenvector of \(KT^{N}\) with respect to the eigenvalue \(1\), thus the physical solution is \(\Phi_{1}=K_{R}^{-1}\varphi\) and \(\Phi_{n}=T^{n-1}\varphi\) for \(n=2,3,\ldots,N\). #### ii.1.1 Pure scale-free effect We first consider the cases with singular transfer matrices, which suggest the occurrence of real-space EPs under OBC [73]. As we turn on the boundary impurity, the determinant of \(KT^{N}\) is zero due to \(\Gamma=0\), which implies \(\operatorname{tr}\left(KT^{N}\right)=1\) for satisfying Eq. (19). Consequently, we obtain \(\Delta^{N-1}\text{tr}\left(KT\right)=1\), namely \((\operatorname{tr}\left(KT\right)\neq 0)\) \[\Delta^{N}=\frac{\Delta}{\operatorname{tr}\left(KT\right)}. \tag{20}\] Assuming \(\Delta^{N}=c^{-1}\) with \(c\) being an undetermined constant, we can obtain the expression of \(\varepsilon\) for \(c\) through \[\Delta=c^{-\frac{1}{N}}e^{-i\frac{2\pi m}{N}}, \tag{21}\] for each \(m\in\left\{1,2,\ldots,N\right\}\), denoted as \(\varepsilon_{m}\) (regardless of the possible band index). Substituting \(\varepsilon_{m}\) into Eq. (20), we figure out the constant \(c\) as well as energies for each \(m\). 
Although multiple or null solutions of \(c\) may be obtained for some \(m\) in general, we formally denote the solution of \(c\) for each \(m\) as \(c_{m}\); that is, all of the \(c_{m}\)'s generate the nominal bulk bands. Considering a fixed \(c_{m}\), the eigenstate solution is given by \[\Phi_{1}^{m} =K_{R}^{-1}\varphi_{m},\] \[\Phi_{n}^{m} =c_{m}^{-\frac{n-2}{N}}e^{-i\frac{2\pi m}{N}(n-2)}T\varphi_{m}, \tag{22}\] where \(n=2,3,\ldots,N\) and \(KT^{N}\varphi_{m}=\varphi_{m}\). According to Eq. (8), we obtain the eigenstate (up to normalization coefficients hereafter) concerning energy \(\varepsilon_{m}\) \[\Psi_{1}^{m} =\left(T\varphi_{m}\right)_{2}v+\left(K_{R}^{-1}\varphi_{m}\right)_{1}w, \tag{23}\] \[\Psi_{n}^{m} =c_{m}^{-\frac{n-1}{N}}e^{-i\frac{2\pi m}{N}(n-1)}\psi_{m},\ n=2,3,\ldots,N-1,\] (24) \[\Psi_{N}^{m} =\left(K_{R}^{-1}\varphi_{m}\right)_{2}v+\left(c_{m}^{-\frac{N-2}{N}}e^{-i\frac{2\pi m}{N}(N-2)}T\varphi_{m}\right)_{1}w, \tag{25}\] where \(\psi_{m}=\left(T\varphi_{m}\right)_{2}v+c_{m}^{\frac{1}{N}}e^{i\frac{2\pi m}{N}}\left(T\varphi_{m}\right)_{1}w\), and \(\left(\mathscr{V}\right)_{1,2}\) denote the first and second components of the column 2-vector \(\mathscr{V}\). We specify the bulk region of the supercell system Eq. (3) with the boundary impurity Eq. (16) as \(\mathscr{B}=\left\{2,3,\ldots,N-1\right\}\), and the boundary region as \(\mathscr{E}=\left\{1,N\right\}\). The bulk solution Eq. (24) visibly displays an exponential decay \[\exp\bigg{\{}-\frac{\text{Re}(\log c_{m})}{N}(n-1)\bigg{\}}, \tag{26}\] with the phase factor \[\exp\bigg{\{}-i\frac{\text{Im}(\log c_{m})+2\pi m}{N}(n-1)\bigg{\}}, \tag{27}\] which is a correction to the wave vector \(2\pi m/N\). We define the occurrence of such exponential-decay behavior of eigenstates in \(\mathscr{B}\) as the UPSF effect, which is a rigorous generalization of the SFL in previous works [51; 52; 53; 54; 55; 56]. Further, we denote the eigenstates like Eq. (24) in \(\mathscr{B}\) with \(\text{Re}(\log c_{m})>0\) (\(<0\)) as the left (right)-accumulation UPSF modes, and the localization lengths of these modes are \[\xi_{m}=\frac{N}{|\text{Re}(\log c_{m})|}. \tag{28}\] Noteworthily, \(c_{m}\) generally depends on the system size \(N\); we thus say that the localization length \(\xi_{m}\) is quasi-linearly dependent on \(N\), a typical characteristic of the UPSF effect. Figure 1: (a) The propagating relation of \(\Phi_{n}\) with the boundary impurity in Eq. (16) and (b) the propagating relation of \(\Phi_{n}\) under OBC in the transfer matrix approach. #### ii.3.2 Hybrid skin-scale-free effect More subtle phenomena beyond the NHSE emerge in the cases with nonsingular transfer matrices [85]. When the system size is finite, the NHSE, a rigorous terminology in the thermodynamic limit under OBC, shows not only a finite-size extent but also a possible interplay with the SFL. The propagating relation of \(\Phi_{n}\) under OBC is illustrated in Fig. 1(b), and accordingly, the physical condition (the constraint on \(\phi\)) is (Appendix B) \[\frac{\sin\left(N\phi\right)}{\sin\left[(N-1)\phi\right]}=q, \tag{29}\] with \(q=\xi\sqrt{\frac{\mathcal{G}_{wv}}{\mathcal{G}_{vw}}}\,\mathcal{G}_{vw}\). Here, we have not simplified \(q\) in order to avoid the branch problems of the square root. Together with Eq. (15), we can, in principle, obtain the physical energy \(\varepsilon\) and \(\phi\) simultaneously. If the energy \(\varepsilon\) renders \(q\) real, the left-hand side of Eq. (29) must be real, thus leading to real \(\phi\).
However, if the energy \(\varepsilon\) renders \(q\) complex, we obtain complex \(\phi=\phi_{R}+i\phi_{I}\) and \(\phi_{I}\sim c/N\), which is quasi-linearly dependent on \(1/N\) (Appendix B). The eigenstate concerning energy \(\varepsilon\) reads \[\Psi_{n}=\Gamma^{n/2}\left[\mathscr{A}_{L}(\phi)e^{in\phi_{R}}e^ {-n\phi_{I}}+\mathscr{A}_{R}(\phi)e^{-in\phi_{R}}e^{n\phi_{I}}\right], \tag{30}\] with \(n=1,2,\ldots,N\), and the coefficients \(\mathscr{A}_{L}(\phi),\mathscr{A}_{R}(\phi)\) given in Appendix B. The overall exponential factor \(\Gamma^{n/2}\) indicates the emergence of the NHSE for \(|\Gamma|\neq 1\), while it is the Hermitian cases or the non-Hermitian cases with some special symmetries (such as parity-time (PT) symmetry) for \(|\Gamma|=1\). The exponential factors \(e^{\pm n\phi_{I}}\) indicate the SFL with bidirectionally accumulation for a set of \((N,\phi_{R})\). Therefore, we define the occurrence of \(|\Gamma|=1\) and \(\phi_{I}\neq 0\) as the superposition of bidirectional scale-free (SBSF) effect, while \(|\Gamma|=1\) and \(\phi_{I}=0\) suggests the extended eigenstate (ES). Noteworthily, the SBSF effect reduces to the UPSF effect if one of \(\mathscr{A}_{L,R}(\phi)\) vanishes, thus the PSF effect is defined as a union of the UPSF and SBSF effects. Further, the occurrence of \(|\Gamma|\neq 1\) and \(\phi_{I}\neq 0\) is defined as the HSSF effect, while the pure NHSE emerges with \(|\Gamma|\neq 1\) and \(\phi_{I}=0\). These correspondences are summarized in Table 1. However, \(n\phi_{I}\to 0\) with \(n\) lying in the deep bulk as \(N\rightarrow+\infty\) under OBC, which means the exponential factor dominated by \(\Gamma^{n/2}\) in the deep bulk, namely, the exact context of NHSE in the thermodynamic limit [23]. Hence, the emergent SFL participates in the NHSE under finite size, and the NHSE gradually dominates with increasing system size. Next, we turn on the boundary impurity in Eq. (16), and the physical condition is given by Eq. (19). Again, the complex \(\phi=\phi_{R}+i\phi_{I}\) emerges under finite size but the forms \(\phi_{I}\sim c/N\) do not always exist when \(N\rightarrow+\infty\) (Appendix C). The general form of the eigenstate in \(\mathscr{B}\) with respect to energy \(\varepsilon\) and complex \(\phi\) reads \[\Psi_{n}^{imp}=\Gamma^{n/2}\left[\mathcal{B}_{L}(\phi)e^{in\phi _{R}}e^{-n\phi_{I}}+\mathcal{B}_{R}(\phi)e^{-in\phi_{R}}e^{n\phi_{I}}\right], \tag{31}\] where the coefficients \(\mathcal{B}_{L}(\phi)\), \(\mathcal{B}_{R}(\phi)\) are given in Appendix C. Similar to the OBC cases under finite sizes, the various possible behaviors of eigenstates are also depicted in Table 1. Nevertheless, the thermodynamic limit may prevent both the NHSE and SFL effects with the boundary impurity, an abrupt change from the OBC. In other words, the HSSF effect in the presence of boundary impurity is sensitive to the system size. ## III Hatano-nelson model with boundary impurity As a typical 1D non-Hermitian model, the HN model with boundary impurity reads \[\mathcal{H}_{hn}=\sum_{n=1}^{N-1}\left(t_{L}c_{n}^{\dagger}c_{ n+1}+t_{R}c_{n+1}^{\dagger}c_{n}\right)+\gamma_{R}c_{1}^{\dagger}c_{N}+ \gamma_{L}c_{N}^{\dagger}c_{1}, \tag{32}\] where we assume \(t_{L},t_{R},\gamma_{L},\gamma_{R}\geq 0\) for simplicity. The model manifests an emergence of the SFL except for the NHSE [53; 56]. In this section, we apply the transfer matrix approach to comprehensively study the display and interplay of NHSE and scale-free effect in Eq. (32). 
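Before turning to the analytic solution, the finite-size behavior can also be probed directly by diagonalizing the single-particle matrix of Eq. (32). The following sketch is a minimal NumPy illustration (not part of the original analysis): it builds the Hamiltonian, estimates each eigenstate's decay rate from the slope of \(\log|\psi_{n}|\) over the bulk sites, and prints how the resulting localization length behaves as the system size grows, using the singular-case parameters of Fig. 2 (\(t_{L}=1.5\), \(t_{R}=0\), \(\gamma_{L}=0.4\), \(\gamma_{R}=2\)).

```python
import numpy as np

def hn_hamiltonian(N, t_L, t_R, gamma_L, gamma_R):
    """Single-particle matrix of Eq. (32): H[n, m] multiplies c_n^dagger c_m."""
    H = np.zeros((N, N), dtype=complex)
    for n in range(N - 1):
        H[n, n + 1] = t_L          # t_L c_n^dagger c_{n+1}
        H[n + 1, n] = t_R          # t_R c_{n+1}^dagger c_n
    H[0, N - 1] = gamma_R          # gamma_R c_1^dagger c_N
    H[N - 1, 0] = gamma_L          # gamma_L c_N^dagger c_1
    return H

def bulk_localization_length(psi):
    """Fit log|psi_n| linearly over the bulk sites and return 1/|slope|."""
    n = np.arange(1, len(psi) - 1)                     # bulk region, boundary sites dropped
    slope = np.polyfit(n, np.log(np.abs(psi[n]) + 1e-30), 1)[0]
    return 1.0 / max(abs(slope), 1e-12)

for N in (10, 30, 90):
    H = hn_hamiltonian(N, t_L=1.5, t_R=0.0, gamma_L=0.4, gamma_R=2.0)
    _, vecs = np.linalg.eig(H)
    xi = [bulk_localization_length(vecs[:, k]) for k in range(N)]
    # a scale-free (quasi-linear) localization length keeps growing with N
    print(N, np.median(xi))
```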
Note the single-band nature of the NH model offers an expedient expression of the transfer matrix in the single-particle wave function space, denoted as \(|\Psi\rangle=\sum_{n=1}^{N}\psi_{n}c_{n}^{\dagger}\,|0\rangle\), instead of the nominal \(\{v,w\}\) space, after some algebra, we obtain the propagating relation in the bulk (Appendix D) \[\begin{pmatrix}\psi_{n+1}\\ \psi_{n}\end{pmatrix}=T\begin{pmatrix}\psi_{n}\\ \psi_{n-1}\end{pmatrix}, \tag{33}\] where the transfer matrix with respect to energy \(\varepsilon\) is \[T=\begin{pmatrix}\frac{\varepsilon}{t_{L}}&-\frac{t_{R}}{t_{L}} \\ \Gamma&0\end{pmatrix}, \tag{34}\] and we label \[\Delta =\text{tr}\left(T\right)=\frac{\varepsilon}{t_{L}},\] \[\Gamma =\det\left(T\right)=\frac{t_{R}}{t_{L}}. \tag{35}\] \begin{table} \begin{tabular}{|c|c|c|} \hline \(|\Gamma|=1\)\(\phi_{I}=0\) & Yes & No \\ \hline Yes & ES & PSF \\ \hline No & NHSE & HSSF \\ \hline \end{tabular} \end{table} Table 1: Summary of ES, NHSE, PSF, and HSSF concerning nonsingular transfer matrices. ### Exact solutions of the PSF effect We first consider the case with a singular transfer matrix, namely, \(t_{R}=0\) leading to \(\Gamma=0\). The eigenvectors and the corresponding energies are (Appendix D.1) \[\varepsilon_{m} =t_{L}c_{m}^{-\frac{n}{N-2}}e^{-i\frac{2\pi m}{N-2}},\quad m=1,2, \ldots,N,\] \[\psi_{n}^{m} =c_{m}^{-\frac{n-2}{N-2}}e^{-i\frac{2\pi m}{N-2}(n-2)},\quad n=3,4,\ldots,N,\] \[\psi_{1}^{m} =\frac{t_{L}}{\gamma_{L}}c_{m}^{-\frac{1}{N-2}}e^{-i\frac{2\pi m }{N-2}}\psi_{N}^{m}, \tag{36}\] with the physical condition \[\frac{t_{L}^{2}}{\gamma_{L}}c_{m}^{-\frac{2}{N-2}}e^{-i\frac{4\pi m }{N-2}}=t_{L}c_{m}+\gamma_{R}, \tag{37}\] where we have set \(\psi_{2}^{m}=1\). Apparently, the exponential factor of \(\psi_{n}^{m}\) leads to the PSF effect, with the localization length \[\xi_{m}=\frac{N-2}{|\text{Re}\left(\log c_{m}\right)|}, \tag{38}\] quasi-linearly dependent on the system size, and the left (right)-accumulation UPSF modes correspond to \(\text{Re}\left(\log c_{m}\right)>0\) (\(<0\)). The phase diagrams of the UPSF modes in \(\gamma_{L}\)-\(\gamma_{R}\) plane are plotted in Figs. 2(A)(B) with system size \(N=10,30\), where the green regions label the existence of the UPSF modes with left-accumulation (LA), the cyan regions label that with right-accumulation (RA), and the yellow regions label that with both left- and right-accumulation (LRA) [86]. In Fig. 2, we also plot the full single-particle energy spectrum with the corresponding left-accumulation UPSF modes for typical parameters [red stars in Figs. 2(A)(B)], where the analytic results Eq. (36) (red dots and lines) are perfectly consistent with the numerical results (blue dots) for various system sizes [87]. In addition, the scale-free localized modes given in Refs. [53, 56] are equivalent to the strong non-reciprocity or extremely special \(\gamma_{R}\to 0\) cases in our transfer matrix formalism (Appendix D.1). ### The emergent HSSF effect with boundary impurity We next consider the case with nonsingular transfer matrix, whose energy expression is given by \[\varepsilon=2\sqrt{t_{L}t_{R}}\cos\phi,\quad\phi=\phi_{R}+i\phi_{I}\in\mathbb{ C}, \tag{39}\] through Eq. (15). The solutions of \(\phi\) lie in \([0,\pi]\) under OBC, while lead to physical real or complex values with boundary impurity in Eq. (32) (Appendix D.2). 
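Since the Chebyshev form of \(T^{n}\) in Eq. (13) underlies all of the finite-size expressions used below, a quick numerical sanity check may be useful. The sketch below (an illustration assuming only NumPy, not taken from the paper) verifies Eq. (13) for a generic nonsingular \(2\times 2\) matrix, evaluating \(U_{n}(z)\) through its three-term recurrence so that complex \(z\) causes no difficulty; the same branch of \(\sqrt{\Gamma}\) is used throughout.

```python
import numpy as np

def chebyshev_U(z, n):
    """U_n(z) of the second kind via U_k = 2 z U_{k-1} - U_{k-2}; valid for complex z."""
    if n < 0:
        return 0.0
    U_prev, U = 1.0, 2.0 * z           # U_0 and U_1
    if n == 0:
        return U_prev
    for _ in range(n - 1):
        U_prev, U = U, 2.0 * z * U - U_prev
    return U

def Tn_via_eq13(T, n):
    """Right-hand side of Eq. (13), with Delta = tr(T), Gamma = det(T), z = Delta/(2 sqrt(Gamma))."""
    Delta, Gamma = np.trace(T), np.linalg.det(T)
    sqrtG = np.sqrt(Gamma + 0j)        # fix one branch and use it consistently
    z = Delta / (2.0 * sqrtG)
    return sqrtG**n * (chebyshev_U(z, n - 1) / sqrtG * T - chebyshev_U(z, n - 2) * np.eye(2))

rng = np.random.default_rng(0)
T = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # generic nonsingular 2x2 matrix
for n in (1, 2, 5, 9):
    assert np.allclose(np.linalg.matrix_power(T, n), Tn_via_eq13(T, n))
```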
In order to observe the HSSF effect unambiguously, we perform a generalized gauge transformation \(\mathcal{H}_{it}=S^{-1}\mathcal{H}_{hn}S\) to cancel out the more dominant NHSE amplification factor (Appendix D.2), resulting in \[\mathcal{H}_{it}=\sum_{n=1}^{N-1}t\left(c_{n}^{\dagger}c_{n+1}+ \text{h.c.}\right)+(\delta+\gamma)c_{1}^{\dagger}c_{N}+(\delta-\gamma)c_{N}^{ \dagger}c_{1}, \tag{40}\] where \(S=\text{diag}\left\{r,r^{2},\ldots,r^{N}\right\}\) is the transformation matrix, \(t=\sqrt{t_{L}t_{R}}\), \(r=\sqrt{t_{R}/t_{L}}\), and we have set \(r^{N-1}\gamma_{R}=\delta+\gamma\) and \(r^{-(N-1)}\gamma_{L}=\delta-\gamma\). Subsequently, the eigenstate corresponding to energy \(\varepsilon=2t\cos\phi\) of Eq. (40) reads \[\begin{pmatrix}\psi_{n+1}\\ \psi_{n}\end{pmatrix}=\left[A_{L}(\phi)e^{i(n-1)\phi}+A_{R}(\phi)e^{-i(n-1) \phi}\right]\begin{pmatrix}\psi_{2}\\ \psi_{1}\end{pmatrix}, \tag{41}\] where the coefficients \(A_{L,R}(\phi)\) are given in Appendix D.2. In Figs. 3(A)(B), we illustrate the energy spectrum for two typical points in the PT-broken region \(\gamma\in[|\delta-t|,\delta+t]\) (Appendix D.2). The real (complex) energy (blue dots) in Fig. 3(A) corresponds to the ES Figure 2: (A)(B): The phase diagrams of the UPSF modes in Eq. (32) with a singular transfer matrix (\(t_{L}=1.5\), \(t_{R}=0\)) in the \(\gamma_{L}\)-\(\gamma_{R}\) plane with \(N=10\) and \(N=30\), respectively. (a1)(b1): The analytic (red dots) and numerical (blue dots) results of the full energy spectrum concerning the parameters \(\gamma_{L}=0.4\) and \(\gamma_{R}=2\), denoted as red stars in (A)(B), respectively. (a2)(b2): The analytic (red lines) and numerical (blue dots) results of the normalized left-accumulation UPSF modes corresponding to (a1)(b1), respectively. (SBSF mode), of which the match-perfectly analytic (red lines) and numerical (blue dots) results are plotted in Figs. 3(a1)(a2). Further, the scale-free distributions of the LA \([A_{R}(\phi)e^{-i(n-1)\phi}]\) and RA \([A_{L}(\phi)e^{i(n-1)\phi}]\) components of the SBSF mode [Fig. 3(a2) with \(\phi_{I}<0\)] are illustrated in Fig. 4. The two complex energies (blue dots) in Fig. 3(B) correspond to the UPSF modes (vanishing RA term), of which the match-perfectly analytic (red lines) and numerical (blue dots) results are plotted in Figs. 3(b1)(b2). We remark that all solutions of \(\phi\) for Fig. 3(B) are complex-valued, whose imaginary part is \(\log{(\mu^{-1})}/N\) with \(\mu=\delta/t+\sqrt{(\delta/t)^{2}-1}\) (Appendix D.2). Therefore, we observe the pure NHSE (HSSF effect) of \(\mathcal{H}_{hn}\) (\(t_{L}\neq t_{R}\)) corresponding to ESs (PSF modes) of \(\mathcal{H}_{it}\) after the inverse gauge transformation \(S^{-1}\). We emphasize that the HSSF effect with boundary impurity may be a finite-size phenomenon, since the thermodynamic limit may prevent simultaneous NHSE and SFL. ## IV The emergent UPSF and HSSF effects in the NH-SSH model The well-known NH-SSH model is a typical \(r=1\) Hamiltonian (\(\mathcal{H}_{nssh}\)) of Eq. (3) with \[\mathbf{M}=\begin{pmatrix}0&t_{1}+\gamma\\ t_{1}-\gamma&0\end{pmatrix},\quad\mathbf{J}=\begin{pmatrix}0&0\\ t_{2}&0\end{pmatrix}, \tag{42}\] where we assume \(t_{1},t_{2},\gamma\geq 0\) for simplicity. 
The transfer matrix reads \[T=\frac{1}{t_{2}(t_{1}+\gamma)}\begin{pmatrix}\varepsilon^{2}-t_{1}^{2}+ \gamma^{2}&-\varepsilon t_{2}\\ \varepsilon t_{2}&-t_{2}^{2}\end{pmatrix}, \tag{43}\] with \[\Delta =\operatorname{tr}{(T)}=\frac{\varepsilon^{2}-t_{1}^{2}+\gamma^{ 2}-t_{2}^{2}}{t_{2}(t_{1}+\gamma)},\] \[\Gamma =\det{(T)}=\frac{t_{1}-\gamma}{t_{1}+\gamma}. \tag{44}\] following the reduced SVD \(\mathbf{J}=v\xi w^{\dagger}\) with \(\xi=t_{2}\), \(v=(0,1)^{T}\), and \(w=(1,0)^{T}\). We introduce the boundary impurity in Eq. (16) into such an NH-SSH model and analyze its PSF or HSSF effect. ### The PSF effect with boundary impurity The transfer matrix at the critical point \(t_{1}=\gamma\) of PT-symmetry breaking is singular with \(\Gamma=0\), where we can directly apply the analytic results in Sec. II.2.1. The Figure 4: (a) The LA component and (b) the RA component of the SBSF mode in Fig. 3(a2). Figure 3: (A)(B): The energy spectrum of Eq. (40) for \(\gamma=4.2\) and \(\gamma=\sqrt{24}\), respectively. (a1)(a2): The analytic (red lines) and numerical (blue dots) eigenstates concerning the blue points in (A). (b1)(b2): The analytic (red lines) and numerical (blue dots) eigenstates concerning the blue points in (B). We set \(t=1\), \(\delta=5\), and \(N=20\). single-particle eigenstates with respect to the two energy bands (\(\pm\)) are \[\varepsilon_{m}^{\pm} =\pm\sqrt{t_{2}^{2}+2t_{2}\gamma c_{m}^{-\frac{1}{N}}e^{-i\frac{2\pi m }{N}}},\] \[\Psi_{n}^{\pm,m} =\alpha_{n}^{\pm,m}v+\beta_{n}^{\pm,m}w, \tag{45}\] where \(m=1,2,\ldots,N\), \[\alpha_{n}^{\pm,m} =c_{m}^{-\frac{1}{N}}e^{-i\frac{2\pi m}{N}n}\frac{c_{m}\gamma_{L} t_{2}}{\varepsilon_{m}^{\pm}},\quad n=1,2,\ldots,N-1,\] \[\beta_{n}^{\pm,m} =c_{m}^{-\frac{n-1}{N}}e^{-i\frac{2\pi m}{N}(n-1)}c_{m}\gamma_{L},\quad n=2,3,\ldots,N,\] \[\alpha_{N}^{\pm,m} =\frac{\varepsilon_{m}^{\pm,2}-c_{m}\gamma_{L}(\varepsilon_{m}^ {\pm}-t_{2}^{2})}{\gamma_{R}\varepsilon_{m}^{\pm}t_{2}},\] \[\beta_{1}^{\pm,m} =1, \tag{46}\] and \(c_{m}\) satisfies the physical condition \[t_{2}(1-\gamma_{L}\gamma_{R})=2\gamma c_{m}^{-\frac{1}{N}}e^{-i\frac{2\pi m}{N }}(\gamma_{L}c_{m}-1). \tag{47}\] These results indicate the emergence of the UPSF effect with the localization length \(\xi_{m}=N/|\text{Re}(\log c_{m})|\), and the phase diagram of the UPSF modes is illustrated in Fig. 5(A). Applying Eq. (45) to the three selected points (red stars) in Fig. 5(A), which correspond to the UPSF modes with LA, RA, and LRA, respectively, we obtain full energy spectrum [red dots in Figs. 5(a1)-(c1)] and eigenstates [red lines in Figs. 5(a2)-(c2)] perfectly matching the numerical results (blue dots). ### The HSSF effect under OBC The most counter-intuitive phenomenon with the emergent HSSF effect appears in the case with nonsingular transfer matrix under OBC. According to Appendix B, the physical condition is given by \[\frac{\sin{(N\phi)}}{\sin{((N-1)\phi)}}=\frac{t_{2}\sqrt{(t_{1}+\gamma)(t_{1}- \gamma)}}{\varepsilon^{2}-t_{1}^{2}+\gamma^{2}}\equiv q, \tag{48}\] which figures out the solutions of \(\phi=\phi_{R}+i\phi_{I}\) \[\phi_{I} =\frac{\log{|\mathscr{F}|}}{2N}, \tag{49}\] \[\phi_{R} =\frac{2\pi m-\arg(\mathscr{F})}{2N},\] (50) \[\mathscr{F} =\frac{1-qe^{-i\phi}}{1-qe^{i\phi}}, \tag{51}\] with \(m=1,2,\ldots,N\). In the region \(t_{1}>\gamma\), real \(\phi\) and OBC energy spectrum exist simultaneously. However, in the region \(t_{1}<\gamma\), complex energies lead to the presence of imaginary part of \(\phi\), namely the quasi-\(1/N\)-proportion Eq. (49). 
Hence, the HSSF effect emerges according to Eq. (30), which has never been uncovered in previous works. As in Sec. III.2, we perform a generalized gauge transformation for \(\mathcal{H}_{nssh}\) to conceal the NHSE factor with \(S=\text{diag}\left\{1,r,r,\ldots,r^{N-1},r^{N-1},r^{N}\right\}\) and \(r=\sqrt{\Gamma}\), resulting in the Hamiltonian \(\bar{\mathcal{H}}\) with \[\bar{M}=\begin{pmatrix}0&\bar{t}_{1}\\ \bar{t}_{1}&0\end{pmatrix},\quad\bar{J}=\mathbf{J},\quad\bar{t}_{1}=\sqrt{(t_{1}+ \gamma)(t_{1}-\gamma)}. \tag{52}\] Subsequently, we obtain \(\bar{\Delta}=(\varepsilon^{2}-\bar{t}_{1}^{2}-t_{2}^{2})/\bar{t}_{1}t_{2}\) and \(\bar{\Gamma}=1\), which result the same physical condition for \(\mathcal{H}_{nssh}\) in Eq. (48), and according to Eq. (15), we obtain the expression of the energy \[\varepsilon^{2}=t_{1}^{2}-\gamma^{2}+t_{2}^{2}+2t_{2}\bar{t}_{1}\cos\phi, \tag{53}\] Figure 5: (A) The phase diagram of the UPSF modes of the Hamiltonian \(\mathcal{H}_{nssh}\) with a singular transfer matrix, where green, cyan, and yellow regions label the LA, RA, and LRA, respectively. (a1)-(c1): The full analytic (red dots) and numerical (blue dots) energy spectrum with \((\gamma_{L},\gamma_{R})=(0.2,3)\), \((0.8,2)\), and \((3,0.2)\) corresponding to the red stars in (A). (a2)-(c2): The analytic (red lines) and numerical (blue dots) eigenstates corresponding to (a1)-(c1), respectively. We set \(t_{1}=\gamma=0.5\), \(t_{2}=1\), and \(N=16\). whose related eigenstates read \[\bar{\Psi}_{n}=\bar{\mathscr{A}}_{L}(\phi)e^{in\phi_{R}}e^{-n\phi_{I}}+\bar{ \mathscr{A}}_{R}(\phi)e^{-in\phi_{R}}e^{n\phi_{I}}, \tag{54}\] where the coefficients are given by \[\bar{\mathscr{A}}_{L}(\phi) =\frac{1}{2iq\sin\phi}\left[t_{2}\bar{\mathcal{G}}_{vv}v+(e^{-i \phi}-qe^{-2i\phi})w\right],\] \[\bar{\mathscr{A}}_{R}(\phi) =\frac{1}{2iq\sin\phi}\left[-t_{2}\bar{\mathcal{G}}_{vv}v+(-e^{ i\phi}+qe^{2i\phi})w\right], \tag{55}\] with \(\bar{\mathcal{G}}_{vv}=\varepsilon/(\varepsilon^{2}-t_{1}^{2}+\gamma^{2})\). Utilizing the full energy spectrum in Fig. 6(a), we figure out the solutions of \(\phi\) for each energy labeled as \(j\) in Figs. 6(b)(c), which are perfectly consistent with the analytic formulas Eqs. (49)-(51). The analytic (red lines) and numerical (blue dots) eigenstates concerning the three selected points (blue dots) in Fig. 6(a) are plotted in Figs. 6(d)-(f), which match perfectly. Further, the scale-free distributions of the LA [\(\bar{\mathscr{A}}_{L}(\phi)e^{in\phi_{R}}e^{-n\phi_{I}}\)] and RA [\(\bar{\mathscr{A}}_{R}(\phi)e^{-in\phi_{R}}e^{n\phi_{I}}\)] components of these SBSF modes in Figs. 6(d)-(f) are illustrated in Figs. 6(g)-(i), respectively. Therefore, the SBSF modes \(\bar{\Psi}_{n}\) of \(\bar{\mathcal{H}}\), which resemble the ESs in appearance, indicating the emergent HSSF modes of the original NH-SSH model under OBC after the inverse gauge transformation. As we turn on the boundary impurity, the physical condition is given by \[\left(\gamma_{L}\Gamma^{-\frac{N}{2}}+\gamma_{R}\Gamma^{\frac{N}{ 2}}\right)\sin\phi=\sin\left((N+1)\phi\right)\] \[+\frac{t_{2}(1-\gamma_{L}\gamma_{R})}{\sqrt{(t_{1}+\gamma)(t_{1}- \gamma)}}\sin\left(N\phi\right)-\gamma_{L}\gamma_{R}\sin\left((N-1)\phi\right), \tag{56}\] Figure 6: (a) The full energy spectrum of \(\bar{\mathcal{H}}\) with the parameters \(t_{1}=0.1\), \(t_{2}=1\), \(\gamma=0.5\), and \(N=16\). (b) The solutions of \(\phi_{R}\) corresponding to (a), where the vertical axis denotes \([2N\phi_{R}+\arg(\mathscr{F})+2\pi p]/2\pi\) with \(p=16\). 
All the energies (orange, blue, and magenta dots), labeled by \(j\), fall on the horizontal gray lines, integer constants \(m=1,2,\dots,16\). (c) The solutions of \(2N\phi_{I}\) for each energy labeled as \(j\) corresponding to (a) are shown as orange, blue, and magenta dots, where the gray line labels \(\log|\mathscr{F}|\). (d)-(f): The analytic (red lines) and numerical (blue dots) eigenstates with respect to the blue dots in (a)-(c) match consistently. The magenta dots in (a)-(c) are related to the zero-energy modes. (g)-(i): The distributions of the LA (cyan) and RA (purple) components corresponding to the SBSF modes in (d)-(f), respectively. according to Eq. (53) and Appendix C, which is equivalent to the result in Ref. [64]. The HSSF modes in [Eq. (31)] appear with possible emergent complex solutions of \(\phi\), which match with the numerical results in Appendix E. However, the HSSF effect with boundary impurity is usually limited to finite sizes due to the restraints on the NHSE and SFL in large systems. ## V Conclusions and discussions We have developed a transfer-matrix perspective for a unified understanding of the NHSE and the SFL in non-Hermitian systems with boundary impurity. We derive the analytic expressions for the single-particle bulk eigenstates with respect to the energy spectrum upon the 1D non-Hermitian systems with \(2\times 2\) transfer matrices, which exhibit excellent consistency with numerical results in the HN model and NH-SSH model. We find that the UPSF effect accompanies singular transfer matrices, while the HSSF effect emerges with nonsingular transfer matrices, even under OBC. The UPSF effect portrays a localization length of the eigenstates quasilinear with the system size, while the HSSF effect shows an interplay between the finite-size NHSE and SFL. We further reveal that the NHSE prevails over the SFL under OBC as the system size increases, thus \(\phi_{I}=0\) in the thermodynamic limit. Interestingly, we may then generate the energy spectrum from \(\Delta=2\sqrt{\Gamma}\cos\phi_{R}\) [Eq. (15)]; correspondingly, the norm \(\sqrt{|\Gamma|}\) with corresponding \((\arg(\Gamma),\phi_{R})\) outlines the GBZ and re-invents the non-Bloch band theory through the transfer matrix approach, open to further generalizations for future studies. The constraints over the assumption \(\mathbf{J}_{R}=\mathbf{J}_{L}\), rank-1 of \(\mathbf{J}\), namely \(2\times 2\) transfer matrices, and the boundary impurity in Eq. (16), are sufficiently general capture the core mechanism and behaviors of the PSF and HSSF effects here. Nevertheless, there exist more questions to explore in the future, such as global impurity and disorder [56; 74; 88; 89; 90; 91], the role of various symmetries [1; 2], realistic experiments and applications, and other subjects of the NHSE and SFL in condensed matter physics. ## Acknowledgements We acknowledge support from the National Key R&D Program of China (No.2022YFA1403700) and the National Natural Science Foundation of China (No.12174008 & No.92270102). ## Appendix A The boundary equations with boundary impurity Modifying Eq. (5) at the boundaries with the setting \(\Psi_{0}\equiv\Psi_{N}\), we obtain \[\Psi_{N} =\mathcal{G}\mathbf{J}^{\dagger}\Psi_{N-1}+\gamma_{L}\mathcal{G}\mathbf{ J}\Psi_{1}\] \[\Psi_{1} =\gamma_{R}\mathcal{G}\mathbf{J}^{\dagger}\Psi_{N}+\mathcal{G}\mathbf{ J}\Psi_{2}. \tag{16}\] Combining with Eqs. 
(6)-(9), we obtain \[\alpha_{N} =\mathcal{G}_{wv}\xi\alpha_{N-1}+\gamma_{L}\mathcal{G}_{vv}\xi \beta_{1},\] \[\beta_{N} =\mathcal{G}_{ww}\xi\alpha_{N-1}+\gamma_{L}\mathcal{G}_{vw}\xi \beta_{1},\] \[\alpha_{1} =\gamma_{R}\mathcal{G}_{ww}\xi\alpha_{N}+\mathcal{G}_{vv}\xi \beta_{2},\] \[\beta_{1} =\gamma_{R}\mathcal{G}_{ww}\xi\alpha_{N}+\mathcal{G}_{vw}\xi \beta_{2}. \tag{17}\] Further derivation gives \[\begin{pmatrix}\beta_{1}\\ \alpha_{N}\end{pmatrix} =\begin{pmatrix}\frac{1}{\gamma_{L}}\xi^{-1}\mathcal{G}_{vw}^{-1}& -\frac{1}{\gamma_{L}}\xi^{-1}\mathcal{G}_{vw}^{-1}\mathcal{G}_{ww}\xi\\ \mathcal{G}_{vv}\mathcal{G}_{vw}^{-1}&\left(\mathcal{G}_{wv}-\mathcal{G}_{vv} \mathcal{G}_{vw}^{-1}\mathcal{G}_{ww}\right)\xi\end{pmatrix}\begin{pmatrix} \beta_{N}\\ \alpha_{N-1}\end{pmatrix},\] \[\begin{pmatrix}\beta_{2}\\ \alpha_{1}\end{pmatrix} =\begin{pmatrix}\xi^{-1}\mathcal{G}_{v}^{-1}&-\gamma_{R}\xi^{-1} \mathcal{G}_{vw}^{-1}\xi\\ \mathcal{G}_{vv}\mathcal{G}_{vw}^{-1}&\gamma_{R}\left(\mathcal{G}_{ww}- \mathcal{G}_{vv}\mathcal{G}_{vw}^{-1}\mathcal{G}_{ww}\right)\xi\end{pmatrix} \begin{pmatrix}\beta_{1}\\ \alpha_{N}\end{pmatrix}, \tag{18}\] which are exactly the equations on the boundaries \[\Phi_{1} =\begin{pmatrix}\frac{1}{\gamma_{L}}&0\\ 0&1\end{pmatrix}T\Phi_{N}=K_{L}T\Phi_{N},\] \[\Phi_{2} =T\begin{pmatrix}1&0\\ 0&\gamma_{R}\end{pmatrix}\Phi_{1}=TK_{R}\Phi_{1}. \tag{19}\] ## Appendix B The solutions of nonsingular cases under OBC The OBC refers to the hard or Dirichlet boundary condition \(\Psi_{0}=\Psi_{N+1}=0\), implying \(\alpha_{0}=\beta_{N+1}=0\) such that \[\Phi_{1}=\begin{pmatrix}\beta_{1}\\ 0\end{pmatrix},\quad\Phi_{N+1}=\begin{pmatrix}0\\ \alpha_{N}\end{pmatrix}. \tag{10}\] After normalization, we obtain the OBC equation \[T^{N}\begin{pmatrix}1\\ 0\end{pmatrix}=\begin{pmatrix}0\\ \tau\end{pmatrix}, \tag{11}\] with \(\tau\in\mathbb{C}\). Accordingly, the propagating relation of \(\Phi_{n}\) is illustrated in Fig. 1(b). Utilizing Eq. (13), we obtain the physical condition under OBC [73] \[\frac{\sin\left(N\phi\right)}{\sin\left(\left(N-1\right)\phi\right)}=q, \tag{12}\] with \(q=\xi\sqrt{\frac{Q_{w}}{\phi_{ww}}}\mathcal{G}_{vw}\). Further simplifying Eq. (12) with general complex \(\phi=\phi_{R}+i\phi_{I}\) and \(\phi_{R}\in[0,2\pi],\phi_{I}\in\mathbb{R}\), we obtain \[e^{2N\phi_{I}}=\frac{1-qe^{-i\phi}}{1-qe^{i\phi}}e^{2iN\phi_{R}}. \tag{13}\] Hence, the solutions of \(\phi\) reads \[\phi_{I} =\frac{\log|\mathscr{F}|}{2N},\] \[\phi_{R} =\frac{2\pi m-\arg(\mathscr{F})}{2N}, \tag{14}\] where \(\mathscr{F}\) labels \(\left(1-qe^{-i\phi}\right)/\left(1-qe^{i\phi}\right)\) and \(m=1,2,\ldots,N\). Note that \(|\mathscr{F}|=1\) leads to \(\phi_{I}=0\) if \(q\) is real. In general complex-\(q\) cases, the explicit \(\phi_{I}\) and energy \(\varepsilon\) are dependent on \(N\) and \(\phi_{R}\) through the self-consistent transcendental equations (15) and (14), that is we can denote \(\phi_{I}=c/N\) with \(c\) being a finite-value function of \(N\) and \(\phi_{R}\), and thus \(\phi_{I}\) is quasi-linearly dependent on \(1/N\). The solution of an arbitrary eigenstate is given by \(\Phi_{n+1}=T^{n}\begin{pmatrix}1\\ 0\end{pmatrix}\), resulting in \[\beta_{n} =\frac{\Gamma^{(n-1)/2}}{q\sin\phi}\left[\sin\left((n-1)\phi \right)-q\sin\left((n-2)\phi\right)\right],\] \[\alpha_{n} =\frac{\Gamma^{n/2}}{q\sin\phi}\xi\mathcal{G}_{vv}\sin\left(n\phi \right), \tag{15}\] with \(n=1,2,\ldots,N\). Note that \(\beta_{N+1}=0\) is exactly the physical condition Eq. (12) and \(\alpha_{N}=\tau\). 
Consequently, an eigenstate concerning energy \(\varepsilon\) reads \[\Psi_{n}=\Gamma^{n/2}\left[\mathscr{A}_{L}(\phi)e^{in\phi_{R}}e^{-n\phi_{I}}+ \mathscr{A}_{R}(\phi)e^{-in\phi_{R}}e^{n\phi_{I}}\right], \tag{16}\] where \[\mathscr{A}_{L}(\phi) =\frac{1}{2iq\sin\phi}\left[\xi\mathcal{G}_{vv}v+\left(e^{-i\phi }-qe^{-2i\phi}\right)\frac{w}{\sqrt{\Gamma}}\right],\] \[\mathscr{A}_{R}(\phi) =\frac{1}{2iq\sin\phi}\left[-\xi\mathcal{G}_{vv}v+\left(-e^{i \phi}+qe^{2i\phi}\right)\frac{w}{\sqrt{\Gamma}}\right]. \tag{17}\] ## Appendix C The solutions of nonsingular cases with boundary impurity According to the boundary equation (19) with boundary impurity, the eigenvalues of the \(2\times 2\) matrix \(KT^{N}\) must be \(1\) and \[\det\left(KT^{N}\right)=\det\left(K\right)\left(\det T\right)^{N}=\frac{ \gamma_{R}}{\gamma_{L}}\Gamma^{N}, \tag{18}\] such that, \[\mathrm{tr}\left(KT^{N}\right)=1+\frac{\gamma_{R}}{\gamma_{L}}\Gamma^{N}=\Gamma^ {N/2}\left(\frac{\gamma_{R}}{\gamma_{L}}\right)^{1/2}\left[\left(\frac{\gamma_{ R}}{\gamma_{L}}\Gamma^{N}\right)^{-1/2}+\left(\frac{\gamma_{R}}{\gamma_{L}} \Gamma^{N}\right)^{1/2}\right]. \tag{100}\] On the other hand, utilizing the formula Eq. (13) of nonsingular \(T\), we obtain \[\mathrm{tr}\left(KT^{N}\right)=\Gamma^{N/2}\left[\frac{U_{N-1}(z)}{\sqrt{ \Gamma}}\mathrm{tr}\left(KT\right)-U_{N-2}(z)\mathrm{tr}\left(K\right)\right] =\Gamma^{N/2}\left[\frac{\mathrm{tr}\left(KT\right)}{\sqrt{\Gamma}}\frac{\sin \left(N\phi\right)}{\sin\phi}-\mathrm{tr}\left(K\right)\frac{\sin\left((N-1) \phi\right)}{\sin\phi}\right]. \tag{101}\] Thus, we get the condition \[\left(\frac{\gamma_{R}}{\gamma_{L}}\right)^{1/2}\left[\left(\frac{\gamma_{R}} {\gamma_{L}}\Gamma^{N}\right)^{-1/2}+\left(\frac{\gamma_{R}}{\gamma_{L}} \Gamma^{N}\right)^{1/2}\right]\sin\phi=\frac{\mathrm{tr}\left(KT\right)}{ \sqrt{\Gamma}}\sin\left(N\phi\right)-\mathrm{tr}\left(K\right)\sin\left((N-1) \phi\right). \tag{102}\] Further simplifying with general complex \(\phi=\phi_{R}+i\phi_{I}\) and \(\phi_{R}\in[0,2\pi],\phi_{I}\in\mathbb{R}\), we obtain \[a(\phi)\left(e^{N\phi_{I}}\right)^{2}+b(\phi)e^{N\phi_{I}}+c(\phi)=0, \tag{103}\] where \[a(\phi) =\frac{e^{-iN\phi_{R}}}{2i}\left[-\frac{\mathrm{tr}\left(KT \right)}{\sqrt{\Gamma}}+\mathrm{tr}\left(K\right)e^{i\phi}\right],\] \[b(\phi) =-\left[\left(\frac{\gamma_{R}}{\gamma_{L}}\Gamma^{N}\right)^{-1/ 2}+\left(\frac{\gamma_{R}}{\gamma_{L}}\Gamma^{N}\right)^{1/2}\right]\left( \frac{\gamma_{R}}{\gamma_{L}}\right)^{1/2}\sin\phi,\] \[c(\phi) =\frac{e^{iN\phi_{R}}}{2i}\left[\frac{\mathrm{tr}\left(KT\right) }{\sqrt{\Gamma}}-\mathrm{tr}\left(K\right)e^{-i\phi}\right]. \tag{104}\] The solution of \(\phi_{I}\) is \[e^{N\phi_{I}}=\frac{1}{2a(\phi)}\left[-b(\phi)\pm\sqrt{b^{2}(\phi)-4a(\phi)c( \phi)}\right], \tag{105}\] leading to physical real or complex \(\phi\) in general. However, since \(a(\phi),b(\phi)\) are always finite and \(b(\phi)\) contains the terms \(\Gamma^{\pm N/2}\), the emergent forms \(\phi_{I}\sim c/N\) with finite size are prevented by the large \(N\). 
The solution of an arbitrary eigenstate is given by \(\Phi_{n+1}=T^{n}\varphi\), \(n=1,2,\ldots,N-1\) with \(KT^{N}\varphi=\varphi\), resulting in \[\Phi_{n+1}=\Gamma^{n/2}\left[\mathcal{A}_{L}(\phi)e^{in\phi_{R}}e^{-n\phi_{I} }+\mathcal{A}_{R}(\phi)e^{-in\phi_{R}}e^{n\phi_{I}}\right], \tag{106}\] where \[\mathcal{A}_{L}(\phi) =\frac{1}{2i\sin\phi}\left(\frac{T}{\sqrt{\Gamma}}-e^{-i\phi}1 \right)\varphi,\] \[\mathcal{A}_{R}(\phi) =\frac{-1}{2i\sin\phi}\left(\frac{T}{\sqrt{\Gamma}}-e^{i\phi}1 \right)\varphi. \tag{107}\] Further, \[\beta_{1} =\left(K_{R}^{-1}\varphi\right)_{1},\] \[\alpha_{n} =\Gamma^{n/2}\left[\mathcal{A}_{L,2}(\phi)e^{in\phi_{R}}e^{-n\phi_ {I}}+\mathcal{A}_{R,2}(\phi)e^{-in\phi_{R}}e^{n\phi_{I}}\right],\quad n=1,2, \ldots,N-1,\] \[\beta_{n} =\Gamma^{(n-1)/2}\left[\mathcal{A}_{L,1}(\phi)e^{i(n-1)\phi_{R}}e ^{-(n-1)\phi_{I}}+\mathcal{A}_{R,1}(\phi)e^{-i(n-1)\phi_{R}}e^{(n-1)\phi_{I}} \right],\quad n=2,3,\ldots,N,\] \[\alpha_{N} =\left(K_{R}^{-1}\varphi\right)_{2}, \tag{108}\] where \(\mathcal{A}_{L/R,i}(\phi),i=1,2\) denotes the \(i\)-th component of the column vector \(\mathcal{A}_{L/R}(\phi)\). Consequently, an eigenstate concerning energy \(\varepsilon\) reads \[\Psi_{1}^{imp} =\Gamma^{1/2}\left[\mathcal{A}_{L,2}(\phi)e^{i\phi_{R}}e^{-\phi_{ I}}+\mathcal{A}_{R,2}(\phi)e^{-i\phi_{R}}e^{\phi_{I}}\right]v+\left(K_{R}^{-1} \varphi\right)_{1}w,\] \[\Psi_{n}^{imp} =\Gamma^{n/2}\left[\mathcal{B}_{L}(\phi)e^{in\phi_{R}}e^{-n\phi_{ I}}+\mathcal{B}_{R}(\phi)e^{-in\phi_{R}}e^{n\phi_{I}}\right],\quad n=2,3, \ldots,N-1,\] \[\Psi_{N}^{imp} =\left(K_{R}^{-1}\varphi\right)_{2}v+\Gamma^{(N-1)/2}\left[ \mathcal{A}_{L,1}(\phi)e^{i(N-1)\phi_{R}}e^{-(N-1)\phi_{I}}+\mathcal{A}_{R,1}( \phi)e^{-i(N-1)\phi_{R}}e^{(N-1)\phi_{I}}\right]w, \tag{109}\] where \[\mathcal{B}_{L}(\phi) =\mathcal{A}_{L,2}(\phi)v+e^{-i\phi}\mathcal{A}_{L,1}(\phi)\frac{w} {\sqrt{\Gamma}},\] \[\mathcal{B}_{R}(\phi) =\mathcal{A}_{R,2}(\phi)v+e^{i\phi}\mathcal{A}_{R,1}(\phi)\frac{w} {\sqrt{\Gamma}}. \tag{101}\] ## Appendix D Details of the HN model with boundary impurity Utilizing the single-particle Schrodinger equation of Eq. (32) with the setting \(\psi_{0}\equiv\psi_{N}\), we obtain the bulk and boundary equations \[t_{R}\psi_{n-1}+t_{L}\psi_{n+1} =\varepsilon\psi_{n}, \tag{102}\] \[\gamma_{R}\psi_{N}+t_{L}\psi_{2} =\varepsilon\psi_{1},\] (103) \[t_{R}\psi_{N-1}+\gamma_{L}\psi_{1} =\varepsilon\psi_{N}, \tag{104}\] where \(n\in\mathscr{B}\). The bulk equation (102) gives (\(t_{L}\neq 0\)) \[\psi_{n+1}=\frac{\varepsilon}{t_{L}}\psi_{n}-\frac{t_{R}}{t_{L}}\psi_{n-1}, \tag{105}\] thus leading the propagating relation Eq. (33). ### The case with singular transfer matrix The real-space EP, or infernal point, emerges under OBC in the case with singular transfer matrix (\(t_{R}=0\)), where all energies are degenerate at zero but with only one eigenvector [39; 40; 73]. 
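This real-space exceptional point is easy to reproduce numerically: for \(t_{R}=0\) the OBC Hatano-Nelson matrix is strictly triangular (a single Jordan block), so every eigenvalue collapses to zero and only one independent eigenvector survives. A minimal sketch (ours):

```python
import numpy as np

N, t_L = 12, 1.5
H = np.zeros((N, N))
for n in range(N - 1):
    H[n, n + 1] = t_L            # bulk hopping with t_R = 0 under OBC

eps = np.linalg.eigvals(H)
print("max |eps|       :", np.abs(eps).max())                       # expected ~ 0
print("# eigenvectors  :", N - np.linalg.matrix_rank(H))            # expected 1
print("rank of H^(N-1) :", np.linalg.matrix_rank(np.linalg.matrix_power(H, N - 1)))  # 1: a single size-N Jordan block
```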
Combining the boundary equations (103) and (104) for \(t_{R}=0\), we can deduce the physical condition with boundary impurity, \[\begin{pmatrix}\psi_{2}\\ \psi_{1}\end{pmatrix} =\begin{pmatrix}\frac{\varepsilon}{t_{L}}&-\frac{\gamma_{R}}{t_{L }}\\ 1&0\end{pmatrix}\begin{pmatrix}\psi_{1}\\ \psi_{N}\end{pmatrix}\] \[=\begin{pmatrix}\frac{\varepsilon}{t_{L}}&-\frac{\gamma_{R}}{t_{L }}\end{pmatrix}\begin{pmatrix}\frac{\varepsilon}{\gamma_{L}}&0\\ 1&0\end{pmatrix}\begin{pmatrix}\psi_{N}\\ \psi_{N-1}\end{pmatrix}\] \[=\begin{pmatrix}\frac{\varepsilon}{t_{L}}&-\frac{\gamma_{R}}{t_{L }}\end{pmatrix}\begin{pmatrix}\frac{t_{L}}{\gamma_{L}}&0\\ \frac{\varepsilon}{\gamma_{L}}&0\end{pmatrix}\begin{pmatrix}\frac{ \varepsilon}{t_{L}}&0\\ 1&0\end{pmatrix}\begin{pmatrix}\psi_{N}\\ \psi_{N-1}\end{pmatrix}\] \[=\begin{pmatrix}\frac{\varepsilon}{t_{L}}&-\frac{\gamma_{R}}{t_{L }}\end{pmatrix}\begin{pmatrix}\frac{t_{L}}{\gamma_{L}}&0\\ 0&1\end{pmatrix}\begin{pmatrix}\frac{\varepsilon}{t_{L}}&0\\ 1&0\end{pmatrix}T^{N-2}\begin{pmatrix}\psi_{2}\\ \psi_{1}\end{pmatrix}\] \[=\begin{pmatrix}\frac{\varepsilon}{t_{L}}&-\frac{\gamma_{R}}{t_{L }}\end{pmatrix}T^{N-1}\begin{pmatrix}\psi_{2}\\ \psi_{1}\end{pmatrix}\] \[\equiv K_{d}T^{N-1}\begin{pmatrix}\psi_{2}\\ \psi_{1}\end{pmatrix}, \tag{106}\] where \[T=\begin{pmatrix}\frac{\varepsilon}{t_{L}}&0\\ 1&0\end{pmatrix}, \tag{107}\] is the singular transfer matrix with \(\Delta=\operatorname{tr}\left(T\right)=\varepsilon/t_{L}\). It implies that the physical eigenvector of \(K_{d}T^{N-1}\) concerns the eigenvalue \(1\). Due to \(\det\left(K_{d}T^{N-1}\right)=\det\left(K_{d}\right)\Gamma^{N-1}=0\), we obtain \(\operatorname{tr}\left(K_{d}T^{N-1}\right)=1\), thus \[\operatorname{tr}\left(K_{d}\Delta^{N-2}T\right)=\Delta^{N-2} \operatorname{tr}\left(K_{d}T\right)=\left(\frac{\varepsilon}{t_{L}}\right)^ {N-2}\frac{\varepsilon^{2}-\gamma_{L}\gamma_{R}}{t_{L}\gamma_{L}}=1, \tag{108}\] which gives \[\frac{\varepsilon^{2}}{\gamma_{L}}-\gamma_{R}=t_{L}\left(\frac{t_{L}}{ \varepsilon}\right)^{N-2}. \tag{109}\] Let \(\left(t_{L}/\varepsilon\right)^{N-2}=c\) with \(c\) being an undetermined coefficient, and thus \[\varepsilon=t_{L}c^{-\frac{1}{N-2}}e^{-i\frac{2\pi m}{N-2}},\quad m=1,2,\ldots,N-2. \tag{109}\] Substituting Eq. (109) into Eq. (108), we obtain the physical condition about \(c\) \[\frac{t_{L}^{2}}{\gamma_{L}}c^{-\frac{2}{N-2}}e^{-i\frac{2\pi m}{N-2}}=t_{L}c+ \gamma_{R}, \tag{110}\] where we will label \(c_{m}\) as the solution concerning \(m\). Applying the propagating relation, we obtain \[\begin{pmatrix}\psi_{n+1}\\ \psi_{n}\end{pmatrix}=T^{n-1}\begin{pmatrix}\psi_{2}\\ \psi_{1}\end{pmatrix}=\Delta^{n-2}T\begin{pmatrix}\psi_{2}\\ \psi_{1}\end{pmatrix}=\begin{pmatrix}\left(\frac{\varepsilon}{t_{L}}\right)^ {n-1}\psi_{2}\\ \left(\frac{\varepsilon}{t_{L}}\right)^{n-2}\psi_{2}\end{pmatrix}, \tag{111}\] with \(n\in\mathscr{B}\). Together with the boundary equations (102) and (103) for \(t_{R}=0\), we finally obtain the eigenvectors with respect to energies \[\varepsilon_{m} =t_{L}c_{m}^{-\frac{1}{N-2}}e^{-i\frac{2\pi m}{N-2}},\quad m=1,2, \ldots,N,\] \[\psi_{n}^{m} =c_{m}^{-\frac{n-2}{N-2}}e^{-i\frac{2\pi m}{N-2}}(n-2)\psi_{2}^{ m},\quad n=3,4,\ldots,N,\] \[\psi_{1}^{m} =\frac{t_{L}}{\gamma_{L}}c_{m}^{-\frac{1}{N-2}}e^{-i\frac{2\pi m }{N-2}}\psi_{N}^{m}. 
\tag{112}\] Furthermore, set \(t_{L}=e^{\alpha},t_{R}=e^{-\alpha},\gamma_{L}=\mu e^{\alpha},\gamma_{R}=\mu e ^{-\alpha}\), then the strong non-reciprocity \(e^{\alpha}\gg e^{-\alpha}\) corresponds to the singular case (\(t_{R}\to 0\)). In the strong impurity case (\(\mu\gg e^{\pm\alpha}\)), the physical condition Eq. (110) reduces to \[-\mu e^{-2\alpha}=c, \tag{113}\] which leading to the solution according to Eq. (112) \[\varepsilon_{m} =e^{\alpha}e^{-(\kappa_{L}+ik_{m})},\] \[\psi_{n}^{(m)} =e^{-(\kappa_{L}+ik_{m})(n-2)}\psi_{2}^{(m)},\] \[|\psi_{1}^{(m)}|\sim|\frac{e^{2\alpha}}{\mu^{2}}|\ll 1, \tag{114}\] where \(n=3,4,\ldots,N\), \(\kappa_{L}=\left(\log\mu-2\alpha\right)/(N-2)\), and \(k_{m}=\left((2m+1)\pi\right)/(N-2)\). We next consider the limit case \(\gamma_{R}\to 0\), and the condition Eq. (108) reduces to \[\left(\frac{\varepsilon}{t_{L}}\right)^{N}=\frac{\gamma_{L}}{t_{L}}. \tag{115}\] Then, the eigenvectors with corresponding energies read \[\varepsilon_{m} =t_{L}^{\frac{1}{N}(N-1)}\gamma_{L}^{\frac{1}{N}}e^{i\frac{2\pi m }{N}},\] \[\psi_{n}^{(m)} =\left(\frac{\gamma_{L}}{t_{L}}\right)^{\frac{n-1}{N}}e^{i\frac{ 2\pi m}{N}(n-1)}\psi_{1}^{(m)}, \tag{116}\] with \(m=1,2,\ldots,N\) and \(n=2,3,\ldots,N\), which are similar to the results of the \(t_{L}=\gamma_{L}=0\) case in Ref. [56]. The weak impurity case (\(\mu\ll 1\)) corresponds the \(\gamma_{R}\to 0\) limit, the solution Eq. (116) becomes \[\varepsilon_{m} =e^{\alpha}e^{i(k_{m}^{\prime}+i\kappa_{L}^{\prime})},\] \[\psi_{n}^{(m)} =e^{i(k_{m}^{\prime}+i\kappa_{L}^{\prime})(n-1)}\psi_{1}^{(m)}, \tag{117}\] where \(\kappa_{L}^{\prime}=-\log\mu/N\) and \(k_{m}^{\prime}=2\pi m/N\). Noteworthily, Eqs. (114) and (117) are exactly equivalent to the SFL solutions in Ref. [53]. ### The case with nonsingular transfer matrix Through the boundary equations (44)(45), we can imitate Eq. (43) to derive the physical condition in current case \[\begin{pmatrix}\psi_{2}\\ \psi_{1}\end{pmatrix}=KT^{N-1}\begin{pmatrix}\psi_{2}\\ \psi_{1}\end{pmatrix}, \tag{46}\] where \[K=\begin{pmatrix}\frac{\varepsilon}{\gamma_{L}}&-\frac{\gamma_{R}}{t_{L}}\\ \frac{\gamma_{L}}{\gamma_{L}}&0\end{pmatrix}. \tag{47}\] Consequently, \[\operatorname{tr}\left(KT^{N-1}\right)=1+\det\left(KT^{N-1}\right)=1+\frac{ \gamma_{R}}{\gamma_{L}}\Gamma^{N-1}. \tag{48}\] On the other hand, utilizing the formula Eq. (13) of nonsingular \(T\), we obtain (\(z=\cos\phi\)) \[\operatorname{tr}\left(KT^{N-1}\right)=\Gamma^{\frac{N-1}{2}}\left[\frac{U_{N -2}(z)}{\sqrt{\Gamma}}\operatorname{tr}\left(KT\right)-U_{N-3}(z)\text{tr} \left(K\right)\right]. \tag{49}\] Together with Eq. (39) and after some algebra, we obtain the final form of physical condition \[\begin{pmatrix}\frac{\gamma_{L}}{t_{L}}\Gamma^{-\frac{N}{2}}+\frac{\gamma_{R} }{t_{R}}\Gamma^{\frac{N}{2}}\end{pmatrix}\sin\phi=\sin\left((N+1)\phi\right)- \frac{\gamma_{L}\gamma_{R}}{t_{L}t_{R}}\sin\left((N-1)\phi\right), \tag{50}\] which is exactly the result in Ref. [64]. The solutions of \(\phi\) in Eq. (50) are real or complex values for different parameters according to Ref. [64], and the imaginary part of the complex solution \(\phi=\phi_{R}+i\phi_{I}\) can possibly take the form \(\phi_{I}=c/N\) with \(c\) dependent on \(N\) and \(\phi_{R}\) as before. The OBC condition corresponding to \(\gamma_{L}=\gamma_{R}=0\) reduces Eq. (50) to \(\sin\left((N+1)\phi\right)=0\), which figures out the solutions \(\phi=l\pi/(N+1)\in[0,\pi]\) with \(l=1,2,\ldots,N\). 
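For a quick cross-check (our illustration), the closed-form OBC solutions \(\phi=l\pi/(N+1)\), with energies \(\varepsilon=2\sqrt{t_{L}t_{R}}\cos\phi\) via Eq. (15), reproduce the numerically obtained spectrum of the Hatano-Nelson chain:

```python
import numpy as np

N, t_L, t_R = 20, 1.5, 0.5
H = np.zeros((N, N))
for n in range(N - 1):
    H[n, n + 1], H[n + 1, n] = t_L, t_R      # OBC Hatano-Nelson chain

numeric = np.sort(np.linalg.eigvals(H).real)                # spectrum is real for t_L t_R > 0
phi = np.arange(1, N + 1) * np.pi / (N + 1)
analytic = np.sort(2.0 * np.sqrt(t_L * t_R) * np.cos(phi))
print("max deviation:", np.abs(numeric - analytic).max())   # should be numerically small
```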
The eigenstate concerning energy \(\varepsilon=2\sqrt{t_{L}t_{R}}\cos\phi\) is deduced as \[\begin{pmatrix}\psi_{n+1}\\ \psi_{n}\end{pmatrix}=T^{n}\begin{pmatrix}1\\ 0\end{pmatrix}=\frac{1}{\sin\phi}\begin{pmatrix}\Gamma^{n/2}\sin\left((n+1)\phi \right)\\ \Gamma^{(n-1)/2}\sin\left(n\phi\right)\end{pmatrix}, \tag{51}\] and normalized as \(\psi_{n}=\mathcal{N}\Gamma^{n/2}\sin\left(n\phi\right)\) with \(\mathcal{N}\) being the normalization coefficient, which is exact the bulk eigenstate of pure finite-size NHSE. We can conceal the NHSE factor \(\Gamma^{n/2}\) through a generalized gauge transformation and obtain the target Hamiltonian \[\mathcal{H}_{it}=S^{-1}\mathcal{H}_{hn}S=\sum_{n=1}^{N-1}t\left(c_{n}^{\dagger }c_{n+1}+c_{n+1}^{\dagger}c_{n}\right)+r^{N-1}\gamma_{R}c_{1}^{\dagger}c_{N}+ \frac{\gamma_{L}}{r^{N-1}}c_{N}^{\dagger}c_{1}, \tag{52}\] where \(S=\text{diag}\left\{r,r^{2},\ldots,r^{N}\right\}\) is the transformation matrix, \(t=\sqrt{t_{L}t_{R}}\), and \(r=\sqrt{t_{R}/t_{L}}\). We identify that \(r^{N-1}\gamma_{R}=\delta+\gamma\) and \(r^{-(N-1)}\gamma_{L}=\delta-\gamma\) for fixed \(t_{L},t_{R}\), then Eq. (52) is exactly the focused Hamiltonian in Ref. [54], which is a original Hermitian Hamiltonian with non-Hermitian boundary impurity \((\delta+\gamma)c_{1}^{\dagger}c_{N}+(\delta-\gamma)c_{N}^{\dagger}c_{1}\). The transfer matrix of Eq. (52) becomes \[T_{it}=\begin{pmatrix}\frac{\varepsilon}{t}&-1\\ 1&0\end{pmatrix}, \tag{53}\] with \(\Gamma_{it}=1\) and \(\Delta_{it}=\varepsilon/t\), and the physical condition of \(\mathcal{H}_{it}\) is also Eq. (50). In the PT-broken region \(\gamma\in[|\delta-t|,\delta+t]\), the solutions of Eq. (50) are the coexistence of real and complex values, and specially at \(\gamma=\gamma_{a}=\sqrt{\delta^{2}-t^{2}}\), the complex solutions are \(\phi=2\pi m/N-i\log\mu/N\) with \(m=1,2\ldots,N\) and \(\mu=\delta/t+\sqrt{(\delta/t)^{2}-1}\)[54]. The eigenstate concerning energy \(\varepsilon=2t\cos\phi\) of Eq. (52) is derived as \[\begin{pmatrix}\psi_{n+1}\\ \psi_{n}\end{pmatrix}=\left[A_{L}(\phi)e^{i(n-1)\phi\kappa}e^{-(n-1)\phi_{I}}+ A_{R}(\phi)e^{-i(n-1)\phi_{R}}e^{(n-1)\phi_{I}}\right]\begin{pmatrix}\psi_{2}\\ \psi_{1}\end{pmatrix},\quad n=1,2,\ldots,N-1, \tag{54}\] where \[A_{L}(\phi) =\frac{T_{it}-e^{-i\phi}\mathbb{1}}{2i\sin\phi},\] \[A_{R}(\phi) =\frac{-T_{it}+e^{i\phi}\mathbb{1}}{2i\sin\phi}. \tag{101}\] The real (complex) solutions of \(\phi\) correspond to the ESs (PSF effect) of Eq. (100), and accordingly correspond to the pure NHSE (HSSF effect) of \(\mathcal{H}_{hn}\) after the inverse of gauge transformation. ## Appendix E The emergent HSSF modes of NH-SSH model with boundary impurity We again perform the generalized gauge transformation same as that in Sec. IV.2 on \(\mathcal{H}_{nssh}\) together with boundary impurity, and obtain the target Hamiltonian \(\bar{\mathcal{H}}_{imp}\), the Hamiltonian \(\bar{\mathcal{H}}\) in Sec. IV.2 together with boundary impurity strength \(\bar{\gamma}_{L}=r^{-N}\gamma_{L},\bar{\gamma}_{R}=r^{N}\gamma_{R}\). The solutions of the emergent PSF modes are Eq. (100) with \(\bar{\Gamma}=1\), \(\varphi=\bar{K}_{R}\Phi_{1}\), where \[\Phi_{1}=\begin{pmatrix}\psi_{1}\\ \psi_{2N}\end{pmatrix},\quad\bar{K}_{R}=\begin{pmatrix}1&0\\ 0&\bar{\gamma}_{R}\end{pmatrix}, \tag{102}\] and \(\psi\) is the numerical \(1\times 2N\) eigenvector. The energy spectrum and the comparison between analytic and numerical results of the PSF modes are illustrated in Fig. 
7 for selected parameters, and the two match as expected. Therefore, the PSF effect of \(\bar{\mathcal{H}}_{imp}\) implies the finite-size HSSF effect of \(\mathcal{H}_{nssh}\) with boundary impurity.
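Finally, the generalized gauge transformation underlying both this appendix and Eq. (52) can be verified in a few lines; the sketch below (our illustration, with arbitrarily chosen parameters) checks that \(S^{-1}\mathcal{H}_{hn}S\) reproduces the reciprocal chain with the rescaled impurity bonds.

```python
import numpy as np

N, t_L, t_R, g_L, g_R = 14, 1.2, 0.8, 0.3, 1.1      # illustrative parameters
r, t = np.sqrt(t_R / t_L), np.sqrt(t_L * t_R)

# Hatano-Nelson chain with boundary impurity: gamma_R c_1^dag c_N and gamma_L c_N^dag c_1
H = np.zeros((N, N))
for n in range(N - 1):
    H[n, n + 1], H[n + 1, n] = t_L, t_R
H[0, N - 1], H[N - 1, 0] = g_R, g_L

# generalized gauge transformation with S = diag(r, r^2, ..., r^N)
S = np.diag(r ** np.arange(1, N + 1))
H_gauged = np.linalg.inv(S) @ H @ S

# direct construction of Eq. (52): reciprocal hopping t and rescaled impurity bonds
H_it = np.zeros((N, N))
for n in range(N - 1):
    H_it[n, n + 1] = H_it[n + 1, n] = t
H_it[0, N - 1] = r ** (N - 1) * g_R
H_it[N - 1, 0] = r ** (1 - N) * g_L

print(np.allclose(H_gauged, H_it))    # True: the NHSE factor is absorbed into S
```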
2309.17094
How Easy it is to Know How: An Upper Bound for the Satisfiability Problem
We investigate the complexity of the satisfiability problem for a modal logic expressing `knowing how' assertions, related to an agent's abilities to achieve a certain goal. We take one of the most standard semantics for this kind of logics based on linear plans. Our main result is a proof that checking satisfiability of a `knowing how' formula can be done in $\Sigma_2^P$. The algorithm we present relies on eliminating nested modalities in a formula, and then performing multiple calls to a satisfiability checking oracle for propositional logic.
Carlos Areces, Valentin Cassano, Raul Fervari, Pablo Castro, Andres Saravia
2023-09-29T09:42:45Z
http://arxiv.org/abs/2309.17094v1
# How Easy it is to Know How: ###### Abstract We investigate the complexity of the satisfiability problem for a modal logic expressing 'knowing how' assertions, related to an agent's abilities to achieve a certain goal. We take one of the most standard semantics for this kind of logics based on linear plans. Our main result is a proof that checking satisfiability of a 'knowing how' formula can be done in \(\Sigma_{2}^{P}\). The algorithm we present relies on eliminating nested modalities in a formula, and then performing multiple calls to a satisfiability checking oracle for propositional logic. Keywords:Knowing HowComplexitySatisfiability. ## 1 Introduction The term 'Epistemic Logic' [15] encompasses a family of logical formalisms aimed at reasoning about the knowledge of autonomous agents about a given scenario. Originally, epistemic logics restricted their attention to so-called _knowing that_, i.e., the capability of agents to know about certain facts. More recently, several logics have been proposed to reason about alternative forms of knowledge (see [32] for a discussion). For instance, _knowing whether_ is looked into in [7]; _knowing why_ in [34]; and _knowing the value_ in [12, 3], just to mention a few. Finally, a novel approach focuses on _knowing how_ -related to an agent's ability to achieve a goal [8]. This concept is particularly interesting, as it has been argued to provide a fresh way to reason about scenarios involving strategies in AI, such as those found in automated planning (see, e.g., [6]). The first attempts to capture knowing how were through a combination of 'knowing that' and actions (see, e.g., [25, 26, 18, 14]). However, it has been discussed, e.g., in [16, 13], that this idea does not lead to an accurate representation of knowing how. In response, a new logic is presented in [31, 33] featuring an original modality specifically tailored to model the concept of 'knowing how'. In a nutshell, an agent knows how to a achieve a goal \(\varphi\) under some initial condition \(\psi\), written \(\mathsf{Kh}(\psi,\varphi)\), if and only if there exists a 'proper' plan \(\pi\), i.e., a finite sequence of actions, that unerringly leads the agent from situations in which \(\psi\) holds only to situations in which \(\varphi\) holds. A 'proper' plan is taken as one whose execution never aborts, an idea that takes inspiration from the notion of _strong executability_ from contingent planning [29]. As discussed in, e.g., [17, 13], the quantification pattern we just described cannot be captured using logics with 'knowing that' modalities and actions. For this reason, the new \(\mathsf{Kh}\) modality from [31, 33] has reached a certain consensus in the community as an accurate way of modelling 'knowing how'. Moreover, it has paved the way to a deep study of knowing how, and to a rich family of logics capturing variants of the initial reading. Some examples of which are a ternary modality of knowing how with intermediate constraints [21]; a knowing how modality with weak plans [19]; a local modality for strategically knowing how [9] (and some relatives, see [28, 27]); and, finally, a knowing how modality which considers an epistemic indistinguishability relation among plans [1]. As witnessed by all the ideas it triggered, the foundational work in [31, 33] greatly improved the understanding of 'knowing how' from a logical standpoint. 
The literature on logics of 'knowing how' explores a wide variety of results, such as axiom systems (in most of the works cited above), proof methods [23, 20], and expressivity [10], just to name a few. Yet, if we consider 'knowing how' logics as suitable candidates for modelling problems in strategic reasoning, it is important to consider how difficult (or how easy) it is to use these logics for reasoning tasks. There have been some recent developments on the complexity of logics with 'knowing how' modalities. For instance, model-checking for the \(\mathsf{Kh}\) modality above, and some of its variants, is investigated in [5]. The complexity of model-checking and the decidability status of satisfiability for the local 'knowing how' modality from [9], and some of its generalizations, is explored in [24]. These two problems are also explored for 'knowing how' with epistemic indistinguishability in [1]. Notwithstanding, the complexity of the satisfiability problem for the original \(\mathsf{Kh}\) modality from [31, 33] is still unknown ([22] presents only a decidability statement). In this work, we shed some light into this matter. Our contribution is to provide an upper for the satisfiability problem of the knowing how logic from [31, 33], called here \(\mathsf{L}_{\mathsf{Kh}}\). More precisely, we introduce an algorithm for deciding satisfiability that is in \(\Sigma^{\mathsf{P}}_{2}\), the second level of the polynomial hierarchy (\(\mathsf{PH}\)) [30]. In short, this complexity class can be though as those problems invoking an \(\mathsf{NP}\) oracle a polynomial number of times, and whose underlying problem is also in \(\mathsf{NP}\) (see e.g. [2]). Currently, it is unknown whether \(\mathsf{PH}\) collapses, or it is strictly contained in \(\mathsf{PSpace}\). This being said, having an algorithm in a lower level of \(\mathsf{PH}\) is generally understood as a good indication that the problem is close to, e.g., \(\mathsf{NP}\) or \(\mathsf{Co}\)-\(\mathsf{NP}\). It is easy to see that \(\mathsf{NP}\) is a lower bound for checking satisfiability in \(\mathsf{L}_{\mathsf{Kh}}\), as it extends propositional logic. For an upper bound, a natural candidate is \(\mathsf{PSpace}\), as for instance the model-checking problem for \(\mathsf{L}_{\mathsf{Kh}}\) is \(\mathsf{PSpace}\)-complete [5], a potentially higher complexity of what is proved here for satisfiability. We argue that this is due to the fact that in model-check the full expressivity of the semantics is exploited (specially related to properties of regular languages), whereas for satisfiability, all this expressivity is completely hidden. Although our procedure does not lead to a tight complexity characterization, it gives us an interesting upper bound towards filling this gap. We put forth that our result is not obvious. To obtain it, we combine techniques such as defining a normal form to eliminate nested modalities, calling an \(\mathsf{NP}\) oracle to guess propositional valuations and computing a closure over a matrix of formulas to combine them, adapting the Floyd-Warshall algorithm [4]. The article is organized as follows. In Sec. 2 we introduce some notation and the basic definitions of the logic \(\mathsf{L}_{\mathsf{Kh}}\). Sec. 3 is devoted to incrementally show our result. Finally, in Sec. 4 we provide some remarks and future lines of research. ## 2 Knowing How Logic From here onwards, we assume \(\mathsf{Prop}\) is a denumerable set of _proposition symbols_, and \(\mathsf{Act}\) is a denumerable set of _action symbols_. 
We refer to \(\pi\in\mathsf{Act}^{*}\) as a _plan_. Definition 1: The _language_\(\mathsf{L}_{\mathsf{Kh}}\) is determined by the grammar: \[\varphi,\psi\coloneqq p\mid\neg\varphi\mid\varphi\vee\psi\mid\mathsf{Kh}( \varphi,\psi),\] where \(p\in\mathsf{Prop}\). We use \(\bot\), \(\top\), \(\varphi\wedge\psi\), \(\varphi\rightarrow\psi\), and \(\varphi\leftrightarrow\psi\) as the usual abbreviations; \(\mathsf{A}\varphi\) is defined as \(\mathsf{Kh}(\neg\varphi,\bot)\) (see e.g. [31, 33]), while \(\mathsf{E}\varphi\) abbreviates \(\neg\mathsf{A}\neg\varphi\). The elements of \(\mathsf{L}_{\mathsf{Kh}}\) are _formulas_. We read \(\mathsf{Kh}(\varphi,\psi)\) as: "_the agent knows how to achieve \(\psi\) given \(\varphi\)"_. We call \(\varphi\) and \(\psi\), the precondition and the postcondition of \(\mathsf{Kh}(\varphi,\psi)\), respectively. We read \(\mathsf{A}\varphi\) as: "\(\varphi\)_holds anywhere_"; and its dual \(\mathsf{E}\varphi\) as: "\(\varphi\)_holds somewhere_". As it is usually done, we refer to \(\mathsf{A}\) and \(\mathsf{E}\) as the _universal_ and _existential_ modalities [11]. Formulas of \(\mathsf{L}_{\mathsf{Kh}}\) are interpreted with respect to _labelled transition systems_ over so-called _strongly executable plans_. Sometimes, we refer to LTS as _models_. We introduce their definitions below. Definition 2: A _labelled transition system (LTS)_ is a tuple \(\mathfrak{M}=\langle\mathrm{S},\mathrm{R},\mathrm{V}\rangle\) s.t.: 1. \(\mathrm{S}\) is a non-empty set of _states_; 2. \(\mathrm{R}=\{\mathrm{R}_{a}\mid a\in\mathsf{Act}\}\) is a collection of binary relations on \(\mathrm{S}\); and 3. \(\mathrm{V}:\mathsf{Prop}\to 2^{\mathrm{S}}\) is a _valuation function_. Definition 3: Let \(\{\mathrm{R}_{a}\mid a\in\mathsf{Act}\}\) be a collection of binary relations on \(\mathrm{S}\). Let \(\varepsilon\in\mathsf{Act}^{*}\) be the empty plan. We define: \(\mathrm{R}_{\varepsilon}=\{(s,s)\mid s\in\mathrm{S}\}\), and for every \(\pi\in\mathsf{Act}^{*}\), and \(a\in\mathsf{Act}\), \(\mathrm{R}_{\pi a}=\mathrm{R}_{\pi}\,\mathrm{R}_{a}\) (i.e., their composition). For every relation \(\mathrm{R}_{\pi}\), and \(T\subseteq\mathrm{S}\), define \(\mathrm{R}_{\pi}(T)=\{(s,t)\mid s\in T\text{ and }(s,t)\in\mathrm{R}_{\pi}\}\), and \(\mathrm{R}_{\pi}(t)=\mathrm{R}_{\pi}(\{t\})\). The notion of _strong executability_ determines the "adequacy" of a plan. Strong executability takes inspiration from conformant planning [29], and its justification is discussed at length in [31]. **Definition 4**.: _Let \(\pi=a_{1}\ldots a_{n}\in\mathsf{Act}^{*}\), and \(1\leq i\leq j\leq n\), we denote: \(\pi_{i}=a_{i}\); \(\pi[i,j]=a_{i}\ldots a_{j}\); and \(|\pi|=n\). Moreover, let \(\mathfrak{M}=\langle\mathrm{S},\mathrm{R},\mathrm{V}\rangle\) be an LTS; we say that \(\pi\) is strongly executable (SE) at \(s\in\mathrm{S}\), iff for all \(i\in[1,|\pi|-1]\) and all \(t\in\mathrm{R}_{(\pi[1,i])}(s)\), it follows that \(\mathrm{R}_{\pi_{(i+1)}}(t)\neq\emptyset\). The set of all states at which \(\pi\) is strongly executable is defined as \(\mathrm{SE}(\pi)=\{s\mid\pi\text{ is SE at }s\}\). Note: \(\mathrm{SE}(\varepsilon)=\mathrm{S}\)._ We illustrate the notions we just introduced with a simple example. Example 1: Let \(\mathfrak{M}=\langle\mathrm{S},\mathrm{R},\mathrm{V}\rangle\) be the LTS depicted below and \(\pi=ab\). We have, \(\mathrm{R}_{\pi}(s)=\{u\}\), and \(\mathrm{R}_{\pi[1,1]}(s)=\mathrm{R}_{a}(s)=\{t,v\}\). 
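To make Defs. 2-4 concrete, the following sketch (ours, not part of the paper) encodes a finite LTS and the strong-executability test; since \(\mathsf{Kh}\) quantifies over all of \(\mathsf{Act}^{*}\), the witness search below only enumerates plans up to a fixed length, so a positive answer certifies a witness while a negative one is inconclusive. Following the reading used in the examples below, the executability check also requires the first action of a plan to be executable at the initial state. The demo model is our own stand-in for the LTS of Ex. 1 (whose figure is not reproduced in this extraction), with a valuation chosen to agree with Ex. 2.

```python
from itertools import product

class LTS:
    """A finite labelled transition system (Def. 2)."""

    def __init__(self, states, rel, val):
        self.S = set(states)   # states
        self.R = rel           # dict: action -> set of (source, target) pairs
        self.V = val           # dict: proposition -> set of states where it holds

    def step(self, sources, a):
        """Image of a set of states under R_a."""
        return {t for (s, t) in self.R.get(a, set()) if s in sources}

    def run(self, pi, sources):
        """R_pi(T) for a plan pi (Def. 3)."""
        current = set(sources)
        for a in pi:
            current = self.step(current, a)
        return current

    def strongly_executable(self, pi, s):
        """Def. 4: every partial execution of pi started at s can be continued."""
        current = {s}
        for a in pi:
            if any(not self.step({t}, a) for t in current):
                return False
            current = self.step(current, a)
        return True

    def kh_witness(self, pre, post, max_len=3):
        """Search a witness plan for Kh(pre, post); pre/post are given as state sets.
        Only plans up to length max_len are tried (bounded, one-sided check)."""
        for length in range(max_len + 1):
            for pi in product(sorted(self.R), repeat=length):
                if all(self.strongly_executable(pi, s) for s in pre) \
                        and self.run(pi, pre) <= post:
                    return pi
        return None

# A stand-in for the LTS of Ex. 1 (valuation chosen to agree with Ex. 2)
m = LTS(states={"s", "t", "u", "v"},
        rel={"a": {("s", "t"), ("s", "v")}, "b": {("t", "u")}},
        val={"p": {"s"}, "q": {"t"}, "r": {"t", "v"}})
print(m.kh_witness(m.V["p"], m.V["r"]))   # ('a',) -- a witness for Kh(p, r)
print(m.kh_witness(m.V["p"], m.V["q"]))   # None  -- no witness found up to length 3
```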
It can be seen that \(s\in\mathrm{SE}(a)\); while \(s\notin\mathrm{SE}(\pi)\) -since \(v\in\mathrm{R}_{\pi[1,1]}(s)\) and \(\mathrm{R}_{\pi_{(2)}}(v)=\mathrm{R}_{b}(v)=\emptyset\). Finally, we have that \(\mathrm{SE}(\varepsilon)=\mathrm{S}\), \(\mathrm{SE}(a)=\{s\}\) and \(\mathrm{SE}(ab)=\emptyset\). We are now ready to introduce the semantics of \(\mathsf{L}_{\mathsf{Kh}}\), based on [31, 33]. Definition 5: Let \(\mathfrak{M}=\langle\mathrm{S},\mathrm{R},\mathrm{V}\rangle\) be an \(\mathrm{LTS}\), we define \(\llbracket\varphi\rrbracket^{\mathfrak{M}}\) inductively as: \[\llbracket p\rrbracket^{\mathfrak{M}} =\mathrm{V}(p) \llbracket\neg\varphi\rrbracket^{\mathfrak{M}} =\mathrm{S}\setminus\llbracket\varphi\rrbracket^{\mathfrak{M}} \llbracket\varphi\vee\psi\rrbracket^{\mathfrak{M}} =\llbracket\varphi\rrbracket^{\mathfrak{M}}\cup\llbracket\varphi \rrbracket^{\mathfrak{M}}\] \[\llbracket\mathsf{Kh}(\varphi,\psi)\rrbracket^{\mathfrak{M}} =\begin{cases}\mathrm{S}&\text{if exists }\pi\in\mathsf{Act}^{*}s.t.\ \llbracket\varphi \rrbracket^{\mathfrak{M}}\subseteq\mathrm{SE}(\pi)\text{ and }\mathrm{R}_{\pi}(\llbracket\varphi \rrbracket^{\mathfrak{M}})\subseteq\llbracket\psi\rrbracket^{\mathfrak{M}} \\ \emptyset&\text{otherwise.}\end{cases}\] We say that a plan \(\pi\in\mathsf{Act}^{*}\) is a _witness_ for \(\mathsf{Kh}(\varphi,\psi)\) iff \(\llbracket\varphi\rrbracket^{\mathfrak{M}}\subseteq\mathrm{SE}(\pi)\) and \(\mathrm{R}_{\pi}(\llbracket\varphi\rrbracket^{\mathfrak{M}})\subseteq \llbracket\psi\rrbracket^{\mathfrak{M}}\). We use \((\llbracket\varphi\rrbracket^{\mathfrak{M}})^{\mathsf{C}}\) instead of \(\mathrm{S}\setminus\llbracket\varphi\rrbracket^{\mathfrak{M}}\). We write \(\mathfrak{M}\Vdash\varphi\) as an alternative to \([\varphi]^{\mathfrak{M}}=\mathrm{S}\); and \(\mathfrak{M},s\Vdash\varphi\) as an alternative to \(s\in\llbracket\varphi\rrbracket^{\mathfrak{M}}\). Example 2: Let \(\mathfrak{M}\) be the LTS from Ex. 1. From Def. 5, we have \(\llbracket\mathsf{Kh}(p,r)\rrbracket^{\mathfrak{M}}=\mathrm{S}\) (using \(a\) as a witness), while \(\llbracket\mathsf{Kh}(p,q)\rrbracket^{\mathfrak{M}}=\emptyset\) (there is no witness for the formula). We included the universal modality \(\mathsf{A}\) as abbreviation since formulas of the form \(\mathsf{A}\varphi\) play a special role in our treatment of the complexity of the satisfiability problem for \(\mathsf{L}_{\mathsf{Kh}}\). It is proven in, e.g., [31, 33], that \(\mathsf{A}\varphi\) and \(\mathsf{E}\varphi\) behave as the universal and existential modalities ([11]), respectively. Recall that \(\mathsf{A}\varphi\) is defined as \(\mathsf{Kh}(\neg\varphi,\bot)\), which semantically states that \(\varphi\) holds everywhere in a model iff \(\neg\varphi\) leads always to impossible situations. Formulas of this kind are called here 'global'. Below, we formally restate the results just discussed. Proposition 1: _Let \(\mathfrak{M}=\langle\mathrm{S},\mathrm{R},\mathrm{V}\rangle\) and \(\psi\) and \(\chi\) be formulas s.t. \(\llbracket\chi\rrbracket^{\mathfrak{M}}=\emptyset\); then \(\llbracket\mathsf{Kh}(\psi,\chi)\rrbracket^{\mathfrak{M}}=\mathrm{S}\) iff \(\llbracket\neg\psi\rrbracket^{\mathfrak{M}}=\mathrm{S}\)._ Corollary 1: _Let \(\mathfrak{M}=\langle\mathrm{S},\mathrm{R},\mathrm{V}\rangle\) and a formula \(\varphi\), \(\mathfrak{M},s\Vdash\mathsf{A}\varphi\) iff \(\llbracket\varphi\rrbracket^{\mathfrak{M}}=\mathrm{S}\)._ We introduce now Prop. 2, which is of use in the rest of the paper. 
Proposition 2: _Let \(\psi,\psi^{\prime},\chi,\chi^{\prime}\) and \(\varphi\) be formulas, and \(\mathfrak{M}\) an LTS; then:_ 1. \([\![\psi^{\prime}]\!]^{\mathfrak{M}}\subseteq[\![\psi]\!]^{\mathfrak{M}}\) _and_ \([\![\chi]\!]^{\mathfrak{M}}\subseteq[\![\chi^{\prime}]\!]^{\mathfrak{M}}\) _implies_ \([\![\mathsf{Kh}(\psi,\chi)]\!]^{\mathfrak{M}}\subseteq[\![\mathsf{Kh}(\psi^{ \prime},\chi^{\prime})]\!]^{\mathfrak{M}}\)_;_ 2. \([\![\psi]\!]^{\mathfrak{M}}\subseteq[\![\psi^{\prime}]\!]^{\mathfrak{M}}\) _implies_ \(([\![\mathsf{Kh}(\varphi,\psi)]\!]^{\mathfrak{M}}\cap[\![\mathsf{Kh}(\psi^{ \prime},\chi)]\!]^{\mathfrak{M}})\subseteq[\![\mathsf{Kh}(\varphi,\chi)]\!]^{ \mathfrak{M}}\)_._ We conclude this section with some useful definitions. Definition 6: A formula \(\varphi\) is _satisfiable_, written \(\mathsf{Sat}(\varphi)\), iff there is \(\mathfrak{M}\) s.t. \([\![\varphi]\!]^{\mathfrak{M}}\neq\emptyset\). A finite set \(\Phi=\{\varphi_{1},\ldots,\varphi_{n}\}\) of formulas is _satisfiable_, written \(\mathsf{Sat}(\Phi)\), iff \(\mathsf{Sat}(\varphi_{1}\wedge\cdots\wedge\varphi_{n})\). For convenience, we define \(\mathsf{Sat}(\emptyset)\) as true. We use \(\mathsf{Unsat}(\varphi)\) iff \(\mathsf{Sat}(\varphi)\) is false; similarly, \(\mathsf{Unsat}(\Phi)\) iff \(\mathsf{Sat}(\Phi)\) is false. Finally, whenever \(\mathsf{Sat}(\varphi)\) iff \(\mathsf{Sat}(\varphi^{\prime})\), we call \(\varphi\) and \(\varphi^{\prime}\)_equisatisfiable_, and write \(\varphi\equiv_{\mathsf{Sat}}\varphi^{\prime}\). Definition 7: The modal depth of a formula \(\varphi\), written \(\mathsf{md}(\varphi)\), is defined as: \[\mathsf{md}(\varphi)=\begin{cases}0&\text{if $\varphi\in\mathsf{Prop}$}\\ \mathsf{md}(\psi)&\text{if $\varphi=\neg\psi$}\\ \max(\mathsf{md}(\psi),\mathsf{md}(\chi))&\text{if $\varphi=\psi\vee\chi$}\\ 1+\max(\mathsf{md}(\psi),\mathsf{md}(\chi))&\text{if $\varphi=\mathsf{Kh}(\psi, \chi)$.}\end{cases}\] We use \(\mathsf{sf}(\varphi)\) to indicate the set of subformulas of \(\varphi\). We say that \(\mathsf{Kh}(\psi,\chi)\) is a leaf of \(\varphi\) iff \(\mathsf{Kh}(\psi,\chi)\in\mathsf{sf}(\varphi)\) and \(\mathsf{md}(\psi)=\mathsf{md}(\chi)=0\) (i.e., \(\mathsf{md}(\mathsf{Kh}(\psi,\chi)=1)\)). In words, the modal depth of a formula is the length of the longest sequence of nested modalities in the formula; whereas a leaf is a subformula of depth one. Notice that, since \(\mathsf{A}\varphi\) is a shortcut for \(\mathsf{Kh}(\neg\varphi,\bot)\), we have \(\mathsf{md}(\mathsf{A}\varphi)=1+\mathsf{md}(\varphi)\). Example 3: Let \(\varphi=\mathsf{Kh}(p,\mathsf{Kh}(\neg q,p\to q))\vee\mathsf{Kh}(r,t)\); it can easily be checked that \(\mathsf{md}(\varphi)=2\) and that \(\mathsf{Kh}(\neg q,p\to q)\) and \(\mathsf{Kh}(r,t)\) are its modal leaves. ## 3 An Upper Bound for the Satisfiability Problem of \(\mathsf{L}_{\mathsf{Kh}}\) In this section we establish an upper bound on the complexity of the satisfiability problem for \(\mathsf{L}_{\mathsf{Kh}}\), which is the main result of our paper. We start with some preliminary definitions and results. Proposition 3: _Let \(\varphi^{\prime}\) be the result of replacing all occurrences of a leaf \(\theta\) in \(\varphi\) by a proposition symbol \(k\notin\mathsf{sf}(\varphi)\); it follows that \(\varphi\equiv_{\mathsf{Sat}}(\varphi^{\prime}\wedge(\mathsf{A}k\leftrightarrow \theta))\)._ We say that \(\varphi\) is in _leaf normal form_ iff \(\mathsf{md}(\varphi)\leq 1\). Prop. 
4 tells us that we can put any formula into an equisatisfiable formula in leaf normal form. The function Flatten in Alg. 1 tells us how to do this in polynomial time. Proposition 4: _Alg. 1 is in P; on input \(\varphi\), it outputs \(\varphi_{0}\) and \(\varphi_{1}\) such that \(\mathsf{md}(\varphi_{0})=0\), \(\mathsf{md}(\varphi_{1})=1\), and \(\varphi\equiv_{\mathsf{Sat}}(\varphi_{0}\wedge\varphi_{1})\)._ The result in Prop. 4 allows us to think of the complexity of the satisfiability problem for \(\mathsf{L}_{\mathsf{Kh}}\) by restricting our attention to formulas in leaf normal form. In turn, this enables us to work towards a solution in terms of subproblems. More precisely, given \(\varphi_{0}\) and \(\varphi_{1}\) in the leaf normal form that results from Flatten, the subproblems are (i) determining the satisfiability of \(\varphi_{0}\); and (ii) determining the satisfiability of \(\varphi_{1}\) based on a solution to (i). The solution to (i) is well-known, \(\varphi_{0}\) is a propositional formula. We split the solution of (ii) into (a) determining when formulas of the form \(\mathsf{Kh}(\psi_{1},\chi_{1})\wedge\cdots\wedge\mathsf{Kh}(\psi_{n},\chi_{n})\) are satisfiable, see Prop. 5; (b) determining when formulas of the form \(\neg\mathsf{Kh}(\psi_{1}^{\prime},\chi_{1}^{\prime})\wedge\cdots\wedge\neg \mathsf{Kh}(\psi_{m}^{\prime},\chi_{m}^{\prime})\) are satisfiable, see Prop. 7; and (c) combining (a) and (b), see Prop. 11. We present (a), (b), and (c), in a way such that they incrementally lead to a solution to the satisfiability problem for \(\mathsf{L}_{\mathsf{Kh}}\). Finally, in Prop. 12, we show how to combine (i) and (ii) to obtain an upper bound on the complexity of this problem. Let us start by solving the first problem: checking whether a conjunction \(\varphi\) of positive formulas in leaf normal form are satisfiable altogether. In a nutshell, we show that solving this problem boils down to building a set \(I\) of the preconditions of those subformulas whose postconditions are falsified in the context of \(\varphi\), and checking whether the formulas in \(I\) are satisfiable or not. Intuitively, the formulas in \(I\) correspond to 'global' formulas. We made precise these ideas in Prop. 5. Proposition 5: _Let \(\varphi=\mathsf{Kh}(\psi_{1},\chi_{1})\wedge\cdots\wedge\mathsf{Kh}(\psi_{n}, \chi_{n})\) be such that \(\mathsf{md}(\varphi)=1\); and let the sets \(I_{0},\ldots,I_{n}\) be defined as follows:_ \[I_{i}=\begin{cases}\{k\in[1,n]\mid\mathsf{Unsat}(\chi_{k})\}&\text{if $i=0$},\\ I_{(i-1)}\cup\{k\in[1,n]\mid\mathsf{Unsat}(\{\neg\psi_{k^{\prime}}\mid k^{ \prime}\in I_{(i-1)}\}\cup\{\chi_{k}\})\}&\text{if $i>0$},\end{cases}\] _where \(i\in[0,n]\); further, let \(I=I_{n}\). Then: (1) \(\mathsf{Sat}(\varphi)\) iff (2) \(\mathsf{Sat}(\bigwedge_{i\in I}\neg\psi_{i})\)._ Proof: (\(\Rightarrow\)) Suppose that \(\mathsf{Sat}(\varphi)\) holds, i.e., exists \(\mathfrak{M}\) s.t. \(\llbracket\varphi\rrbracket^{\mathfrak{M}}=\mathrm{S}\). From this assumption, we know that, for every \(j\in[1,n]\), \(\llbracket\mathsf{Kh}(\psi_{i},\chi_{i})\rrbracket^{\mathfrak{M}}=\mathrm{S}\). The proof is concluded if \(\llbracket\bigwedge_{i\in I}\neg\psi_{i}\rrbracket^{\mathfrak{M}}\neq\emptyset\). 
We obtain this last result with the help of the following auxiliary lemma: \[(*)\text{ for all }k\in I_{i},\,\llbracket\chi_{k}\rrbracket^{\mathfrak{M}}=\emptyset \text{ and }\llbracket\neg\psi_{k}\rrbracket^{\mathfrak{M}}=\mathrm{S}\] The lemma is obtained by induction on the construction of \(I_{i}\). The base case is direct. Let \(k\in I_{0}\); from the definition of \(I_{0}\), we get \(\mathsf{Unsat}(\chi_{k})\); this implies \(\llbracket\chi_{k}\rrbracket^{\mathfrak{M}}=\emptyset\); which implies \(\mathrm{S}=\llbracket\mathsf{Kh}(\psi_{k},\chi_{k})\rrbracket^{\mathfrak{M}}= \llbracket\mathsf{A}\neg\psi_{k}\rrbracket^{\mathfrak{M}}=\llbracket\neg\psi _{k}\rrbracket^{\mathfrak{M}}\). For the inductive step, let \(k\in I_{(i+1)}\setminus I_{i}\). From the Inductive Hypothesis, for all \(k^{\prime}\in I_{i}\), \(\llbracket\chi_{k^{\prime}}\rrbracket^{\mathfrak{M}}=\emptyset\) and \(\llbracket\neg\psi_{k^{\prime}}\rrbracket^{\mathfrak{M}}=\mathrm{S}\). This implies \((\dagger)\)\(\llbracket\bigwedge_{k^{\prime}\in I_{i}}\neg\psi_{k^{\prime}}\rrbracket^{ \mathfrak{M}}=\mathrm{S}\). From the definition of \(I_{(i+1)}\), \(\mathsf{Unsat}(\{\neg\psi_{k^{\prime}}\mid k^{\prime}\in I_{i}\}\cup\{\chi_{k}\})\). This is equivalent to \(\llbracket\!\bigwedge_{k^{\prime}\in I_{i}}\neg\psi_{k^{\prime}}\rrbracket^{ \mathfrak{M}}\subseteq\llbracket\!\neg\chi_{k}\rrbracket^{\mathfrak{M}}\). From \((\dagger)\), \(\mathrm{S}\subseteq\llbracket\!\neg\chi_{k}\rrbracket^{\mathfrak{M}}=\mathrm{S}\). Thus, \(\llbracket\!\chi_{k}\rrbracket^{\mathfrak{M}}=\emptyset\) and \(\llbracket\!\neg\psi_{k}\rrbracket^{\mathfrak{M}}=\mathrm{S}\). Since \(I=I_{n}\); using \((*)\) we get \(\llbracket\!\bigwedge_{i\in I}\neg\psi_{i}\rrbracket^{\mathfrak{M}}=\mathrm{S} \neq\emptyset\). This proves (2). \((\Leftarrow)\) The proof is by contradiction. Suppose (2) and \(\mathsf{Unsat}(\varphi)\). Then, for all \(\mathfrak{M}\), \((\dagger)\)\(\llbracket\varphi\rrbracket^{\mathfrak{M}}=\emptyset\). Let \(J=\{j\in[1,n]\mid\mathsf{Unsat}(\{(\bigwedge_{i\in I}\neg\psi_{i}),\psi_{j} \})\}\). Moreover, let \(\mathfrak{M}=\langle\mathrm{S},\mathrm{R},\mathrm{V}\rangle\) be s.t. \(\mathrm{S}\) is the smallest set containing all valuations that make \((\bigwedge_{i\in I}\neg\psi_{i})\) true. From (2), we know that \(\mathrm{S}\neq\emptyset\) and \(\llbracket\neg\psi_{k}\rrbracket^{\mathfrak{M}}=\mathrm{S}\) for all \(k\in I\). By induction on the construction of \(I=I_{n}\), we get that \(\llbracket\!\chi_{k}\rrbracket^{\mathfrak{M}}=\emptyset\) for all \(k\in I=\bigcup_{i=0}^{n}I_{i}\). The case for \(k\in I_{0}\) is direct since \(\mathsf{Unsat}(\chi_{k})\), thus \(\llbracket\!\chi_{k}\rrbracket^{\mathfrak{M}}=\emptyset\). For the inductive case, let \(k\in I_{i}\setminus I_{i-1}\), then \(\mathsf{Unsat}(\{\neg\psi_{k^{\prime}}\mid k^{\prime}\in I_{(i-1)}\}\cup\{ \chi_{k}\})\). This is equivalent to say that the implication \(((\bigwedge_{k^{\prime}\in I_{(i-1)}}\neg\psi_{k^{\prime}})\to\neg\chi_{k})\) is valid. Thus, \(\llbracket\!\bigwedge_{k^{\prime}\in I_{(i-1)}}\neg\psi_{k^{\prime}}\rrbracket^{ \mathfrak{M}}\subseteq\llbracket\!\neg\chi_{k}\rrbracket^{\mathfrak{M}}\). By hypothesis, \(\llbracket\!\bigwedge_{k^{\prime}\in I}\neg\psi_{k^{\prime}}\rrbracket^{ \mathfrak{M}}=\mathrm{S}\). 
Thus, \(\llbracket\!\bigwedge_{k^{\prime}\in I_{(i-1)}}\neg\psi_{k^{\prime}} \rrbracket^{\mathfrak{M}}=\mathrm{S}\), and we get \(\llbracket\!\neg\chi_{k}\rrbracket^{\mathfrak{M}}=\mathrm{S}\) and \(\llbracket\!\chi_{k}\rrbracket^{\mathfrak{M}}=\emptyset\). In turn, for all \(k\in J\), since \(\mathsf{Unsat}(\{(\bigwedge_{i\in I}\neg\psi_{i}),\psi_{k}\})\) and \(\llbracket\!\bigwedge_{i\in I}\neg\psi_{i}\rrbracket^{\mathfrak{M}}=\mathrm{S}\) we can conclude that \(\llbracket\!\psi_{k}\rrbracket^{\mathfrak{M}}=\emptyset\). Thus, we have that \(\llbracket\!\mathsf{Kh}(\psi_{k},\chi_{k})\rrbracket^{\mathfrak{M}}= \llbracket\!\mathsf{A}\neg\psi_{k}\rrbracket^{\mathfrak{M}}=\mathrm{S}\), for all \(k\in I\cup J\). Then, from \((\dagger)\), exists \(K=\{k\mid\llbracket\!\mathsf{Kh}(\psi_{k},\chi_{k})\rrbracket^{\mathfrak{M}}=\emptyset\}\) s.t. \(\emptyset\subset K\subseteq[1,n]\setminus(I\cup J)\). For all \(k\in K\), \(\llbracket\!\psi_{k}\rrbracket^{\mathfrak{M}}\neq\emptyset\) since \(\mathsf{Sat}(\{(\bigwedge_{i\in I}\neg\psi_{i}),\psi_{k}\})\); and \(\llbracket\!\chi_{k}\rrbracket^{\mathfrak{M}}\neq\emptyset\) since \(\mathsf{Sat}(\{\neg\psi_{k^{\prime}}\mid k^{\prime}\in I_{(i-1)}\}\cup\{ \chi_{k}\})\) for all \(i\geq 0\), even \(I_{(i-1)}=I_{n}=I\). Without loss of generality, let \(K=[1,m]\) and \(\mathfrak{M}^{\prime}=\langle\mathrm{S},\mathrm{R}^{\prime},\mathrm{V}\rangle\) be s.t. \(\mathrm{R}^{\prime}=\{\mathrm{R}^{\prime}_{a_{j}}\mid a_{j}\in\mathsf{Act}\}\), where: \[\mathrm{R}^{\prime}_{a_{j}}=\begin{cases}\llbracket\!\psi_{j}\rrbracket^{ \mathfrak{M}^{\prime}}\times\llbracket\!\chi_{j}\rrbracket^{\mathfrak{M}^{ \prime}}&\text{if $j\in K$}\\ \mathrm{R}_{a_{(j-m)}}&\text{if $j\notin K$}.\end{cases}\] In the definition of \(\mathrm{R}^{\prime}\), it is worth noticing that since \(j\notin K\), \(\mathrm{R}_{a_{(j-m)}}\) is defined, i.e., \(\mathrm{R}_{a_{(j-m)}}\in\mathrm{R}\). Then clearly, for all \(k\in K\), \(\llbracket\!\mathsf{Kh}(\psi_{k},\chi_{k})\rrbracket^{\mathfrak{M}^{\prime}}= \mathrm{S}\). The claim is that for all \(k^{\prime}\in I\cup J\), \(\llbracket\!\mathsf{Kh}(\psi_{k^{\prime}},\chi_{k^{\prime}})\rrbracket^{ \mathfrak{M}^{\prime}}=\mathrm{S}\). To prove this claim, consider a function \(\sigma:\mathsf{Act}^{*}\to\mathsf{Act}^{*}\) s.t. \(\sigma(\varepsilon)=\varepsilon\), and \(\sigma(a_{k}\alpha)=a_{(k+m)}\sigma(\alpha)\). For all \(\pi\in\mathsf{Act}^{*}\), if \(\llbracket\!\psi_{k^{\prime}}\rrbracket^{\mathfrak{M}}\subseteq\mathrm{SE}(\pi)\) and \(\mathrm{R}_{\pi}(\llbracket\!\psi_{k^{\prime}}\rrbracket^{\mathfrak{M}}) \subseteq\llbracket\!\chi_{k^{\prime}}\rrbracket^{\mathfrak{M}}\), then \(\llbracket\!\psi_{k^{\prime}}\rrbracket^{\mathfrak{M}^{\prime}}\subseteq \mathrm{SE}(\sigma(\pi))\) and \(\mathrm{R}_{\sigma(\pi)}(\llbracket\!\psi_{k^{\prime}}\rrbracket^{\mathfrak{M}^{ \prime}})\subseteq\llbracket\!\chi_{k^{\prime}}\rrbracket^{\mathfrak{M}^{ \prime}}\) -since the valuation functions for \(\mathfrak{M}\) and \(\mathfrak{M}^{\prime}\) coincide, the truth sets in \(\mathfrak{M}\) and \(\mathfrak{M}^{\prime}\) coincide for formulas with no modalities. Then, \(\llbracket\!\mathsf{Kh}(\psi_{k^{\prime}},\chi_{k^{\prime}})\rrbracket^{ \mathfrak{M}^{\prime}}=\mathrm{S}\). But we had assumed \(\mathsf{Unsat}(\varphi)\). Thus, (1) follows. The following example illustrates the result in Prop. 5. Example 4: Let \(\varphi=\mathsf{Kh}(p,\bot)\wedge\mathsf{Kh}(q,p)\), i.e., \(\psi_{1}=p\), \(\psi_{2}=q\), \(\chi_{1}=\bot\) and \(\chi_{2}=p\). 
It is clear that \(\mathsf{Sat}(\varphi)\). Let us build the sets \(I_{0}\), \(I_{1}\) and \(I_{2}\): * \(I_{0}=\{1\}\), as \(\mathsf{Unsat}(\chi_{1})\) and \(\mathsf{Sat}(\chi_{2})\) hold; * \(I_{1}=\{1,2\}\), since it holds \(\mathsf{Unsat}(\{\neg\psi_{1},\chi_{2}\})\); * \(I_{2}=\{1,2\}=I\), as \(I_{1}\) already contains all the indices in \([1,2]\). Thus (as it can be easily checked) we get \(\mathsf{Sat}(\{\neg\psi_{1},\neg\psi_{2}\})\) (i.e., \(\mathsf{Sat}(\{\neg p,\neg q\})\)). Interestingly, the result in Prop. 5 tells us that the satisfiability of a formula \(\mathsf{Kh}(\psi_{1},\chi_{1})\wedge\cdots\wedge\mathsf{Kh}(\psi_{n},\chi_{n})\) depends solely on the joint satisfiability of its 'global' subformulas (cf. Prop. 1); i.e., subformulas \(\mathsf{Kh}(\psi_{i},\chi_{i})\) whose postconditions \(\chi_{i}\) are falsified in the context of \(\varphi\). The satisfiability of the 'global' subformulas provides us with the universe, i.e., set of states, on which to build the plans that witness those formulas that are not in \(I\), and that are not 'trivially' true as a result of their preconditions being falsified in this universe. Building on Prop. 5, the function \(\textsc{Sat}^{+}_{\mathsf{Kh}}\) in Alg. 2 gives us a way of checking whether a formula \(\varphi=\mathsf{Kh}(\psi_{1},\chi_{1})\wedge\cdots\wedge\mathsf{Kh}(\psi_{n}, \chi_{n})\) is satisfiable. The algorithm behind this function makes use of a (propositional) \(\mathsf{Sat}\) oracle, and the function Global. The \(\mathsf{Sat}\) oracle tests for pre and postconditions of \(\mathsf{Kh}\) formulas, as these are propositional formulas. Intuitively, Global iteratively computes the indices in the sets \(I_{i}\) in Prop. 5, each of them corresponding to the 'global' subformulas of the input. Once this is done, \(\textsc{Sat}^{+}_{\mathsf{Kh}}\) checks the joint satisfiability of the negation of the preconditions of 'global' subformulas. Proposition 6: _Let \(\varphi\) be as in Prop. 5; Alg. 2 solves \(\mathsf{Sat}(\varphi)\)._ Let us now move to determining the satisfiability conditions of a formula \(\neg\mathsf{Kh}(\psi_{1},\chi_{1})\wedge\cdots\wedge\neg\mathsf{Kh}(\psi_{n}, \chi_{n})\) in leaf normal form. Prop. 7 establishes that, for any such a formula, it is enough to check whether each conjunct \(\psi_{i}\wedge\neg\chi_{i}\) is individually satisfiable. Note that this satisfiability check is purely propositional. Proposition 7: _Let \(\varphi=\neg\mathsf{Kh}(\psi_{1},\chi_{1})\wedge\cdots\wedge\neg\mathsf{Kh}( \psi_{n},\chi_{n})\) be s.t. \(\mathsf{md}(\varphi)=1\); it follows that \(\mathsf{Sat}(\varphi)\) iff for all \(i\in[1,n]\), \(\mathsf{Sat}(\psi_{i}\wedge\neg\chi_{i})\)._ Proof: (\(\Rightarrow\)) The proof is by contradiction. Suppose that \((\dagger)\)\(\mathsf{Sat}(\varphi)\) and for some \(i\in[1,n]\) we have \((\ddagger)\)\(\mathsf{Unsat}(\psi_{i}\wedge\neg\chi_{i})\). Let \(\mathfrak{M}\) be a model such that \([\![\varphi]\!]^{\mathfrak{M}}\neq\emptyset\), which exists by \((\dagger)\). Then, \([\![\mathsf{Kh}(\psi_{i},\chi_{i})]\!]^{\mathfrak{M}}=\emptyset\). From this, we get \([\![\psi_{i}]\!]^{\mathfrak{M}}\neq\emptyset\); otherwise \([\![\mathsf{Kh}(\psi_{i},\chi_{i})]\!]^{\mathfrak{M}}=\mathrm{S}\). From \((\dagger)\), we know that \([\![\psi_{i}]\!]^{\mathfrak{M}}\subseteq[\![\chi_{i}]\!]^{\mathfrak{M}}\). 
Since \(\varepsilon\in\mathsf{Act}^{*}\), we have \([\![\psi_{i}]\!]^{\mathfrak{M}}\subseteq\mathrm{SE}(\varepsilon)=\mathrm{S}\) and \([\![\psi_{i}]\!]^{\mathfrak{M}}=\mathrm{R}_{\varepsilon}([\![\psi_{i}]\!]^{ \mathfrak{M}})\subseteq[\![\chi_{i}]\!]^{\mathfrak{M}}\). But this means \([\![\mathsf{Kh}(\psi_{i},\chi_{i})]\!]^{\mathfrak{M}}=\mathrm{S}\); which is a contradiction. Thus, \(\mathrm{R}_{\varepsilon}[\![\psi_{i}]\!]^{\mathfrak{M}}\nsubseteq\llbracket \chi_{i}\rrbracket^{\mathfrak{M}}\); i.e., \([\![\psi_{i}]\!]^{\mathfrak{M}}\nsubseteq\llbracket\chi_{i}\rrbracket^{ \mathfrak{M}}\). This means \([\![\psi_{i}\wedge\neg\chi_{i}]\!]^{\mathfrak{M}}\neq\emptyset\). This establishes \(\mathsf{Sat}(\psi_{i}\wedge\neg\chi_{i})\). (\(\Leftarrow\)) Suppose that (\(\dagger\)) for all \(i\in[1,n]\), \(\mathsf{Sat}(\psi_{i}\wedge\neg\chi_{i})\). Let \(\mathfrak{M}=\langle\mathrm{S},\mathrm{R},\mathrm{V}\rangle\) where: (\(\dagger\)) S is s.t. for all \(i\), \(\llbracket\psi_{i}\wedge\neg\chi_{i}\rrbracket^{\mathfrak{M}}\neq\emptyset\); and (\(\lx@sectionsign\)) for all \(\mathrm{R}_{a}\in\mathrm{R}\), \(\mathrm{R}_{a}=\emptyset\). From (\(\dagger\)), we know that at least one S exists, as every \(\psi_{i}\) and \(\chi_{i}\) are propositional; thus, each satisfiable conjunction can be sent to a different \(s\in\mathrm{S}\). From (\(\lx@sectionsign\)), we know for all \(\pi\in\mathsf{Act}^{*}\), \(\mathrm{SE}(\pi)\neq\emptyset\) iff \(\pi=\varepsilon\). From (\(\dagger\)) and (\(\lx@sectionsign\)), we know that \(\llbracket\psi_{i}\rrbracket^{\mathfrak{M}}=\mathrm{R}_{\varepsilon} \llbracket\psi_{i}\rrbracket^{\mathfrak{M}}\nsubseteq\llbracket\chi_{i} \rrbracket^{\mathfrak{M}}\). This means that \(\llbracket\mathsf{Kh}(\psi_{i},\chi_{i})\rrbracket^{\mathfrak{M}}=\emptyset\), for all \(i\in[1,n]\). Hence \(\llbracket\varphi\rrbracket^{\mathfrak{M}}=\mathrm{S}\) which implies \(\mathsf{Sat}(\varphi)\). The key idea behind Prop. 3.1 is to build a discrete universe to force the only possible witness of a formula of the form \(\mathsf{Kh}(\psi_{i},\chi_{i})\) to be the empty plan. If in this discrete universe we always have at hand a state which satisfies \(\psi_{i}\wedge\neg\chi_{i}\), then, the empty plan cannot be a witness for \(\mathsf{Kh}(\psi_{i},\chi_{i})\). If the latter is the case, then the satisfiability of \(\neg\mathsf{Kh}(\psi_{i},\chi_{i})\) is ensured. Building on this result, we define, in Alg. 3, a function \(\mathsf{Sat}_{\mathsf{Kh}}^{-}\) to check the satisfiability of a formula \(\neg\mathsf{Kh}(\psi_{1},\chi_{1})\wedge\cdots\wedge\neg\mathsf{Kh}(\psi_{n},\chi_{n})\) in leaf normal form. The function proceeds by traversing each subformula \(\mathsf{Kh}(\psi_{i},\chi_{i})\) and checking the satisfiability of \(\psi_{i}\wedge\neg\chi_{i}\). Proposition 8: _Let \(\varphi\) be as in Prop. 3.1; Alg. 3 solves \(\mathsf{Sat}(\varphi)\)._ We are now ready to extend the results in Props. 3.1 and 3.1 to work out the joint satisfiability of a formula of the form \(\varphi^{+}=\mathsf{Kh}(\psi_{1},\chi_{1})\wedge\cdots\wedge\mathsf{Kh}(\psi_ {n},\chi_{n})\), and a formula of the form \(\varphi^{-}=\neg\mathsf{Kh}(\psi^{\prime}_{1},\chi^{\prime}_{1})\wedge\cdots \wedge\neg\mathsf{Kh}(\psi^{\prime}_{m},\chi^{\prime}_{m})\), both in leaf normal form. 
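Before turning to that combined case, the two checks just described can be made concrete with the following sketch. It is only an illustration of Props. 5 and 7, not a verbatim rendering of Algs. 2 and 3, and it assumes an external propositional oracle `sat` (deciding whether the conjunction of a list of propositional formulas is satisfiable, with `sat([])` true) and a negation constructor `neg`.

```python
from typing import Callable, List, Set

# `sat(fs)` stands for a propositional SAT oracle deciding whether the
# conjunction of the formulas in `fs` is satisfiable (sat([]) is True);
# `neg(f)` builds the negation of a formula. Both are assumed to be given.

def global_indices(pre: List, post: List, sat: Callable[[List], bool], neg) -> Set[int]:
    """Fixed-point computation of the set I of 'global' conjuncts (Prop. 5):
    start from the conjuncts with unsatisfiable postcondition, then add any
    conjunct whose postcondition is unsatisfiable together with the negated
    preconditions collected so far."""
    n = len(pre)
    I = {k for k in range(n) if not sat([post[k]])}
    changed = True
    while changed:
        changed = False
        ctx = [neg(pre[k]) for k in I]
        for k in range(n):
            if k not in I and not sat(ctx + [post[k]]):
                I.add(k)
                changed = True
    return I

def sat_pos(pre, post, sat, neg) -> bool:
    """Sat(Kh(psi_1,chi_1) & ... & Kh(psi_n,chi_n)) (cf. Prop. 5 and Alg. 2):
    satisfiable iff the negated preconditions of the 'global' conjuncts
    are jointly satisfiable."""
    I = global_indices(pre, post, sat, neg)
    return sat([neg(pre[k]) for k in I])

def sat_neg(pre, post, sat, neg) -> bool:
    """Sat(~Kh(psi_1,chi_1) & ... & ~Kh(psi_n,chi_n)) (cf. Prop. 7 and Alg. 3):
    each conjunct only needs psi_i & ~chi_i to be satisfiable on its own."""
    return all(sat([pre[i], neg(post[i])]) for i in range(len(pre)))
```

On Example 4 above, `global_indices` collects both indices, so `sat_pos` reduces to checking \(\mathsf{Sat}(\{\neg p,\neg q\})\), as computed there.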
The main difficulty is how to "build" witnesses for the subformulas \(\mathsf{Kh}(\psi_{i},\chi_{i})\) of \(\varphi^{+}\) in a way such that they do not yield witnesses for the subformulas \(\neg\mathsf{Kh}(\psi^{\prime}_{j},\chi^{\prime}_{j})\) of \(\varphi^{-}\). We show that the key to the solution hinges on "composition". We start with a preliminary definition. Definition 8: Let \(\varphi=\mathsf{Kh}(\psi_{1},\chi_{1})\wedge\cdots\wedge\mathsf{Kh}(\psi_{n}, \chi_{n})\) and \(\psi\) be a formula; we define \(\Pi(\varphi,\psi)=\bigcup_{i\geq 0}\Pi_{i}\) where: \[\Pi_{0} =\{(x,x)\mid x\in[1,n]\}\] \[\Pi_{(i+1)} =\Pi_{i}\cup\{(x,z)\mid(x,y)\in\Pi_{i}\,,\,z\in[1,n]\text{, and } \mathsf{Unsat}(\{\psi,\chi_{y},\neg\psi_{z}\})\}.\] In words, \(\Pi(\varphi,\psi)\) captures the notion of composition of formulas \(\mathsf{Kh}(\psi,\chi)\) and \(\mathsf{Kh}(\psi^{\prime},\chi^{\prime})\) into a formula \(\mathsf{Kh}(\psi,\chi^{\prime})\). This composition is best explained by recalling the validity of \((\mathsf{Kh}(\psi,\chi)\wedge\mathsf{A}(\chi\to\psi^{\prime})\wedge\mathsf{Kh }(\psi^{\prime},\chi^{\prime}))\to\mathsf{Kh}(\psi,\chi^{\prime})\) (see, e.g. [31, 33]). The definition of \(\Pi(\varphi,\psi)\) records the conjuncts of \(\varphi\) which can be composed in this sense. Below, we list some properties of \(\Pi(\varphi,\psi)\). **Proposition 9**.: _Let \(\varphi\) and \(\psi\) be as in Def. 8; if \((x,y)\in\Pi(\varphi,\psi)\), then, for any model \(\mathfrak{M}\), it holds \(\llbracket\![\varphi\wedge\mathsf{A}\psi]\!\rrbracket^{\mathfrak{M}}\subseteq \llbracket\mathsf{Kh}(\psi_{x},\chi_{y})\rrbracket^{\mathfrak{M}}\)._ Proof.: We start by stating and proving an auxiliary lemma: \((*)\)\((x,y)\in\Pi_{i}\) iff there is a non-empty sequence \(\pi\) of indices in \([1,n]\) s.t.: 1. \(x=\pi_{1}\) and \(y=\pi_{|\pi|}\); and 2. for all \(j\in[1,|\pi|-1]\), \(\mathsf{Unsat}(\{\psi,\chi_{\pi_{j}},\neg\psi_{\pi_{(j+1)}}\})\). The proof of this lemma is by induction on \(i\). The base case for \((*)\) is \(i=0\). We know that \((x,x)\in\Pi_{0}\), the sequence containing just \(x\) satisfies (\(\dagger\)) and (\(\ddagger\)). Conversely, we know that any sequence \(\pi\) of indices in \([1,n]\) s.t. \(|\pi|=1\) satisfies (\(\dagger\)) and (\(\ddagger\)); it is immediate that \((\pi_{1},\pi_{1})\in\Pi_{0}\). This proves the base case. For the inductive step, let \((x,z)\in\Pi_{(i+1)}\), \((x,y)\in\Pi_{i}\), \(z\in[1,n]\), and \(\mathsf{Unsat}(\{\psi,\chi_{y},\neg\psi_{z}\})\). From the Inductive Hypothesis, there is \(\pi\) that satisfies (\(\dagger\)) and (\(\ddagger\)). Immediately, \(\pi^{\prime}=\pi z\) also satisfies (\(\dagger\)) and (\(\ddagger\)). It is easy to see that, if there is \(\pi\) satisfying (\(\dagger\)) and (\(\ddagger\)), then, (\(\lx@sectionsign\)) for every model \(\mathfrak{M}\) and \(j\in[1,|\pi|-1]\), \(\llbracket\mathsf{A}\psi]\!\rrbracket^{\mathfrak{M}}=\mathrm{S}\) implies \(\llbracket\![\chi_{\pi_{j}}]\!\rrbracket^{\mathfrak{M}}\subseteq\llbracket \![\psi_{\pi_{(j+1)}}]\!\rrbracket^{\mathfrak{M}}\). Let us now resume with the main proof. Let \((x,y)\in\Pi(\varphi,\psi)\) and \(\mathfrak{M}\) be any model. The result is direct if \(\llbracket\![\varphi\wedge\mathsf{A}\psi]\!\rrbracket^{\mathfrak{M}}=\emptyset\). Thus, consider \(\llbracket\![\varphi\wedge\mathsf{A}\psi]\!\rrbracket^{\mathfrak{M}}\neq\emptyset\); i.e., s.t. \(\llbracket\![\varphi\wedge\mathsf{A}\psi]\!\rrbracket^{\mathfrak{M}}=\mathrm{S}\). 
From \((*)\), we know that exists a sequence \(\pi\) of indices in \([1,n]\) that satisfies (\(\dagger\)) and (\(\ddagger\)). Then, for all \(j\in[1,|\pi|-1]\), \(\llbracket\![\chi_{\pi_{j}}]\!\rrbracket^{\mathfrak{M}}\subseteq\llbracket \![\psi_{\pi_{(j+1)}}]\!\rrbracket^{\mathfrak{M}}\). Using Prop. 3, \(\llbracket\![\varphi\wedge\mathsf{A}\psi]\!\rrbracket^{\mathfrak{M}}\subseteq \bigcap_{j=1}^{|\pi|}\llbracket\mathsf{Kh}(\psi_{\pi_{j}},\chi_{\pi_{j}}) \rrbracket^{\mathfrak{M}}\subseteq\llbracket\mathsf{Kh}(\psi_{x},\chi_{y}) \rrbracket^{\mathfrak{M}}\). **Proposition 10**.: _Let \(\varphi=\mathsf{Kh}(\psi_{1},\chi_{1})\wedge\cdots\wedge\mathsf{Kh}(\psi_{n},\chi_{n})\) and \(\psi\) be a formula; \(\Pi(\varphi,\psi)\) is the smallest set s.t.: (1) for all \(x\in[1,n]\), \((x,x)\in\Pi(\varphi,\psi)\); and (2) if \(\{(x,y_{0}),(y_{1},z)\}\subseteq\Pi(\varphi,\psi)\) and \(\mathsf{Unsat}(\{\psi,\chi_{y_{0}},\neg\psi_{y_{1}}\})\), then, \((x,z)\in\Pi(\varphi,\psi)\)._ The function Plans in Alg. 4 can be used to compute the set \(\Pi(\varphi,\psi)\) in Def. 8. This function looks into whether a pair of indices belongs to this set using the result in Prop. 10. Example 5.: Let \(\varphi=\mathsf{Kh}(p,p\wedge q)\wedge\mathsf{Kh}(q,r)\wedge\mathsf{Kh}(r \lor s,t)\) and \(\psi=\top\); in this case we have: \(\psi_{1}=p\), \(\chi_{1}=p\wedge q\), \(\psi_{2}=q\), \(\chi_{2}=r\), \(\psi_{3}=r\lor s\), and \(\chi_{3}=t\). We can easily verify that \(\Pi(\varphi,\psi)=\{(1,1)(1,2)(1,3)(2,2)(2,3)(3,3)\}\). Indeed, in the initial step we get \(\Pi_{0}=\{(1,1)(2,2)(3,3)\}\). The pairs of indices correspond to those of the pre/post conditions of the subformulas \(\mathsf{Kh}(\psi_{i},\chi_{i})\in\mathsf{sf}(\varphi)\). Then, since we have \(\{(1,1)(2,2)\}\subseteq\Pi_{0}\), \(\mathsf{Unsat}(\{\chi_{1},\neg\psi_{2}\})\), and \(\mathsf{Unsat}(\{\chi_{2},\neg\psi_{3}\})\), it follows that \(\Pi_{1}=\Pi_{0}\cup\{(1,2)(2,3)\}\). The new pairs of indices can intuitively be taken as the formulas \(\mathsf{Kh}(\psi_{1},\chi_{2})\) and \(\mathsf{Kh}(\psi_{2},\chi_{3})\). In this case, note the connection between \(\mathsf{Kh}(\psi_{1},\chi_{2})\) and \((\mathsf{Kh}(\psi_{1},\chi_{1})\wedge\mathsf{A}(\chi_{1}\to\psi_{2})\wedge \mathsf{Kh}(\psi_{2},\chi_{2}))\to\mathsf{Kh}(\psi_{1},\chi_{2})\), and \(\mathsf{Kh}(\psi_{2},\chi_{3})\) and \((\mathsf{Kh}(\psi_{2},\chi_{2})\wedge\mathsf{A}(\chi_{2}\to\psi_{3})\wedge \mathsf{Kh}(\psi_{3},\chi_{3}))\to\mathsf{Kh}(\psi_{2},\chi_{3})\). Finally, since we have \((1,2)\in\Pi_{2}\) and \(\mathsf{Unsat}(\{\chi_{2},\neg\psi_{3}\})\), then \(\Pi_{2}=\Pi_{1}\cup\{(1,3)\}\). The justification for the pair \((1,3)\) is similar to the one just offered. In Fig. 1 we illustrate a run of Plans which computes this set (only the steps in which the matrix is updated are shown). The composition of formulas \(\mathsf{Kh}(\psi,\chi)\) and \(\mathsf{Kh}(\psi^{\prime},\chi^{\prime})\) has an impact if we wish to add a formula \(\neg\mathsf{Kh}(\psi^{\prime\prime},\chi^{\prime\prime})\) into the mix. The reason for this is that witness plans \(\pi\) and \(\pi^{\prime}\) for \(\mathsf{Kh}(\psi,\chi)\) and \(\mathsf{Kh}(\psi^{\prime},\chi^{\prime})\), respectively, yield a witness plan \(\pi^{\prime\prime}=\pi\pi^{\prime}\) for \(\mathsf{Kh}(\psi,\chi^{\prime})\). 
In adding \(\neg\mathsf{Kh}(\psi^{\prime\prime},\chi^{\prime\prime})\) we need to ensure \(\pi^{\prime\prime}\) is not a witness for \(\mathsf{Kh}(\psi^{\prime\prime},\chi^{\prime\prime})\), as such a plan renders \(\neg\mathsf{Kh}(\psi^{\prime\prime},\chi^{\prime\prime})\) unsatisfiable. We make these ideas precise in the definition of _compatible_ below. Definition 9: Let \(\varphi^{+}\) and \(\varphi^{-}\) be formulas s.t.: \(\mathsf{md}(\varphi^{+})=1\) and \(\mathsf{md}(\varphi^{-})=1\); \(\varphi^{+}=\mathsf{Kh}(\psi_{1},\chi_{1})\wedge\cdots\wedge\mathsf{Kh}(\psi_ {n},\chi_{n})\); and \(\varphi^{-}=\neg\mathsf{Kh}(\psi^{\prime}_{1},\chi^{\prime}_{1})\wedge\cdots \wedge\neg\mathsf{Kh}(\psi^{\prime}_{m},\chi^{\prime}_{m})\). Moreover, let \(I,J\subseteq[1,n]\) be as in Prop. 5 and \(\psi=\bigwedge_{i\in I}\neg\psi_{i}\). We say that \(\varphi^{+}\) and \(\varphi^{-}\) are _compatible_ iff the following conditions are met: 1. \(\mathsf{Sat}(\psi)\); 2. for all \(\mathsf{Kh}(\psi^{\prime}_{k^{\prime}},\chi^{\prime}_{k^{\prime}})\in\mathsf{ sf}(\varphi^{-})\), 1. \(\mathsf{Sat}(\{\psi,\psi^{\prime}_{k^{\prime}},\neg\chi^{\prime}_{k^{\prime}}\})\); and 2. for all \((x,y)\in\Pi(\varphi^{+},\psi)\), \(\text{if $x\notin J$ and $\mathsf{Unsat}(\{\psi,\psi^{\prime}_{k^{\prime}},\neg\psi_{x}\})$}\), then, \(\mathsf{Sat}(\{\psi,\chi_{y},\neg\chi^{\prime}_{k^{\prime}}\})\). Def. 9 aims to single out the conditions under which the formulas \(\varphi^{+}\) and \(\varphi^{-}\) can be jointly satisfied. Intuitively, (1) tells us \(\varphi^{+}\) must be individually satisfied (cf. Prop. 5). In turn, (2.a) tells us \(\varphi^{-}\) must be individually satisfied (cf. Prop. 7), while (2.b) tells us \(\varphi^{+}\) and \(\varphi^{-}\) can be satisfied together if no composition of subformulas in \(\varphi^{+}\) contradicts a subformula in \(\varphi^{-}\). Such a contradiction would originate only as a result of strengthening the precondition and/or weakening the postcondition of a composition of subformulas in \(\varphi^{+}\), in a way such that they would result in the opposite of a subformula in \(\varphi^{-}\). Prop. 11 states that the conditions in Def. 9 guarantee the satisfiability of a combination of \(\varphi^{+}\) and \(\varphi^{-}\) Proposition 11: _It follows that \(\varphi^{+}\) and \(\varphi^{-}\) are compatible iff \(\mathsf{Sat}(\varphi^{+}\wedge\varphi^{-})\)._ Proof: (\(\Rightarrow\)) Suppose that \(\varphi^{+}\) and \(\varphi^{-}\) are compatible. Let \(\mathfrak{M}=\langle\mathrm{S},\mathrm{R},\mathrm{V}\rangle\) be s.t. \(\mathrm{S}\) contains all valuations that make \(\psi\) true; and \(\mathrm{R}=\{\mathrm{R}_{a_{k}}\mid a_{k}\in\mathsf{Act}\}\) where \[\mathrm{R}_{a_{k}}=\begin{cases}\llbracket\psi_{k}\rrbracket^{\mathfrak{M}} \times\llbracket\chi_{k}\rrbracket^{\mathfrak{M}}&\text{if $k\in K$}\\ \emptyset&\text{otherwise,}\end{cases}\] for \(K=[1,n]\setminus(I\cup J)\). From (1), we know \(\mathrm{S}\neq\emptyset\). It is not difficult to see that \(\llbracket\varphi^{+}\rrbracket^{\mathfrak{M}}=\mathrm{S}\) (cf. Prop. 5). The proof is concluded if \(\llbracket\varphi^{-}\rrbracket^{\mathfrak{M}}=\mathrm{S}\). We proceed by contradiction. Let \(k^{\prime}\in[1,m]\) be s.t. \(\llbracket\mathsf{Kh}(\psi^{\prime}_{k^{\prime}},\chi^{\prime}_{k^{\prime}}) \rrbracket^{\mathfrak{M}}=\mathrm{S}\); i.e., \((*)\) exists \(\pi\in\mathsf{Act}^{*}\) s.t. 
\(\llbracket\psi^{\prime}_{j}\rrbracket^{\mathfrak{M}}\subseteq\mathrm{SE}(\pi)\) and \(\mathrm{R}_{\pi}(\llbracket\psi^{\prime}_{j}\rrbracket^{\mathfrak{M}}) \subseteq\llbracket\chi^{\prime}_{j}\rrbracket^{\mathfrak{M}}\). We consider the following cases. (\(\pi=\varepsilon\)) From (2.a), we know \(\llbracket\psi^{\prime}_{k^{\prime}}\wedge\neg\neg\chi^{\prime}_{k^{\prime}} \rrbracket^{\mathfrak{M}}\neq\emptyset\); i.e., \(\llbracket\psi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}}\nsubseteq\llbracket \chi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}}\). This implies \(\llbracket\psi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}}=\mathrm{R}_{ \varepsilon}(\llbracket\psi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}}) \nsubseteq\llbracket\chi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}}\). (\(\pi\neq\varepsilon\) and \(\pi=a_{k_{1}},\ldots,a_{k_{|\pi|}}\) with \(k_{j}\in K\) and \(j\in[1,|\pi|]\)) In this case we have: 1. \(\emptyset\neq\llbracket\psi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}} \subseteq\mathrm{SE}(\pi)\subseteq\mathrm{SE}(a_{k_{1}})=\llbracket\psi_{k_{ 1}}\rrbracket^{\mathfrak{M}}\); 2. \(\llbracket\chi_{k_{j}}\rrbracket^{\mathfrak{M}}=\mathrm{R}_{a_{k_{j}}}( \llbracket\psi_{k_{j}}\rrbracket^{\mathfrak{M}})\subseteq\llbracket\psi_{k_{ (j+1)}}\rrbracket^{\mathfrak{M}}\); and 3. \(\llbracket\chi_{k_{|\pi|}}\rrbracket^{\mathfrak{M}}=\mathrm{R}_{\pi}( \llbracket\psi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}})\subseteq \llbracket\chi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}}\). Since \(\mathrm{S}\) contains all valuations that make \(\psi\) true; from (a)-(d) we get: 1. \(\mathsf{Unsat}(\{\psi,\psi^{\prime}_{k^{\prime}},\neg\psi_{k_{1}}\})\) -from (a); 2. \(\mathsf{Unsat}(\{\psi,\chi_{k_{j}},\neg\psi_{k_{(j+1)}}\})\) -from (b); 3. \(\mathsf{Unsat}(\{\psi,\chi_{k_{|\pi|}},\neg\chi_{k}\})\) -from (c). From (e) and \(\pi\), we obtain a sequence \(k_{1}\ldots k_{|\pi|}\) that satisfies the conditions (\(\dagger\)) and (\(\ddagger\)) in the proof of Prop. 9. Then, \((k_{1},k_{|\pi|})\in\Pi(\varphi^{+},\psi)\). From (a) and (2.a), \(k_{1}\notin J\). We are in an impossible situation: \((k_{1},k_{|\pi|})\in\Pi(\varphi^{+},\psi)\); \(k_{1}\notin J\); and \(\mathsf{Unsat}(\{\psi,\chi_{k_{|\pi|}},\neg\chi_{k}^{\prime}\})\). This contradicts (2.b); meaning that \(\varphi^{+}\) and \(\varphi^{-}\) are not compatible. (\(\pi\) is none of the above) It is clear that \(\llbracket\psi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}}\nsubseteq \mathrm{SE}(\pi)\). In all the cases above we have: \(\llbracket\psi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}}\nsubseteq \mathrm{SE}(\pi)\) or \(\mathrm{R}_{\pi}(\llbracket\psi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}} )\nsubseteq\llbracket\chi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}}\); i.e., \(\llbracket\mathsf{Kh}(\psi^{\prime}_{k^{\prime}},\chi^{\prime}_{k^{\prime}}) \rrbracket^{\mathfrak{M}}=\emptyset\), a contradiction. Then, \(\llbracket\varphi^{-}\rrbracket^{\mathfrak{M}}=\mathrm{S}\); and so \(\mathsf{Sat}(\varphi^{+}\wedge\varphi^{-})\). (\(\Leftarrow\)) Suppose \(\mathsf{Sat}(\varphi^{+}\wedge\varphi^{-})\); i.e., exists \((\dagger)\)\(\mathfrak{M}\) s.t. \(\llbracket\varphi^{+}\wedge\varphi^{-}\rrbracket^{\mathfrak{M}}=\mathrm{S}\). From (\(\dagger\)) we get \(\llbracket\varphi^{+}\rrbracket^{\mathfrak{M}}=\mathrm{S}\). Using Cor. 1, we get \(\llbracket\mathsf{A}\psi\rrbracket^{\mathfrak{M}}=\mathrm{S}\). This establishes (1). The proof of (2.a) is by contradiction. 
Let \(\mathsf{Kh}(\psi^{\prime}_{k^{\prime}},\chi^{\prime}_{k^{\prime}})\in\mathsf{ sf}(\varphi^{-})\) be s.t. \(\mathsf{Unsat}(\{\psi,\psi^{\prime}_{k^{\prime}},\neg\chi^{\prime}_{k^{\prime}}\})\). Then, \(\llbracket\psi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}}\subseteq \llbracket\chi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}}\). Choosing \(\pi=\epsilon\), we obtain \(\llbracket\mathsf{Kh}(\psi^{\prime}_{k^{\prime}},\chi^{\prime}_{k^{\prime}}) \rrbracket^{\mathfrak{M}}=\mathrm{S}\). This contradicts \(\llbracket\varphi^{-}\rrbracket^{\mathfrak{M}}=\mathrm{S}\). The proof of (2.b) is also by contradiction. Let \(\mathsf{Kh}(\psi^{\prime}_{k^{\prime}},\chi^{\prime}_{k^{\prime}})\in\mathsf{ sf}(\varphi^{-})\), \((*)\)\((x,y)\in\Pi(\varphi^{+},\psi)\), \((\dagger)\)\(\mathsf{Unsat}(\{\psi,\psi^{\prime}_{k^{\prime}},\neg\psi_{x}\})\), and \((\ddagger)\)\(\mathsf{Unsat}(\{\psi,\chi_{y},\neg\chi^{\prime}_{k^{\prime}}\})\). From (\(\dagger\)) and \((\ddagger)\)\(\llbracket\psi^{\prime}_{k^{\prime}}\rrbracket^{\mathfrak{M}}\subseteq \llbracket\psi_{x}\rrbracket^{\mathfrak{M}}\) and \(\llbracket\chi_{y}\rrbracket^{\mathfrak{M}}\subseteq\llbracket\chi^{\prime}_{k^{ \prime}}\rrbracket^{\mathfrak{M}}\). At the same time, from (\(*\)) and Prop. 9, \(\mathrm{S}=\llbracket\varphi^{+}\rrbracket^{\mathfrak{M}}\subseteq\llbracket\mathsf{ Kh}(\psi_{x},\chi_{y})\rrbracket^{\mathfrak{M}}\). Then, using Prop. 3, \(\llbracket\mathsf{Kh}(\psi^{\prime}_{j},\chi^{\prime}_{j})\rrbracket^{\mathfrak{M}}= \mathrm{S}\). This also contradicts \(\llbracket\varphi^{-}\rrbracket^{\mathfrak{M}}=\mathrm{S}\). Thus, \(\varphi^{+}\) and \(\varphi^{-}\) are compatible. Having at hand the result in Prop. 11, we proceed to define an algorithm for checking the satisfiability of compatible formulas \(\varphi^{+}\) and \(\varphi^{-}\). This is done in two stages. In the first stage, we build the set \(\Pi(\varphi^{+},\psi)\), where \(\psi\) is the conjunction of the negation of the precondition of the 'global' subformulas in \(\varphi^{+}\). This task is encapsulated in the function Plans in Alg. 4. Notice that the set \(\Pi(\varphi^{+},\psi)\) corresponds to a matrix which is computed using the result in Prop. 10. The second stage is encapsulated in the function Compatible in Alg. 5. In this function, lines 2 and 3 check condition (1) in Def. 9, i.e., whether \(\varphi^{+}\) is individually satisfiable, by verifying the joint satisfiability of the 'global' subformulas in \(\varphi^{+}\) (cf. Alg. 2). In turn, lines 4 to 6 in Compatible check condition (2.a) of Def. 9, i.e., whether \(\varphi^{-}\) is individually satisfiable, by verifying the individual satisfiability of the subformulas in \(\varphi^{+}\) (cf. Alg. 3). Lastly, in lines 7 to 18 in Compatible, we check whether the result of composing subformulas in \(\varphi^{+}\) contradicts any of the subformulas in \(\varphi^{-}\). We carry out this task by making use of the result of the function Plans which computes such compositions. Notice that the function Compatible in Alg. 5 makes a polynomial number of calls to a propositional Sat solver. From this fact, we get the following result. Proposition 12: _Let \(\varphi^{+}\), \(\varphi^{-}\) be as in Def. 9; it follows that Alg. 5 solves \(\mathsf{Sat}(\varphi^{+}\wedge\varphi^{-})\) and is in (i.e., \(\Delta^{\mathsf{P}}_{2}\) in \(\mathsf{PH}\))._ Proof: By Prop. 11 we get that the function Compatible in Alg. 5 solves \(\mathsf{Sat}(\varphi^{+}\wedge\varphi^{-})\). 
Moreover, it makes a polynomial number of calls to a Sat solver for formulas of modal depth \(0\). Thus, it runs in polynomial time with access to a Sat oracle. Therefore, \(\mathsf{Sat}(\varphi^{+}\wedge\varphi^{-})\) is in \(\mathsf{P}^{\mathsf{NP}}\), i.e., in \(\Delta^{\mathsf{P}}_{2}\). Prop. 12 is the final step we need to reach the main result of our work. Theorem 4.1: _The satisfiability problem for \(\mathsf{L}_{\mathsf{Kh}}\) is in \(\mathsf{NP}^{\mathsf{NP}}\) (i.e., \(\Sigma^{\mathsf{P}}_{2}\) in \(\mathsf{PH}\))._ Proof: Let \(\varphi\) be a \(\mathsf{L}_{\mathsf{Kh}}\)-formula. By Alg. 1, we can obtain, in polynomial time, a formula \(\varphi^{\prime}=\varphi_{0}\wedge(\mathsf{A}p_{1}\leftrightarrow\mathsf{Kh}(\psi_{1},\chi_{1}))\wedge\cdots\wedge(\mathsf{A}p_{n}\leftrightarrow\mathsf{Kh}(\psi_{n},\chi_{n}))\) in leaf normal form such that \(\varphi\equiv_{\mathsf{Sat}}\varphi^{\prime}\). We know \(\mathsf{md}(\varphi_{0})=0\) and \(\mathsf{md}(\mathsf{Kh}(\psi_{i},\chi_{i}))=1\). Let \(Q=\{q_{1}\ldots q_{m}\}\subseteq\mathsf{Prop}\) be the set of proposition symbols in \(\varphi^{\prime}\). To check \(\mathsf{Sat}(\varphi^{\prime})\), we start by guessing a propositional assignment \(v:Q\rightarrow\{0,1\}\) that makes \(\varphi_{0}\) true. Then, we define sets \(P^{+}=\{i\mid v(p_{i})=1\}\) and \(P^{-}=\{i\mid v(p_{i})=0\}\), from which we build formulas \[\varphi^{+}=\bigwedge_{i\in P^{+}}\mathsf{Kh}(\psi_{i},\chi_{i})\qquad\varphi^{-}=\left(\bigwedge_{i\in P^{-}}\neg\mathsf{Kh}(\psi_{i},\chi_{i})\right)\wedge\neg\mathsf{Kh}(\varphi_{0},\bot)\] (recall that \(\neg\mathsf{Kh}(\varphi_{0},\bot)=\neg\mathsf{A}\neg\varphi_{0}=\mathsf{E}\varphi_{0}\).) Finally, we use Alg. 5 to check \(\mathsf{Sat}(\varphi^{+}\wedge\varphi^{-})\). Since Alg. 5 is in \(\mathsf{P}^{\mathsf{NP}}\) (Prop. 12), the whole process is in \(\mathsf{NP}^{\mathsf{NP}}\). We conclude this section with an example of how to check the satisfiability of a formula using the procedure in the proof of Thm. 1. Example 6: Let \(\psi=\mathsf{Kh}(p\wedge q,r\wedge t)\vee\mathsf{Kh}(p,r)\). By applying Alg. 1, we get \((k_{1}\lor k_{2})\wedge(\mathsf{A}k_{1}\leftrightarrow\mathsf{Kh}(p\wedge q,r\wedge t))\wedge(\mathsf{A}k_{2}\leftrightarrow\mathsf{Kh}(p,r))\). Suppose that we set \(k_{1}\) to true and \(k_{2}\) to false. Based on this assignment, we build formulas \(\varphi^{+}=\mathsf{Kh}(p\wedge q,r\wedge t)\) and \(\varphi^{-}=\neg\mathsf{Kh}(p,r)\wedge\neg\mathsf{Kh}(k_{1}\wedge\neg k_{2},\bot)\). Using Alg. 5, we can check that they are not compatible (and hence not satisfiable; we have \(\mathsf{Sat}(p\wedge q)\) and \(\mathsf{Unsat}(\{(p\wedge q),\neg p\})\) but not \(\mathsf{Sat}(\{r\wedge t,\neg r\})\)). However, if we set both \(k_{1}\) and \(k_{2}\) to true, then, \(\varphi^{+}=\mathsf{Kh}(p\wedge q,r\wedge t)\wedge\mathsf{Kh}(p,r)\) and \(\varphi^{-}=\neg\mathsf{Kh}(k_{1}\wedge k_{2},\bot)\). In this case, Alg. 5 returns that they are compatible, and thus satisfiable. ## 4 Final Remarks We provided a satisfiability-checking procedure for \(\mathsf{L}_{\mathsf{Kh}}\), the 'knowing how' logic with linear plans from [31, 33], obtaining a \(\Sigma_{2}^{P}\) upper bound. Although not a tight bound (as the best lower bound known is \(\mathsf{NP}\)), we argue this is an interesting result, as our bound is (unless \(\mathsf{PH}\) collapses) below the \(\mathsf{PSpace}\)-complete complexity of model-checking [5]. 
We argue that this unusual situation arises because model-checking exploits the full expressive power of the logic, whereas here we showed that plans are almost irrelevant for the satisfiability of a formula. Interestingly, our procedure only needs a polynomial transformation into a normal form without nested modalities, together with calls to an \(\mathsf{NP}\) oracle (i.e., to a propositional \(\mathsf{Sat}\) solver). It is well known that modern \(\mathsf{Sat}\) solvers are able to efficiently deal with large formulas (having millions of variables), and usually support the exploration of the solution state space. Thus, the ideas presented in this paper can be used to implement a \(\mathsf{Sat}\) solver for knowing-how logics relying on modern propositional \(\mathsf{Sat}\) solving tools. We consider this part of the future work to undertake. We would also like to obtain a tight bound for the satisfiability problem. In this regard, we will explore the possibility of providing a reduction from the problem of checking the truth of Quantified Boolean Formulas (TQBF) with a single \(\exists\forall\) quantification pattern (called \(\Sigma_{2}\mathsf{Sat}\) in [2]), which is known to be \(\Sigma_{2}^{P}\)-complete. #### Acknowledgments. We thank the reviewers for their valuable comments. Our work is supported by the Laboratoire International Associé SINFIN, the EU Grant Agreement 101008233 (MISSION), the ANPCyT projects PICT-2019-03134, PICT-2020-3780, PICT-2021-00400, PICT-2021-00675, and PICTO-2022-CBA-00088, and the CONICET projects PIBAA-28720210100428CO, PIBAA-28720210100165CO, and PIP-11220200100812CO.
2309.07287
Enhancing Child Vocalization Classification with Phonetically-Tuned Embeddings for Assisting Autism Diagnosis
The assessment of children at risk of autism typically involves a clinician observing, taking notes, and rating children's behaviors. A machine learning model that can label adult and child audio may save substantial labor in coding children's behaviors, helping clinicians capture critical events and better communicate with parents. In this study, we leverage Wav2Vec 2.0 (W2V2), pre-trained on 4300 hours of home audio of children under 5 years old, to build a unified system for the tasks of clinician-child speaker diarization and vocalization classification (VC). To enhance children's VC, we build a W2V2 phoneme recognition system for children under 4 years old, and we incorporate its phonetically-tuned embeddings as auxiliary features or recognize pseudo phonetic transcripts as an auxiliary task. We test our method on two corpora (Rapid-ABC and BabbleCor) and obtain consistent improvements. Additionally, we surpass the state-of-the-art performance on the reproducible subset of BabbleCor. Code available at https://huggingface.co/lijialudew
Jialu Li, Mark Hasegawa-Johnson, Karrie Karahalios
2023-09-13T20:13:40Z
http://arxiv.org/abs/2309.07287v2
Enhancing Child Vocalization Classification in Multi-Channel Child-Adult Conversations Through Wav2vec2 Children ASR Features ###### Abstract Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that often emerges in early childhood. ASD assessment typically involves an observation protocol including note-taking and ratings of the child's social behavior conducted by a trained clinician. A robust machine learning (ML) model that is capable of labeling adult and child audio has the potential to save significant time and labor in manually coding children's behaviors. This may help clinicians capture events of interest, better communicate with parents, and educate new clinicians. In this study, we leverage the self-supervised learning model, Wav2Vec 2.0 (W2V2), pretrained on 4300 hours of home recordings of children under 5 years old, to build a unified system that performs both speaker diarization (SD) and vocalization classification (VC) tasks. We apply this system to two-channel audio recordings of brief 3-5 minute clinician-child interactions using the Rapid-ABC corpus. We propose a novel technique by introducing auxiliary features extracted from a W2V2-based automatic speech recognition (ASR) system for children under 4 years old to improve children's VC task. We test our proposed method of improving children's VC task on two corpora (Rapid-ABC and BabbleCor) and observe consistent improvements. Furthermore, we reach, or perhaps outperform, the state-of-the-art performance of BabbleCor. Jialu Li\({}^{1,2}\), Mark Hasegawa-Johnson\({}^{1,2}\), Karrie Karahalios\({}^{3}\)†\({}^{1}\)Department of Electrical and Computer Engineering, University of Illinois \({}^{2}\)Beckman Institute for Advanced Science and Technology, University of Illinois \({}^{3}\)Department of Computer Science, University of Illinois {jialuli3, jhasegaw, kkarahal}@illinois.edu self-supervised learning, children ASR, vocalization classification, Wav2vec 2.0, autism Footnote †: This work has been funded by the Jump ARCHES endowment through the Health Care Engineering Systems Center at UIUC. ## 1 Introduction ASD is a neurodevelopmental disorder characterized by deficits in social communication and the presence of restricted interests and repetitive behaviors [1]. ASD often begins in early childhood. According to estimates from the Centers for Disease Control and Prevention, 1 in 36 children has been identified with ASD in 2020 in the U.S. [2]. ASD is a prevalent disorder in children, but acquiring ASD diagnostic outcomes through conventional evaluations may be protracted due to the limited available healthcare professional services and the complexity of examination procedures [3]. Hence, a robust machine learning (ML) model, capable of automatically detecting features that clinicians consider relevant to the early diagnosis of ASD, may be helpful for clinicians to better explain critical events to parents for early intervention and to educate new clinicians for future assessment. Previous work has built ML-based speaker diarization (SD) models for Autism Diagnostic Observation Schedule (ADOS) [4] interviews. ADOS is a well-established protocol administered by a trained clinician to elicit children's speech. Samples of ADOS interviews between clinicians and children aged 12-16 years were used in the series of DIHARD SD challenges [5, 6, 7]. Other related studies performed child-adult SD [8, 9] or predicted atypical speech for autistic children [10] using ADOS interviews. 
To help diagnose toddlers aged 1-2, who are in the early stages of language development and often can't express themselves through complete sentences, it's crucial for the ML model to perform children's vocalization classification (VC), in addition to the child-adult SD task. ASD researchers have reported that a low frequency of verbalization and non-verbal cues (vocalization, laughter, crying, etc.) is one of the early behavioral signs for autistic children [11]. Past work has studied automatic classification of autistic children with various levels of severity based on their vocalizations [12]. ASD researchers have relied on manually coding children's social behaviors for assessment, but coding is a time- and labor-intensive task. To combat data sparsity issues, self-supervised learning models, such as wav2vec 2.0 (W2V2) [13], have been recently proposed and have performed well on several speech processing downstream tasks, including speech-to-text [14] and emotion recognition [15], given a limited amount of labeled data. W2V2 first uses large-scale unlabeled data for pre-training followed by fine-tuning on a small amount of labeled data. A few recent studies also explored using self-supervised learning models for child-adult SD and VC tasks [16, 17]. Most of the past work has primarily studied child-adult recordings on a single audio channel. However, using multi-channel recordings of adult and children's speech is potentially beneficial for individually analyzing children's vocalizations. In this study, we explore building a unified end-to-end W2V2-based system for SD and VC tasks on two-channel clinician-child recordings for ASD assessment using RapidABC (RABC) corpus. We propose a novel technique incorporating auxiliary features extracted from a W2V2-based automatic speech recognition (ASR) system for children under 4 years old to improve children's VC task. To the best of our knowledge, this is the first study that leverages a self-supervised learning model to perform ASR mainly targeting children under 4 years old. We further demonstrate the superiority of our proposed method in children's VC on two corpora that shared similar annotation protocols (RABC and BabbleCor). ## 2 Data In this study, we experiment with RABC for child-adult SD and VC task, and we further validate our method of improving children's VC task on BabbleCor. We use MyST and Providence for building children's ASR. **Rapid-ABC (RABC)**[18] contains both video and audio recordings of 3-5 mins brief interactive assessment between a child of 1-2 years old and a clinician. Two audio streams are recorded separately, each from a lapel mic placed in front of a speaker's chest. Two types of annotations are available: protocol segment, and child vocalization. Protocol segment annotations mark the times at which the adult first speaks each of five key phrases, marking the start of five parts of the diagnostic protocol. Child vocalization carefully labels child audio as one of non-lexical vocalization (VOC), verbalization that contains words (VERB), cry (CRY), or laugh (LAU). To evaluate SD, we manually label adult audio as one of VOC or LAU. Ten percent of adult audio was double-coded, and inter-coder reliability (Cohen's kappa score) was 0.90 at a precision of 0.2s. In this study, we analyze 51 sessions of audio recordings from part of the RABC corpus available for research use, where a total of 4 clinicians and 43 children participated (with 8 children being assessed twice). 
To prevent over-fitting, we use 3-fold cross-validation for evaluation. We follow leave-one-child-out partition to ensure training and testing set in each fold doesn't have overlapped child participants. Table 1 presents details of partition of each fold. For fine-tuning W2V2, we label audio stream in frames of 2s starting every 0.1s; the label is determined by the centered 0.1s of each frame. The mean and standard deviation (std) of the child utterances is 1.54s \(\pm\) 2.30s. **BabbleCor**[19] contains 11k short audio clips of 52 healthy children (2-36 month) without speech delay. The audio clips were collected from day-long recordings of children at home using LENA [20], an infant-wearable audio recording device. Annotators label the clips as one of _canonical_, (containing a consonant to vowel transition), _non-canonical_, (not containing a consonant to vowel transition), CRY, LAU, or junk (JUNK, same as silent or non-speech). The current BabbleCor distribution includes about 90% of the data used for the Baby Sounds (BS) sub-challenge in the 2019 Interspeech Paralinguistic Challenge [21]. We use the speaker split of the BS challenge. Table 2 presents details of our data partition. Mean duration of clips is 0.36s \(\pm\) 0.08s std. **My Science Tutor (MyST)**[22] contains conversational speech of 1371 students between third and fifth grades with a virtual tutor. We select short transcribed utterances (\(<15\)s) for training and evaluating ASR to align with shorter vocalizations produced by younger children. We have 84.2h/14.1h/15.1h for training/development/testing set respectively. The mean and std of the speech is 6.6s \(\pm\) 3.6s. **Providence**[23] contains longitudinal audio and video recordings of six English-speaking children (3 boys: Alex, Ethan, and William, and 3 girls: Lily, Naima, and Violet) from ages 1 to 4 interacting with their mothers at home. Annotators transcribed children's speech using SAMPA phonetic symbols. We manually filter out some of the highly noisy recordings and use transcribed child audio for fine-tuning W2V2 ASR. In total, we train on 84.0h long utterances of four children (Ethan, William, Lily, and Naima) and test on 24.8h utterances of the other two (Alex and Violet). We also examine ASR performance on utterances \(<2s\) on the test set (\(\sim\)4.1h). The mean and std of the utterances is 3.3s \(\pm\) 2.1s. ## 3 Methodology ### Baseline W2V2 systems for child-adult SD and VC Detailed model architecture and training procedures of W2V2 are described in [13]. Briefly, W2V2 encodes raw audio into latent embeddings and learns to predict masked embeddings of quantized speech units from contextual embeddings during pretraining. We use _W2V2-base_ (hidden feature size 768 with 12 transformer (TF) layers) in our study. For the baseline W2V2-based model on RABC, we adapt the model from our previous work [24]. Specifically, input waveforms from adult and child audio channels are separately fed into the W2V2 \begin{table} \begin{tabular}{c|c c c c c c|c} \hline \hline \multirow{2}{*}{partition} & \multicolumn{3}{c|}{ADU m.} & \multicolumn{3}{c|}{CHI m.} & Total \\ & \# of ID & VOC & LAU & VOC & VERB & LAU & CRY & Time m. 
\\ \hline 1 & 28/ 15 & 38.6/20.3 & 2.4/1.3 & 4.4/1.4 & 2.3/1.5 & 1.4/.38 & 7.0/.53 & 93.9/48.3 \\ \hline 2 & 29/ 14 & 39.4/19.5 & 2.7/1.0 & 4.0/1.8 & 2.3/1.6 & 1.1/.73 & 1.7/5.9 & 93.9/48.4 \\ \hline 3 & 29/ 14 & 39.9/19.1 & 2.3/1.4 & 3.1/2.6 & 3.1/.77 & 1.1/.68 & 6.5/1.1 & 96.7/45.6 \\ \hline \hline \end{tabular} \end{table} Table 1: Number of participants (“# of ID”) and duration of audio (“m.”=minutes) in each of six categories in each fold of cross-validation. Symbol “/” separates training and testing durations. “Total time” counts the length of entire recordings, including silence and non-speech events. \begin{table} \begin{tabular}{c|c c c c c|c} \hline \hline partition & Non-CAN & CAN & LAU & CRY & JUNK & Total \\ \hline training & 1324 & 410 & 42 & 223 & 1651 & 3650 \\ development & 1521 & 352 & 37 & 151 & 1242 & 3303 \\ testing & 1247 & 545 & 55 & 235 & 1252 & 3334 \\ \hline \hline \end{tabular} \end{table} Table 2: Number of samples per class of BabbleCor. model to extract hidden features from 12 TF layers. Mean pooling (MP) across the duration of the utterance is followed by a weighted average (WA) across layers. Two feed-forward networks (FFN) are used as output tiers for classifying adult and child audio as silence, or one of their VC types. Cross-entropy losses of two output tiers are averaged as the overall loss. We test individual vs. joint learning of the two audio channels, fine-tuning either _W2V2-LL4300h_ (pretrained on 4300h large-scale daylong home recordings of children under 5 years old [24]) or _W2V2-base_ (pretrained on 52k-hour of unlabeled adult audio [13]). Figure 1 shows baseline joint learning using _W2V2-LL4300h_ plus the combination module described in Section 3.4. ### Energy thresholding on two audio channels for SD For RABC, we also test two energy thresholding (ET) baselines for SD. Each baseline finds thresholds of the child and adult microphones using training set of each fold, labels vocalization when the energy spectrum is above threshold at frames of 0.1s, and smooths the results with 11-frame median filter. **Unsupervised ET**: For each audio stream, we test multiple ETs and find the one that best matches the Pydub voice activity detector [25]. **Weak supervised ET**: Given labels, for each audio stream, we find an ET where foreground speaker is always louder than the background speaker. ### Learning children's phonetics using W2V2 To train ASR for children under 4 years old, we use two-level fine-tuning to gradually reduce age mismatch from adult ASR to child ASR. _W2V2-MyST_: We build a CTC-based [26] ASR by integrating one linear layer of hidden dimension 384 followed by Leaky Relu activation and softmax on top of _W2V2-Libri960h_ (_W2V2-base_ fine-tuned on 960h LibriSpeech of adult speech). We fine-tune _W2V2-Libri960h_ using MyST data to minimize character-level CTC loss for 40 epochs. For inference, we use CTC greedy decoding without language model, obtaining 11.2% word error rate on the test set. _W2V2-Pro_: We follow similar training and inference setup to fine-tune _W2V2-MyST_ using Providence for 5 epochs. We test two types of learning outputs: sequences of phones, and sequences of binary consonant/vowel indicators (CV). CV is a coarse quantization of the phone sequence, and we hypothesize that CV may provide a useful training signal even when clips are too short for the ASR to learn to generate phone sequences. 
We change the number of output nodes to three (CTC <blank>, beginning-of-sentence, and ending-of-sentence) plus the number of output targets (47 phones or 2 CV), then re-initialize the weights of the top linear layer. We denote _W2V2-Pro (Pho/CV)_ as W2V2 trained on phonetic/CV sequences respectively for the rest of the paper. For learning phone sequences, we obtain phone error rate of 61.3% on the test set and 56.0% for utterances \(<2s\). For learning CV sequence, we obtain unit error rate of 28.1% on the test set and 22.3% for utterances \(<2s\). Note our goal of training children's ASR is not getting highly accurate transcripts but encouraging ASR to learn children's phonetics. ### Auxiliary W2V2 children's ASR features To leverage background speech information in RABC, we introduce a combination (comb) module to fuse utterance-level W2V2 features from both audio channels via summation or concatenation. Figure 1 presents the detailed model architecture. For summation (comb \(C1\)), we set \(\alpha_{1}=0.8\) and \(\beta_{1}=0.2\) for ADU tier, and \(\alpha_{1}=0.2\) and \(\beta_{1}=0.8\) for CHI tier. Therefore, for each tier, features of foreground speaker have larger weight than background speaker. We also explore combining _W2V2-Children ASR (W2V2-CASR)_ features with _W2V2-LL4300h_ features in the child channel (comb \(C3\) and \(C4\)). During fine-tuning of _W2V2-LL4300h_, we freeze _W2V2-CASR_ and only extract the last TF layer features as auxiliary input. The last TF layer may encode richer phonetics information as it's closest to the output layer. For both summation modules (comb \(C1\) and \(C3\)), we ensure the weights of both input features sum up to 1. ### Experimental Setup All waveforms are downsampled to 16 kHz. For RABC, each fine-tuning experiment is trained 10 epochs with batch size 32 on single NVIDIA 1080 Ti for about 10 hours. Adam optimizer sets learning rates (LR) of 1e-4 for the output linear layer and 1e-5 for the W2V2; scheduler with the new-bob technique adjusts LR based on the test set performance (average of unweighted F1-scores over two channels) after each epoch. Best performance of the test set is used for each fold. SD outputs are computed by merging non-silent VC outputs then applying an 11-frame median filter. SD task is evaluated using diarization error rate (DER). We follow a common standard with 0.25s collar forgiveness over reference segments. Figure 1: (a): W2V2 model architecture combining adult audio & child audio with/without auxiliary W2V2-Children’s ASR (_W2V2-CASR_) features. MP=mean pooling, WA=weighted average, and FFN=feed-forward network. ADU and CHI denote adult and child VC tiers respectively. (b): Illustration of four combination modules. Symbol “\(+\)” means summation and “\(\bigoplus\)” means concatenation. For combination \(C1\) and \(C3\), \(\alpha_{i}+\beta_{i}=1\) for \(i\in\{1,3\}\). (c): Explanation of feature dimension letters. We diarize from scratch without knowing the reference voiced segments. Overlapped speech segments are included for evaluating DER. VC task is evaluated using unweighted F1-scores for each speaker over all classes. We use similar setup on BabbleCor by ignoring the adult audio channel. We tune LR as 3e-5/1e-5 for the output linear layer/W2V2 respectively. Optimal results for summation (comb \(C3\)) are obtained using \(\alpha_{3}=0.8\) and \(\beta_{3}=0.2\). 
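To make the feature fusion above concrete, the following is a minimal PyTorch-style sketch of the utterance-level head: a learned weighted average (WA) over the 12 transformer layers, mean pooling (MP) over time, a comb-\(C1\)-style weighted sum of the two channels with the foreground speaker dominating each tier, an optional comb-\(C3\)-style fusion of auxiliary W2V2-CASR features on the child branch, and one FFN output tier per speaker (three classes for the adult, five for the child, counting silence). The hidden sizes, FFN depth, softmax normalization of the layer weights, and the ordering of the two weighted sums are illustrative assumptions rather than details of the authors' released implementation.

```python
import torch
import torch.nn as nn

class TwoChannelVCHead(nn.Module):
    """Illustrative head: WA over W2V2 layers, MP over time, channel fusion
    (comb C1), optional W2V2-CASR fusion on the child branch (comb C3), and
    one feed-forward output tier per speaker."""

    def __init__(self, dim=768, n_layers=12, n_adu=3, n_chi=5, fg=0.8, aux=0.2):
        super().__init__()
        self.layer_w = nn.Parameter(torch.zeros(n_layers))   # learned WA weights
        self.fg, self.aux = fg, aux
        self.adu_tier = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, n_adu))
        self.chi_tier = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, n_chi))

    def pool(self, h):
        # h: (n_layers, batch, time, dim) -> (batch, dim)
        w = torch.softmax(self.layer_w, dim=0)
        return (w[:, None, None, None] * h).sum(dim=0).mean(dim=1)

    def forward(self, h_adu, h_chi, casr_chi=None):
        adu, chi = self.pool(h_adu), self.pool(h_chi)
        if casr_chi is not None:
            # comb C3: convex combination of main and auxiliary child features
            chi = (1.0 - self.aux) * chi + self.aux * casr_chi
        # comb C1: each tier weights its own (foreground) channel by fg = 0.8
        adu_in = self.fg * adu + (1.0 - self.fg) * chi
        chi_in = self.fg * chi + (1.0 - self.fg) * adu
        return self.adu_tier(adu_in), self.chi_tier(chi_in)
```

A concatenation variant (comb \(C2\)/\(C4\)) would instead stack the two pooled vectors and widen the first linear layer of each tier accordingly.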
We evaluate children's VC task using unweighted average recall (UAR), the same metric used in BS challenge, and unweighted F1-scores over all classes. We implement all experiments using SpeechBrain [27]. Code and model weights are available. ## 4 Results ### Baseline models Table 3 summarizes the baseline results. Weak supervised ET improves over unsupervised ET by correcting errors related to noises and background speech (B\({}_{1}\) vs. B\({}_{2}\)). Jointly fine-tuning on _W2V2-LL4300h_ greatly improves the performance of W2V2-base (B\({}_{3}\) vs. B\({}_{4}\)) because _W2V2-base_ is pretrained on adult speech only and fails to capture children's phonetics. Separately learning two audio streams yields slight benefit for SD task but not for VC task (B\({}_{4}\) vs. B\({}_{5}\)). Thus, we use joint modeling on _W2V2-LL4300h_ for the rest of our experiments. ### Auxiliary W2V2-C ASR features Table 4 shows the relevant results of combining two audio channels. We observe combining both audio channels (E\({}_{1}\) and E\({}_{2}\)) improves DER over all baselines (B\({}_{1}-\)B\({}_{6}\)). We obtain optimal DER using concatenation module (E\({}_{2}\)). By introducing _W2V2-Pro (Pho)_ features trained on phonetic sequences, E\({}_{3}\) system consistently improves children's VC over 3 folds (not present in Table 4 for brevity) while maintaining a comparable DER. This illustrates the advantages of using children's phonetics to improve children's VC task. We find _W2V2-Pro (CV)_ is not as helpful as _W2V2-Pro (Pho)_, so we only report results for _W2V2-Pro (Pho)_ in Table 4. We perform ablation studies based on comb \(C3\) by replacing _W2V2-Pro (Pho)_ with _W2V2-MyST_ (E\({}_{5}\)), reducing (E\({}_{6}\)) or increasing (E\({}_{7}\)) the weights of _W2V2-Pro (Pho)_ features. Children in the RABC test corpus are similar in age to the 6 Providence children than the 1371 MyST children, which may explain superior performance of _W2V2-Pro_ over _W2V2-MyST_. We also find increasing the weights of _W2V2-Pro (Pho)_ features leads to optimal CHI VC but slightly hurts ADU VC. Perhaps over-weighting children's phonetic features may add variability to adult speech embeddings and thus hurt ADU performance. ### BabbleCor Table 5 shows the results for BabbleCor. Fine-tuning _W2V2-LL4300h_ yields a competitive baseline (BC\({}_{0}\)), compared with previous works. Interpolation with _W2V2-Pro (Pho)_ features gives the best UAR on the dev set (BC\({}_{2}\)), while concatenation with _W2V2-Pro (CV)_ gives the best results on the test set (BC\({}_{3}\)). Possibly, BabbleCor's clips may be too short for _W2V2-Pro (Pho)_ to extract a useful phonetic signal, but since _W2V2-Pro (CV)_ is trained using a simpler binary target sequence, it is able to extract useful phonetic information even from these short clips. Though the distributed BabbleCor does not contain all data from the BS challenge, results suggest that our proposed method achieves or surpasses state-of-the-art performance with almost identical data partition. ## 5 Conclusion & Future Work The use of W2V2 features, pre-trained using 4300h of home recordings, improves child-adult SD and VC on two-channel audio. With children's ASR embeddings as auxiliary features, we validate our proposed method on two corpora with different lengths of child utterances. One limitation of our study is that, for privacy reasons, the RABC corpus does not specify each child's diagnosis (ASD vs. 
non-ASD), so it is not possible to measure the quality of results across diagnostic categories. In the future, we aim to extend our approach to ASD children's vocalizations when relevant data are available. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Exp ID & comb ID & W2V2-CASR type & DER & ADU-F1 & CHI-F1 \\ \hline E\({}_{1}\) & \(C1\) & - & 18.7 \(\pm\) 4.883.5 \(\pm\) 2.155.1 \(\pm\) 1.3 \\ E\({}_{2}\) & \(C2\) & & **17.3 \(\pm\) 5.083.5 \(\pm\) 1.5**65.7 \(\pm\) 0.6 \\ \hline E\({}_{3}\) & \(C3\) & W2V2-Pro (\(\beta_{3}\)=0.5) & 18.1 \(\pm\) 5.083.4 \(\pm\) 1.158.2 \(\pm\) 1.2 \\ E\({}_{4}\) & \(C4\) & W2V2-Pro & 18.1 \(\pm\) 5.2 \(\pm\) 1.1 57.3 \(\pm\) 1.1 \\ \hline E\({}_{6}\) & \(C3\) & W2V2-MyST (\(\beta_{3}\)=0.5) & 17.8 \(\pm\) 5.083.1 \(\pm\) 1.1 57.6 \(\pm\) 0.1 \\ E\({}_{6}\) & \(C3\) & W2V2-Pro (\(\beta_{3}\)=0.2) & 17.8 \(\pm\) 3.7 \(\pm\) 1.3 \(\pm\) 1.3 \(\pm\) 1.3 \(\pm\) 1.3 \(\pm\) 1.3 \\ E\({}_{7}\) & \(C3\) & W2V2-Pro (\(\beta_{3}\)=0.8) & 18.3 \(\pm\) 4.6 \(\pm\) 2.5 \(\pm\) 2.2 \(\pm\) 25.8 \(\pm\) 4.0 \\ \hline \end{tabular} \end{table} Table 4: Mean and Std of DER and F1 scores of ADU and CHI VC tasks on RABC corpus trained on different combinations. W2V2-Pro is trained on phonetic sequences. \begin{table} \begin{tabular}{c|c|c|c|c} \hline Exp ID & Method & DER & ADU-F1 & CHI-F1 \\ \hline B\({}_{1}\) & ET (unsupervised) & 64.4 \(\pm\) 6.3 & - & - \\ B\({}_{2}\) & ET (weak supervised) & 41.4 \(\pm\) 4.0 & - & - \\ \hline B\({}_{3}\) & joint (W2V2-base) & 26.7 \(\pm\) 15.1 & 81.7 \(\pm\) 0.9 & 46.8 \(\pm\) 3.5 \\ B\({}_{4}\) & joint (W2V2-LL4300h) & 19.3 \(\pm\) 5.1 & **83.3 \(\pm\) 0.2** & **55.7 \(\pm\) 1.7** \\ \hline B\({}_{6}\) & separate W2V2 & **19.2 \(\pm\) 4.3** & 82.6 \(\pm\) 0.1 & 54.4 \(\pm\) 1.7 \\ \hline \end{tabular} \end{table} Table 3: Mean and Std of DER and F1 scores of ADU and CHI VC tasks, in percent, over 3-fold cross-validation on RABC corpus trained on baseline models. ET=energy thresholding. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Exp ID & Method & Dev-UAR & Dev-F1 & Test-UAR & Test-F1 \\ \hline & \begin{tabular}{c} ComPar2019 baseline [21] \\ Gosztolya [28] \\ Heysem Kaya et al. [29] \\ Sung-Lin Yeh et al. [30] \\ \end{tabular} & 54.0 & - & 58.7 & - \\ & \begin{tabular}{c} Gosztolya [28] \\ Heysem Kaya et al. [29] \\ Sung-Lin Yeh et al. [30] \\ \end{tabular} & 58.7 & - & 59.5 & - \\ & \begin{tabular}{c} Heysem Kaya et al. [29] \\ Sung-Lin Yeh et al. [30] \\ \end{tabular} & 60.1 & - & 61.4 & - \\ \hline \hline BC\({}_{0}\) & Fine-tuning W2V2-LL4300h & 67.6 & 64.1 & 62.9 & 64.7 \\ \hline BC\({}_{1}\) & \(\bigoplus\) W2V2-Pro (Pho) & 66.7 & 64.3 & 63.2 & 65.0 \\ BC\({}_{2}\) & \(+\) W2V2-Pro (Pho) (\(\beta_{3}\)=0.2) & **70.4** & 64.1 & 62.2 & 64.5 \\ BC\({}_{3}\) & \(\bigoplus\) W2V2-Pro (CV) & 69.2 & 64.3 & **64.9** & **66.3** \\ BC\({}_{4}\) & \(+\) W2V2-Pro (CV) (\(\beta_{3}\)=0.2) & 69.2 & **65.1** & 62.6 & 65.1 \\ \hline \end{tabular} \end{table} Table 5: UAR and F1 scores for development and testing set of BabbleCor in past studies (top 4 rows) and our studies (BC\({}_{0}\)-BC\({}_{4}\)). Our tests lack 10% of the data from BS challenge.
2306.17611
Projection-based first-order constrained optimization solver for robotics
Robot programming tools ranging from inverse kinematics (IK) to model predictive control (MPC) are most often described as constrained optimization problems. Even though there are currently many commercially-available second-order solvers, the robotics literature has recently focused on efficient implementations and improvements over these solvers for real-time robotic applications. However, most often, these implementations remain problem-specific, are not easy to access or implement, or do not exploit the geometric aspect of the robotics problems. In this work, we propose to solve these problems using a fast, easy-to-implement first-order method that fully exploits the geometric constraints via Euclidean projections, called Augmented Lagrangian Spectral Projected Gradient Descent (ALSPG). We show that (1) using projections instead of full constraints and gradients improves the performance of the solver, and (2) ALSPG remains competitive with standard second-order methods such as iLQR in the unconstrained case. We showcase these results with IK and motion planning problems on simulated examples and with an MPC problem on a 7-axis manipulator experiment.
Hakan Girgin, Tobias Löw, Teng Xue, Sylvain Calinon
2023-06-30T12:36:57Z
http://arxiv.org/abs/2306.17611v1
# Projection-based first-order constrained optimization solver for robotics ###### Abstract Robot programming tools ranging from inverse kinematics (IK) to model predictive control (MPC) are most often described as constrained optimization problems. Even though there are currently many commercially-available second-order solvers, robotics literature recently focused on efficient implementations and improvements over these solvers for real-time robotic applications. However, most often, these implementations stay problem-specific and are not easy to access or implement, or do not exploit the geometric aspect of the robotics problems. In this work, we propose to solve these problems using a fast, easy-to-implement first-order method that fully exploits the geometric constraints via Euclidean projections, called Augmented Lagrangian Spectral Projected Gradient Descent (ALSPG). We show that 1. using projections instead of full constraints and gradients improves the performance of the solver and 2. ALSPG stays competitive to the standard second-order methods such as ILQR in the unconstrained case. We showcase these results with IK and motion planning problems on simulated examples and with an MPC problem on a 7-axis manipulator experiment. ## I Introduction Many tasks in robotics can be framed as constrained optimization problems. The inverse kinematics (IK) problem finds a configuration of the robot that corresponds to a desired pose in the task space while satisfying constraints such as joint limits or center-of-mass stability. Motion planning and optimal control determine a trajectory of configurations and/or control commands achieving the task subject to the dynamics and the constraints of the task and the environment over a certain time horizon. Model predictive control (MPC) recasts the optimal control problems with shorter horizons to solve simpler constrained optimization problems in real-time. In this work, we present a projection-based first-order optimization method that can be implemented and used for all these aforementioned problems. There are many commercially available second-order solvers to address general constrained optimization problems such as SNOPT [1], SLSQP [2], LANCELOT [3] and IPOPT [4]. In robotics, the literature on optimization mainly focuses on developing solvers that effectively solve each robotic problem separately. For example, in the motion planning literature, one can find many constrained variants of differential dynamic programming (DDP) [5] or iterative linear quadratic regulator (iLQR) [6], TrajOpt [7], CHOMP [8] and ALTRO [9]. Furthermore, some of these solvers are not open-source and difficult to implement, which hinders benchmarking and potential improvements. As powerful as these solvers are, their applications for finding real-time feedback mechanisms such as closed-loop inverse kinematics and MPC requires tuning and adaptations of the solver. In this chapter, we address these challenges by proposing a very simple, yet powerful solver that can be easily implemented without having large memory requirements. The constraints in many of these problems are described as geometric set primitives or their combinations (see Table I). Examples include joint angle or velocity limits or center-of-mass stability as bounded domain sets, avoiding/reaching geometric shapes such as spheres and convex polytopes as hyperplane and quadric sets, friction cone constraints as second-order cone sets. 
These constraints have in common that they can be formulated as projections rather than constraints. We argue that exploiting the projection capability of these sets instead of treating them as generic constraints in the solvers can significantly improve the performance. Projected gradient descent is the simplest algorithm that takes into account these projections. Its idea is to project the gradient to have a next iterate inside the constraint set. In the optimization literature, a first-order projection-based solver called spectral projected gradient descent (SPG) has emerged as an alternative [10]. SPG has been studied and applied to many fields because of its great practical performance even compared to second-order constrained optimization solvers [11]. Its extension to additional arbitrary constraints has been proposed as within augmented Lagrangian methods [12, 13, 14]. However, as the application of this idea to popular second-order methods in robotics is not trivial [15], the Fig. 1: Projection view of inverse kinematics problem. (a) Reaching a point (standard IK problem): \(\mathcal{C}_{\mathbf{p}}=\{\mathbf{p}\mid\mathbf{p}=\mathbf{p}_{d}\}\). (b) Reaching under/above/on a plane (in the halfspace): \(\mathcal{C}_{\mathbf{p}}=\{\mathbf{p}\mid\mathbf{a}^{\top}\mathbf{p}\mathbf{\leq}\mathbf{b}\}\). (c) Reaching inside/outside/on a circle: \(\mathcal{C}_{\mathbf{p}}=\{\mathbf{p}\mid\|\mathbf{p}-\mathbf{p}_{d}\|\leq x^{2}\}\). (d) Reaching inside/outside/on a rectangle: \(\mathcal{C}_{\mathbf{p}}=\{\mathbf{p}\mid\|\mathbf{A}(\mathbf{p}-\mathbf{p}_{d})\|_{\infty}{\leq} L/2\}\). These problems can be tested online with closed-loop controllers1, created as extensions of the _Robotine Codes from Scratch_ toolbox2. usefulness of projections in the field has been overlooked. In this letter, we integrate the recent work of [14] into robotics optimization problems ranging from IK to MPC by providing the most common Euclidean projections with an additional rectangular projection. We propose an extension with multiple projections and additional nonlinear constraints. In particular, we provide an efficient direct-shooting optimal control formulation of this solver to address motion planning and MPC problems. ## II Related work Euclidean projections and the analytical expressions to many projections can be found in [16]. It also gives a general theory on how to project onto a level set of an arbitrary function using KKT conditions. Extensive studies and theoretical background on the projections and their properties can be found in [17]. In [18], Usmanova _et al._ propose an efficient algorithm for the projection onto arbitrary convex constraint sets and show that exploiting projections in the optimization significantly increases the performance. In [19], Bauschke and Koch discuss and benchmark algorithms for finding the projection onto the intersection of convex sets. One of the main algorithms for this is Dykstra's alternating projection algorithm [20]. The simplest algorithm that exploits projections is the projected gradient descent. Spectral projected gradient descent improves over this by exploiting the curvature information via its spectral stepsizes. A detailed review on spectral projected gradient methods is given in [13]. In [21], Torrisi _et al._ propose to use a projected gradient descent algorithm to solve the subproblems of sequential quadratic programming (SQP). They show that their method can solve MPC of an inverted pendulum faster than SNOPT. 
Our work is closest to theirs, with the differences that we use SPG instead of a vanilla projected gradient descent and that we solve the subproblems of an augmented Lagrangian method instead of SQP. Moreover, we propose a direct way of handling multiple projections and inequality constraints, which is not trivial in [21]. In [22], Giftthaler and Buchli propose a projection of the update direction of the control input onto the nullspace of the linearized constraints in iLQR. This approach can only handle simple equality constraints (for example, velocity-level constraints of second-order systems) and cannot treat position-level constraints for such systems, which is a very common and practical class of constraints in real-world applications. ## III Background In this section, we motivate the use of projections in standard robotic tasks such as hierarchical inverse kinematics and obstacle avoidance. ### _Euclidean projections onto sets_ The solution \(\mathbf{x}^{*}\) to the following constrained optimization problem \[\min_{\mathbf{x}}\lVert\mathbf{x}-\mathbf{x}_{0}\rVert_{2}^{2}\quad\text{s.t.}\quad\mathbf{x }\in\mathcal{C} \tag{1}\] is called a Euclidean projection of the point \(\mathbf{x}_{0}\) onto the set \(\mathcal{C}\) and is denoted as \(\mathbf{x}^{*}=\Pi_{C}(\mathbf{x}_{0})\). This operation determines the point \(\mathbf{x}\in\mathcal{C}\) that is closest to \(\mathbf{x}_{0}\) in the Euclidean sense. For many sets \(\mathcal{C}\), \(\Pi_{C}(\cdot)\) admits analytical expressions that are given in Table I. Even though these sets are usually convex (e.g., bounded domains), some nonconvex sets also admit analytical solutions that are easy to compute (e.g., being outside of a sphere). Note that many of these sets are frequently used in robotics, from joint/torque limits and avoiding spherical/square obstacles to satisfying virtual fixtures defined in the task space of the robot. ### _Projection view of inverse kinematics_ The inverse kinematics (IK) problem in robotics consists of finding a joint configuration \(\mathbf{q}^{*}\) of the robot that corresponds to a given desired end-effector pose \(\mathbf{p}_{d}\). Iterative procedures have been developed to robustly solve this problem while handling singularities at the Jacobian level. The success and the convergence speed of these algorithms depend on the initialization of the problem, which is often selected as the current joint configuration of the robot \(\mathbf{q}_{0}\). With this view in mind, we can express IK as a projection problem of the initial joint angles \(\mathbf{q}_{0}\) onto a set \(\mathcal{C}_{\mathbf{q}}\) and of the initial end-effector position \(\mathbf{p}_{0}\) onto a set \(\mathcal{C}_{\mathbf{p}}\). These two sets are assumed to be nonempty and closed sets that admit tractable and efficient projections. A common example for \(\mathcal{C}_{\mathbf{q}}\) is the box constraint given by the joint limits. Fig. 1 shows examples for the set \(\mathcal{C}_{\mathbf{p}}\): Fig. 1(a) shows an equality constraint to a desired point, Fig. 1(b) shows an affine hyperplane constraint for virtually limiting the robot to stay under/on a plane, and Figs. 1(c) and 1(d) show quadric constraints for the end-effector to stay inside/outside or on the boundary of a circle/square. In this work, we exploit these easy projections in a first-order optimization solver with the claim of finding solutions faster than standard constrained optimization solvers. 
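
To make the projection view concrete, a few of the closed-form projections referred to above can be implemented in a handful of lines (Table I itself is not reproduced here; these are standard formulas, and the function names below are our own choices):

```python
import numpy as np

def proj_box(x, lower, upper):
    """Projection onto a box (bounded domain): coordinate-wise clipping."""
    return np.clip(x, lower, upper)

def proj_halfspace(x, a, b):
    """Projection onto the halfspace {x | a^T x <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def proj_ball(x, center, radius):
    """Projection onto the (convex) ball {x | ||x - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def proj_outside_ball(x, center, radius):
    """Projection onto the (nonconvex) exterior {x | ||x - center|| >= radius},
    e.g. for avoiding a spherical obstacle (assumes x != center)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n >= radius else center + radius * d / n

# Example: push an end-effector target out of a spherical obstacle of radius 0.2
p = proj_outside_ball(np.array([0.55, 0.1, 0.4]), np.array([0.5, 0.0, 0.4]), 0.2)
```
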
## IV Augmented Lagrangian Spectral Projected Gradient Descent for Robotics This section gives the spectral projected gradient descent (SPG) algorithm along with the nonmonotone line search procedure. These algorithms are easy to implement without big memory requirements and yet result in powerful solvers. Next, we give the augmented Lagrangian spectral projected gradient descent (ALSPG) algorithm with extensions to general inequality constraints and multiple projections. ### _Spectral projected gradient descent_ Spectral projected gradient descent (SPG) is an improved version of a vanilla projected gradient descent using spectral stepsizes. Its excellent numerical results even in comparison to second-order methods have been a point of attraction in the optimization literature [11]. SPG tackles constrained optimization problems in the form of \[\min_{\mathbf{x}}f(\mathbf{x})\quad\text{s.t.}\quad\mathbf{x}\in\mathcal{C}, \tag{2}\] by constructing a local quadratic model of the objective function \[f(\mathbf{x}) \approx f(\mathbf{x}_{k})+\nabla f(\mathbf{x}_{k})^{\top}(\mathbf{x}-\mathbf{x}_{k})+ \frac{1}{2\gamma_{k}}\|\mathbf{x}-\mathbf{x}_{k}\|_{2}^{2},\] \[=\frac{1}{2\gamma_{k}}\|\mathbf{x}-(\mathbf{x}_{k}-\gamma_{k}\nabla f(\mathbf{ x}_{k}))\|_{2}^{2}+\text{const.},\] and by minimizing it subject to the constraints as \[\min_{\mathbf{x}}\frac{1}{2\gamma_{k}}\|\mathbf{x}-(\mathbf{x}_{k}-\gamma_{k}\nabla f(\mathbf{ x}_{k}))\|_{2}^{2}\quad\text{s.t.}\quad\mathbf{x}\in\mathcal{C}, \tag{3}\] whose solution is an Euclidean projection as described in Section III-A and given by \(\Pi_{\mathcal{C}}(\mathbf{x}_{k}-\gamma_{k}\nabla f(\mathbf{x}_{k}))\). The local search direction \(\mathbf{d}_{k}\) for SPG is then given by \[\mathbf{d}_{k}=\Pi_{\mathcal{C}}(\mathbf{x}_{k}-\gamma_{k}\nabla f(\mathbf{x}_{k}))-\mathbf{x }_{k}, \tag{4}\] which is used in a nonmonotone line search (Algorithm 1) with \(\mathbf{x}_{k+1}=\mathbf{x}_{k}+\alpha_{k}\mathbf{d}_{k}\), to find \(\alpha_{k}\) satisfying \(f(\mathbf{x}_{k+1})\leq f_{\text{max}}+\alpha_{k}\gamma_{k}\nabla f(\mathbf{x}_{k})^{ \top}\mathbf{d}_{k}\), where \(f_{\text{max}}=\max\{f(\mathbf{x}_{k-j})\,|\,0\leq j\leq\min\{k,M-1\}\}\). Nonmonotone line search allows for increasing objective values for some iterations \(M\) preventing getting stuck at bad local minima. The choice of \(\gamma_{k}\) affects the convergence properties significantly since it introduces curvature information to the solver. Note that when choosing \(\gamma_{k}=1\), SPG is equivalent to the widely known projected gradient descent. SPG uses spectral stepsizes obtained by a least-square approximation of the Hessian matrix by \(\gamma_{k}\mathbf{I}\). These spectral stepsizes are computed by proposals \[\gamma_{k}^{(1)}=\frac{\mathbf{s}_{k}^{\top}\mathbf{s}_{k}}{\mathbf{s}_{k}^{\top}\mathbf{y}_{k }}\quad\text{and}\quad\gamma_{k}^{(2)}=\frac{\mathbf{s}_{k}^{\top}\mathbf{y}_{k}}{\bm {y}_{k}^{\top}\mathbf{y}_{k}}, \tag{5}\] where \(\mathbf{s}_{k}=\mathbf{x}_{k}-\mathbf{x}_{k-1}\) and \(\mathbf{y}_{k}=\nabla f(\mathbf{x}_{k})-\nabla f(\mathbf{x}_{k-1})\)[11]. In the case of quadratic objective function in the form of \(\mathbf{x}^{\top}\mathbf{Q}\mathbf{x}\), these two values correspond to the maximum and minimum eigenvalues of the matrix \(\mathbf{Q}\). Recent developments in SPG have shown that an alternating use of these spectral stepsizes lead to better performance. 
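
A compact sketch of the resulting iteration, combining the projected direction of Eq. (4), the nonmonotone line search and a BB-type spectral stepsize from Eq. (5), is given below (the stepsize initialization and the exact alternation rule used in Algorithm 2 follow next). The sufficient-decrease constant and the stepsize safeguards are common choices of ours, not prescriptions from the paper:

```python
import numpy as np

def spg(f, grad, proj, x0, max_iter=500, tol=1e-5, M=10, sigma=1e-4):
    """Minimal spectral projected gradient sketch.

    f, grad : objective and its gradient
    proj    : Euclidean projection onto the feasible set C
    M       : memory of the nonmonotone line search
    sigma   : sufficient-decrease constant (a common choice)
    """
    x = proj(np.asarray(x0, dtype=float))
    gx, gamma = grad(x), 1.0
    hist = [f(x)]                                        # recent objective values
    for _ in range(max_iter):
        if np.max(np.abs(proj(x - gx) - x)) <= tol:      # projected-gradient stationarity
            break
        d = proj(x - gamma * gx) - x                     # Eq. (4): search direction
        alpha = 1.0
        while f(x + alpha * d) > max(hist[-M:]) + sigma * alpha * (gx @ d):
            alpha *= 0.5                                 # nonmonotone backtracking
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - gx
        sy = s @ y
        gamma = np.clip(s @ s / sy, 1e-10, 1e10) if sy > 1e-12 else 1.0  # BB1 stepsize, Eq. (5)
        x, gx = x_new, g_new
        hist.append(f(x))
    return x

# Example: minimize ||x - x_t||^2 subject to x in the box [-1, 1]^2
x_t = np.array([2.0, -3.0])
x_star = spg(lambda x: np.sum((x - x_t) ** 2), lambda x: 2 * (x - x_t),
             lambda x: np.clip(x, -1.0, 1.0), np.zeros(2))
```
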
The initial spectral stepsize can be computed by setting \(\bar{\mathbf{x}}_{0}=\mathbf{x}_{0}-\gamma_{\text{small}}\nabla f(\mathbf{x}_{0})\), and computing \(\bar{\mathbf{s}}_{0}=\bar{\mathbf{x}}_{0}-\mathbf{x}_{0}\) and \(\bar{\mathbf{y}}_{0}=\nabla f(\bar{\mathbf{x}}_{0})-\nabla f(\mathbf{x}_{0})\). Note that this heuristic operation costs one more gradient computation. The final algorithm is given by Algorithm 2.
```
1  Initialize \(\mathbf{x}_{0}\), \(\gamma_{0}\), \(\epsilon=10^{-5}\), \(k=0\)
2  while \(\|\Pi_{\mathcal{C}}(\mathbf{x}_{k}-\nabla f(\mathbf{x}_{k}))-\mathbf{x}_{k}\|_{\infty}>\epsilon\) do
3    Find a search direction by \(\mathbf{d}_{k}=\Pi_{\mathcal{C}}(\mathbf{x}_{k}-\gamma_{k}\nabla f(\mathbf{x}_{k}))-\mathbf{x}_{k}\)
4    Do non-monotone line search using Algorithm 1 to find \(\mathbf{x}_{k+1}\)
5    Update the spectral stepsize:
6      \(\mathbf{s}_{k+1}=\mathbf{x}_{k+1}-\mathbf{x}_{k}\) and \(\mathbf{y}_{k+1}=\nabla f(\mathbf{x}_{k+1})-\nabla f(\mathbf{x}_{k})\)
7      \(\gamma^{(1)}=\frac{\mathbf{s}_{k+1}^{\top}\mathbf{s}_{k+1}}{\mathbf{s}_{k+1}^{\top}\mathbf{y}_{k+1}}\) and \(\gamma^{(2)}=\frac{\mathbf{s}_{k+1}^{\top}\mathbf{y}_{k+1}}{\mathbf{y}_{k+1}^{\top}\mathbf{y}_{k+1}}\)
8    if \(\gamma^{(1)}<2\gamma^{(2)}\) then
9      \(\gamma_{k+1}=\gamma^{(2)}\)
10   else
11     \(\gamma_{k+1}=\gamma^{(1)}-\frac{1}{2}\gamma^{(2)}\)
12   end if
13   \(k=k+1\)
14 end while
```
**Algorithm 2** Spectral projected gradient descent (SPG) ### _Augmented Lagrangian spectral projected gradient descent (ALSPG)_ The SPG algorithm has been shown to be a powerful competitor to second-order solvers in many ways. Each iteration can be significantly cheaper than that of a second-order method if a computationally efficient projection is used, and it provides better directions than other first-order methods. However, SPG alone is usually not sufficient to solve problems in robotics with complicated nonlinear constraints. In [14], Jia _et al._ provide an augmented Lagrangian framework to solve problems with constraints \(\mathbf{g}(\mathbf{x})\in\mathcal{C}\) and \(\mathbf{x}\in\mathcal{D}\), where \(\mathbf{g}(\cdot)\) is a convex function, \(\mathcal{C}\) is a convex set, and \(\mathcal{D}\) is a closed nonempty set, both equipped with easy projections. In this section, we build on the work in [14] with the extension to multiple projections and additional general equality and inequality constraints. The general optimization problem that we are tackling here is \[\min_{\mathbf{x}\in\mathcal{D}}f(\mathbf{x})\quad\text{s.t.}\quad\mathbf{g}_{i}(\mathbf{x})\in \mathcal{C}_{i},\quad\forall i\in\{1,\dots,p\} \tag{6}\] where \(\mathbf{g}_{i}(\cdot)\) are assumed to be arbitrary nonlinear functions. Note that even though the convergence results in [14] apply to the case when these are convex functions and convex sets, we found in practice that the algorithm is powerful enough to extend to more general cases. For simplicity, we redefine the additional equality constraints as an additional set to be projected onto with \(\mathcal{C}_{y}=\{\mathbf{y}\mid\mathbf{y}=\mathbf{0}\}\) with \(\Pi_{\mathcal{C}_{y}}(\mathbf{h}(\cdot))=\mathbf{0}\). Also, we transform inequality constraints to equality constraints using the proposed method in the following Section IV-C. 
We use the following augmented Lagrangian function \[\mathcal{L}(\mathbf{x},\{\mathbf{\lambda}^{\mathcal{C}_{i}},\rho^{\mathcal{C}_{i}}\}_{i=1}^{p})=f(\mathbf{x})+\sum_{i=1}^{p}\frac{\rho^{\mathcal{C}_{i}}}{2}\left\|\mathbf{g}_{i}(\mathbf{x})+\frac{\mathbf{\lambda}^{\mathcal{C}_{i}}}{\rho^{\mathcal{C}_{i}}}-\Pi_{\mathcal{C}_{i}}\Big{(}\mathbf{g}_{i}(\mathbf{x})+\frac{\mathbf{\lambda}^{\mathcal{C}_{i}}}{\rho^{\mathcal{C}_{i}}}\Big{)}\right\|_{2}^{2},\] whose derivative wrt \(\mathbf{x}\) is given by \[\nabla\mathcal{L}(\mathbf{x},\{\mathbf{\lambda}^{\mathcal{C}_{i}},\rho^{\mathcal{C}_{i}}\}_{i=1}^{p})=\nabla f(\mathbf{x})+\sum_{i=1}^{p}\rho^{\mathcal{C}_{i}}\nabla\mathbf{g}_{i}^{\top}(\mathbf{x})\Big{(}\mathbf{g}_{i}(\mathbf{x})+\frac{\mathbf{\lambda}^{\mathcal{C}_{i}}}{\rho^{\mathcal{C}_{i}}}-\Pi_{\mathcal{C}_{i}}\Big{(}\mathbf{g}_{i}(\mathbf{x})+\frac{\mathbf{\lambda}^{\mathcal{C}_{i}}}{\rho^{\mathcal{C}_{i}}}\Big{)}\Big{)},\] using the property of the derivative of the squared distance to a convex set, \(\nabla\|\mathbf{g}(\mathbf{x})-\Pi(\mathbf{g}(\mathbf{x}))\|_{2}^{2}=2\nabla\mathbf{g}(\mathbf{x})^{\top}(\mathbf{g}(\mathbf{x})-\Pi(\mathbf{g}(\mathbf{x})))\), see [17] for details. This way, we obtain a formulation which does not need the gradient of the projection function \(\Pi_{\mathcal{C}_{i}}(\cdot)\). One iteration of ALSPG optimizes the subproblem \(\arg\min_{\mathbf{x}\in\mathcal{D}}\mathcal{L}(\mathbf{x},\{\mathbf{\lambda}^{\mathcal{C}_{i}},\rho^{\mathcal{C}_{i}}\}_{i=1}^{p})\) given \(\{\mathbf{\lambda}^{\mathcal{C}_{i}},\rho^{\mathcal{C}_{i}}\}_{i=1}^{p}\), and then updates these according to the next iterate. Defining the auxiliary function \(V(\mathbf{x},\mathbf{\lambda}^{\mathcal{C}_{i}},\rho^{\mathcal{C}_{i}})=\left\|\mathbf{g}_{i}(\mathbf{x})-\Pi_{\mathcal{C}_{i}}\Big{(}\mathbf{g}_{i}(\mathbf{x})+\frac{\mathbf{\lambda}^{\mathcal{C}_{i}}}{\rho^{\mathcal{C}_{i}}}\Big{)}\right\|\), the algorithm is summarized in Algorithm 3. Note that one can define and tune many heuristics around augmented Lagrangian methods, with possible extensions to primal-dual methods. Here, we give only one possible way of implementing ALSPG. ### _Handling of inequality constraints_ In robotics, one frequent and intuitive way of incorporating the constraints into the optimization problem is to use soft constraints and tune the weights until a satisfactory result is obtained. However, this approach breaks the hierarchy of the task without any real guarantee of constraint satisfaction. Soft constraints are obtained by transforming the hard constraint function into a positive cost function using auxiliary functions such as the barrier function. In this section, we propose to exploit such soft constraint functions as hard constraints to reduce each inequality constraint to an equality constraint, eliminating the need for slack variables. Note that this procedure is in line with the construction of a standard augmented Lagrangian for inequality constraints. Let \(g_{i}(\mathbf{x})\leq 0\) be the \(i^{\text{th}}\) inequality constraint with \(i{=}1,\ldots,M\) and \(g_{i}(\cdot):\mathbb{R}^{n}\rightarrow\mathbb{R}\). We define \(h_{i}(\mathbf{x})=\max(0,g_{i}(\mathbf{x}))\), where \(h_{i}(\cdot):\mathbb{R}^{n}\rightarrow\mathbb{R}^{+}\). Then, the statement \(g_{i}(\mathbf{x})\leq 0\) is equivalent to \(h_{i}(\mathbf{x})=0\). Moreover, we can generalize this statement to obtain one single equality constraint from any number of inequality constraints in order to increase the computational speed. 
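
Before formalizing this reduction (Theorem 1 below), we note that the gradient expression above translates directly into code, since only the constraint maps, their Jacobians and the projections are needed. The following is a minimal sketch with naming of our own choosing; it is not the paper's Algorithm 3, and the multiplier/penalty update shown is one common augmented Lagrangian choice:

```python
import numpy as np

def al_gradient(x, grad_f, constraints, lmbda, rho):
    """Gradient of the augmented Lagrangian sketched above.

    constraints : list of (g_i, J_i, proj_i) triples, where g_i(x) is the constraint
                  map, J_i(x) its Jacobian, and proj_i the projection onto C_i.
    """
    grad = grad_f(x)
    for (g, J, proj), lam, rh in zip(constraints, lmbda, rho):
        shifted = g(x) + lam / rh
        residual = shifted - proj(shifted)      # g_i(x) + lam/rho - Pi_{C_i}(g_i(x) + lam/rho)
        grad = grad + rh * J(x).T @ residual    # d/dx of (rho/2)*||residual||^2; no projection derivative needed
    return grad

def multiplier_update(x, constraints, lmbda, rho, growth=10.0):
    """One common update of the multipliers and penalties between subproblems."""
    for i, ((g, J, proj), lam, rh) in enumerate(zip(constraints, lmbda, rho)):
        shifted = g(x) + lam / rh
        lmbda[i] = rh * (shifted - proj(shifted))   # equals lam + rho*(g(x) - Pi(g(x) + lam/rho))
        rho[i] = rh * growth
    return lmbda, rho
```

Each outer iteration then amounts to running SPG (with the projection onto \(\mathcal{D}\)) on \(\mathcal{L}\) using this gradient, followed by the multiplier and penalty update.
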
This generalization is given by the theorem below. **Theorem 1**: _The statement \(g_{i}(\mathbf{x})\leq 0,\forall i{=}1,\ldots,M\) is equivalent to \(h(\mathbf{x})=\sum_{i=1}^{M}h_{i}(\mathbf{x})=0\), where \(h_{i}(\mathbf{x})=\max(0,g_{i}(\mathbf{x}))\)._ 1. _If_ \(g_{i}(\mathbf{x})\leq 0,\forall i{=}1,\ldots,M\)_, then it is by definition that_ \(\sum_{i=1}^{M}h_{i}(\mathbf{x})=0\)_._ 2. _Assume_ \(\sum_{i=1}^{M}h_{i}(\mathbf{x})=0\) _and_ \(\exists j\) _s.t._ \(g_{j}(\mathbf{x})>0,\ \forall j{=}1,\ldots,N<M\)_. Then,_ \(\sum_{i=1}^{M}h_{i}(\mathbf{x})=\sum_{j=1}^{N}h_{j}(\mathbf{x})=\sum_{j=1}^{N}g_{j}(\mathbf{ x})>0\)_, which contradicts the assumption._ Although it seems to simplify the problem in terms of dimensions, using Theorem 1 to compactly reduce all inequality constraints into one single constraint would result in the loss of some information about the gradients from each constraint in one iteration of any solver. In practice, this presents itself as a trade-off between the number of iterations and the computational complexity of each iteration to solve the optimization problem. ## V Optimal Control with ALSPG We consider the following generic constrained optimization problem \[\min_{\mathbf{x}\in\mathcal{C}_{\mathbf{x}},\mathbf{u}\in\mathcal{C}_{\mathbf{u}}}c(\mathbf{x}, \mathbf{u})\quad\text{s.t.}\quad\begin{array}{l}\mathbf{x}=\mathbf{F}(\mathbf{x}_{0},\mathbf{u }),\\ \mathbf{h}(\mathbf{x},\mathbf{u})=\mathbf{0},\end{array} \tag{7}\] where the state trajectory \(\mathbf{x}{=}\big{[}\mathbf{x}_{1}^{\top},\mathbf{x}_{2}^{\top},\ldots,\mathbf{x}_{t}^{\top}, \ldots,\mathbf{x}_{T}^{\top}\big{]}^{\top}\), the control trajectory \(\mathbf{u}=\big{[}\mathbf{u}_{0}^{\top},\mathbf{u}_{1}^{\top},\ldots,\mathbf{u}_{t}^{\top}, \ldots,\mathbf{u}_{T-1}^{\top}\big{]}^{\top}\) and the function \(\mathbf{F}(\cdot,\cdot)\) correspond to the forward rollout of the states using a dynamics model \(\mathbf{x}_{t+1}{=}\mathbf{f}(\mathbf{x}_{t},\mathbf{u}_{t})\). We use a direct shooting approach and transform Eq. (7) into a problem in \(\mathbf{u}\) only by considering \[\min_{\mathbf{u}\in\mathcal{C}_{\mathbf{u}}}c(\mathbf{F}(\mathbf{x}_{0},\mathbf{u}),\mathbf{u})\quad \text{s.t.}\quad\begin{array}{l}\mathbf{F}(\mathbf{x}_{0},\mathbf{u})\in\mathcal{C}_{ \mathbf{x}},\\ \mathbf{h}(\mathbf{F}(\mathbf{x}_{0},\mathbf{u}),\mathbf{u})=\mathbf{0},\end{array} \tag{8}\] which is exactly in the form of Eq. (6), if \(\mathbf{g}_{1}(\mathbf{u})=\mathbf{F}(\mathbf{x}_{0},\mathbf{u})\) and \(\mathbf{g}_{2}(\mathbf{u})=\mathbf{h}(\mathbf{F}(\mathbf{x}_{0},\mathbf{u}),\mathbf{u})\). The unconstrained version of this problem can be solved with least-square approaches. However, assuming \(\mathbf{x}_{t}\in\mathbb{R}^{m}\), \(\mathbf{u}_{t}\in\mathbb{R}^{n}\), this requires the inversion of a matrix of size \(Tn\times Tn\), whereas here we only work with the gradients of the objective function and the functions \(\mathbf{g}_{i}(\cdot)\). The component that requires a special attention is \(\nabla\mathbf{F}(\mathbf{x}_{0},\mathbf{u})\) and in particular, its transpose product with a vector. It turns out that this product can be efficiently computed with a recursive formula (as also described in [21]), resulting in fast SPG iterations. 
Denoting \(\mathbf{A}_{t}=\nabla_{\mathbf{x}_{t}}\mathbf{f}(\mathbf{x}_{t},\mathbf{u}_{t})\), \(\mathbf{B}_{t}=\nabla_{\mathbf{u}_{t}}\mathbf{f}(\mathbf{x}_{t},\mathbf{u}_{t})\), and \(\nabla_{\mathbf{u}}\mathbf{F}(\mathbf{x}_{0},\mathbf{u})^{\top}\mathbf{y}=\mathbf{z}\) with \(\mathbf{y}=\begin{bmatrix}\mathbf{y}_{0},\mathbf{y}_{1},\ldots,\mathbf{y}_{t},\ldots,\mathbf{y}_{ T-1}\end{bmatrix}\), \(\mathbf{z}=\begin{bmatrix}\mathbf{z}_{0},\mathbf{z}_{1},\ldots,\mathbf{z}_{t},\ldots,\mathbf{z}_{ T-1}\end{bmatrix}\), one can show that the matrix vector product \(\nabla_{\mathbf{u}}\mathbf{F}(\mathbf{x}_{0},\mathbf{u})^{\top}\mathbf{y}\) can be written as \[\begin{bmatrix}\mathbf{B}_{0}^{\top}&\mathbf{B}_{0}^{\top}\mathbf{A}_{1}^{ \top}&\mathbf{B}_{0}^{\top}\mathbf{A}_{1}^{\top}\mathbf{A}_{2}^{\top}&\ldots&\mathbf{B}_{0}^{ \top}\prod_{t=1}^{T-1}\mathbf{A}_{t}^{\top}\\ \mathbf{0}&\mathbf{B}_{1}^{\top}&\mathbf{B}_{1}^{\top}\mathbf{A}_{2}^{\top}&\ldots&\mathbf{B}_{1}^{ \top}\prod_{t=2}^{T-1}\mathbf{A}_{t}^{\top}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \mathbf{0}&\mathbf{0}&\mathbf{0}&\ldots&\mathbf{B}_{T-1}^{\top}\end{bmatrix}\begin{bmatrix} \mathbf{y}_{0}\\ \mathbf{y}_{1}\\ \vdots\\ \mathbf{y}_{T-1}\end{bmatrix}\] \[=\begin{bmatrix}\mathbf{B}_{0}^{\top}(\mathbf{y}_{0}+\mathbf{A}_{1}^{\top} \mathbf{y}_{1}+\mathbf{A}_{1}^{\top}\mathbf{A}_{2}^{\top}\mathbf{y}_{2}+\prod_{t=1}^{T-1}\mathbf{A }_{t}^{\top}\mathbf{y}_{T-1})\\ \mathbf{B}_{1}^{\top}(\mathbf{y}_{1}+\mathbf{A}_{2}^{\top}\mathbf{y}_{2}+\mathbf{A}_{2}^{\top}\mathbf{ A}_{3}^{\top}\mathbf{y}_{3}+\prod_{t=2}^{T-1}\mathbf{A}_{t}^{\top}\mathbf{y}_{T-1})\\ \vdots\\ \mathbf{B}_{T-2}^{\top}(\mathbf{y}_{T-2}+\mathbf{A}_{T-1}^{\top}\mathbf{y}_{T-1})\\ \mathbf{B}_{T-1}^{\top}\mathbf{y}_{T-1}\end{bmatrix},\] where the terms in parannheses can be computed recursively backward by \(\bar{\mathbf{z}}_{t+1}=(\mathbf{y}_{t+1}+\mathbf{A}_{t}^{\top}\bar{\mathbf{z}}_{t})\), \(\mathbf{z}_{t}=\mathbf{B}_{t-1}^{\top}\bar{\mathbf{z}}_{t}\) and \(\bar{\mathbf{z}}_{T-1}=\mathbf{y}_{T-1}\), without having to construct the big matrix \(\nabla_{\mathbf{u}}\mathbf{F}(\mathbf{x}_{0},\mathbf{u})^{\top}\). Note that when there are no constraints on the state and \(\mathbf{h}(\cdot)=0\), Eq. (8) can be solved directly with the SPG algorithm. We believe that SPG can be used to solve problems with higher horizons even faster than iLQR. Fig. 2 shows a breakdown of computational times compared to the number of timesteps for a reaching planning tasks without constraints with a 7-axis manipulator. Here, we plotted the average convergence time in (s) for both algorithms with 5 different end positions in task space and horizons of 100, 1000, 2000, 3000 and 5000 timesteps. iLQR is implemented with dynamic programming. SPG is implemented as detailed in the previous section. Both implementations are in Python. ## VI Convex polytope projections and linear transformations Often, Euclidean projection problems are composed of a linear transformation of the constraint set onto which the projection is easy, hindering the analytical projection property of this set. ALSPG can be used directly to solve this kind of problems in a very efficient way, still exploiting the projection capability of the base constraint set. For example, consider a unit second-order cone set as \(\mathcal{C}_{\text{SOC}}=\{(\mathbf{z},t)\mid\|\mathbf{z}\|_{2}\leq t\}\) and a generic second-order cone (SOC) constraint as \(\mathcal{C}{=}\{\mathbf{x}\mid\|\mathbf{Ax}+\mathbf{b}\|_{2}\leq\mathbf{c}^{\top}\mathbf{x}+d\}\). 
This can be transformed to a unit second-order cone set by taking \(\mathbf{g}(\mathbf{x}){=}\begin{bmatrix}\mathbf{Ax}+\mathbf{b}&\mathbf{c}^{\top}\mathbf{x}+d\end{bmatrix}\) and therefore \(\mathcal{C}{=}\{\mathbf{x}\mid\mathbf{g}(\mathbf{x})\in\mathcal{C}_{\text{SOC}}\}\). Then the optimization problem of projection onto a generic second-order cone, namely \(\arg\min_{\mathbf{x}}\|\mathbf{x}-\mathbf{x}_{0}\|_{2}^{2}\quad\text{s.t.}\quad\|\mathbf{Ax}+\mathbf{b}\|_{2}\leq\mathbf{c}^{\top}\mathbf{x}+d\), can be rewritten as \(\arg\min_{\mathbf{x}}\|\mathbf{x}-\mathbf{x}_{0}\|_{2}^{2}\quad\text{s.t.}\quad\mathbf{g}(\mathbf{x})\in\mathcal{C}_{\text{SOC}}\) and can be solved efficiently using the ALSPG algorithm with only unit second-order cone projections, without requiring explicit derivatives of the cone constraints. In the case of convex polytope projections, we can even find some conditions under which a linear transformation does not break the analytical projections. Especially for rectangular projections, which are a special case of convex polytope projections, we can find conditions such that analytical expressions still exist even if we rotate and scale the rectangles. In the next section, we give the development and insights of convex polytope projections, as these are one of the most commonly encountered constraint types in robotics problems such as obstacle avoidance. We then explain what kind of linear transformations can be applied to projections to preserve their analytical projection capability. ### _Convex polytope projections_ A convex polytope of \(n\) sides can be described by \(n\) lines with slopes \(\mathbf{a}_{i}\) and intercepts \(b_{i}\). The inside region of this polytope (e.g. for reaching) is given by "_and_" constraints \(\mathcal{C}_{\text{polytope}}^{\text{in}}=\{\mathbf{x}|\bigwedge_{i=0}^{n}\mathbf{a}_{i}^{\top}\mathbf{x}\leq u_{i}\}\), while the outside region (e.g. for obstacle avoidance) is given by its negative statement with "_or_" constraints \(\mathcal{C}_{\text{polytope}}^{\text{out}}=\{\mathbf{x}|\bigvee_{i=0}^{n}\mathbf{a}_{i}^{\top}\mathbf{x}>l_{i}\}\). The projection onto \(\mathcal{C}_{\text{polytope}}^{\text{in}}\) can be described as a summation of \(n\) hyperplane projections in ALSPG. Even though constraints for the set \(\mathcal{C}_{\text{polytope}}^{\text{out}}\) cannot be easily described in general optimization solvers, we can show that the projection of a point \(\mathbf{x}_{0}\) onto this set requires finding the closest hyperplane \(i\) to \(\mathbf{x}_{0}\), then taking the projection outside the hyperplane with index \(i\), namely \(\Pi_{\mathcal{C}_{\text{polytope}}^{\text{out}}}^{i}(\mathbf{x}_{0})\). The minimum value of the objective function of the projection becomes \(\|\Pi_{\mathcal{C}_{\text{polytope}}^{\text{out}}}^{i}(\mathbf{x}_{0}){-}\mathbf{x}_{0}\|\), which is equal to the distance of \(\mathbf{x}_{0}\) to the hyperplane \(i\) (one can check this by inserting the corresponding values from Table I). This observation brings significant simplifications for solvers that can take projections into account. In this section, we give the simplifications of this idea for often-encountered rectangular regions. The constraint of being inside a square region, also called a box constraint, can be described by infinity norms as the set \(\mathcal{C}_{\text{rect}}^{\text{in}}{=}\{\mathbf{x}\,|\,\|\mathbf{x}\|_{\infty}\leq u\}\), which represents the inside region of a square of width \(u\) centered at the origin. 
Fig. 2: Comparison of iLQR and SPG in terms of convergence time evolution vs. the number of timesteps (horizon). This set is basically a compact description of 4 lines (in 2D) describing the square, i.e., \(x\leq u\), \(-u\leq x\), \(y\leq u\), \(-u\leq y\). This observation allows us to write down \(\mathcal{C}_{\text{rect}}^{\text{out}}{=}\{\mathbf{x}\,|\,l\leq\|\mathbf{x}\|_{\infty}\}\), which represents the outside region of a square of width \(l\) centered at the origin. The projection onto \(\mathcal{C}_{\text{rect}}^{\text{in}}\) is a simple clipping operation for \(\mathbf{x}_{0}\), as described in Table I. However, \(\mathcal{C}_{\text{rect}}^{\text{out}}\) requires setting up the optimization problem for the Euclidean projection and checking the KKT conditions. For conciseness, we give here only the resulting projection. Denoting \(k\) the index where \(k{=}\operatorname*{arg\,max}_{i}|\mathbf{x}_{0,i}|\), the projection onto \(\mathcal{C}_{\text{rect}}^{\text{out}}\) is then given by \[\Pi_{\mathcal{C}}(\mathbf{x}_{0})_{j}=\begin{cases}l\operatorname*{sign}(\mathbf{x}_{0,k})&\text{if }j=k\text{ and }|\mathbf{x}_{0,k}|<l,\\ \mathbf{x}_{0,j}&\text{otherwise}.\end{cases} \tag{9}\] ### _Linear transformation of projections_ Having stated projections for some basic geometric primitives, one may need to apply rotation and translation operations to such shapes to exploit more complex ones. One such example is the transformation of square projections onto rotated and translated square regions. Considering a convex set \(\mathcal{C}=\{\mathbf{x}|f(\mathbf{x})\leq t\}\), one can show that the projection onto \(\mathcal{C}^{\prime}=\{\mathbf{x}|f(\mathbf{A}(\mathbf{x}-\mathbf{x}_{c}))\leq t\}\) is given by \(\Pi_{\mathcal{C}^{\prime}}(\mathbf{x}_{0})=\mathbf{A}^{-1}\Pi_{\mathcal{C}}(\mathbf{A}(\mathbf{x}_{0}-\mathbf{x}_{c}))+\mathbf{x}_{c}\), where \(\mathbf{A}\) is an orthogonal matrix. For creating rectangular regions, one needs to scale each dimension of the variable, i.e., multiplying by a diagonal matrix. Even though this does not generalize to all cases, for the rectangular regions one can show that \(\mathbf{A}\) can be in the form of a multiplication of an orthogonal matrix and a diagonal matrix. For example, while a square of length \(L\) can be described by the set \(\mathcal{C}=\{\mathbf{x}|\|\mathbf{x}\|_{\infty}\leq L/2\}\), a rectangle of length \(L\) and width \(W\), which is rotated by an angle \(\theta\), can be described with the transformation matrix \(\mathbf{A}=\mathbf{R}(\theta)\mathbf{D}\), where \(\mathbf{R}\) is the rotation matrix and \(\mathbf{D}=\operatorname*{diag}(1,L/W)\). 
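
As a concrete instance of such a transformation, the Euclidean projection onto a rotated and translated rectangle can be computed by rotating into the rectangle frame (an isometry, which preserves Euclidean projections), clipping coordinate-wise, and rotating back. The sketch below follows this recipe in 2D; the exact factorization of \(\mathbf{A}\) may differ from the convention used above:

```python
import numpy as np

def proj_rotated_rect(x0, center, length, width, theta):
    """Euclidean projection onto a length x width rectangle centered at `center`
    and rotated by `theta` (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    y = R.T @ (np.asarray(x0, dtype=float) - center)     # express x0 in the rectangle frame
    y_clipped = np.clip(y, [-length / 2, -width / 2], [length / 2, width / 2])
    return center + R @ y_clipped                         # map the clipped point back

# Example: project the origin onto a 2x1 rectangle centered at (3, 0), rotated by 45 degrees
p = proj_rotated_rect(np.zeros(2), np.array([3.0, 0.0]), 2.0, 1.0, np.pi / 4)
```
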
## VII Experiments In this section, we perform experiments solving inverse kinematics problems, motion planning and MPC for a task with hybrid dynamics, and motion planning for rectangular obstacle avoidance. The motivation behind these experiments is to show that: 1) the proposed way of solving these robotics problems can be unconventionally faster than second-order methods such as iLQR; and 2) exploiting projections whenever we can, instead of leaving the constraints for the solver to treat them as generic constraints, increases the performance significantly. ### _Constrained inverse kinematics_ A constrained inverse kinematics problem can be described in many ways using projections. One typical way is to find a \(\mathbf{q}\in\mathcal{C}_{\mathbf{q}}\) that minimizes a cost penalizing the deviation from a given initial configuration \(\mathbf{q}_{0}\) while respecting general constraints \(\mathbf{h}(\mathbf{q})=\mathbf{0}\) and projection constraints \(\mathbf{f}(\mathbf{q})\in\mathcal{C}_{\mathbf{x}}\) \[\min_{\mathbf{q}\in\mathcal{C}_{\mathbf{q}}}\|\mathbf{q}-\mathbf{q}_{0}\|_{2}^{2}\quad\text{ s.t. }\quad\begin{array}{l}\mathbf{h}(\mathbf{q})=\mathbf{0},\\ \mathbf{f}(\mathbf{q})\in\mathcal{C}_{\mathbf{x}},\end{array} \tag{10}\] where \(\mathbf{f}(\cdot)\) can represent entities such as the end-effector pose or the center of mass, for which the constraints are easier to express as projections onto \(\mathcal{C}_{\mathbf{x}}\), and \(\mathcal{C}_{\mathbf{q}}\) can represent the configuration space within the joint limits. Fig. 1 shows a 3-axis planar manipulator with \(\mathbf{f}(\cdot)\) representing the end-effector position and \(\mathcal{C}_{\mathbf{x}}\) denoting (a) \(\mathcal{C}_{\mathbf{x}}=\{\mathbf{x}\,|\,\mathbf{x}{=}\mathbf{x}_{d}\}\), (b) \(\mathcal{C}_{\mathbf{x}}=\{\mathbf{x}\,|\,\mathbf{a}^{\top}\mathbf{x}+b\leq 0\}\), (c) \(\mathcal{C}_{\mathbf{x}}=\{\mathbf{x}\,|\,r_{i}^{2}\leq\|\mathbf{x}{-}\mathbf{x}_{d}\|_{2}^{2}\leq r_{o}^{2}\}\) and (d) \(\mathcal{C}_{\mathbf{x}}=\{\mathbf{x}\,|\,\|\mathbf{x}{-}\mathbf{x}_{d}\|_{\infty,\mathbf{W}}\leq L\}\). We applied the ALSPG algorithm iteratively to obtain a reactive control loop and we implemented it on an interactive webpage running Python, see Fig. 1. **Talos IK:** We tested our algorithm on a high-dimensional (32 DoF) inverse kinematics problem of the TALOS robot (see Fig. 3(a)) subject to the constraints: i) center of mass inside a box; ii) end-effector constrained to lie inside a sphere; and iii) foot position and orientations are given. We compared two versions of the ALSPG algorithm: 1) by casting these constraints as projections onto \(\mathcal{C}_{\mathbf{x}}\); and 2) by keeping all the constraints inside the function \(\mathbf{h}(\cdot)\), to see the direct advantages of exploiting projections in ALSPG. We ran the algorithm from 1000 different random initial configurations for both cases and compared the number of function and Jacobian evaluations, \(n_{f}\) and \(n_{j}\). For case 1), we obtained \(n_{f}{=}897.64\pm 82.84\) and \(n_{j}{=}883.44\pm 81.7\), while for case 2), we obtained \(n_{f}{=}6459.4\pm 3765.8\) and \(n_{j}{=}3791.79\pm 1061.05\). **Robust IK:** In this experiment, we would like to achieve a task of reaching and staying in the half-space under a plane whose slope is stochastic, for example because of uncertainties in the measurements of the vision system. The constraint can be written as \(\mathbf{a}^{\top}\mathbf{f}(\mathbf{q})\leq 0\), where \(\mathbf{a}\sim\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma})\). We can transform it into a chance constraint to provide some safety guarantees in a probabilistic manner. The idea is to find a joint configuration \(\mathbf{q}\) such that it will stay under a stochastic hyperplane with a probability of \(\eta\geq 0.5\). This inequality can be written as a second-order cone constraint wrt \(\mathbf{f}(\mathbf{q})\) as \(\mathbf{\mu}^{\top}\mathbf{f}(\mathbf{q})+\Psi^{-1}(\eta)\|\mathbf{\Sigma}^{\frac{1}{2}}\mathbf{f}(\mathbf{q})\|_{2}\leq 0\), where \(\Psi(\cdot)\) is the cumulative distribution function of the zero-mean unit-variance Gaussian variable. 
Defining \(\mathbf{g}(\mathbf{q})=\left[(\mathbf{\Sigma}^{\frac{1}{2}}\mathbf{f}(\mathbf{q}))^{\top}\ \mathbf{\mu}^{\top}\mathbf{f}(\mathbf{q})\right]^{\top}\), the optimization problem can then be defined as \[\min_{\mathbf{q}\in\mathcal{C}_{\mathbf{q}}}\|\mathbf{q}-\mathbf{q}_{0}\|_{2}^{2}\quad\text{s.t.}\quad\mathbf{g}(\mathbf{q})\in\mathcal{C}_{\text{SOC}}, \tag{11}\] which can be solved efficiently, without using second-order cone (SOC) gradients, by using the proposed algorithm. We tested the algorithm on the 3-axis robot shown in Fig. 3(b) by optimizing for a joint configuration with a probability of \(\eta=0.8\) and then continuously computing the constraint satisfaction for the last 1000 time steps by sampling a line slope from the given distribution. We obtained a constraint satisfaction percentage of around 80%, as expected. ### _Motion planning and MPC on planar push_ Non-prehensile manipulation has been widely studied as a challenging task for model-based planning and control, with the pusher-slider system as one of the most prominent examples (see Fig. 4). The reasons include hybrid dynamics with various interaction modes, underactuation and contact uncertainty. In this experiment, we study motion planning and MPC on this planar push system, without any constraints, to compare to a standard iLQR implementation. Motion planning convergence results for 10 different tasks are given for iLQR and ALSPG, along with means and variances, in Fig. 4(b). Although iLQR seems to converge to medium accuracy faster than ALSPG, because of the difficulties in the task dynamics it seems to get stuck at local minima very easily. On the other hand, ALSPG seems to perform better in terms of variance and local minima. We applied MPC with iLQR and ALSPG with a horizon of 60 timesteps and stopped the MPC as soon as it reached the goal position with a desired precision. Table II shows this comparison in terms of convergence time (s), number of function evaluations and number of Jacobian evaluations. According to these findings, ALSPG performs better than a standard iLQR, even when there are no constraints in the problem. ### _Motion planning with obstacle avoidance_ Obstacle avoidance problems are usually described using geometric constraints. In robot manipulation tasks, capsules and spheres are typically used to represent the robot and the environment, such that the shortest distance computations and their gradients can be computed efficiently. In autonomous parking tasks, obstacles and cars are usually described as 2D rectangular objects. In this experiment, we take a 2D double integrator point car reaching a target pose in the presence of rectangular obstacles (see Fig. 7). We apply the ALSPG algorithm with and without projections (ALSPG-Proj. and ALSPG-WoProj.) to illustrate the main advantages of having an explicit projection function over direct constraints. The main difference is that, without projections, the solvers need to compute the gradient of the constraints, whereas with projections this is not necessary. In order to understand the differences between first-order and second-order methods, we also compared ALSPG-Proj. to an augmented Lagrangian SLSQP with projections (SLSQP-Proj.), which is the same algorithm except that the subproblems are solved by the second-order solver SLSQP from SciPy. The objective function is \(c(\mathbf{x},\mathbf{u})=10^{-1}\|\mathbf{x}_{T}-\mathbf{x}_{T}^{\text{G}}\|_{2}^{2}+10^{-4}\|\mathbf{u}_{T}\|_{2}^{2}\). 
We performed 5 experiments, each with different settings of 4 rectangular obstacles, and compared the convergence properties. The results are given in Table III. The comparison between ALSPG-Proj. and ALSPG-WoProj. reports a clear advantage of using projections instead of plain constraints in terms of convergence properties. Although the convergence time comparison is not necessarily fair for the SPG implementations, as the SLSQP solver calls C++ functions, the comparison of ALSPG-Proj. and SLSQP-Proj. shows that ALSPG-Proj. still achieves a lower convergence time. ### _MPC on 7-axis manipulator_ We tested the ALSPG algorithm on the MPC problem of tracking an object with box constraints on the end-effector position of a Franka Emika robot (see Fig. 5). An ArUco marker on the object is tracked by a camera held by another robot. In this experiment, the goal is to show the real-time applicability of the proposed algorithm for a constrained problem in the presence of disturbances. In Fig. 6, the error of the constraints and the objective function is given for a 1-minute period of MPC with a short horizon of 50 timesteps. Between 20s and 30s, the robot is disturbed by the user, which is possible thanks to the compliant torque controller run on the robot. We can see that the algorithm drives the error smoothly to zero (see the accompanying video). \begin{table} \begin{tabular}{|l|l|l|} \cline{2-3} \multicolumn{1}{c|}{} & iLQR & ALSPG \\ \hline Convergence time (s) & \(14.5\pm 1.3\) & \(2.9\pm 0.5\) \\ \hline Number of function ev. & \(26689.5\pm 1830.5\) & \(6104.0\pm 1455.6\) \\ \hline Number of Jacobian ev. & \(225.9\pm 27.4\) & \(78.4\pm 8.5\) \\ \hline \end{tabular} \end{table} TABLE II: Comparison of MPC with iLQR and ALSPG for planar push Fig. 4: SPG algorithm applied to a pusher-slider system. TABLE III: Comparison of motion planning for obstacle avoidance for three cases Fig. 3: Inverse kinematics problems solved with the proposed algorithm. (a) Talos inverse kinematics problem with foot pose, center-of-mass stability (red point inside the yellow rectangular prism) and end-effector inside a (pink) sphere constraints. (b) Robust inverse kinematics solution with \(\mathcal{C}_{\mathbf{p}}=\{\mathbf{p}\,|\,\mathbf{\mu}^{\top}\mathbf{p}+\Psi^{-1}(\eta)\|\mathbf{\Sigma}^{\frac{1}{2}}\mathbf{p}\|_{2}\leq 0\}\). ## VIII Conclusion In this work, we presented a fast first-order constrained optimization framework based on geometric projections and applied it to various robotics problems ranging from inverse kinematics to motion planning. We showed that many of the geometric constraints can be rewritten as a logic combination of geometric primitives onto which the projections admit analytical expressions. We built an augmented Lagrangian method with spectral projected gradient descent as the subproblem solver for constrained optimal control. We demonstrated: 1) the advantages of using projections compared to setting up the geometric constraints as plain constraints with gradient information given to the solvers; and 2) the advantages of spectral projected gradient descent based motion planning compared to a standard second-order iLQR algorithm, through different robot experiments. 
Sample-based MPC has become increasingly popular in recent years thanks to its fast practical implementations, despite its lack of theoretical guarantees. In contrast, second-order methods for MPC require a lot of computational power but offer somewhat better convergence guarantees. We argue that ALSPG, already lying in between these two methodologies in terms of these properties, is a promising candidate to combine with sample-based MPC in future work, to further increase its advantages on both sides.
2307.16626
Perturbative quasinormal mode frequencies
We often encounter a situation that black hole solutions can be regarded as continuous deformations of simpler ones, or modify general relativity by continuous parameters. We develop a general framework to compute high-order perturbative corrections to quasinormal mode frequencies in such deformed problems. Our method has many applications, and allows to compute numerical values of the high-order corrections very accurately. For several examples, we perform this computation explicitly, and discuss analytic properties of the quasinormal mode frequencies for deformation parameters.
Yasuyuki Hatsuda, Masashi Kimura
2023-07-31T13:04:04Z
http://arxiv.org/abs/2307.16626v2
# Perturbative quasinormal mode frequencies ###### Abstract We often encounter a situation that black hole solutions can be regarded as continuous deformations of simpler ones, or modify general relativity by continuous parameters. We develop a general framework to compute high-order perturbative corrections to quasinormal mode frequencies in such deformed problems. Our method has many applications, and allows to compute numerical values of the high-order corrections very accurately. For several examples, we perform this computation explicitly, and discuss analytic properties of the quasinormal mode frequencies for deformation parameters. + Footnote †: preprint: RUP-23-13 ###### Contents * I Introduction * II General framework * III Technical remark: the Bender-Wu approach * III.1 Leading order solution * III.2 First order correction * III.3 On higher order corrections * IV Examples * IV.1 A toy model: the Rosen-Morse potential * IV.2 Massive scalar perturbations * IV.3 Slowly rotating black holes * IV.4 Almost asymptotically flat black holes * IV.5 Reissner-Nordstrom black holes * IV.5.1 Almost chargeless limit * IV.5.2 Almost extremal limit * IV.5.3 An interpolating function * IV.6 Parameterized black hole QNMs * IV.7 Series expansion method * V Outlook * A Recursion relations among coefficients in parameterized QNM approach * A.1 Parameterized QNM approach * A.2 Ambiguity of effective potential * A.3 Recursion relations for odd parity case * A.4 Recursion relations from the Regge-Wheeler potential * A.5.1 Improved recursion relation for \(e_{j,k}\) * A.5.2 Reduction of the effective potential * A.6 **References** ## I Introduction Perturbation theory is one of the most powerful tools in physics. We have a typical situation that a system cannot be solved analytically but its special limit can be. Perturbation around the special limit provides us a good approximation method and more importantly a clue to get global information on the total system by combining with the analytic continuation in complex analysis or asymptotic analysis. The application range of perturbation theory is extremely wide. It is important to clarify what we can learn about from perturbation theory. In this work, we propose a systematic way to compute high-order perturbative corrections to quasinormal mode (QNM) frequencies of black holes. QNMs are solutions to linearized field equations, which satisfy purely ingoing (outgoing) boundary conditions at the horizon (infinity), around a background black hole spacetime. It is known that QNMs are related to the late time behavior of the field dynamics around black holes [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. In many cases, one can regard some parameters of black hole solutions as smooth deformation parameters of simpler black holes. We apply perturbation theory for such deformation parameters.1 Similar situations also happen if one considers possibilities of effective field theories or modified gravity theories beyond general relativity. Since such modification parameters are expected to be small, it is natural to expand physical quantities perturbatively. It is desirable to develop a general framework widely applicable for such cases. Footnote 1: One should not confuse it with so-called black hole perturbation theory. We just treat deformation parameters of black holes as formal perturbation parameters. There are two obstacles to achieve it. One is that we cannot solve the QNM spectral problem analytically even in spherically symmetric black holes. 
Therefore we have only numerical or semi-analytic eigenvalues and eigenfunctions in this simplest case. The other point is more serious. In the QNM problem, the set of eigenfunctions is not complete in the usual sense. This means that we cannot apply the well-known formula in quantum mechanics to the computation of perturbative corrections to the QNM spectrum. There is already an extended formula to compute perturbative corrections to QNM frequencies [12; 13; 14]. However, it is not clear to us how to use this formulation for our examples of interest systematically and practically. For this reason, we revisit this problem in the present work, and propose another way to get high-order perturbative corrections to QNM frequencies. A possible resolution for this problem is simply to use numerical fittings.2 However, it is hard to predict high-order corrections accurately in this way. Recently, a smart way to compute perturbative corrections to the QNM frequencies up to quadratic order was proposed in [15; 16]. We are strongly motivated by these works, and extend them to more general setups. Our approach is based on the first principles of perturbation theory. We do not use any numerical fittings to determine the perturbative coefficients, though we need numerical solutions at each order in perturbation. Our method is quite general and applicable to various situations with smoothly continuous deformations. In fact, we give, for the first time, the high-precision perturbative expansion of the QNM frequencies around the extremal Reissner-Nordstrom black holes. Combined with a recently proposed method [17], our approach allows us to compute numerical values of high-order corrections very accurately. Once we get the high-order perturbative data, we can in principle discuss analytic properties (convergence, singularities, analytic continuation, non-perturbative effects, etc.) of the QNM frequencies. Footnote 2: There is another resolution. One can analytically continue eigenvalue problems from the real line to the complex domain. This is well known as the complex scaling method for resonance problems in quantum mechanics. The organization of this paper is as follows. In Section II, we start by explaining the general framework of our formulation. We illustrate our basic idea to compute high-order perturbative corrections systematically. In Section III, we present a technical way to perform the idea in Section II explicitly. In Section IV, we show various examples in which our method works well. We particularly use the method proposed in [17], but this is not the only possibility. For instance, we give another way in Sec. IV.7. In Section V, we consider possible future directions. In Appendix A, we give some remarks on the so-called parameterized black hole quasinormal mode approach. These are useful in combination with the results in the main text. ## II General framework We first illustrate our idea. In this section, we set up the problem and explain a conceptual way to obtain perturbative series of QNM frequencies systematically. We will show a technical method to achieve it in the next section. We expect that the problem proposed in this section can also be solved by many other techniques developed for numerical computations of QNMs, such as Leaver's continued fraction method [5; 18], the direct integration method [19; 5] or the pseudospectral method [20]. We consider a perturbative deformation of a black hole in a certain theory. We would like to know perturbative corrections to the quasinormal mode frequencies for a small deformation parameter. 
Our starting point is the following (radial) master equation:3 Footnote 3: Our idea is not restricted to this form. To make an explanation simpler, we assume it in this paper. \[\left(\frac{d^{2}}{dx^{2}}+\omega^{2}-V(x)\right)\Phi(x)=0, \tag{2.1}\] where \(x\) is the tortoise coordinate. This is related to the radial variable \(r\) as \[\frac{dx}{dr}=\frac{1}{f(r)}. \tag{2.2}\] where \(f(r)\) is a function that has a zero at the event horizen \(r=r_{H}\). Explicit forms of \(f(r)\), of course, depend on problems. The QNM boundary condition is then given by4 Footnote 4: If the field has a mass term \(\mu^{2}\), \(\omega\) should be changed into \(\sqrt{\omega^{2}-\mu^{2}}\) at \(x\to\infty\) in Eq. (2.3). In this case, we can still apply the same method. See subsection IV.2. \[\Phi(x)\sim e^{\pm i\omega x}\quad(x\to\pm\infty). \tag{2.3}\] We assume that all the quantities in the master equation have smooth perturbative expansions in a parameter \(\alpha\): \[V(x)=\sum_{k=0}^{\infty}\alpha^{k}V_{k}(x),\qquad\omega^{2}=\sum_{k=0}^{ \infty}\alpha^{k}\mathcal{E}_{k},\qquad\Phi(x)=\sum_{k=0}^{\infty}\alpha^{k} \Phi_{k}(x). \tag{2.4}\] Typically, the parameter \(\alpha\) appears as a deformation parameter of a black hole or of a modified theory. At this stage, we do not ask its physical origin for generality. In general, the function \(f(r)\) may also depends on \(\alpha\). This dependence causes subtlety on our perturbative treatment. We will discuss this issue later. Expanding \(\omega\) as a series of \(\alpha\), \[\omega=\sum_{k=0}^{\infty}\alpha^{k}\omega_{k}, \tag{2.5}\] the QNM boundary condition in Eq. (2.3) can be written as \[\Phi \sim e^{\pm i\omega_{0}x}e^{\pm i(\alpha\omega_{1}+\alpha^{2} \omega_{2}+\cdots)x}\] \[=e^{\pm i\omega_{0}x}(1+\alpha P_{1}^{\pm}+\alpha^{2}P_{2}^{\pm}+ \cdots), \tag{2.6}\] where \(P_{1}^{\pm},P_{2}^{\pm},\cdots\) are polynomials of \(x\). This implies that the QNM boundary condition for \(\Phi_{k}\) is \[\Phi_{k}\sim e^{\pm i\omega_{0}x}\quad(x\to\pm\infty). \tag{2.7}\] We solve the master equation perturbatively in \(\alpha\). We start with the zeroth order, at which the eigen-equation is \[\left(\frac{d^{2}}{dx^{2}}+{\cal E}_{0}-V_{0}(x)\right)\Phi_{0}(x)=0,\qquad{ \cal E}_{0}=\omega_{0}^{2}. \tag{2.8}\] Note that \({\cal E}_{0}\) denotes the zeroth order eigenvalue in perturbation of \(\alpha\), not the fundamental mode eigenvalue. Typically, the zeroth order equation is the master equation for spherically symmetric black holes, but our formalism is not restricted to this specific situation. At each order, we solve the differential equation by requiring proper boundary conditions, and then get the perturbative corrections to the eigenvalues. We first solve the zeroth order equation (2.8) by imposing the ordinary QNM boundary condition: \[\Phi_{0}(x)\sim e^{\pm i\omega_{0}x}\quad(x\to\pm\infty). \tag{2.9}\] This is natural in perturbation theory. There are many techniques to solve (2.8) numerically. To go to the next order, we need both the eigenvalue and the eigenfunction at a fixed energy level (or an overtone number in the QNM terminology).5 Though, in this work, we will use a method recently proposed in [17], we stress that our idea should work for many other techniques. Footnote 5: This point is quite different from the textbook-like method in quantum mechanics, in which one needs all the eigenvalues and the eigenfunctions at the lowest order. 
Once we obtain the eigenvalue and the eigenfunction at the zeroth order, we can proceed to the first order equation. The equation we should solve is \[\left(\frac{d^{2}}{dx^{2}}+{\cal E}_{0}-V_{0}(x)\right)\Phi_{1}(x)=(V_{1}(x)-{ \cal E}_{1})\Phi_{0}(x). \tag{2.10}\] We regard this equation as the inhomogeneous differential equation for \(\Phi_{1}(x)\) with the unknown constant \({\cal E}_{1}\), while \(\Phi_{0}(x)\) and \({\cal E}_{0}\) are known. For the function \(\Phi_{1}(x)\), we impose the same QNM boundary condition for \(\Phi_{0}(x)\): \[\Phi_{1}(x)\sim e^{\pm i\omega_{0}x}\quad(x\to\pm\infty), \tag{2.11}\] as explained in Eq. (2.7). This is also natural from the perturbative point of view. As shown in the next section, this inhomogeneous equation is also solved by the same method as the zeroth order equation. Therefore, we get \(\mathcal{E}_{1}\) and \(\Phi_{1}(x)\) at least numerically. The computations at higher orders are similar. We regard the \(k\)-th order equation \[\left(\frac{d^{2}}{dx^{2}}+\mathcal{E}_{0}-V_{0}(x)\right)\Phi_{k}(x)=\sum_{ \ell=1}^{k}(V_{\ell}(x)-\mathcal{E}_{\ell})\Phi_{k-\ell}(x). \tag{2.12}\] as the inhomogeneous equation for \(\mathcal{E}_{k}\) and \(\Phi_{k}(x)\) with the known \(\mathcal{E}_{j}\) and \(\Phi_{j}(x)\) (\(0\leq j\leq k-1\)). We solve it under the boundary condition in Eq. (2.7). We repeat this computation as many times as possible. If the function \(f(r)\) depends on the perturbative parameter \(\alpha\), there is a subtle point. In this case, we also expand \(f(r)\) in \(\alpha\). This gives a perturbative relation between \(r\) and \(x\) via the relation (2.2). Schematically, we have \[x=x(r,\alpha)=\sum_{k=0}^{\infty}\alpha^{k}x_{k}(r), \tag{2.13}\] where \(x_{k}(r)\) are functions of \(r\). On the other hand, we can inverse this relation by \[r=r(x,\alpha)=\sum_{k=0}^{\infty}\alpha^{k}r_{k}(x). \tag{2.14}\] There is an ambiguity which variable, \(r\) or \(x\), is fundamental in the perturbative expansion. In this paper, we regard \(x\) as a fundamental variable, and use (2.14) to eliminate \(r\) to expand the potential perturbatively. This is because boundary conditions in terms of \(x\) seem to be more natural. There is a caveat when we apply our framework to a specific system and calculate the QNM frequencies by numerical calculations. Our framework is introduced based on the form of the master equation in Eq. (2.1) which is written by the tortoise coordinate \(x\). However, in many cases, it is difficult to explicitly write the tortoise coordinate \(x\) as a function of \(r\) and also the master equation as a function of \(x\). This implies that imposing the boundary condition at each order \(\Phi_{k}\sim e^{\pm i\omega_{0}x}\) is not a trivial task in a concrete example. In that case, the technique to rewrite the master equation used in [15] might be useful. When the function \(f\) has a zero at \(r=r_{H}\), and it is close to \(1-r_{H}/r\), we can write \(f\) as \[f=\left(1-\frac{r_{H}}{r}\right)Z(r;\alpha), \tag{2.15}\] where \(Z(r;\alpha)\) is a function of \(r\) which contains the small parameter \(\alpha\). Choosing \(r_{H}\) and \(\alpha\) as the fundamental parameters, we can write the master equation in the form \[\left(1-\frac{r_{H}}{r}\right)\frac{d}{dr}\left(\left(1-\frac{r_{H}}{r}\right) \frac{d\phi}{dr}\right)+(\omega^{2}-\tilde{V})\phi=0, \tag{16}\] where \(\phi=\sqrt{Z}\Phi\) and \(\tilde{V}\) is the effective potential which depends on \(\alpha\)[15]. 
Regarding this equation as the basic master equation, we can easily apply our framework to this system because the tortoise coordinate in this system is explicitly written as \(r+r_{H}\ln(1-r_{H}/r)\). We should note that we do not need to care about this point as far as we use the Bender-Wu approach introduced in the next section because the calculation is carried out around potential peak region. Finally note that our formulation is easily extended to multi-parameter perturbations. If one wants to consider a two-parameter perturbation: \[V(x;\alpha,\beta)=V_{0}(x)+\sum_{k=1}^{\infty}(\alpha^{k}V_{k}^{\alpha}(x)+ \beta^{k}V_{k}^{\beta}(x)), \tag{17}\] then the square of the frequency should receive the following perturbative corrections [15; 16]: \[\begin{split}\omega^{2}&=\mathcal{E}_{0}+\alpha \mathcal{E}_{1}^{(1,0)}+\beta\mathcal{E}_{1}^{(0,1)}+\alpha^{2}\mathcal{E}_{2 }^{(2,0)}+\alpha\beta\mathcal{E}_{2}^{(1,1)}+\beta^{2}\mathcal{E}_{2}^{(0,2)} +\cdots\\ &=\mathcal{E}_{0}+\sum_{k=1}^{\infty}\sum_{\ell=0}^{k}\alpha^{ \ell}\beta^{k-\ell}\mathcal{E}_{k}^{(\ell,k-\ell)}.\end{split} \tag{18}\] To fix the coefficients \(\mathcal{E}_{k}^{(\ell,k-\ell)}\), we can choose various combinations of \((\alpha,\beta)\). For instance, to fix the second order corrections \(\mathcal{E}_{2}^{(2,0)}\), \(\mathcal{E}_{2}^{(1,1)}\) and \(\mathcal{E}_{2}^{(0,2)}\), it is sufficient to consider three particular slices: \((\alpha,\beta)\rightarrow(\alpha,0),(\alpha,\alpha),(0,\alpha)\), in which the problem is reduced to the one-parameter problem. We will return to this issue in Section IV. ## III Technical remark: the Bender-Wu approach In the previous section, we proposed a general idea to compute the perturbative corrections \(\mathcal{E}_{k}\) systematically. The main problem is of course how we solve the differential equation (12) for our interested QNM problems. In this section, we see that this is done by the so-called Bender-Wu approach [21] that is recently extended to the QNM computation in [17; 22], based on [23; 24; 25]. The main advantage of this approach is that it is widely applicable to many models, as in the WKB approach [26; 27]. The Bender-Wu approach itself also highly depends on perturbation theory. Since we need eigenfunctions as well as eigenvalues, we review the Bender-Wu approach for our problem. We follow the notation in [28] as much as possible. ### Leading order solution Let us solve the zeroth order equation (II.2). We first introduce a formal parameter \(g\) by hand, \[\left(-g^{4}\frac{d^{2}}{dx^{2}}+\mathcal{E}_{0}-V_{0}(x)\right)\Phi_{0}(x)=0, \qquad\mathcal{E}_{0}=\omega_{0}^{2}. \tag{3.1}\] It is clear to see that \(g^{2}\) plays the role of a Planck parameter. Setting \(g=e^{\pi i/4}\), the original equation (II.2) is reproduced. 6 The basic idea is the following. We first consider the eigenvalue problem for \(g\in\mathbb{R}\). In this case, we have the Schrodinger-type equation with the _inverted_ potential \(-V_{0}(x)\), which admit bound states, and we can apply the standard perturbative method in quantum mechanics near the minimum of \(-V_{0}(x)\). The important observation in [17] is that the boundary conditions for bound states and QNMs are simply related by the analytic continuation of \(g\). This implies that if we know the bound state energy for \(g\in\mathbb{R}\), we can obtain the QNM eigenvalue by the analytic continuation \(g=e^{\pi i/4}\). Footnote 6: Note that there is another possibility: \(g=e^{-\pi i/4}\). 
This ambiguity reflects the fact that the QNM frequencies have two branches for the real part [17]. Let \(\bar{x}\) be the value of \(x\) at which \(-V_{0}(x)\) takes the minimal value. We expand the inverted potential \(-V_{0}(x)\) around \(x=\bar{x}\): \[-V_{0}(x)=V_{00}+\sum_{j=2}^{\infty}V_{0j}(x-\bar{x})^{j}. \tag{3.2}\] We introduce a new variable by \(x-\bar{x}=gq\). This change means that as \(g\) decreases, we zoom in on the neighborhood of the minimum at \(x=\bar{x}\). Then (3.1) leads to \[\left(-\frac{1}{2}\frac{d^{2}}{dq^{2}}+\frac{1}{2}\Omega^{2}q^{2}+v_{0}(q)- \epsilon_{0}\right)\psi_{0}(q)=0, \tag{3.3}\] where \(\Omega:=\sqrt{V_{02}}\), \(\epsilon_{0}:=-(\mathcal{E}_{0}+V_{00})/(2g^{2})\) and \[v_{0}(q)=\frac{1}{2g^{2}}\sum_{j=3}^{\infty}V_{0j}(gq)^{j}=\sum_{j=1}^{\infty }g^{j}v_{0j}q^{j+2},\quad v_{0j}:=\frac{V_{0,j+2}}{2}. \tag{3.4}\] We denoted \(\psi_{0}(q)=\Phi_{0}(\bar{x}+gq)\) to avoid confusion. In this picture, the Planck constant is unity, and \(g\) now plays the role of a coupling constant in the potential. We solve (3.3) perturbatively in \(g\) order by order. At the leading order, we can regard it as the harmonic oscillator with frequency \(\Omega\). To eliminate the exponential factor of the eigenfunction, we rescale \(\psi_{0}(q)=e^{-\Omega x^{2}/2}u_{0}(q)\): \[-\frac{1}{2}u_{0}^{\prime\prime}(q)+\Omega qu_{0}^{\prime}(q)+\left(\frac{ \Omega}{2}+v_{0}(q)-\epsilon_{0}\right)u_{0}(q)=0. \tag{3.5}\] We have the following expansions: \[u_{0}(q)=\sum_{n=0}^{\infty}g^{n}u_{0n}(q),\qquad\epsilon_{0}=\sum_{n=0}^{ \infty}g^{n}\epsilon_{0n}. \tag{3.6}\] Plugging these expansions into (3.3), we get \[-\frac{1}{2}u_{0n}^{\prime\prime}+\Omega qu_{0n}^{\prime}+\frac{\Omega}{2}u_ {0n}+\sum_{j=1}^{n}v_{0j}q^{j+2}u_{0,n-j}-\sum_{j=0}^{n}\epsilon_{0j}u_{0,n-j} =0. \tag{3.7}\] Let us focus on the ground state for simplicity. The ground state corresponds to the lowest (or fundamental) overtone mode in the QNM problem. For \(n=0\), we have the trivial solution \(u_{00}(q)=1\) and \(\epsilon_{00}=\Omega/2\). Using it, we get \[-\frac{1}{2}u_{0n}^{\prime\prime}+\Omega qu_{0n}^{\prime}+\sum_{j=1}^{n}(v_{0 j}q^{j+2}-\epsilon_{0j})u_{0,n-j}=0,\quad n\geq 1. \tag{3.8}\] The very important fact is that \(u_{0n}\) is a _polynomial of \(q\) whose degree is \(3n\)_[21; 28]: \[u_{0n}=\sum_{m=1}^{3n}A_{0n}^{m}q^{m},\quad n\geq 1. \tag{3.9}\] As shown in [21], the differential equation (3.8) determines all the coefficients \(A_{0n}^{m}\) and \(\epsilon_{0n}\) recursively. This is what the _Mathematica_ program in [28] is doing. One has to keep in mind that the above result is valid only for the ground state. For the excited states, we need to modify it slightly. See [28] for these cases. We finally want to set \(g=e^{\pi i/4}\) in the perturbative series. However, in general, the formal power series (3.6) are not convergent for any \(g\neq 0\). The substitution of \(g=e^{\pi i/4}\) merely gives a meaningless answer. To avoid it, one needs to truncate all the high-order corrections beyond a certain optimal order or to use summation methods. Note that the former turns out to be equivalent to the WKB series in the literature [26; 27]. We use the latter, called the Borel summation method, to decode a meaningful result for finite \(g\) from formal divergent series.7 The conclusion in [17] is that the Borel summation of (3.6) correctly reproduces the QNM frequencies. 
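
As an illustration, the ground-state recursion (3.8)-(3.9) can be implemented symbolically in a few lines. The sketch below uses the Poschl-Teller potential \(V_{0}(x)=1/(2\cosh^{2}x)\) (the \(\mu=0\) limit of the toy model in Sec. IV.1) as an example; the variable names, the truncation order and the plain truncated evaluation at \(g=e^{i\pi/4}\) are our own choices, and an accurate frequency requires the Borel summation described above rather than a naive truncation:

```python
import sympy as sp

x, q = sp.symbols("x q")
N = 6                                    # perturbative order in g (kept small here)

# Example: Poschl-Teller potential; its peak is at xbar = 0
V0 = 1 / (2 * sp.cosh(x) ** 2)
minusV0 = sp.series(-V0, x, 0, N + 4).removeO()          # -V_0 expanded around the peak, Eq. (3.2)
Vc = [minusV0.coeff(x, j) for j in range(N + 3)]         # V_{00}, V_{01}(=0), V_{02}, ...
Omega = sp.sqrt(Vc[2])
v = {j: Vc[j + 2] / 2 for j in range(1, N + 1)}          # v_{0j} = V_{0,j+2}/2, Eq. (3.4)

# Ground-state recursion, Eqs. (3.8)-(3.9): u_{0n} is a polynomial of degree 3n
u, eps = {0: sp.Integer(1)}, {0: Omega / 2}
for n in range(1, N + 1):
    A = sp.symbols(f"a1:{3 * n + 1}")                    # unknown coefficients A_{0n}^m
    en = sp.Symbol("en")                                 # unknown epsilon_{0n}
    un = sum(A[m - 1] * q ** m for m in range(1, 3 * n + 1))
    expr = (-sp.Rational(1, 2) * sp.diff(un, q, 2) + Omega * q * sp.diff(un, q)
            + sum((v[j] * q ** (j + 2) - (eps[j] if j < n else en)) * u[n - j]
                  for j in range(1, n + 1)))
    sol = sp.solve(sp.Poly(sp.expand(expr), q).all_coeffs(), list(A) + [en], dict=True)[0]
    u[n], eps[n] = un.subs(sol), sol[en]

print([sp.simplify(eps[n]) for n in range(N + 1)])       # epsilon_{00}, ..., epsilon_{0N}

# Naive truncation of Eq. (3.10) at g = exp(i*pi/4); only indicative, cf. the exact
# Poschl-Teller value omega_0 = 1/2 - i/2 (Eq. (4.9) at mu = 0)
gval = sp.exp(sp.I * sp.pi / 4)
E0 = -Vc[0] - 2 * sum(eps[n] * gval ** (n + 2) for n in range(N + 1))
print(sp.N(sp.sqrt(E0)))
```
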
We emphasize that the above method allows us to construct not only the eigenvalue \(\mathcal{E}_{0}\) but also the eigenfunction \(\psi_{0}(q)\). In summary, for the ground state, we have
\[\begin{split}\mathcal{E}_{0}&=-V_{00}-2g^{2}\sum_{n =0}^{\infty}g^{n}\epsilon_{0n},\\ \psi_{0}(q)&=e^{-\Omega q^{2}/2}\sum_{n=0}^{\infty}g ^{n}u_{0n}(q),\qquad u_{0n}(q)=\sum_{m=1}^{3n}A_{0n}^{m}q^{m},\end{split} \tag{3.10}\]
where \(\epsilon_{00}=\Omega/2\) and \(u_{00}(q)=1\).

### First order correction

Let us proceed to the first order correction. We need to solve
\[\left(-g^{4}\frac{d^{2}}{dx^{2}}+\mathcal{E}_{0}-V_{0}(x)\right)\Phi_{1}(x)=( V_{1}(x)-\mathcal{E}_{1})\Phi_{0}(x). \tag{3.11}\]
Note that we already know the zeroth order eigenfunction \(\Phi_{0}(x)\) and eigenvalue \(\mathcal{E}_{0}\) from the previous subsection. As in the computation above, we can rewrite it as
\[\left(-\frac{1}{2}\frac{d^{2}}{dq^{2}}+\frac{1}{2}\Omega^{2}q^{2}+v_{0}(q)- \epsilon_{0}\right)\psi_{1}(q)=\frac{V_{1}(x)-\mathcal{E}_{1}}{2g^{2}}\psi_{0 }(q). \tag{3.12}\]
We also expand \(-V_{1}(x)\) around \(x=\bar{x}\) as
\[-V_{1}(x)=\sum_{j=0}^{\infty}V_{1j}(gq)^{j}. \tag{3.13}\]
Note that \(x=\bar{x}\) does not extremize \(V_{1}(x)\) in general. As mentioned in the previous subsection, we have to impose the same boundary conditions for \(\psi_{0}(q)\) and \(\psi_{1}(q)\). Therefore we set \(\psi_{1}(q)=e^{-\Omega q^{2}/2}u_{1}(q)\) as well as \(\psi_{0}(q)=e^{-\Omega q^{2}/2}u_{0}(q)\), and get
\[-\frac{1}{2}u_{1}^{\prime\prime}+\Omega qu_{1}^{\prime}+\left(\frac{\Omega}{2 }+v_{0}-\epsilon_{0}\right)u_{1}+\left(\frac{V_{11}}{2g}q+v_{1}-\epsilon_{1} \right)u_{0}=0, \tag{3.14}\]
where
\[\begin{split}\epsilon_{1}&:=-\frac{\mathcal{E}_{1} +V_{10}}{2g^{2}},\\ v_{1}(q)&:=\frac{1}{2g^{2}}\sum_{j=2}^{\infty}V_{1j }(gq)^{j}=\sum_{j=0}^{\infty}g^{j}v_{1j}q^{j+2},\qquad v_{1j}=\frac{V_{1,j+2}} {2}.\end{split} \tag{3.15}\]
We use the zeroth order perturbative solution (3.6). From the consistency at the orders \(1/g^{2}\) and \(1/g\), we should take
\[u_{1}(q)=-\frac{V_{11}}{2\Omega g}q+\sum_{n=0}^{\infty}g^{n}u_{1n}(q),\qquad \epsilon_{1}=\sum_{n=0}^{\infty}g^{n}\epsilon_{1n}. \tag{3.16}\]
It is observed that for the ground state, \(u_{1n}(q)\) is a polynomial of degree \(3n+4\). After inserting an ansatz for the polynomial \(u_{1n}(q)\), we can determine all the coefficients of \(u_{1n}(q)\) and \(\epsilon_{1n}\) from the perturbative equations. The remaining computation is the same as the zeroth order one. By performing the Borel summation of \(\epsilon_{1}\), we obtain the first correction \(\mathcal{E}_{1}\).

### On higher order corrections

The computations for higher orders are straightforward. At the \(k\)-th order, we have
\[\left(-g^{4}\frac{d^{2}}{dx^{2}}+\mathcal{E}_{0}-V_{0}(x)\right)\Phi_{k}(x)= \sum_{\ell=1}^{k}(V_{\ell}(x)-\mathcal{E}_{\ell})\Phi_{k-\ell}(x). \tag{3.17}\]
It leads to
\[-\frac{1}{2}u_{k}^{\prime\prime}+\Omega qu_{k}^{\prime}+\left(\frac{\Omega}{ 2}+v_{0}-\epsilon_{0}\right)u_{k}+\sum_{\ell=1}^{k}\left(\frac{V_{\ell 1}}{2g}q+v_{\ell}- \epsilon_{\ell}\right)u_{k-\ell}=0, \tag{3.18}\]
where \(\Phi_{k}(x)=e^{-\Omega q^{2}/2}u_{k}(q)\) and
\[\epsilon_{\ell}:=-\frac{\mathcal{E}_{\ell}+V_{\ell 0}}{2g^{2}},\qquad v_{\ell}(q ):=\frac{1}{2g^{2}}\sum_{j=2}^{\infty}V_{\ell j}(gq)^{j}. 
\tag{3.19}\] We observe that the ground state solution in general behaves as \[\begin{split} u_{k}(q)&=\frac{u_{k,-k}(q)}{g^{k}}+ \cdots=\sum_{n=-k}^{\infty}g^{n}u_{kn}(q),\\ \epsilon_{k}&=\frac{\epsilon_{k,-2}}{g^{2}}+\cdots= \sum_{n=-1}^{\infty}g^{2n}\epsilon_{k,2n},\end{split} \tag{3.20}\] where \(u_{kn}(q)\) is a polynomial of degree \(3n+4k\). Under this assumption, we can easily compute \(\epsilon_{k}\) perturbatively in \(g\). ## IV Examples In this section, we apply our formalism to various examples. ### A toy model: the Rosen-Morse potential We demonstrate that the idea in Section II actually works in the QNM problem for a simple exactly solvable toy model. What we consider is the so-called Rosen-Morse potential, which is regarded as an integrable deformation of the Poschl-Teller potential. The Rosen-Morse potential was studied in the context of the quasinormal modes in massive scalar perturbations [32]. We revisit the same model to validate our framework. This model is given by \[\left(\frac{d^{2}}{dx^{2}}+\omega^{2}-V_{\rm RM}(x)\right)\phi(x)=0, \tag{4.1}\] \[V_{\rm RM}(x)=\frac{1}{2\cosh^{2}x}+\mu^{2}\frac{1+\tanh x}{2}.\] where \(\mu\) is a deformation parameter. If \(\mu=0\), the potential reduces to the well-known Poschl-Teller potential. The Rosen-Morse potential (4.1) for \(\mu\neq 0\) is very similar to the potential for the spherically symmetric black hole in the massive scalar perturbation [32]. We will see it in the next subsection. We treat this system as a perturbation in the parameter \(\mu\). We first show that this system is in fact exactly solvable. To do so, we perform a change of variables and a transformation of the wave function by \[z=\frac{1}{2}(1+\tanh x),\qquad\phi(x)=z^{-i\omega/2}(1-z)^{-i\sqrt{\omega^{2 }-\mu^{2}}/2}y(z). \tag{4.2}\] Then, the new function \(y(z)\) satisfies the standard hypergeometric equation: \[z(1-z)y^{\prime\prime}(z)+[c-(a+b+1)z]y^{\prime}(z)-aby(z)=0, \tag{4.3}\] where \[a =\frac{1}{2}-\frac{i}{2}(\omega+\sqrt{\omega^{2}-\mu^{2}}+1), \tag{4.4}\] \[b =\frac{1}{2}-\frac{i}{2}(\omega+\sqrt{\omega^{2}-\mu^{2}}-1),\] \[c =1-i\omega.\] For a given \(\mu\), we impose the QNM-like boundary condition: \[\lim_{x\rightarrow-\infty}\phi(x)\sim e^{-i\omega x},\qquad\lim_{x\rightarrow+ \infty}\phi(x)\sim e^{+i\sqrt{\omega^{2}-\mu^{2}}\,x}, \tag{4.5}\] where we have to choose a branch of the square root so that \(\sqrt{z^{2}}=z\) for \(z\in\mathbb{C}\) in order to match the boundary condition for \(\mu=0\). In terms of \(y(z)\), this boundary condition is translated into the regularity condition both at \(z=0,1\) simultaneously. The regular solution at \(z=0\) is given by the Gauss hypergeometric function \[y(z)=F(a,b;c;z). \tag{4.6}\] Using the well-known analytic connection formula of the hypergeometric function: \[\begin{split} F(a,b,c;z)&=\frac{\Gamma(c)\Gamma(c-a- b)}{\Gamma(c-a)\Gamma(c-b)}F(a,b,a+b-c+1;1-z)\\ &\quad+\frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)}(1-z)^{c- a-b}F(c-a,c-b,c-a-b+1;1-z),\end{split} \tag{4.7}\] the regularity condition at \(z=1\) requires \[\frac{1}{\Gamma(a)\Gamma(b)}=0. \tag{4.8}\] Therefore we obtain \(a=-n\) or \(b=-n\) for \(n=0,1,2,\dots\). This condition leads to the following exact spectrum: \[\omega^{(n,\pm)}=\pm\left(\frac{1}{2}+\mu^{2}\frac{1}{4(2n^{2}+2n+1)}\right)-i \left(n+\frac{1}{2}-\mu^{2}\frac{2n+1}{4(2n^{2}+2n+1)}\right). \tag{4.9}\] We have two symmetric branches of the spectra. 
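As a quick cross-check of (4.9), one can expand the fundamental (\(n=0\)) frequency on the upper branch in \(\mu\) and square it; a short sympy sketch:

```python
import sympy as sp

# Expand the n = 0, upper-branch frequency (4.9) in mu and square it.
mu = sp.symbols('mu')
n = 0
omega = (sp.Rational(1, 2) + mu**2/(4*(2*n**2 + 2*n + 1))) \
        - sp.I*(n + sp.Rational(1, 2) - mu**2*(2*n + 1)/(4*(2*n**2 + 2*n + 1)))
print(sp.expand(omega**2))   # -I/2 + mu**2/2 + I*mu**4/8
```

The output is precisely the small-\(\mu\) expansion of \(\omega^{2}\) that the perturbative treatment below has to reproduce.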
The exact eigenfunction is also given by \[\begin{split}\phi^{(n,\pm)}(x)&=\left(\frac{1+\tanh x }{2}\right)^{-i\omega^{(n,\pm)}}\left(\frac{1-\tanh x}{2}\right)^{-i\sqrt{ \omega^{(n,\pm)2}-\mu^{2}}/2}\\ &\quad\quad\quad\quad\quad\times F\left(-n,-n\mp i;1-i\omega^{(n,\pm)};\frac{1+\tanh x}{2}\right)\end{split} \tag{4.10}\] Note that for a non-negative integer \(n\), the hypergeometric function in this equation is a polynomial of degree \(n\). For simplicity, we consider the case of \(b=-n\), and abbreviate the upper index in these expressions. For the lowest overtone number \(n=0\), we have \[\begin{split}\omega&=\frac{1-i}{2}+\mu^{2}\frac{1+ i}{4},\\ \phi(x)&=\left(\frac{1+\tanh x}{2}\right)^{-i\omega/ 2}\left(\frac{1-\tanh x}{2}\right)^{-i\sqrt{\omega^{2}-\mu^{2}}/2}.\end{split} \tag{4.11}\] In the small \(\mu\) limit, we have \[\begin{split}\omega^{2}&=\mathcal{E}_{0}+\mu^{2} \mathcal{E}_{1}+\mu^{4}\mathcal{E}_{2}=-\frac{i}{2}+\frac{\mu^{2}}{2}+\frac{i \mu^{4}}{8},\\ \phi(x)&=\phi_{0}(x)+\mu^{2}\phi_{1}(x)+\mu^{4}\phi_ {2}(x)+\mathcal{O}(\mu^{6}),\end{split} \tag{4.12}\] where \[\phi_{0}(x) =\left(\frac{1}{2\cosh x}\right)^{-i\omega_{0}}, \tag{4.13}\] \[\phi_{1}(x) =\frac{1-i}{4}x\left(\frac{1}{2\cosh x}\right)^{-i\omega_{0}},\] \[\phi_{2}(x) =-\frac{i}{16}x^{2}\left(\frac{1}{2\cosh x}\right)^{-i\omega_{0}},\] and \(\omega_{0}=(1-i)/2\). These functions satisfy the same boundary condition: \[\lim_{x\rightarrow-\infty}\phi_{k}(x)\sim e^{-i\omega_{0}x},\qquad\lim_{x \rightarrow+\infty}\phi_{k}(x)\sim e^{+i\omega_{0}x},\qquad k=0,1,2,\dots. \tag{4.14}\] Note that this boundary condition is slightly different from the true QNM boundary condition (4.5), but after resumming the perturbative series it is reproduced correctly. Now we confirm this result from perturbation theory. We consider the perturbation in \(\mu^{2}\): \[V_{\rm RM}(x) =V_{0}(x)+\mu^{2}V_{1}(x), \tag{4.15}\] \[V_{0}(x) =\frac{1}{2\cosh^{2}x},\qquad V_{1}(x)=\frac{1+\tanh x}{2}.\] At the lowest order, we of course obtain the Poschl-Teller potential: \[\left(\frac{d^{2}}{dx^{2}}+\mathcal{E}_{0}-V_{0}(x)\right)\phi_{0}(x)=0, \tag{4.16}\] Its eigenvalue and the eigenfunction for the fundamental QNM are exactly given by the zeroth order in (4.12) and (4.13). At the first and the second orders, we have \[\left(\frac{d^{2}}{dx^{2}}+\mathcal{E}_{0}-V_{0}(x)\right)\phi_{1 }(x)+(\mathcal{E}_{1}-V_{1}(x))\phi_{0}(x)=0, \tag{4.17}\] \[\left(\frac{d^{2}}{dx^{2}}+\mathcal{E}_{0}-V_{0}(x)\right)\phi_{2 }(x)+(\mathcal{E}_{1}-V_{1}(x))\phi_{1}(x)+\mathcal{E}_{2}\phi_{0}(x)=0.\] We would like to solve these inhomogeneous equations under the boundary condition (4.14). Instead, it is sufficient to confirm that the functions in (4.12) and (4.13) satisfy these differential equations. One can immediately check it. For higher overtone modes, since the hypergeometric function in (4.10) does not change the asymptotic behavior of the solution, the same structure holds. ### Massive scalar perturbations The simplest example in black hole problems is a massive scalar perturbation of the Schwarzschild geometry. The functions in the master equation are given by \[f(r)=1-\frac{2M}{r},\qquad V(x)=f(r)\left(\frac{l(l+1)}{r^{2}}+\frac{2M}{r^{3}}+ \mu^{2}\right). \tag{4.18}\] As in the Rosen-Morse potential, we regard the scalar mass square \(\mu^{2}\) as a deformation parameter: \(\alpha=\mu^{2}\). Note that the function \(f(r)\) does not receive any correction. The explicit relation between \(r\) and \(x\) is given by \[x=r+2M\log\left(\frac{r}{2M}-1\right). 
\tag{4.19}\]
We regard \(r\) as a function of \(x\). The unperturbed system is just the massless scalar case:
\[V_{0}(x)=f(r)\left(\frac{l(l+1)}{r^{2}}+\frac{2M}{r^{3}}\right). \tag{4.20}\]
The correction in the potential is
\[V_{1}(x)=f(r),\qquad V_{k\geq 2}(x)=0. \tag{4.21}\]
The QNM frequency receives the perturbative corrections in \(\mu^{2}\). To keep the generality of \(M\), we write the perturbative series in the dimensionless form
\[M\omega=\sum_{k=0}^{\infty}(M\mu)^{2k}w_{k}, \tag{4.22}\]
where the correction coefficients \(w_{k}\) do not depend on \(M\). Our task is to compute \(w_{k}\) order by order. We can apply the method in Section III. Let us briefly look at the boundary condition. In the case of (4.18), the total boundary condition for the QNM is
\[\lim_{x\rightarrow-\infty}\Phi(r)\sim e^{-i\omega x},\qquad\lim_{x\to+ \infty}\Phi(r)\sim e^{+i\sqrt{\omega^{2}-\mu^{2}}\,x}. \tag{4.23}\]
If \(\mu\) is small, the boundary condition at infinity is expanded as
\[e^{+i\sqrt{\omega^{2}-\mu^{2}}\,x}=e^{+i\omega x}\left(1-\frac{ix}{2\omega}\mu ^{2}-\frac{(i+\omega x)x}{8\omega^{3}}\mu^{4}+\mathcal{O}(\mu^{6})\right). \tag{4.24}\]
This is indeed consistent with our requirement (2.7). To show an explicit result, we focus on the cases of \(l=2,3\).8 In fact, it is sufficient to compute the coefficients in (4.22) for the case \(M=1\). The zeroth order frequency for the lowest overtone number9 is well-known:

Footnote 8: As explained in [17], the Bender-Wu approach works well for larger \(\ell\). This is why we consider \(\ell=2,3\) rather than \(\ell=0,1\). It is desirable to solve (2.12) in other approaches.

Footnote 9: The reader should not confuse the subscript index here with the overtone number.

\[w_{0}^{l=2}=0.4836438722-0.0967587760i,\quad w_{0}^{l=3}=0.6753662325-0.0964996277i. \tag{4.25}\]
We have computed the numerical values of the perturbative coefficients \(w_{k}\) up to \(k=40\). The first six corrections are shown in Table 1, where we quote the digits that are stable in our numerical computations. The leading and next-to-leading corrections are consistent with the early results in [15; 16].

What do we learn from these perturbative data? The most basic question would be whether the perturbative series (4.22) is convergent or not. To see this, we show the behavior of the ratio \(w_{k-1}/w_{k}\) up to \(k=40\) in Figure 1. The ratio seems to converge to a finite value, but the convergence is slow. Using basic knowledge of complex analysis, we can estimate the radius of convergence in a different way: the radius of convergence is determined by the singular point nearest to the origin. In our framework, we have only a finite number of \(w_{k}\), and we would like to decode the singularity structure from these data. Probably the best tool to do so is _Padé approximants_. Padé approximants tell us the analytic structure of a given power series.
In particular, they give us information on the singularity structure of the original function. See Appendix C in [33], for instance.

\begin{table}
\begin{tabular}{l l l} \hline \(k\) & \(w_{k}^{l=2}\) & \(4^{k}w_{k}^{l=3}\) \\ \hline 0 & \(0.4836438722-0.0967587760i\) & \(0.6753662325-0.0964996277i\) \\ 1 & \(0.3156326579+0.1081551348i\) & \(0.9437297621+0.2278771948i\) \\ 2 & \(0.03541170393+0.02620890155i\) & \(0.2263735226+0.1075217988i\) \\ 3 & \(0.01199156679+0.02204684913i\) & \(0.2085153094+0.1986390780i\) \\ 4 & \(0.00092115819+0.02209374509i\) & \(0.2333370885+0.4509679860i\) \\ 5 & \(-0.01001596605+0.02211024342i\) & \(0.1500437709+1.0963976002i\) \\ 6 & \(-0.02390151862+0.01898789685i\) & \(-0.580414699+2.681826119i\) \\ \hline \end{tabular}
\end{table}
Table 1: The first six perturbative corrections to the fundamental QNM frequency (4.22) with \(l=2,3\) in the massive scalar perturbation.

Figure 1: To see whether the perturbative series (4.22) is convergent or not, we plot the ratio \(|w_{k-1}|/|w_{k}|\) for \(1\leq k\leq 40\). It appears to converge to a finite value.

Since we have the perturbative data of (4.22) up to \((M\mu)^{80}\), we can construct its diagonal Padé approximant \(M\omega^{[40/40]}\). We read off the zeros and the poles of this approximant. The results are illustrated in Figure 2. These figures imply that the perturbative series (4.22) is likely a convergent series. One can estimate its radius of convergence by computing the distance to the nearest singular point. In this computation, one has to be careful about "false" singular points of Padé approximants. These singular points disappear if the orders of the Padé approximant are changed: they are artifacts of the approximant, while the "true" singular points are stable against changes of the Padé orders. In Figure 2, the black dashed circle indicates the expected circle of convergence. The radius of convergence \(R\) of (4.22) in the complex \(M\mu\)-plane is estimated as
\[R_{\rm fund.}^{\ell=2}\approx 0.643,\qquad R_{\rm fund.}^{\ell=3}\approx 0.900. \tag{4.26}\]
We do not have a clear physical interpretation of this radius so far. It would be interesting to understand it. By using the Padé approximants, we finally extrapolate our perturbative results to the finite parameter region, as shown in Figure 3.

Figure 2: The singularity structure of the [40/40] Padé approximant of (4.22) for \(\ell=2\) (Left) and \(\ell=3\) (Right) in the complex \(M\mu\)-plane. We show its zeros by the blue points and poles by the orange points. The dashed curve is a conjectural convergence circle of the perturbative series (4.22). Note that the zeros and the poles inside the circle disappear when the degrees of the Padé approximant are varied. These are artifacts of the [40/40] Padé approximant.

Figure 3: The mass dependence of the \(\ell=2\) fundamental QNM frequency for the massive scalar perturbation. The (red) points represent the numerical values. The (orange) dashed line and the (blue) solid line are the perturbative series (4.22) up to \(k=40\) and its diagonal Padé approximant, respectively. The Padé approximant is extrapolated beyond the radius of convergence.
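Only the first few coefficients are quoted in Table 1, but the pole analysis itself is easy to reproduce at low order. The following mpmath sketch builds a \([3/3]\) approximant in the variable \(z=(M\mu)^{2}\) from the quoted \(l=2\) data (the analysis above uses 40 coefficients, so this is an illustration of the procedure rather than of the accuracy):

```python
import mpmath as mp

# [3/3] Pade approximant of (4.22) in z = (M*mu)^2, from the l = 2 column of Table 1.
w = [mp.mpc('0.4836438722', '-0.0967587760'),
     mp.mpc('0.3156326579', '0.1081551348'),
     mp.mpc('0.03541170393', '0.02620890155'),
     mp.mpc('0.01199156679', '0.02204684913'),
     mp.mpc('0.00092115819', '0.02209374509'),
     mp.mpc('-0.01001596605', '0.02211024342'),
     mp.mpc('-0.02390151862', '0.01898789685')]

p, q = mp.pade(w, 3, 3)                          # numerator/denominator Taylor coefficients
poles = mp.polyroots(q[::-1])                    # polyroots expects the leading coefficient first
print(sorted(mp.sqrt(abs(z)) for z in poles))    # candidate singular points, as |M*mu|
```

The distance from the origin to the nearest pole that is stable under changes of the Padé orders, mapped back to the \(M\mu\)-plane by the square root, is the estimate underlying (4.26).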
### Slowly rotating black holes

Another simple application is the Kerr geometry. We regard the angular momentum as a deformation parameter, and here we consider the slow rotation limit. We briefly explain how to get the slow rotation expansion of the QNM frequency reported in [34]. The perturbations of rotating black holes are governed by the Teukolsky equation [35]. In [34], an isospectral equation to the Teukolsky equation was proposed, and this isospectral equation is much more useful for our purpose in this paper. We start with the radial master equation
\[\left(\frac{d^{2}}{dx^{2}}+(2M\omega)^{2}-V(x)\right)\Phi(x)=0, \tag{4.27}\]
where
\[\begin{split}& V(x)=f(z)\biggl[4c^{2}+\frac{4c(m-c)}{z}+\frac{{}_{s}A_ {\ell m}(c)+s(s+1)-c(2m-c)}{z^{2}}-\frac{s^{2}-1}{z^{3}}\biggr],\\ & f(z)=1-\frac{1}{z},\qquad x=z+\log(z-1)\end{split} \tag{4.28}\]
and \(c=a\omega\) is related to the rotation parameter \(a\). For the notational details, see [34]. Of course, the slow rotation limit corresponds to the small \(c\) limit. The separation constant \({}_{s}A_{\ell m}(c)\) is determined by the regularity condition of the angular master equation at \(\xi=\pm 1\):
\[\biggl[\frac{d}{d\xi}(1-\xi^{2})\frac{d}{d\xi}+(c\xi)^{2}-2cs\xi+{}_{s}A_{ \ell m}(c)+s-\frac{(m+s\xi)^{2}}{1-\xi^{2}}\biggr]{}_{s}S_{\ell m}(\xi)=0. \tag{4.29}\]
To compute the small \(c\) expansion of the potential, we need the perturbative series of \({}_{s}A_{\ell m}(c)\). This can be done as follows. In the limit \(c\to 0\), the angular master equation can be solved exactly. The regular solution at \(\xi=\pm 1\) exists only for the discrete eigenvalue
\[{}_{s}A_{\ell m}(0)=\ell(\ell+1)-s(s+1), \tag{4.30}\]
and the exact eigenfunction is given by
\[{}_{s}S_{\ell m}^{(c=0)}(\xi)=(1-\xi)^{-\frac{m+s}{2}}(1+\xi)^{\frac{m-s}{2}}P _{\ell+s}^{(-m-s,m-s)}(\xi), \tag{4.31}\]
where \(P_{n}^{(\alpha,\beta)}(z)\) is the Jacobi polynomial. We have assumed \(\ell\geq|s|\) and \(|m|\leq\ell\). In close analogy with the Bender-Wu approach, the eigenvalue \({}_{s}A_{\ell m}(c)\) and the eigenfunction \({}_{s}S_{\ell m}(\xi)\) admit perturbative series in \(c\):
\[{}_{s}A_{\ell m}(c)=\sum_{k=0}^{\infty}c^{k}{}_{s}A_{\ell m}^{(k)},\qquad{}_{s }S_{\ell m}(\xi)=\sum_{k=0}^{\infty}c^{k}{}_{s}S_{\ell m}^{(k)}(\xi). \tag{4.32}\]
The crucial step is to find the following general structure of the regular function \({}_{s}S_{\ell m}^{(k)}(\xi)\):
\[{}_{s}S_{\ell m}^{(k)}(\xi)=(1-\xi)^{-\frac{m+s}{2}}(1+\xi)^{\frac{m-s}{2}}{} _{s}Q_{\ell m}^{(k)}(\xi), \tag{4.33}\]
where \({}_{s}Q_{\ell m}^{(k)}(\xi)\) is a polynomial of degree \(\ell+s+k\) in \(\xi\). From the differential equation (4.29), we can fix all the coefficients in the polynomial \({}_{s}Q_{\ell m}^{(k)}(\xi)\) and \({}_{s}A_{\ell m}^{(k)}\) order by order. This method allows us to compute the exact value of \({}_{s}A_{\ell m}^{(k)}\) up to very high orders for given \(s\), \(\ell\) and \(m\). We have confirmed that the first few coefficients indeed agree with the results in [36; 37].

Once we know the small \(c\) expansion of \({}_{s}A_{\ell m}(c)\), we obtain the perturbative expansion of the potential \(V(x)\). Then we can apply the method in Section II. The result is given by the following small \(c\) expansion:
\[M_{s}\omega_{\ell m}=\sum_{k=0}^{\infty}c^{k}{}_{s}v^{(k)}_{\ell m}. \tag{4.34}\]
However, we are interested in the perturbative expansion in terms of the rotation parameter \(a\) rather than \(c=a\omega\). This expansion is easily obtained by plugging (4.34) into \(c=a\omega\) and by inversely expanding \(c\) in \(a/M\). We finally obtain the following perturbative series
\[M_{s}\omega_{\ell m}=\sum_{k=0}^{\infty}\left(\frac{a}{M}\right)^{k}{}_{s}w^{ (k)}_{\ell m}, \tag{4.35}\]
where the explicit values of \({}_{s}w^{(k)}_{\ell m}\) for \((s,\ell,m)=(-2,2,0),(-2,2,1),(-2,2,2)\) up to \(k=12\) are found in Table 1 in [34].
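The inverse expansion is a simple fixed-point iteration in a computer-algebra system. The following sympy sketch (with generic symbolic coefficients \(v_{k}\) standing in for \({}_{s}v^{(k)}_{\ell m}\)) re-expands \(M\omega\) in \(\epsilon=a/M\):

```python
import sympy as sp

# Re-expansion step from (4.34) to (4.35): M*omega = sum_k v_k c^k with c = a*omega.
N = 3
eps = sp.Symbol('epsilon')                 # epsilon = a/M
v = sp.symbols(f'v0:{N + 1}')              # generic stand-ins for the small-c coefficients

w = v[0]                                   # leading approximation for M*omega
for _ in range(N):                         # each pass fixes one more order in epsilon
    c = eps*w                              # c = a*omega = epsilon*(M*omega)
    w = sum(v[k]*c**k for k in range(N + 1))
    w = sp.expand(sp.series(w, eps, 0, N + 1).removeO())

print(sp.collect(w, eps))
# v0 + eps*v0*v1 + eps**2*(v0*v1**2 + v0**2*v2) + eps**3*(v0*v1**3 + 3*v0**2*v1*v2 + v0**3*v3)
```

Feeding in the numerical \({}_{s}v^{(k)}_{\ell m}\) obtained from the Bender-Wu step then gives, order by order, the coefficients \({}_{s}w^{(k)}_{\ell m}\) of (4.35).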
### Almost asymptotically flat black holes

We can also apply our formalism to asymptotically non-flat geometries. We focus on the Schwarzschild de Sitter black holes. In this case, the functions in the minimally coupled massless scalar/vector/odd-parity gravitational perturbations are all given by
\[\begin{split} f(r)&=1-\frac{2M}{r}-\frac{\Lambda r ^{2}}{3},\\ V(x)&=f(r)\left(\frac{l(l+1)}{r^{2}}+(1-s^{2}) \left(\frac{2M}{r^{3}}-\frac{4-s^{2}}{6}\Lambda\right)\right),\end{split} \tag{4.36}\]
where \(s=0,1,2\) denotes the spin-weight of the perturbation fields, and \(\Lambda\) is the cosmological constant. We regard \(\Lambda\) as a deformation parameter. In contrast to the previous examples, the function \(f(r)\) depends on \(\Lambda\), and the explicit relation between \(r\) and \(x\) is now quite complicated. As discussed in Section II, we have to use the relation (2.14) to eliminate \(r\). This can be done at least perturbatively with respect to \(\Lambda\). After this prescription, the potential in terms of \(x\) receives an infinite number of perturbative corrections. We apply the Bender-Wu approach to such a perturbative series of the potential. In the Bender-Wu approach, we need the Taylor series of the perturbative corrections to the potential around the extremum \(x=\bar{x}\) of the zeroth order potential. This can be done systematically. We expand the frequency as
\[M\omega=\sum_{k=0}^{\infty}(9M^{2}\Lambda)^{k}w_{k}. \tag{4.37}\]
The numerical values of \(w_{k}\) for the fundamental mode with \(l=2\) in the gravitational perturbation (\(s=2\)) up to \(k=8\) are given in Table 2.

A non-trivial test of our result is to check the isospectrality between the odd-parity and even-parity gravitational perturbations. The potential in the even-parity gravitational perturbation is
\[V^{\rm even}(x)=f(r)\frac{2}{r^{3}}\frac{9M^{3}+3\lambda^{2}Mr^{2}+\lambda^{2 }(1+\lambda)r^{3}+9M^{2}\lambda r-3M^{2}\Lambda r^{3}}{(3M+\lambda r)^{2}}, \tag{4.38}\]
where \(\lambda=(l-1)(l+2)/2\). It is well-known that the QNM spectra in the odd- and even-parity perturbations are exactly the same. The reason behind this remarkable fact is a supersymmetric structure; see appendix A in [4]. Our formalism is also applicable to this potential, and we have checked that the isospectrality indeed holds at the perturbative level at least up to \(k=8\):
\[w_{k}^{\rm odd}=w_{k}^{\rm even}. \tag{4.39}\]
This is evidence for the validity of our method.

\begin{table} \begin{tabular}{l r} \hline \(k\) & \(w_{k}\) \\ \hline 0 & \(0.3736716844-0.0889623157i\) \\ 1 & \(-0.1864855559+0.0372042528i\) \\ 2 & \(-0.04819480629+0.01428258071i\) \\ 3 & \(-0.02302643485+0.00713463072i\) \\ 4 & \(-0.01415049627+0.00398414719i\) \\ 5 & \(-0.010032759238+0.002550521089i\) \\ 6 & \(-0.007668666891+0.001893042626i\) \\ 7 & \(-0.006085692144+0.001548612387i\) \\ 8 & \(-0.004939500648+0.001314426006i\) \\ \hline \end{tabular} \end{table}
Table 2: The first eight perturbative corrections to the fundamental QNM frequency (4.37) with \(l=2\) in the odd-parity gravitational perturbation for the asymptotically dS black holes. It turns out that the same values are also obtained by the even-parity perturbation.

Let us discuss the extrapolation of (4.37) to finite \(\Lambda\). We first observe that the perturbative series is likely convergent, but it is hard to guess the radius of convergence from the coefficients \(w_{k}\). We consider the \([4/4]\) Padé approximant by using the values in Table 2. The Padé approximant \(\omega^{[4/4]}\) for \((s,l)=(2,2)\) has four poles at
\[M^{2}\Lambda=0.101-0.0134i,\quad 0.142+0.00389i,\quad 0.323+0.0678i,\quad 2.45+0.687i, \tag{4.40}\]
where the first pole is relatively close to \(M^{2}\Lambda=1/9\), at which the event horizon and the de Sitter horizon coincide. It is expected that higher-order Padé approximants capture this observation more precisely, but it is technically difficult to check this at the moment. This observation implies that the radius of convergence of (4.37) is just \(|M^{2}\Lambda|=1/9\). The extrapolation of (4.37) by its Padé approximant is compared to the numerical value of the QNM frequency directly computed from (4.36). For \(M^{2}\Lambda=0.06\), we have
\[M\omega_{s=2,l=2}^{[4/4]}(M^{2}\Lambda=0.06)\approx 0.2533-0.06304i, \tag{4.41}\]
which agrees with the WKB result in [38] and also with a recent high-precision computation in [39].
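The number quoted in (4.41) can be checked directly from the data in Table 2. A short mpmath sketch (expansion variable \(u=9M^{2}\Lambda\)):

```python
import mpmath as mp

# [4/4] Pade approximant of (4.37), built from Table 2, evaluated at M^2*Lambda = 0.06.
w = [mp.mpc('0.3736716844', '-0.0889623157'),
     mp.mpc('-0.1864855559', '0.0372042528'),
     mp.mpc('-0.04819480629', '0.01428258071'),
     mp.mpc('-0.02302643485', '0.00713463072'),
     mp.mpc('-0.01415049627', '0.00398414719'),
     mp.mpc('-0.010032759238', '0.002550521089'),
     mp.mpc('-0.007668666891', '0.001893042626'),
     mp.mpc('-0.006085692144', '0.001548612387'),
     mp.mpc('-0.004939500648', '0.001314426006')]

p, q = mp.pade(w, 4, 4)                      # coefficients of numerator and denominator
u = 9*mp.mpf('0.06')                         # u = 9 M^2 Lambda
print(mp.polyval(p[::-1], u)/mp.polyval(q[::-1], u))   # ~ 0.2533 - 0.06304i, cf. (4.41)
```

Evaluating the same approximant on a grid of \(M^{2}\Lambda\) values reproduces the solid curve shown in Figure 4.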
We should note that the QNM spectral problem becomes quite different for \(\Lambda>0\) (dS) and \(\Lambda<0\) (AdS). The boundary condition in the AdS case is much more involved than in the dS case [4; 40]. In this paper, we restrict ourselves to the dS case for simplicity. It would be interesting to clarify the physical meaning of a naive continuation of our result to \(\Lambda<0\). Another perturbative treatment of the (A)dS spectral problem can also be found in [41].

Figure 4: The cosmological constant dependence of the fundamental QNM frequency for the asymptotically Schwarzschild de Sitter black holes. The (red) points represent the numerical values, while the (blue) solid line represents the \([4/4]\) Padé approximant.

### Reissner-Nordström black holes

The spectrum for the Reissner-Nordström black holes is more involved. The master equation in the odd-parity gravitational perturbation consists of
\[\begin{split} f(r)&=1-\frac{2M}{r}+\frac{Q^{2}}{r^{ 2}},\\ V(x)&=f(r)\left(\frac{l(l+1)}{r^{2}}-\frac{q}{r^{3}} +\frac{4Q^{2}}{r^{4}}\right),\end{split} \tag{4.42}\]
where
\[q=3M+\sqrt{9M^{2}+4Q^{2}(l-1)(l+2)}. \tag{4.43}\]
We have two characteristic regimes: \(Q=0\) and \(Q=M\). We discuss perturbative series around these two points.

#### iv.5.1 Almost chargeless limit

The first is the small charge expansion. In this case, \(Q^{2}\) is a natural deformation parameter. We write the perturbative QNM frequency as
\[M\omega=\sum_{k=0}^{\infty}\left(\frac{Q}{M}\right)^{2k}w_{k}. \tag{4.44}\]
The potential receives an infinite number of corrections. The strategy is the same as that in the previous subsection. We show the numerical values of the perturbative coefficients \(w_{k}\) for the fundamental QNM frequency with \(l=2\) up to \(k=4\) in Table 3. The quadratic correction \(w_{1}\) matches well with [15].

#### iv.5.2 Almost extremal limit

We can also consider the opposite limit \(Q\to M\). In this case, \(1-Q/M\) is a good expansion parameter. Therefore we write the frequency as
\[M\omega=\sum_{k=0}^{\infty}\alpha^{k}w_{k}^{\text{ext}},\qquad\alpha:=1-\frac{ Q}{M}. \tag{4.45}\]
Now we have
\[f(r)=\left(1-\frac{M}{r}\right)^{2}-\alpha\frac{2M^{2}}{r^{2}}+\alpha^{2} \frac{M^{2}}{r^{2}}. \tag{4.46}\]
We also expand the potential perturbatively with respect to \(\alpha\).
The QNM frequencies in the strictly extremal case (\(\alpha=0\)) can be computed by the Bender-Wu approach [17]. We do the same computation for high-order corrections. The numerical values of \(w_{k}^{\rm ext}\) for the fundamental QNM frequency with \(l=2\) up to \(k=4\) are shown in Table 3. The zeroth order coefficients \(w_{0}^{\rm ext}\) agrees with the early result [42]. We do not find any references on the perturbative corrections near the extremal limit. #### iv.2.3 An interpolating function We have the two perturbative expansions of the same spectrum in the different regimes. In each regime, we determine its Pade approximant, and can extrapolate it to the other regime. However, to know the global behavior, there is a better approximation, called multi-point Pade approximants [43; 44]. Let us consider a rational function \[M\omega^{[p/q]}=\frac{a_{0}+a_{1}Q/M+\cdots+a_{p}(Q/M)^{p}}{1+b_{1}Q/M+\cdots+ b_{q}(Q/M)^{q}}. \tag{4.47}\] We fix the coefficients \(a_{n}\) and \(b_{n}\) so that the rational function reproduces the _both_ perturbative expansions around \(Q/M=0\) and \(Q/M=1\). For instance to get the rational function \(M\omega^{[4/4]}\) we totally need nine data in (4.44) and in (4.45). A balanced choice is to take \(w_{k}\) (\(0\leq k\leq 2\)) in (4.44) and \(w_{k}^{\rm ext}\) (\(0\leq k\leq 3\)) in (4.45). Recall the expansion (4.44) has no odd-order terms. We can use this information to fix \(a_{n}\) and \(b_{n}\). The explicit values of \(a_{n}\) and \(b_{n}\) in this case are shown in Table 4. The interpolating function remarkably reproduces the numerical values in the whole regime \(0\leq Q/M\leq 1\), as shown in Figure 5. \begin{table} \begin{tabular}{c c c} \hline \(k\) & \(w_{k}\) & \(w_{k}^{\rm ext}\) \\ \hline 0 & \(0.3736716844-0.0889623157i\) & \(0.4313408007-0.0834603151i\) \\ 1 & \(0.02581767285-0.00282403214i\) & \(-0.2070138464-0.0853606869i\) \\ 2 & \(0.02518778870+0.00020532453i\) & \(0.2543444995+0.4939946909i\) \\ 3 & \(-0.004748170246+0.002508402108i\) & \(0.758606111-1.429576400i\) \\ 4 & \(0.01557265014+0.00041287974i\) & \(-6.158687644+0.575432188i\) \\ \hline \end{tabular} \end{table} Table 3: The low order corrections to the fundamental QNM frequency for \(l=2\) in the Reissner-Nordström gravitational perturbation. We consider the two distinct perturbative series (4.44) and (4.45). Interpolating functions will be improved if one considers further perturbative expansions around other points in the middle region. For instance, a perturbative expansion around \(Q/M\sim 0.8\) will provide us an important information on the global structure of the imaginary part of the QNM frequency for \(l=2\). We do not compute it in this work, but expect that our method is still applicable in such situations. ### Parameterized black hole QNMs Recently, a simple and effective way to compute perturbative corrections was proposed in [15; 16; 45]. We refer to it as the _parameterized QNM approach_. As one can see in \begin{table} \begin{tabular}{c c c} \hline \(n\) & \(a_{n}\) & \(b_{n}\) \\ \hline 0 & \(0.3736716844-0.0889623157i\) & \\ 1 & \(-0.349769907+0.062882011i\) & \(-0.92374126-0.051639318i\) \\ 2 & \(-0.342112665-0.038824176i\) & \(-0.91011322-0.313017895i\) \\ 3 & \(0.492170748-0.023942699i\) & \(1.32244473+0.24735505i\) \\ 4 & \(-0.169504965+0.033347865i\) & \(-0.454637541-0.004795289i\) \\ \hline \end{tabular} \end{table} Table 4: The nine coefficients in the rational approximation \(M\omega^{[4/4]}\) for the \(l=2\) fundamental mode. 
Figure 5: The (red) points represent the numerical values of the QNM frequency of the RN black holes. The (blue) solid curve is the graph of the rational function (4.47) for \(p=q=4\) with the coefficients in Table 4. The (orange) dashed and (black) dotted lines represent the perturbative expansions (4.44) and (4.45) up to \(k=4\), respectively. the previous examples, most deformation terms in the potential take the form as linear combinations of \(1/r^{j}\) with integral \(j\). At the first order in the perturbation, corrections to the QNM frequencies are the same linear combinations of the potential. See (2.17) and (2.18). The main idea of the parameterized QNM approach is the following. We make a list of corrections generated by only the \(1/r^{j}\)-deformations beforehand, and use it for a more complicated potential to which corrections are linear combinations of the \(1/r^{j}\)-deformations. The extension to high-order corrections is straightforward [16]. Physical applications of the parameterized QNM approach have been shown in [46; 47; 48; 49; 50; 51; 52; 53; 54; 55]. (See also Appendix. A for complementary discussion.) At the technical level, it is not so easy to compute the precise values of the quadratic corrections. In [15; 16], the authors used numerical fittings. Since our formalism is easily applied to the setup of the parameterized QNM approach, we re-evaluate the corrections up to the quadratic order. This re-evaluation played an important role in the computation of perturbative corrections for slowly rotating black holes [47]. We keep at least ten-digit precision for all the corrections listed in this section. We focus on deformation of the odd-parity gravitational perturbation of the Schwarzschild black holes. The computations for the other cases are straightforward. The potential is \[V_{0}(x)=f(r)\left(\frac{l(l+1)}{r^{2}}-\frac{3r_{H}}{r^{3}}\right),\quad V_{ 1}(x)=\frac{f(r)}{r_{H}^{2}}\left(\frac{r_{H}}{r}\right)^{j},\quad V_{k\geq 2 }(x)=0, \tag{4.48}\] where \(r_{H}\) is the location of the event horizon and \(j=0,1,2,\dots\). For this deformation, the spectrum receives the corrections: \[\omega=\omega_{0}+\sum_{k=1}^{\infty}\alpha^{k}e_{j}^{(k)}. \tag{4.49}\] For \(\ell=2\), we show the numerical values of \(e_{j}^{(k)}\) (\(0\leq j\leq 8\), \(k=1,2\)) in Table 5. To make a list at the quadratic order, we also have to consider two-parameter perturbations (2.17) with \[\begin{split} V_{1}^{\alpha}(x)&=\frac{f(r)}{r_{H}^ {2}}\left(\frac{r_{H}}{r}\right)^{i},\qquad V_{k\geq 2}^{\alpha}(r)=0,\\ V_{1}^{\beta}(x)&=\frac{f(r)}{r_{H}^{2}}\left( \frac{r_{H}}{r}\right)^{j},\qquad V_{k\geq 2}^{\beta}(r)=0.\end{split} \tag{4.50}\] For this perturbation, the frequency receives the corrections: \[\omega=\omega_{0}+\sum_{k=1}^{\infty}\sum_{\ell=0}^{k}\alpha^{\ell}\beta^{k- \ell}e_{ij}^{(\ell,k-\ell)}. \tag{4.51}\] where we have \(e_{ij}^{(k,0)}=e_{i}^{(k)}\) and \(e_{ij}^{(0,k)}=e_{j}^{(k)}\) by construction. Therefore at the second order, the only unknown coefficient is \(e_{ij}^{(1,1)}\). This can be evaluated by the trick explained in (2.18). The numerical values are shown in Table 6. We compare these results with [15; 16], and found that there are significant differences. For the error estimation of the coefficients in Tables 5 and 6, we use the recursion relations among the coefficients in Eqs. (A51) and (A52). 
We checked that (A51) is satisfied at \(\mathcal{O}(10^{-15})\) for linear coefficients, and (A52) is satisfied at \(\mathcal{O}(10^{-11})\) for quadratic coefficients. This also shows that the perturbative approach developed in the present paper works well. \begin{table} \begin{tabular}{r r r} \hline \hline \(k\) & \(j\) & \(r_{H}e_{j}^{(k)}\) \\ \hline & 0 & \(0.2472519654+0.0926430738i\) \\ & 1 & \(0.1598547870+0.0182084818i\) \\ & 2 & \(0.09663224013-0.00241549645i\) \\ & 3 & \(0.05849078501-0.00371786129i\) \\ 1 & 4 & \(0.03667943678-0.00043869695i\) \\ & 5 & \(0.02403794775+0.00273079314i\) \\ & 6 & \(0.01634281096+0.00484267168i\) \\ & 7 & \(0.011363575081+0.006013991932i\) \\ & 8 & \(0.007951997735+0.006536996457i\) \\ \hline & 0 & \(0.002868401222-0.001011345890i\) \\ & 1 & \(-0.01439027937-0.00572350838i\) \\ & 2 & \(-0.005756554781+0.000336740545i\) \\ & 3 & \(-0.0006273259154-0.0004693348600i\) \\ 2 & 4 & \(0.0007234494450-0.0011595941966i\) \\ & 5 & \(0.000987182421-0.001122519006i\) \\ & 6 & \(0.0010046849768-0.0008403243677i\) \\ & 7 & \(0.0009526541187-0.0005456646402i\) \\ & 8 & \(0.0008715569057-0.0003017937415i\) \\ \hline \hline \end{tabular} \end{table} Table 5: The one-parameter corrections up to the second order for \(l=2\) in the parameterized QNM approach. ### Series expansion method As an application of our perturbative framework based on a method other than the Bender-Wu approach, we study the series expansion method known as Leaver's method [5; \begin{table} \begin{tabular}{c c c} \hline \hline \(i\) & \(j\) & \(r_{H}e_{ij}^{(1,1)}\) \\ \hline 0 & 1 & \(-0.02588238896-0.02792966573i\) \\ & 2 & \(-0.03870432587-0.02320896618i\) \\ & 3 & \(-0.03739171923-0.01523959074i\) \\ & 4 & \(-0.03119143980-0.01062473399i\) \\ & 5 & \(-0.02473633363-0.00886735210i\) \\ & 6 & \(-0.01939362275-0.00853499059i\) \\ & 7 & \(-0.01523819641-0.00870770164i\) \\ \hline 1 & 2 & \(-0.02293084111-0.00341311941i\) \\ & 3 & \(-0.01688392216-0.00102025764i\) \\ & 4 & \(-0.01249473743-0.0009507495878i\) \\ & 5 & \(-0.009533459569-0.001537281111i\) \\ & 6 & \(-0.007497937650-0.002167588509i\) \\ & 7 & \(-0.006026376454-0.002674270477i\) \\ \hline 2 & 3 & \(-0.005785247726+0.0002460429730i\) \\ & 4 & \(-0.003236295992-0.0006934041512i\) \\ & 5 & \(-0.002075023229-0.001296233004i\) \\ & 6 & \(-0.001466727846-0.001581112859i\) \\ & 7 & \(-0.001083345654-0.001682173863i\) \\ \hline 3 & 4 & \(0.000315183631-0.001771852361i\) \\ & 5 & \(0.000806605055-0.002028059473i\) \\ & 6 & \(0.000954987015-0.001956886197i\) \\ & 7 & \(0.001002044048-0.001756226616i\) \\ \hline 4 & 5 & \(0.001737194187-0.002338806036i\) \\ & 6 & \(0.001773835947-0.002098671851i\) \\ & 7 & \(0.001741580709-0.001777161054i\) \\ \hline 5 & 6 & \(0.001993021672-0.001958873499i\) \\ & 7 & \(0.001947091748-0.001620453116i\) \\ \hline 6 & 7 & \(0.001959730533-0.001367519282i\) \\ \hline \hline \end{tabular} \end{table} Table 6: The off-diagonal quadratic corrections for \(l=2\) in the two-parameter perturbation. 18]. We consider the system with the parameterized QNM potential in Eqs (A3)-(A4) with a single correction term \[\delta V=\alpha\frac{f}{r_{H}^{2}}\left(\frac{r_{H}}{r}\right)^{j}, \tag{4.52}\] where \(f\) is given by \(f=1-r_{H}/r\). We assume the following series expansion of the wave function as \[\Phi=e^{i\omega r_{*}}\sum_{k=0}^{\infty}a_{k}f^{k+n}, \tag{4.53}\] where the characteristic exponent \(n\) is given by \[n=-2ir_{H}\omega, \tag{4.54}\] so that the QNM boundary condition at \(r=r_{H}\) is satisfied. 
After some calculations, we obtain recursion relations for \(a_{k}\) \[A_{k}a_{k-1}+B_{k}a_{k}+C_{k}a_{k+1}+\alpha\sum_{m=0}^{j-2}D_{m}a_{k-m}=0, \tag{4.55}\] where coefficients \(A_{k},B_{k},C_{k}\) and \(D_{m}\) are given by \[A_{k} =(k-2-2ir_{H}\omega)(k+2-2ir_{H}\omega), \tag{4.56}\] \[B_{k} =3-2k(1+k)-\ell(\ell+1)+4ir_{H}\omega(1+2k)+8r_{H}^{2}\omega^{2},\] (4.57) \[C_{k} =(1+k)(1+k-2ir_{H}\omega),\] (4.58) \[D_{m} =\frac{(-1)^{m+1}(j-2)!}{m!(j-2-m)!}. \tag{4.59}\] The coefficients \(a_{k}\) with large \(k\) take exponentially small value only for the wave function with the appropriate QNM boundary condition at \(r\rightarrow\infty\). Thus, we can calculate the approximate QNM frequency by setting \[a_{k_{\rm max}}=0, \tag{4.60}\] with a large integer \(k_{\rm max}\). However, directly solving Eq. (4.60) numerically is very difficult, and then we usually use Leaver's continued fraction method [5, 18] whose basic equation is mathematically same as Eq. (4.60). In this section, we study this problem based on our perturbative approach. Expanding the coefficients \(a_{k}\) and the QNM frequency \(\omega\) as \[a_{k} =a_{k}^{(0)}+\alpha a_{k}^{(1)}+\alpha^{2}a_{k}^{(2)}+\cdots, \tag{4.61}\] \[\omega =\omega_{0}+\alpha\omega_{1}+\alpha^{2}\omega_{2}+\cdots, \tag{4.62}\] the coefficients \(A_{k},B_{k},C_{k}\) become \[A_{k} =A_{k}^{(0)}+\alpha\omega_{1}A_{k}^{(1)}+\alpha^{2}\omega_{1}^{2} A_{k}^{(2,0)}+\alpha^{2}\omega_{2}A_{k}^{(0,1)}+\cdots, \tag{4.63}\] \[B_{k} =B_{k}^{(0)}+\alpha\omega_{1}B_{k}^{(1)}+\alpha^{2}\omega_{1}^{2} B_{k}^{(2,0)}+\alpha^{2}\omega_{2}B_{k}^{(0,1)}+\cdots,\] (4.64) \[C_{k} =A_{k}^{(0)}+\alpha\omega_{1}C_{k}^{(1)}+\alpha^{2}\omega_{1}^{2} C_{k}^{(2,0)}+\alpha^{2}\omega_{2}C_{k}^{(0,1)}+\cdots, \tag{4.65}\] where the coefficients in RHS depend only on \(\omega_{0}\). The recursion relations in Eq. (4.55) at each order become \[\mathcal{O}(\alpha^{0}): \quad A_{k}^{(0)}a_{k-1}^{(0)}+B_{k}^{(0)}a_{k}^{(0)}+C_{k}^{(0)} a_{k+1}^{(0)}=0, \tag{4.66}\] \[\mathcal{O}(\alpha^{1}): \quad A_{k}^{(0)}a_{k-1}^{(1)}+B_{k}^{(0)}a_{k}^{(1)}+C_{k}^{(0)} a_{k+1}^{(1)}\] \[\quad+\omega_{1}\Big{[}A_{k}^{(1)}a_{k-1}^{(0)}+B_{k}^{(1)}a_{k}^ {(0)}+C_{k}^{(1)}a_{k+1}^{(0)}\Big{]}+\sum_{m=0}^{j-2}D_{m}a_{k-m}^{(0)}=0,\] (4.67) \[\mathcal{O}(\alpha^{2}): \quad A_{k}^{(0)}a_{k-1}^{(2)}+B_{k}^{(0)}a_{k}^{(2)}+C_{k}^{(0)} a_{k+1}^{(2)}\] \[\quad+\omega_{1}\Big{[}A_{k}^{(1)}a_{k-1}^{(1)}+B_{k}^{(1)}a_{k}^ {(1)}+C_{k}^{(1)}a_{k+1}^{(1)}\Big{]}\] \[\quad+\omega_{1}^{2}\Big{[}A_{k}^{(2,0)}a_{k-1}^{(0)}+B_{k}^{(2,0 )}a_{k}^{(0)}+C_{k}^{(2,0)}a_{k+1}^{(0)}\Big{]}\] \[\quad+\omega_{2}\Big{[}A_{k}^{(0,2)}a_{k-1}^{(0)}+B_{k}^{(0,2)}a_ {k}^{(0)}+C_{k}^{(0,2)}a_{k+1}^{(0)}\Big{]}+\sum_{m=0}^{j-2}D_{m}a_{k-m}^{(1)}=0. \tag{4.68}\] We note that these equations correspond to the perturbative equations in Eqs (2.10) and (2.12). First, at \(\mathcal{O}(\alpha^{0})\), we obtain \(\omega_{0}\) using Leaver's continued fraction method by setting a large integer \(k_{\rm max}\). Next, at \(\mathcal{O}(\alpha^{1})\), we solve the equation \[a_{k_{\rm max}}^{(1)}=0, \tag{4.69}\] directly with respect to \(\omega_{1}\). For this purpose, we rewrite \(a_{k_{\rm max}}^{(1)}\) as a function of \(\omega_{0},\omega_{1},a_{0}^{(0)},a_{0}^{(1)}\) by using Eqs. (4.66)-(4.67) recursively, then \(a_{k_{\rm max}}^{(1)}\) depends on \(\omega_{1}\) linearly. This implies that we obtain a unique \(\omega_{1}\) if we fix the value of \(\omega_{0}\). 
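A minimal transcription of this first-order step into code might look as follows. This is a sketch only: it sets \(r_{H}=1\), assumes \(j\geq 2\) in (4.52), takes \(\omega_{0}\) as an input (obtained, e.g., by the standard continued-fraction method), and tracks the linear dependence of \(a^{(1)}_{k}\) on \(\omega_{1}\) explicitly:

```python
from math import comb

def leaver_first_order(l, j, omega0, kmax=400):
    """Sketch of the perturbative Leaver step (4.66)-(4.67) and (4.69), with r_H = 1.

    l      : multipole number; j : power in the deformation (4.52), assumed j >= 2
    omega0 : unperturbed QNM frequency (e.g. from the usual continued-fraction method)
    Returns omega_1, to be compared with e_j^(1) of Table 5."""
    I = 1j
    A  = lambda k, w: (k - 2 - 2*I*w)*(k + 2 - 2*I*w)                            # (4.56)
    B  = lambda k, w: 3 - 2*k*(1 + k) - l*(l + 1) + 4*I*w*(1 + 2*k) + 8*w**2     # (4.57)
    C  = lambda k, w: (1 + k)*(1 + k - 2*I*w)                                    # (4.58)
    dA = lambda k, w: -4*I*k - 8*w        # omega-derivatives, i.e. the X_k^(1) of (4.63)-(4.65)
    dB = lambda k, w: 4*I*(1 + 2*k) + 16*w
    dC = lambda k, w: -2*I*(1 + k)
    D  = [(-1)**(m + 1)*comb(j - 2, m) for m in range(j - 1)]                    # (4.59)

    # zeroth order (4.66), with a_0^(0) = 1 and a_{-1}^(0) = 0
    a0 = [1.0 + 0j, -B(0, omega0)/C(0, omega0)]
    for k in range(1, kmax):
        a0.append(-(A(k, omega0)*a0[k - 1] + B(k, omega0)*a0[k])/C(k, omega0))

    # first order (4.67): a_k^(1) = P_k + Q_k*omega_1 is linear in omega_1; a_0^(1) = 1
    P, Q = [1.0 + 0j], [0.0 + 0j]
    for k in range(kmax):
        a0m1 = a0[k - 1] if k > 0 else 0.0
        Pm1, Qm1 = (P[k - 1], Q[k - 1]) if k > 0 else (0.0, 0.0)
        S = sum(D[m]*a0[k - m] for m in range(min(k, j - 2) + 1))     # deformation source
        T = dA(k, omega0)*a0m1 + dB(k, omega0)*a0[k] + dC(k, omega0)*a0[k + 1]
        P.append(-(A(k, omega0)*Pm1 + B(k, omega0)*P[k] + S)/C(k, omega0))
        Q.append(-(A(k, omega0)*Qm1 + B(k, omega0)*Q[k] + T)/C(k, omega0))

    return -P[kmax]/Q[kmax]               # root of a_{kmax}^(1) = 0, cf. (4.69)
```

For example, `leaver_first_order(2, 4, 0.7473433688 - 0.1779246314j)` (the \(\ell=2\) fundamental frequency in units \(r_{H}=1\), i.e. twice the \(w_{0}\) of Table 2) should approach \(r_{H}e_{4}^{(1)}\) of Table 5 as \(k_{\rm max}\) grows; in practice \(k_{\rm max}\) and the working precision have to be increased together.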
In a similar way, we can solve the equation
\[a_{k_{\rm max}}^{(2)}=0, \tag{4.70}\]
directly with respect to \(\omega_{2}\). In the calculation, we can set \(a_{0}^{(0)}=a_{0}^{(1)}=a_{0}^{(2)}=1\) without loss of generality. We have confirmed that this method reproduces results consistent with Table 5. We finally note that, unlike the usual Leaver continued fraction method [5; 18], we do not need to perform Gaussian elimination to obtain three-term recursion relations at \(\mathcal{O}(\alpha^{1})\) and in the higher order analysis; this is another advantage of our perturbative approach.

## V Outlook

In this paper, we proposed a systematic way to compute high-order perturbative corrections to black hole quasinormal mode frequencies with continuous deformation parameters. Our method is widely applicable to many situations, and allows us to compute the high-order corrections very accurately. We showed various explicit examples. In particular, for the Reissner-Nordström black holes, we can expand the quasinormal mode frequency not only around the chargeless limit but also around the extremal limit.

There are several future directions. It is interesting to consider the near extremal expansion of the Kerr black holes. It was argued in [56] that the QNM frequencies in the extremal Kerr geometry show an interesting behavior. It is also interesting to develop the perturbative expansion of rotating black holes in modified gravity theories [57; 58; 59; 60; 61; 62; 63; 64; 65]. In this case, the full analytic solution for a general rotation parameter is not yet known, and we inevitably have to restrict ourselves to a perturbative treatment in the rotation parameter. We would also like to extend our framework to coupled master equations. Typically, the master equations in general relativity are decoupled, but in modified gravity theories they are sometimes coupled [66; 67; 68; 69; 70; 71]. Therefore, if we consider perturbative expansions in the modification parameters, it is desirable to generalize our formalism to such situations.

###### Acknowledgements.

This research is supported by JSPS KAKENHI Grant Nos. JP22K03641 (YH) and JP22K03626 (MK).

## Appendix A Recursion relations among coefficients in parameterized QNM approach

When the master equation is given in a series expansion of a small parameter, there is an ambiguity of the effective potential due to the choice of the master variable. In this appendix, we first give a general discussion of the ambiguity of the effective potential by extending the result in [45]. This ambiguity leads to recursion relations among coefficients in the parameterized QNM approach.

### Parameterized QNM approach

We consider the case with \(f=f_{0}=1-r_{H}/r\), and the master equation is given by
\[f\frac{d}{dr}\left(f\frac{d\Phi}{dr}\right)+(\omega^{2}-V)\Phi=0, \tag{10}\]
with
\[V =V_{0}+\delta V, \tag{11}\]
\[\delta V =\frac{f}{r_{H}^{2}}\sum_{j=0}^{\infty}\alpha_{j}\left(\frac{r_{H} }{r}\right)^{j}, \tag{12}\]
where \(V_{0}\) is the effective potential for the non-perturbative case and the \(\alpha_{j}\) denote the small parameters, which can be written as series in a single parameter \(\alpha\),
\[\alpha_{j}=\sum_{i=1}^{\infty}\alpha^{i}A_{j}^{(i)}. \tag{13}\]
We note that many systems can be written in this form of the master equation [15; 16; 47]. The QNM frequency behaves as
\[\omega=\omega_{0}+\sum_{j=0}^{\infty}\alpha_{j}e_{j}+\sum_{j,k=0}^{\infty} \alpha_{j}\alpha_{k}e_{j,k}+\cdots. 
\tag{14}\] where \(e_{j},e_{j,k},\cdots\) are model independent coefficients in parameterized QNM approach. When \(V_{0}\) is the Regge-Wheeler potential for the odd parity gravitational perturbation, the coefficients are related to the coefficients appearing in subsection IV.6 as \[e_{j}=e_{j}^{(1)},\qquad e_{j,j}=e_{j}^{(2)}, \tag{15}\] \[e_{j,k}=e_{k,j}=\frac{e_{j,k}^{(1,1)}}{2}\quad(j<k), \tag{16}\] where numerical values of \(e_{j}^{(1)},e_{j}^{(2)},e_{j,k}^{(1,1)}\) can be seen in Tables 5 and 6. ### Ambiguity of effective potential In this subsection, we use the coordinate \(x\) defined by \(dx/dr=1/f\). The master equation Eq. (16) in this coordinate becomes \[\frac{d^{2}\Phi}{dx^{2}}+(\omega^{2}-V)\Phi=0. \tag{17}\] We introduce a new variable \(\Psi\) as10 Footnote 10: Note that signatures of \(X\) and \(Y\) are opposite from [45]. \[\Psi=(1+X)\,\Phi+Y\frac{d\Phi}{dx}, \tag{18}\] where \(X\) and \(Y\) are \(\mathcal{O}(\alpha)\) functions of \(x\). If \(X\) and \(Y\) satisfy the relation \[-Y^{2}\frac{dV}{dx}+Y\left(2(\omega^{2}-V)\frac{dY}{dx}-\frac{d^{2}X}{dx^{2}} \right)+(1+X)\left(2\delta\frac{dX}{dx}+\frac{d^{2}Y}{dx^{2}}\right)=0, \tag{19}\] \(\Psi\) satisfies an equation \[\frac{d^{2}\Psi}{dx^{2}}+(\omega^{2}-V-\delta W)\Psi=0, \tag{20}\] where \(\delta W\) is given by11 Footnote 11: \(\delta W\) also can be written in the form \(\delta W=(2dX/dx+d^{2}Y/dx^{2})/Y\). \[\delta W=\frac{1}{1+X}\left(Y\frac{dV}{dx}-2(\omega^{2}-V)\frac{dY}{dx}+\frac {d^{2}X}{dx^{2}}\right), \tag{21}\] and this denotes the ambiguity of effective potential. We can regard that the effective potential changes \[V\to V+\delta W \tag{22}\] due to the change of the master variable, and then the small parameters \(\alpha_{j}\) in Eq (14) are also changed. Eq. (19) can be integrated as \[2C+Y\left((V-\omega^{2})Y+\frac{dX}{dx}\right)-\frac{dY}{dx}-2X-X\left(X+\frac {dY}{dx}\right)=0, \tag{23}\] where \(C\) is the constant of integration. If we expand \[X =\sum_{i=1}^{\infty}\alpha^{i}X_{i}, \tag{115}\] \[Y =\sum_{i=1}^{\infty}\alpha^{i}Y_{i},\] (116) \[V =V_{0}+\delta V=\sum_{i=0}^{\infty}\alpha^{i}V_{i},\] (117) \[\omega^{2} =\sum_{i=0}^{\infty}\alpha^{i}\mathcal{E}_{i},\] (118) \[C =\sum_{i=1}^{\infty}\alpha^{i}\mathcal{C}_{i}, \tag{119}\] Eq. (114) can be solved order by order as \[X_{i}=C_{i}-\frac{1}{2}Y_{i}^{\prime}-\frac{1}{2}\sum_{k=1}^{i- 1}\sum_{j=0}^{i-k-1}\left(\mathcal{E}_{j}-V_{j}\right)Y_{k}Y_{i-k-j}+\frac{1}{ 2}\sum_{k=1}^{i-1}\left(Y_{i-k}X_{k}^{\prime}-X_{i-k}Y_{k}^{\prime}-X_{i-k}X_{ k}\right). \tag{120}\] If we also expand \(\omega=\sum_{i=0}^{\infty}\alpha^{i}\omega_{i}\), \(\mathcal{E}_{i}\) is given by \[\mathcal{E}_{i}=\sum_{j=0}^{i}\omega_{i-j}\omega_{j}. \tag{121}\] Substituting the result (120) into Eq. (112), we can calculate the deformation of the effective potential \(\delta W\) as the series of \(\alpha\) \[\delta W=\sum_{i=1}^{\infty}\alpha^{i}W_{i}. \tag{122}\] From Eq (112), we can write \(W_{i}\) as \[W_{i}=\frac{d^{2}X_{i}}{dx^{2}}+\sum_{j=0}^{i-1}\left(Y_{i-j} \frac{dV_{j}}{dx}-2(\mathcal{E}_{j}-V_{j})\frac{dY_{i-j}}{dx}\right)-\sum_{j= 1}^{i-1}W_{i-j}X_{j}. 
\tag{123}\] For lower \(i\), the explicit forms are \[X_{1} =C_{1}-\frac{1}{2}\frac{dY_{1}}{dx}, \tag{124}\] \[X_{2} =C_{2}-\frac{1}{2}(\mathcal{E}_{0}-V_{0})Y_{1}^{2}+\frac{1}{8} \left(\frac{dY_{1}}{dx}\right)^{2}-\frac{1}{2}\left(C_{1}^{2}+\frac{dY_{2}}{ dx}\right)-\frac{1}{4}Y_{1}\frac{d^{2}Y_{1}}{dx^{2}}, \tag{125}\] and \[W_{1} =Y_{1}\frac{dV_{0}}{dx}-2(\mathcal{E}_{0}-V_{0})\frac{dY_{1}}{dx}- \frac{1}{2}\frac{d^{3}Y_{1}}{dx^{3}}, \tag{101}\] \[W_{2} =(Y_{2}-C_{1}Y_{1})\frac{dV_{0}}{dx}-2(\mathcal{E}_{0}-V_{0})\frac{ d(Y_{2}-C_{1}Y_{1})}{dx}-\frac{1}{2}\frac{d^{3}(Y_{2}-C_{1}Y_{1})}{dx^{3}}\] (102) \[+\frac{Y_{1}}{2}\left(2\frac{dV_{1}}{dx}+Y_{1}\frac{d^{2}V_{0}}{ dx^{2}}\right)-(\mathcal{E}_{0}-V_{0})\left[2\left(\frac{dY_{1}}{dx}\right)^{2}+Y_{1} \frac{d^{2}Y_{1}}{dx^{2}}\right]\] (103) \[+\frac{dY_{1}}{dx}\left(-2(\mathcal{E}_{1}-V_{1})+\frac{5Y_{1}}{ 2}\frac{dV_{0}}{dx}-\frac{1}{2}\frac{d^{3}Y_{1}}{dx^{3}}\right)-\frac{Y_{1}}{ 4}\frac{d^{4}Y_{1}}{dx^{2}}. \tag{104}\] We note that \(W_{i}\) contains arbitrary functions \(Y_{1},Y_{2},\cdots\). If we set \(V_{i}=0\) for \(i\geq 1\), the system is just a non-perturbative case whose effective potential is \(V_{0}\). Nevertheless, there is an ambiguity of effective potential due to the change of the master variable. In this case, the ambiguity of effective potential does not change the QNM spectrum, and we can obtain recursion relations among coefficients in parameterized QNM approach by setting the functions \(Y_{i}\) appropriately as shown in the next subsection. ### Recursion relations for odd parity case #### a.3.1 Recursion relations from the Regge-Wheeler potential As an example, we consider the odd parity case \[V=V_{0}=f_{0}\left(\frac{\ell(\ell+1)}{r^{2}}-\frac{3r_{H}}{r^{3}}\right). \tag{105}\] In this case, \(\mathcal{E}_{1}=0\) because there is no correction term in the effective potential \(V\), i.e., \(V_{i}=0\) for \(i\geq 1\). Setting12 Footnote 12: From the degrees of freedom of \(Y_{2}\), we can obtain the same relation as the first order relation among \(e_{j}\). Also, \(C_{1}\) does not affect the result. Thus, we can set \(Y_{2}=0\) and \(C_{1}=0\). 
\[Y_{1} =y_{j}\left(\frac{r_{H}}{r}\right)^{j}+y_{k}\left(\frac{r_{H}}{r} \right)^{k}, \tag{106}\] \[Y_{2} =0,\] (107) \[C_{1} =0, \tag{108}\] where \(j,k\geq-1\) are integers and \(y_{j},y_{k}\) are constants, Eqs (A22)-(A29) lead to \[\delta V+\delta W=\alpha y_{j}f_{0}\left(\frac{r_{H}}{r}\right)^{j} \left[\frac{2j\mathcal{E}_{0}}{r}+\frac{(j+1)(j-2\ell)(j+2\ell+2)}{2r^{3}}\right.\] \[-\frac{(2j+3)r_{H}\left(j(j+3)-2\left(\ell^{2}+\ell+3\right) \right)}{2r^{4}}+\frac{(j-2)(j+2)(j+6)r_{H}^{2}}{2r^{5}}\right]+(j\leftrightarrow k)\] \[+\alpha^{2}y_{j}^{2}f_{0}\left(\frac{r_{H}}{r}\right)^{2j}\bigg{[} -\frac{j(3j+1)\mathcal{E}_{0}}{r^{2}}+\frac{j(3j+2)r_{H}\mathcal{E}_{0}}{r^{3} }-\frac{3(j+1)^{2}(j-2\ell)(j+2\ell+2)}{4r^{4}}\] \[+\frac{(3j+4)r_{H}\left(3j^{3}+12j^{2}-j(8\ell(\ell+1)+1)-2(5 \ell(\ell+1)+9)\right)}{4r^{5}}\] \[-\frac{(3j+5)r_{H}^{2}\left(3j^{3}+15j^{2}-j(4\ell(\ell+1)+7)-6 \left(\ell^{2}+\ell+7\right)\right)}{4r^{6}}\] \[+\frac{3(j-2)(j+2)^{2}(j+6)r_{H}^{3}}{4r^{7}}\bigg{]}+(j \leftrightarrow k)\] \[+\alpha^{2}y_{j}y_{k}f_{0}\left(\frac{r_{H}}{r}\right)^{j+k} \bigg{[}-\frac{\mathcal{E}_{0}(j^{2}+4jk+j+k^{2}+k)}{r^{2}}+\frac{\mathcal{E} _{0}r_{H}(j^{2}+j(4k+2)+k(k+2))}{r^{3}}\] \[+\frac{1}{4r^{4}}\Big{(}j^{2}(-6k+4\ell(\ell+1)-11)-2j^{3}(k+3)-j ^{4}+4(k(k+6)+6)\ell-k(k+1)(k+2)(k+3)\] \[+2j(4(2k+3)\ell^{2}+4(2k+3)\ell-k(k(k+3)+4)-3)+4(k(k+6)+6)\ell^{2} \Big{)}\] \[+\frac{r_{H}}{4r^{5}}\Big{(}j^{2}(24k-8\ell(\ell+1)+47)+6j^{3}(k+ 4)+3j^{4}+k^{2}(47-8\ell(\ell+1))+3k^{4}+24k^{3}\] \[-2k(31\ell(\ell+1)+29)-16(5\ell(\ell+1)+9)+j(6k^{3}+24k^{2}-4k(8 \ell(\ell+1)+1)-62\ell(\ell+1)-58)\Big{)}\] \[+\frac{r_{H}^{2}}{4r^{6}}\Big{(}j^{2}(4(\ell^{2}+\ell-17)-30k)-6j ^{3}(k+5)-3j^{4}+4k^{2}(\ell^{2}+\ell-17)-3k^{4}-30k^{3}\] \[+k(38\ell(\ell+1)+161)+60(\ell^{2}+\ell+7)+j(-6k^{3}-30k^{2}+4k(4 \ell(\ell+1)+7)+38\ell(\ell+1)+161)\Big{)}\] \[+\frac{r_{H}^{3}(2j^{3}(k+6)+4j^{2}(3k+8)+j^{4}+2j(k+6)(k^{2}-8)+ k^{2}(k+4)(k+8)-96(k+3))}{4r^{7}}\bigg{]}\] \[+\mathcal{O}(\alpha^{3}),\] (A34) where we used the relation \(d/dx=fd/dr\). From this result, we can read \(\alpha_{i}\) for \(\delta V+\delta W\). We decompose the coefficients \(\alpha_{i}=A_{i}^{(1)}\alpha+A_{i}^{(2)}\alpha^{2}+\mathcal{O}(\alpha^{3})\) in Eq. (A4) as \[A_{i}^{(1)} =y_{j}\partial_{y_{j}}A_{i}^{(1)}+y_{k}\partial_{y_{k}}A_{i}^{(1)}\] (A35) \[A_{i}^{(2)} =\frac{y_{j}^{2}}{2}\partial_{y_{j}}^{2}A_{i}^{(2)}+\frac{y_{k}^{ 2}}{2}\partial_{y_{k}}^{2}A_{i}^{(2)}+y_{j}y_{k}\partial_{y_{j}}\partial_{y_{k }}A_{i}^{(2)}.\] (A36) Introducing \(\partial_{y_{j}}A_{i}^{(1)}=r_{H}^{-1}B_{i}^{(1)}\), \(\partial_{y_{j}}\partial_{y_{k}}A_{i}^{(2)}=r_{H}^{-2}B_{i}^{(2)}\), then one can see that the relations \[\partial_{y_{k}}A_{i}^{(1)} =r_{H}^{-1}B_{i}^{(1)}|_{j\to k},\] (A37) \[\partial_{y_{j}}^{2}A_{i}^{(2)} =\frac{r_{H}^{-2}}{2}B_{i}^{(2)}|_{k\to j},\] (A38) \[\partial_{y_{k}}^{2}A_{i}^{(2)}=\frac{r_{H}^{-2}}{2}B_{i}^{(2)}|_{j\to k} \tag{111}\] hold from the expression of Eq. (110). 
The explicit forms of \(B_{i}^{(1)}\) and \(B_{i}^{(2)}\) become \[B_{j+1}^{(1)} =2jr_{H}^{2}\mathcal{E}_{0}, \tag{112}\] \[B_{j+3}^{(1)} =\frac{1}{2}(j+1)(j-2\ell)(j+2\ell+2),\] (113) \[B_{j+4}^{(1)} =-\frac{1}{2}(2j+3)\left(j(j+3)-2\left(\ell^{2}+\ell+3\right) \right),\] (114) \[B_{j+5}^{(1)} =\frac{1}{2}(j-2)(j+2)(j+6), \tag{115}\] and \[B_{j+k+2}^{(2)} =-(j^{2}+4jk+j+k^{2}+k)r_{H}^{2}\mathcal{E}_{0}, \tag{116}\] \[B_{j+k+3}^{(2)} =(j^{2}+j(4k+2)+k(k+2))r_{H}^{2}\mathcal{E}_{0},\] (117) \[B_{j+k+4}^{(2)} =\frac{1}{4}\Big{(}j^{2}(-6k+4\ell(\ell+1)-11)-2j^{3}(k+3)-j^{4}+ 4(k(k+6)+6)\ell\] \[\quad-k(k+1)(k+2)(k+3)+2j(4(2k+3)\ell^{2}+4(2k+3)\ell\] \[\quad-k(k(k+3)+4)-3)+4(k(k+6)+6)\ell^{2}\Big{)},\] (118) \[B_{j+k+5}^{(2)} =\frac{1}{4}\Big{(}j^{2}(24k-8\ell(\ell+1)+47)+6j^{3}(k+4)+3j^{4}\] \[\quad+k^{2}(47-8\ell(\ell+1))+3k^{4}+24k^{3}-2k(31\ell(\ell+1)+29 )-16(5\ell(\ell+1)+9)\] \[\quad+j(6k^{3}+24k^{2}-4k(8\ell(\ell+1)+1)-62\ell(\ell+1)-58) \Big{)},\] (119) \[B_{j+k+6}^{(2)} =\frac{1}{4}\Big{(}j^{2}(4(\ell^{2}+\ell-17)-30k)-6j^{3}(k+5)-3j^{ 4}+4k^{2}(\ell^{2}+\ell-17)\] \[\quad-3k^{4}-30k^{3}+k(38\ell(\ell+1)+161)+60(\ell^{2}+\ell+7)\] \[\quad+j(-6k^{3}-30k^{2}+4k(4\ell(\ell+1)+7)+38\ell(\ell+1)+161) \Big{)},\] (120) \[B_{j+k+7}^{(2)} =\frac{1}{4}\Big{(}2j^{3}(k+6)+4j^{2}(3k+8)+j^{4}+2j(k+6)(k^{2}-8)\] \[\quad+k^{2}(k+4)(k+8)-96(k+3)\Big{)}. \tag{121}\] Because \(\mathcal{E}_{1}=0\) and then \(\omega=\omega_{0}\), from Eq. (109), we obtain a relation \[\sum_{j=0}^{\infty}\alpha_{j}e_{j}+\sum_{j,k=0}^{\infty}\alpha_{j}\alpha_{k}e_ {j,k}=0. \tag{122}\] From \(\mathcal{O}(\alpha)\) and \(\mathcal{O}(\alpha^{2})\) terms in Eq. (100), we obtain independent recursion relations among \(e_{j}\) and \(e_{j,k}\) \[0 =\sum_{a=1}^{5}B_{j+a}^{(1)}e_{j+a}\] \[=B_{j+1}^{(1)}e_{j+1}+B_{j+3}^{(1)}e_{j+3}+B_{j+4}^{(1)}e_{j+4}+B_ {j+5}^{(1)}e_{j+5}, \tag{101}\] and \[0 =\sum_{a,b=1}^{5}B_{j+a}^{(1)}B_{k+b}^{(1)}e_{j+a,k+b}+\frac{1}{2} \sum_{a=2}^{7}B_{j+k+a}^{(2)}e_{j+k+a}\] \[=B_{j+1}^{(1)}B_{k+1}^{(1)}e_{j+1,k+1}+B_{j+1}^{(1)}B_{k+3}^{(1)} e_{j+1,k+3}+B_{j+1}^{(1)}B_{k+4}^{(1)}e_{j+1,k+4}+B_{j+1}^{(1)}B_{k+5}^{(1)}e_{j+1,k+5}\] \[+B_{j+3}^{(1)}B_{k+1}^{(1)}e_{j+3,k+1}+B_{j+3}^{(1)}B_{k+3}^{(1)} e_{j+3,k+3}+B_{j+3}^{(1)}B_{k+4}^{(1)}e_{j+3,k+4}+B_{j+3}^{(1)}B_{k+5}^{(1)}e_{j+3,k+5}\] \[+B_{j+4}^{(1)}B_{k+1}^{(1)}e_{j+4,k+1}+B_{j+4}^{(1)}B_{k+3}^{(1)} e_{j+4,k+3}+B_{j+4}^{(1)}B_{k+4}^{(1)}e_{j+4,k+4}+B_{j+4}^{(1)}B_{k+5}^{(1)}e_{j+4,k+5}\] \[+B_{j+5}^{(1)}B_{k+1}^{(1)}e_{j+5,k+1}+B_{j+5}^{(1)}B_{k+3}^{(1)} e_{j+5,k+3}+B_{j+5}^{(1)}B_{k+4}^{(1)}e_{j+5,k+4}+B_{j+5}^{(1)}B_{k+5}^{(1)}e_{j+5,k+5}\] \[+\frac{1}{2}\Big{[}B_{j+k+2}^{(2)}e_{j+k+2}+B_{j+k+3}^{(2)}e_{j+k+ 3}+B_{j+k+4}^{(2)}e_{j+k+4}\] \[+B_{j+k+5}^{(2)}e_{j+k+5}+B_{j+k+6}^{(2)}e_{j+k+6}+B_{j+k+7}^{(2)} e_{j+k+7}\Big{]}. \tag{102}\] We note again that \(e_{j}=e_{j}^{(1)},e_{j,j}=e_{j}^{(2)}\) and \(e_{j,k}=e_{j,k}^{(1,1)}/2\) for \(j\neq k\), where numerical values of \(e_{j}^{(1)},e_{j}^{(2)},e_{j,k}^{(1,1)}\) can be seen in Tables 5 and 6. Using the first order recursion relation in Eq. (101), \(e_{j}\) with higher \(j\) can be written only from those with a few lower \(j\), _i.e.,_\(e_{0},e_{2}\) and \(e_{7}\)[45]. However, this is not the case for the second order recursion relation in Eq. (102). In fact, to calculate \(e_{j,k}\) with higher \(j,k\) using Eq. (102), we need the values of \(e_{j,0},e_{j,2},e_{j,7},e_{k,0},e_{k,2},e_{k,7}\). 
To improve this point, we study the case with the potential which contains first order correction terms in the next subsection. #### a.2.2 Improved recursion relation for \(e_{j,k}\) We consider the Regge-Wheeler potential with first order correction terms \[V =V_{0}+\delta V\] \[=f_{0}\left(\frac{\ell(\ell+1)}{r^{2}}-\frac{3r_{H}}{r^{3}} \right)+\frac{\alpha f_{0}}{r_{H}^{2}}\left[v_{j}\left(\frac{r_{H}}{r}\right)^ {j+5}+v_{k}\left(\frac{r_{H}}{r}\right)^{k+5}\right], \tag{103}\] where \(j,k\geq-1\) are integers and \(v_{j},v_{k}\) are constants. We also assume that \(j\neq 2\) and \(k\neq 2\). In this case, the QNM frequency behaves \[\omega=\omega_{0}+\alpha\omega_{1}+\alpha^{2}\omega_{2},\] (A54) with \[\omega_{1} =v_{j}e_{j+5}+v_{k}e_{k+5},\] (A55) \[\omega_{2} =v_{j}^{2}e_{j+5,j+5}+2v_{j}v_{k}e_{j+5,k+5}+v_{k}^{2}e_{k+5,k+5}.\] (A56) \(\mathcal{E}_{1}=2\omega_{0}\omega_{1}\) becomes \[\mathcal{E}_{1}=2v_{j}e_{j+5}\omega_{0}+2v_{k}e_{k+5}\omega_{0}.\] (A57) For this potential \(V=V_{0}+\delta V\), we set \[Y_{1} =y_{j}\left(\frac{r_{H}}{r}\right)^{j}+y_{k}\left(\frac{r_{H}}{r }\right)^{k},\] (A58) \[Y_{2} =0,\] (A59) \[C_{1} =0,\] (A60) with \[y_{j} =-\frac{2v_{j}r_{H}}{(j-2)(j+2)(j+6)},\] (A61) \[y_{k} =-\frac{2v_{k}r_{H}}{(k-2)(k+2)(k+6)}.\] (A62) Then, Eqs (A22)-(A29) lead to \[\delta V+\delta W=\alpha y_{j}f_{0}\left(\frac{r_{H}}{r}\right)^ {j}\left[\frac{2j\mathcal{E}_{0}}{r}+\frac{(j+1)(j-2\ell)(j+2\ell+2)}{2r^{3}}\right.\] \[-\left.\frac{(2j+3)r_{H}\left(j(j+3)-2\left(\ell^{2}+\ell+3\right) \right)}{2r^{4}}\right]+(j\leftrightarrow k)+\alpha^{2}\mathcal{E}_{1}\left[y_ {j}f_{0}\left(\frac{r_{H}}{r}\right)^{j}\frac{2j}{r}+y_{k}f_{0}\left(\frac{r_ {H}}{r}\right)^{k}\frac{2k}{r}\right]\] \[+\alpha^{2}y_{j}^{2}f_{0}\left(\frac{r_{H}}{r}\right)^{2j}\left[ -\frac{j(3j+1)\mathcal{E}_{0}}{r^{2}}+\frac{j(3j+2)r_{H}\mathcal{E}_{0}}{r^{3} }-\frac{3(j+1)^{2}(j-2\ell)(j+2\ell+2)}{4r^{4}}\right.\] \[+\left.\frac{(3j+4)r_{H}\left(3j^{3}+12j^{2}-j(8\ell(\ell+1)+1)-2 (5\ell(\ell+1)+9)\right)}{4r^{5}}\right.\] \[-\left.\frac{(3j+5)r_{H}^{2}\left(j^{3}+3j^{2}+j(1-4\ell(\ell+1) )-6(\ell^{2}+\ell-1)\right)}{4r^{6}}\right.\] \[-\left.\frac{3(j-2)(j+2)^{2}(j+6)r_{H}^{3}}{4r^{7}}\right]+(j \leftrightarrow k)\] \[+\alpha^{2}y_{j}y_{k}f_{0}\left(\frac{r_{H}}{r}\right)^{j+k}\bigg{[} -\frac{\mathcal{E}_{0}(j^{2}+4jk+j+k^{2}+k)}{r^{2}}+\frac{\mathcal{E}_{0}r_{H}( j^{2}+j(4k+2)+k(k+2))}{r^{3}}\] \[+\frac{1}{4r^{4}}\Big{(}j^{2}(-6k+4\ell(\ell+1)-11)-2j^{3}(k+3)-j^ {4}+4(k(k+6)+6)\ell-k(k+1)(k+2)(k+3)\] \[+2j(4(2k+3)\ell^{2}+4(2k+3)\ell-k(k(k+3)+4)-3)+4(k(k+6)+6)\ell^{2} \Big{)}\] \[+\frac{r_{H}}{4r^{5}}\Big{(}j^{2}(24k-8\ell(\ell+1)+47)+6j^{3}(k+ 4)+3j^{4}+k^{2}(47-8\ell(\ell+1))+3k^{4}+24k^{3}\] \[-2k(31\ell(\ell+1)+29)-16(5\ell(\ell+1)+9)+j(6k^{3}+24k^{2}-4k(8 \ell(\ell+1)+1)-62\ell(\ell+1)-58)\Big{)}\] \[-\frac{r_{H}^{2}}{4r^{6}}\Big{(}2j^{2}(3k-2(\ell^{2}+\ell-4))+2j^ {3}(k+4)+j^{4}+j(2k^{3}+6k^{2}-16k\ell(\ell+1)+4k-38\ell(\ell+1)+23)\] \[-4k^{2}(\ell^{2}+\ell-4)+k^{4}+8k^{3}+k(23-38\ell(\ell+1))-60( \ell^{2}+\ell-1)\Big{)}\] \[-\frac{r_{H}^{3}(2j^{3}(k+6)+4j^{2}(3k+8)+j^{4}+2j(k+6)(k^{2}-8)+ k^{2}(k+4)(k+8)-96(k+3))}{4r^{7}}\bigg{]}\] \[+\mathcal{O}(\alpha^{3}).\] (A63) We note that the above potential at \(\mathcal{O}(\alpha)\) does not have terms with \((r_{H}/r)^{j+5}\) and \((r_{H}/r)^{k+5}\) unlike Eq. (A34). 
Similar to the discussion in the previous subsection, we can read the coefficients \(B_{i}^{(1)}\) and \(B_{i}^{(2)}\) as \[B_{j+1}^{(1)} =2jr_{H}^{2}\mathcal{E}_{0},\] (A64) \[B_{j+3}^{(1)} =\frac{1}{2}(j+1)(j-2\ell)(j+2\ell+2),\] (A65) \[B_{j+4}^{(1)} =-\frac{1}{2}(2j+3)\left(j(j+3)-2\left(\ell^{2}+\ell+3\right) \right),\] (A66) and \[B_{j+1}^{(2)} =-2j(k-2)(k+2)(k+6)r_{H}^{2}\omega_{0}e_{k+5},\] (A67) \[B_{k+1}^{(2)} =-2k(j-2)(j+2)(j+6)r_{H}^{2}\omega_{0}e_{j+5},\] (A68) \[B_{j+k+2}^{(2)} =-(j^{2}+4jk+j+k^{2}+k)r_{H}^{2}\mathcal{E}_{0},\] (A69) \[B_{j+k+3}^{(2)} =(j^{2}+j(4k+2)+k(k+2))r_{H}^{2}\mathcal{E}_{0},\] (A70) \[B_{j+k+4}^{(2)} =\frac{1}{4}\Big{(}j^{2}(-6k+4\ell(\ell+1)-11)-2j^{3}(k+3)-j^{4}+ 4(k(k+6)+6)\ell\] \[-k(k+1)(k+2)(k+3)+2j(4(2k+3)\ell^{2}+4(2k+3)\ell\] \[-k(k(k+3)+4)-3)+4(k(k+6)+6)\ell^{2}\Big{)},\] (A71) \[B_{j+k+5}^{(2)} =\frac{1}{4}\Big{(}j^{2}(24k-8\ell(\ell+1)+47)+6j^{3}(k+4)+3j^{4}\] \[+k^{2}(47-8\ell(\ell+1))+3k^{4}+24k^{3}-2k(31\ell(\ell+1)+29)-16( 5\ell(\ell+1)+9)\] \[+j(6k^{3}+24k^{2}-4k(8\ell(\ell+1)+1)-62\ell(\ell+1)-58)\Big{)}, \tag{100}\] \[B^{(2)}_{j+k+6} =-\frac{1}{4}\Big{(}2j^{2}(3k-2(\ell^{2}+\ell-4))+2j^{3}(k+4)+j^{4}\] \[+j(2k^{3}+6k^{2}-16k\ell(\ell+1)+4k-38\ell(\ell+1)+23)\] \[-4k^{2}(\ell^{2}+\ell-4)+k^{4}+8k^{3}+k(23-38\ell(\ell+1))-60( \ell^{2}+\ell-1)\Big{)},\] (101) \[B^{(2)}_{j+k+7} =-\frac{1}{4}\Big{(}(2j^{3}(k+6)+4j^{2}(3k+8)+j^{4}+2j(k+6)(k^{2}-8)\] \[+k^{2}(k+4)(k+8)-96(k+3))\Big{)}. \tag{102}\] Then, the QNM frequency can be calculated from Eq. (100), and it should be same as Eq. (101) with Eqs (102) and (103). From this condition, we obtain independent recursion relations at \(\mathcal{O}(\alpha^{2})\) as \[\frac{1}{2}(j-2)(j+2)(j+6)(k-2)(k+2)(k+6)e_{j+5,k+5}\] \[=2\sum_{a,b=1}^{4}B^{(1)}_{j+a}B^{(1)}_{k+b}e_{j+a,k+b}+\sum_{a=2 }^{7}B^{(2)}_{j+k+a}e_{j+k+a}+B^{(2)}_{j+1}e_{j+1}+B^{(2)}_{k+1}e_{k+1}. \tag{103}\] We note again that \(j,k\geq-1\) and \(j\neq 2,k\neq 2\) in the above equation. In fact, we can obtain further independent recursion relations for \(e_{j,k}\). We consider the potential in Eq. (102) with \(j\geq-1,j\neq 2\) and \(k\geq-5\). Setting \[Y_{1} =y_{j}\left(\frac{r_{H}}{r}\right)^{j}, \tag{104}\] \[Y_{2} =0,\] (105) \[C_{1} =0, \tag{106}\] with \[y_{j}=-\frac{2v_{j}r_{H}}{(j-2)(j+2)(j+6)}, \tag{107}\] we can calculate \(\delta V+\delta W\) from Eqs (101)-(102), and derive the recursion relations similar to the above discussion. Here, we only show the result: \[0 =(j-2)(j+2)(j+6)e_{j+5,k+5}-(2j+3)(j(j+3)-2(\ell^{2}+\ell+3))e_{j+ 4,k+5}\] \[+(j+1)(j-2\ell)(j+2\ell+2)e_{j+3,k+5}+4jr_{H}^{2}\mathcal{E}_{0}e _{j+1,k+5}\] \[+4jr_{H}^{2}\omega_{0}e_{j+1}e_{k+5}-(2j+k+5)e_{j+k+6}+(2j+k+6)e_{ j+k+7}. \tag{108}\] Using Eqs. (100) and (101), the second order coefficients \(e_{j,k}\) with higher \(j,k\) can be written by those with \(j,k\leq 7\) and the first order coefficients \(e_{j}\).13 We note that we can derive recursion relations for higher order \(\alpha\) from a straightforward extension of the above discussion. Footnote 13: Some of coefficients \(e_{j,k}\) with \(j,k\leq 7\) are not independent. For example, we can choose \(e_{0,0},e_{1,0},e_{1,1},e_{2,0},e_{2,1},e_{2,2},e_{3,0},e_{3,1},e_{3,2},e_{3,3},e_{7,0},e_{7,1},e_{7,2},e_{7,3}\) and \(e_{7,7}\) as independent \(e_{j,k}\), then, the other \(e_{j,k}\) can be written by these. 
### Reduction of the effective potential Using the ambiguity of the effective potential, we can reduce the effective potential so that \(\delta V\) only contains low order coefficients \(\alpha_{j}\). In [45], only the first order case is discussed, but in fact the argument carries over to higher orders. At linear order, we can reduce the effective potential by using the \(O(\alpha)\) ambiguity, following [45]. At quadratic order, setting \(Y_{1}=0\) and \(Y_{2}=y_{j}(r_{H}/r)^{j}\) for the odd parity perturbation, the ambiguity of the effective potential at \(O(\alpha^{2})\) takes the same form as in the linear order case. Then, by the same argument as in the linear case of [45], we can reduce the effective potential at \(O(\alpha^{2})\) so that \(\delta V\) only contains \(\alpha_{0},\alpha_{1},\alpha_{2}\) and \(\alpha_{7}\) terms. Repeating this process order by order, we can reduce the \(O(\alpha^{n})\) effective potential.
2309.10923
Semi-automatic staging area for high-quality structured data extraction from scientific literature
We propose a semi-automatic staging area for efficiently building an accurate database of experimental physical properties of superconductors from the literature, called SuperCon2, to enrich the existing manually-built superconductor database SuperCon. Here we report our curation interface (SuperCon2 Interface) and a workflow managing the state transitions of each examined record, to validate the dataset of superconductors extracted from PDF documents with Grobid-superconductors in a previous work. This curation workflow allows both automatic and manual operations: the former includes ``anomaly detection'', which scans new data to identify outliers, and a ``training data collector'' mechanism that collects training examples from manual corrections. Such a training data collection policy is effective in improving the machine-learning models with a reduced number of examples. For manual operations, the SuperCon2 interface is developed to increase efficiency during manual correction by providing smart editing features and an enhanced PDF document viewer. We show that our interface significantly improves the curation quality by boosting precision and recall as compared with the traditional ``manual correction''. Our semi-automatic approach would provide a solution for achieving a reliable database with text-data mining of scientific documents.
Luca Foppiano, Tomoya Mato, Kensei Terashima, Pedro Ortiz Suarez, Taku Tou, Chikako Sakai, Wei-Sheng Wang, Toshiyuki Amagasa, Yoshihiko Takano, Masashi Ishii
2023-09-19T20:53:13Z
http://arxiv.org/abs/2309.10923v2
# Semi-automatic staging area for high-quality structured data extraction from scientific literature ###### Abstract We propose a semi-automatic staging area for efficiently building an accurate database of experimental physical properties of superconductors from literature, called SuperCon\({}^{2}\), to enrich the existing manually-built superconductor database SuperCon. Here we report our curation interface (SuperCon\({}^{2}\) Interface) and a workflow managing the state transitions of each examined record, to validate the dataset of superconductors from PDF documents collected using Grobid-superconductors in a previous work [1]. This curation workflow allows both automatic and manual operations, the former contains "anomaly detection" that scans new data identifying outliers, and a "training data collector" mechanism that collects training data examples based on manual corrections. Such training data collection policy is effective in improving the machine-learning models with a reduced number of examples. For manual operations, the interface (SuperCon\({}^{2}\) interface) is developed to increase efficiency during manual correction by providing a smart interface and an enhanced PDF document viewer. We show that our interface significantly improves the curation quality by boosting precision and recall as compared with the traditional "manual correction". Our semi-automatic approach would provide a solution for achieving a reliable database with text-data mining of scientific documents. RESEARCH PAPER m RESEARCH PAPER materials informatics, superconductors, machine learning, database, tdm ## 1 Introduction The emergence of new methodologies using machine learning for materials exploration has given rise to a growing research area called materials informatics (MI) [2]. This field leverages the knowledge of the materials data accumulated in the past to efficiently screen candidates of the materials with desired properties. As a matter of course, such an approach requires a larger amount of material-related data for training models. Researchers have been developing large aggregated databases of physical properties generated by first-principles calculations based on Density Functional Theory (DFT), such as Materials Project [3], JARVIS (Joint Automated Repository for Various Integrated Simulations) [4], NOMAD (Novel Materials Discovery) [5], that played a role of a strong driving force for the development of materials informatics. Using DFT data for machine learning (ML) in materials science has become popular since, in principle, it allows researchers to simulate and obtain various types of physical properties of the target materials only by knowing the crystal structures of the subjects. Those DFT codes are designed to reproduce/simulate the physical properties that should be observed by experiments in reality. Nonetheless, caution must be exercised while utilising these computed figures for constructing ML models aimed at steering experiments. This caution arises due to the potential lack of validity in their predictions when dealing with specific simplifications of the interactions between atoms and electrons in solids, such as electron-electron Coulomb correlation, spin-orbit coupling, and similar factors. On the contrary, accumulated datasets of experimental data from scientific publications are still scarce, despite abundant publication availability, and exponential growth in materials science [6]. 
Currently, only a few limited resources exist, such as the Pauling File [7] and SuperCon [8], necessitating reliance on manual extraction methods. This scarcity can be attributed to inadequate infrastructure and a shortage of expertise in computer science within the materials science field. The SuperCon database was built manually from 1987 [8] by the National Institute for Materials Science (NIMS) in Japan and it is considered a reliable source of experimental data on superconductors [9; 10; 11; 12]. However, the updates of SuperCon have become increasingly challenging due to the high publication rate. In response to the need for a more efficient approach to sustain productivity, we embarked on the development of an automated system for extracting material and property information from the text contained in relevant scientific publications. This automated process enabled the rapid creation of "SuperCon\({}^{2}\) Database", a comprehensive database of superconductors containing around 40000 entries, within an operational duration of just a few days [1]. Matching the level of quality seen in SuperCon while simultaneously automating the extraction of organised data can be achieved with a properly designed curation process. We use the term _curation_ to describe the overall process of reviewing and validating database records, while _correction_ refers to the specific action of altering the values of one or more properties within an individual record. At the moment of writing this article, we are not aware of any other curation tool focusing on structured databases of extracted information. There are several tools for data annotation, such as Inception [13], and Doccano [14] which concentrate on text labelling and classification. In this work, we designed and developed a workflow with a user interface, "SuperCon\({}^{2}\) Interface", crafted to produce structured data of superior quality and efficiency to the one obtained by the "traditional" manual approach consisting of reading documents and noting records, usually on an Excel file. We developed this framework around the specific use case of SuperCon, however, our goal is to be adapted to alternative data frameworks. Our contributions can be summarised as follows: * We developed a workflow and a user interface that allow the curation of a machine-collected database. We demonstrate that using it for data correction resulted in higher quality than the "traditional" (manual) approach. * We devise an anomaly detection process for incoming data lower rejection rate (false positive rate) from domain experts. * We propose a mechanism that selects training data based on corrected records, and we demonstrate that such selections are rapidly improving the ML models. The subsequent sections, Section 2 describes the curation workflow and Section 3 the user interface on top of it. Finally, we discuss our evaluation experiments and results in Section 4. ## 2 Curation workflow The curation of the SuperCon2 Database acts as a workflow where user actions result in database records state transitions (Figure 1). Allowed manual actions include a) _mark as valid_ (validation) when a record is considered correct or corrected by someone else. When a record is not valid, users can: b) _mark as invalid_ when considered "potentially" invalid (or the curator is not confident), c) perform _manual correction_ to update it according to the information from the original PDF document, and d) _remove_ the record when it was not supposed to be extracted. 
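The sketch below summarises, in illustrative Python, how these manual actions map onto record states; the status names anticipate the curation status values defined in Section 2.1.1, while the class and function names are assumptions made for this example and are not part of the SuperCon\({}^{2}\) code base.

```python
# Illustrative sketch of how the manual curation actions map onto record states.
# Status names follow Section 2.1.1; everything else (class and function names,
# the shape of a record) is an assumption made for this example only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    data: dict
    status: str = "new"               # new | curated | validated | invalid | obsolete | removed
    error_type: Optional[str] = None

def mark_valid(record: Record) -> None:
    record.status = "validated"       # a) record considered correct

def mark_invalid(record: Record, error_type: str) -> None:
    record.status = "invalid"         # b) record considered (potentially) invalid
    record.error_type = error_type

def correct(record: Record, new_data: dict, error_type: str) -> Record:
    """c) manual correction: the error type is stored on the original record,
    which becomes 'obsolete', while the updated values go into a new 'curated' record."""
    record.error_type = error_type
    record.status = "obsolete"
    return Record(data={**record.data, **new_data}, status="curated")

def remove(record: Record, error_type: str) -> None:
    record.status = "removed"         # d) record should not have been extracted
    record.error_type = error_type
```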
Footnote 2: “internal status” indicates that their records should be hidden in the interface Besides manual operations from users, this workflow supports also automatic actions: "anomaly detection" for pre-screening records (Section 2.2) and the "training data collector" for accumulating training data for improving ML models (Section 2.3). Although only the most recent version of a record can be viewed on this system, the correction history is recorded (Section 3.3). ### Workflow control The workflow state is determined by the "curation status" (Section 2.1.1), the user action, and the error type (Section 2.1.2). #### 2.1.1 Curation status The curation status (Figure 1) is defined by _type_ of action, manual or automatic, and _status_, which can assume the following values: * **new**: default status when a new record is created. * **curated**: the record has been amended manually. * **validated**: the record was manually marked as valid. * **invalid**: the record is wrong or inappropriate for the situation (e.g., \(\mathrm{T}_{\mathrm{m}}\) or \(\mathrm{T}_{\mathrm{curie}}\) extracted as superconducting critical temperature). * **obsolete**: the record has been updated and the updated values are stored in a new record (internal status1). Footnote 1: “internal status” indicates that their records should be hidden in the interface * **removed**: the record has been removed by a curator (internal status). #### 2.1.2 Error types We first introduced _error type_ in [1] and extended their scope in this work to consider data curation and anomaly detection. Users are required to select one _Error Type_ at every record update or removal. This information is stored in the "original" record and can be different at every record modification. The error type values can be summarised as follows: * **From table**: the entities Material \(\rightarrow\) T\({}_{\rm c}\)\(\rightarrow\) Pressure are identified in a table. At the moment, table extraction is not performed * **Extraction**: The material, temperature, and pressure are not extracted (no box) or extracted incorrectly. * **Linking**: The material is incorrectly linked to the T\({}_{\rm c}\) given that the entities are correctly recognised. * **T\({}_{\rm c}\) classification**: The temperature is not correctly classified as "superconductors critical temperature" (e.g., Curie temperature, Magnetic temperature...). * **Composition resolution**: The exact composition cannot be resolved (e.g., the stoichiometric values cannot be resolved). * **Value resolution**: The extracted formula contains variables that cannot be resolved, even after having read the paper. This includes when data is from tables * **Anomaly detection**: The data has been modified by anomaly detection, which facilitates their retrieval from the interface. * **Curation amends**: The curator is updating the data which does not present issues due to the automatic system. ### Anomaly detection Anomaly detection is the process of identifying unusual events or patterns in data. In our context, this means identifying data that are greatly different from the expected values. This post-process was introduced in a limited scope to draw attention to certain cases during the curation. The anomaly detection uses a rule-based approach and marks any record that matches the following conditions * the extracted T\({}_{\rm c}\) is greater than room temperature (273 K), negative, or contains invalid characters and cannot be parsed (e.g. 
"41]") * the chemical formula cannot be processed by an ensemble composition parser that combines Pymatgen [15], and text2chem [16] * 250 GPa. Records identified as anomalies have _status_ "invalid" and _error type_ "anomaly detection" for easy identification. Since this process may find false positives, its output requires validation from curators. For example, in certain contexts, T\({}_{\rm c}\) values above room temperature or applied pressure up to 500 GPa may be valid in researchers' hypotheses, calculations, or simulated predictions. We ran the anomaly detection on the full SuperCon\({}^{2}\) Database (40324 records [1]). The anomaly detection identified 1506 records with invalid T\({}_{\rm c}\), 5021 records with an incomplete chemical formula, 304 records with invalid applied pressure, and 1440 materials linked to multiple T\({}_{\rm c}\) values. Further analysis and cross-references with contrasting information may be added in future. ### Automatic training data collector The curation process is a valuable endeavour demanding significant knowledge and human effort. To maximise the use of this time for collecting as much information as possible. We integrated an automatic procedure in the curation process that, for every correction, accumulates the related data examples that can be used to improve the underlying ML models. #### 2.3.1 Training data collection In the event of a correction (update, removal) in a database record, this process retrieves the corresponding raw data: the text passage, the recognised entities (spans), and the layout tokens information. This information is sufficient to be exported as training examples, which can be examined and corrected, and feedback to the ML model. #### 2.3.2 Training data management We designed a specific page of the interface (Section 3) to manage the collected data (Figure 2) in which each row corresponds to a training example composed by the decorated text showing the identified entities, the document identifier, and the status. The users can examine the data, delete it, send it to the annotation tool to be corrected, and then export them. We integrated our interface with Label-studio [17] for the correction of the collected training examples. Label-studio is an open-source, python-based, and modern interface supporting many different TDM tasks (NER, topic modelling, image recognition, etc.). ## 3 Curation interface The workflow is operated through the user interface, which offers several key features to facilitate the data curation process (Figure 1). It provides a comprehensive view of materials and their related properties as a table which includes search, filtering, and sorting functionality (Figure 3). The detailed schema, including examples, is reported in our previous work [1]. During the curation process, it is often necessary to switch back and forth between the database record and the related context in the paper (the related paragraph or sentence). Our interface provides a viewer for individual documents, which visualises in the same window a table with the extracted records and the original PDF document decorated with annotations that identify the extracted materials and properties (Figure 4). ### Manual curation approach In this section, we discuss our strategy concerning manual curation, which is still indispensable for developing high-quality structures. We selected curators from domain experts in the field, to certify sufficient data quality. 
Nevertheless, as confirmed from our experiment in Section 4.3, the experience of each individual may have an impact on the final result. We followed two principles to guarantee robustness in the curation process. First, we built solid curation documentation as a form of example-driven guidelines with an iterative approach we first introduced in [18]. Then, we used a double-round validation approach, in which the data was initially corrected by one person, and validated in a second round, by a different individual. ### _Curation guidelines_ The guidelines consist mainly of two parts: the general principles and the correction rules with examples of solutions. The guidelines are designed to provide general information applied to corrections and very basic explanations containing illustrations for a faster understanding (e.g. the meaning of the colours of the annotations). Differently from our previous work [18], these guidelines are divided into examples for different scenarios based on the error types mentioned in Section 2.1.2. Each example described the initial record, its context, the expected corrected record and a brief explanation, as illustrated in Figure 5. ### _Curation and processing logs_ The Supercon\({}^{2}\) interface gives access to information regarding the ingestion (processing log) and the curation process (curation log). The processing log is filled up when the new data is ingested, it was built to have minimal functions able to explain why certain documents haven't been processed (Figure 6 top). For example, sometimes documents fail because they don't contain any text (image PDF documents) or they are too big (more than 100 pages). The curation log provides a view of what, when and how a record has been corrected (Figure 6 bottom). ## 4 Results and evaluation In this section, we illustrate the experiments we have run to evaluate our work. The evaluation is composed of three sets of results. The anomaly detection rejection rate (Section 4.1) indicates how many anomalies were rejected by curators after validation. Then, we demonstrate that the training data automatically selected contributed to improving the ML model with a small set of examples (Section 4.2) Finally, we evaluated the quality of the data extraction using the interface (and the semi-automatic TDM process) against the classical method of reading the PDF articles and noting the experimental information in an Excel file. In Section 4.3 we find out that using the interface improves the quality of the curated data by reducing missing experimental data. ### _Anomaly detection rejection rate_ We evaluated the anomaly detection by observing the "rejection rate" which consists of the number of detected anomalies that were rejected by human validation. Running the anomaly detection on a database subset with 667 records, it found 17 anomalies in \(\mathrm{T_{c}}\), 1 anomaly in applied pressure, and 16 anomalies in the chemical formulas. Curators examined each reported record and rejected 4 (23%) anomalies in \(\mathrm{T_{c}}\), 6 anomalies (37%) in chemical formulas and 0 anomalies in applied pressure. This indicates an appropriate low rate of false positives although a study with a larger dataset might be necessary. ### Training data generation We selected around 400 records in the Supercon2 Database that were marked as invalid by the anomaly detection process and we corrected them following the curation guidelines (Section 3.2). 
Then, we examined the corresponding training data corrected by the interface (Section 2.3) and obtained a set of 352 training data examples for our ML models. We call the obtained dataset _curation_ to be distinguished from the original SuperMat dataset which is referred to as _base_. Footnote 2: In our previous work [1] we reported 77.03% F1-score. There is a slight decrease in absolute scores between DeLFT 0.2.8 and DeLFT 0.3.0. One cause may be the use of different hyperparameters in version 0.3.0 such as batch size and learning rate. However, the most probable cause could be the impact of using the Huggingface tokenizers library which is suffering from quality issues [https://github.com/kermitt2/delft/issues/150](https://github.com/kermitt2/delft/issues/150). We prepared our experiment using SciBERT [19] that we fine-tuned for our downstream task as in [1]. We trained five models that we evaluated using a fixed holdout dataset from SuperMat averaging the results to smooth out the fluctuations. We use the DeLFT (Deep Learning For Text) [20] library for training, evaluating, and managing the models for prediction. A model can be trained with two different strategies: 1. _"from scratch"_: when the model is initialised randomly. We denote this strategy with an _(s)_. 2. _"incremental"_: when the initial model weights are taken from an already existing model. We denote this strategy with an _(i)_. The latter can be seen as a way to "continue" the training from a specific checkpoint. We thus define three different training protocols: 1. **base(s)**: using the _base_ dataset and training from scratch (s). 2. **(base+curation)(s)**: using both the _base_ and _curation_ datasets and training from scratch (s). 3. **base(s)+(base+curation)(i)**: Using the _base_ dataset to train from scratch (s), and then continuing the training with the _curation_ dataset (i). We merge "curation" with the base dataset because the curation dataset is very small compared to "base", and we want to avoid catastrophic forgetting [21] or overfitting. The trained models are then tested using a fixed holdout dataset that we designed in our previous work [1] and the evaluation scores are shown in Table 1. This experiment demonstrates that with only 352 examples (2% of the SuperMat dataset) comprising 1846 additional entities (11% of the entities from the SuperMat dataset) (Table 2), we obtain an improvement of F1-score from 76.67%2 to values between 77.44% (+0.77) and 77.48% (+0.81) for (base+curation)(s) and base(s)+(base+curation)(i), respectively. Footnote 2: In our previous work [1] we reported 77.03% F1-score. There is a slight decrease in absolute scores between DeLFT 0.2.8 and DeLFT 0.3.0. One cause may be the use of different hyperparameters in version 0.3.0 such as batch size and learning rate. However, the most probable cause could be the impact of using the Huggingface tokenizers library which is suffering from quality issues [https://github.com/kermitt2/delft/issues/150](https://github.com/kermitt2/delft/issues/150). This experiment gives interesting insight relative to the positive impact on the way we select the training data. However, there are some limitations: the _curation_ dataset is small compared to the _base_ dataset. This issue could be verified by correcting all the available training data, repeating this experiment, and studying the interpolation between the size of the two datasets and the obtained evaluation scores. 
A second limitation is that the hyperparameters we chose for our model, in particular, the learning rate and batch size could be still better tuned to obtain better results with the second and third training protocols. ### Data quality We conducted an experiment to evaluate the effectiveness and accuracy of data curation using two methods: a) the user interface (_interface_), and b) the "traditional" manual approach consisting of reading PDF documents and populating an Excel file (_PDF documents_). We selected a dataset of 15 papers, which we assigned to three curators -- a senior researcher (SD), a PhD student (PS), and a master's student (MS). Each curator received 10 papers: half to be corrected with the _interface_ and half with the _PDF Document_ method. Overall, each pair of curators had 5 papers in common which they had to process using opposite methods. For instance, if curator A receives paper 1 to be corrected with the _interface_, curator B, who receives the same paper 1, will correct it with the _PDF document_ method. After curation, a fourth individual manually reviewed the curated content. The raw data is available in the Appendix A. We evaluated the curation considering a double perspective: time and correctness. Time was calculated as the accumulated minutes required using each method. Correctness was assessed using standard measures such as precision, recall, and the F1-score. Precision measures the accuracy of the extracted information, while recall assesses the ability to capture all expected information. F1-Score is a harmonic means of precision and recall. #### 4.3.1 Discussion Overall, both methods required the same accumulated time: 185 minutes using the _interface_ and 184 minutes using the _PDF Document_ method. When the experiment was carried out, not all the curators were familiar with the _interface_ method. Although they had access to the user documentation, they had to get acquainted with the user interface, thus the accumulated 185 minutes included such activities. We examined the quality of the extracted data and we observed an improvement of +5.55% in precision and a substantial +46.69% in recall when using the _interface_ as compared with the _PDF Document_ method (Table 3). The F1-score improved by 39.35%. The disparity in experience significantly influenced the accuracy of curation, particularly in terms of high-level skills. Senior researchers consistently achieved an average F1-Score approximately 13% higher than other curators (see Table 4). Furthermore, we observed a modest improvement between master's students and PhD students. These findings indicate also that for large-scale projects, employing master students instead of PhD students may be a more cost-effective choice. Thus, using only a few senior researchers for the second round of validation (Section 3.1). Finally, the collected data suggest that all three curators had overall more corrected results by using the interface as illustrated in Table 5. The results of this experiment confirmed that our curation interface and workflow significantly improved the quality of the extracted data, with an astonishing improvement in recall, thus preventing curators from overlooking important information. ## 5 Code availability This work is available at [https://github.com/lfoppiano/supercon2](https://github.com/lfoppiano/supercon2). 
The repository contains the code of the SuperCon\({}^{2}\) interface, the curation workflow, and the ingestion processes for harvesting the SuperCon\({}^{2}\) Database of materials and properties. The guidelines are accessible at [https://supercon2.readthedocs.io](https://supercon2.readthedocs.io). ## 6 Conclusions We built a semi-automatic staging area, called SuperCon\({}^{2}\), to efficiently validate new experimental records automatically collected from superconductor research articles (SuperCon\({}^{2}\) Database [1]) before they are ingested into the existing, manually-built database of superconductors, SuperCon [8]. The system provides a curation workflow and a user interface (SuperCon\({}^{2}\) Interface) tailored to efficiently support domain experts in data correction and validation, with fast context switching and an enhanced PDF viewer. Under the hood, the workflow runs "anomaly detection" to automatically identify outliers and a "training data collector" based on human corrections, to efficiently accumulate training data to be fed back to the ML model. Compared with the traditional manual approach of reading PDF documents and extracting information into an Excel file, SuperCon\({}^{2}\) significantly improves the curation quality, by approximately +6% in precision and +47% in recall. In the future, this work can be expanded to support other materials science domains, such as magnetic materials, spintronics and thermoelectric research, and the evaluation can be extended to a larger dataset. ## Acknowledgements Our warmest thanks to Patrice Lopez, the author of Grobid [22], DeLFT [20], and other open-source projects, for his continuous support and inspiration with ideas, suggestions, and fruitful discussions. We thank Pedro Baptista de Castro for his support during this work. Special thanks to Erina Fujita for useful tips on the manuscript. ## Funding This work was partly supported by MEXT Program: Data Creation and Utilization-Type Material Research and Development Project (Digital Transformation Initiative Center for Magnetic Materials) Grant Number JPMXP1122715503. ## Notes on Contributors LF wrote the manuscript and KT helped with the editing. LF and POS discussed the ML results and experiments. LF implemented the workflow as a standalone service, and TM wrote the front end of the user interface. LF designed the user interface experiment with KT, TT and WS as curators. KT led the materials-science work on the data with CS, TT and WS. KT, TA, YT and MI revised the paper. YT and MI supervised the work of the respective teams.
2309.04956
Anatomy Completor: A Multi-class Completion Framework for 3D Anatomy Reconstruction
In this paper, we introduce a completion framework to reconstruct the geometric shapes of various anatomies, including organs, vessels and muscles. Our work targets a scenario where one or multiple anatomies are missing in the imaging data due to surgical, pathological or traumatic factors, or simply because these anatomies are not covered by image acquisition. Automatic reconstruction of the missing anatomies benefits many applications, such as organ 3D bio-printing, whole-body segmentation, animation realism, paleoradiology and forensic imaging. We propose two paradigms based on a 3D denoising auto-encoder (DAE) to solve the anatomy reconstruction problem: (i) the DAE learns a many-to-one mapping between incomplete and complete instances; (ii) the DAE learns directly a one-to-one residual mapping between the incomplete instances and the target anatomies. We apply a loss aggregation scheme that enables the DAE to learn the many-to-one mapping more effectively and further enhances the learning of the residual mapping. On top of this, we extend the DAE to a multiclass completor by assigning a unique label to each anatomy involved. We evaluate our method using a CT dataset with whole-body segmentations. Results show that our method produces reasonable anatomy reconstructions given instances with different levels of incompleteness (i.e., one or multiple random anatomies are missing). Codes and pretrained models are publicly available at https://github.com/Jianningli/medshapenet-feedback/tree/main/anatomy-completor
Jianning Li, Antonio Pepe, Gijs Luijten, Christina Schwarz-Gsaxner, Jens Kleesiek, Jan Egger
2023-09-10T08:07:58Z
http://arxiv.org/abs/2309.04956v1
# Anatomy Completor: A Multi-class Completion Framework for 3D Anatomy Reconstruction ###### Abstract In this paper, we introduce a completion framework to reconstruct the geometric shapes of various anatomies, including organs, vessels and muscles. Our work targets a scenario where one or multiple anatomies are missing in the imaging data due to surgical, pathological or traumatic factors, or simply because these anatomies are not covered by image acquisition. Automatic reconstruction of the missing anatomies benefits many applications, such as organ 3D bio-printing, whole-body segmentation, animation realism, paleoradiology and forensic imaging. We propose two paradigms based on a 3D denoising auto-encoder (DAE) to solve the anatomy reconstruction problem: (i) the DAE learns a _many-to-one_ mapping between incomplete and complete instances; (ii) the DAE learns directly a _one-to-one_ residual mapping between the incomplete instances and the target anatomies. We apply a loss aggregation scheme that enables the DAE to learn the _many-to-one_ mapping more effectively and further enhances the learning of the residual mapping. On top of this, we extend the DAE to a multiclass complictor by assigning a unique label to each anatomy involved. We evaluate our method using a CT dataset with whole-body segmentations. Results show that our method produces reasonable anatomy reconstructions given instances with different levels of incompleteness (i.e., one or multiple random anatomies are missing). Codes and pretrained models are publicly available at [https://github.com/Jianningli/medshapenet-feedback/tree/main/anatomy-completor](https://github.com/Jianningli/medshapenet-feedback/tree/main/anatomy-completor). Keywords:Anatomical Shape CompletionShape Reconstruction Shape Inpainting Whole-body Segmentation Residual Learning MedShapeNet Diminished Reality ## 1 Introduction 3D anatomy reconstructions play important roles in medical applications and beyond, such as (1) 3D bio-printing and organ transplantation, where damaged/diseased organs from traumatic injuries or pathologies are replaced by 3D bio-printed artificial organs [23]; (2) paleoradiology and forensic imaging, in which the full anatomical structures are re-established based on the skeleton remains [31, 13, 21]; (3) whole-body segmentation, where pseudo labels of whole-body anatomies are generated given only sparse manual annotations [8, 26, 30]; (4) animation realism [2]; and (5) diminished reality, where the 3D view of an anatomy blocked by medical instruments is reconstructed. Such an anatomy reconstruction task is well aligned with the shape completion problem in computer vision, which is commonly solved based on the symmetry of geometric shapes [28] or using learning-based approaches, where auto-encoder and generative adversarial networks (GANs) [33, 3, 32, 25] are popular choices. Recent years have witnessed a growing interest in medical shape completion, with the rapid development of medical deep learning [4]. Nevertheless, existing works in this direction are mostly focused on reconstructing a pre-defined and geometrically simple bone structure, such as the cranium [10, 11, 16, 17, 14, 32, 18, 22], maxilla [34], spine [19] and teeth [29], which restricts their scope of application to implant and prosthetic design. Existing methods for medical shape completion are commonly based on variants of auto-encoder and U-Net [10] and statistical shape models (SSMs) [5, 24]. 
Reconstructing random anatomies with varied geometric complexity is significantly harder than when the reconstruction target is pre-defined as in prior works. To realize the former, a network learns not only to identify the targets (i.e., what are missing in the input) but to reconstruct them, a process analogous to object instance segmentation [6], where a network first identifies all objects in an image and then segments them. However, random anatomy reconstruction has not been covered by existing research, which only completes one fixed anatomy with missing part(s), and remains to be an open problem. The goal of this work is to extend medical shape completion to the whole body, covering the majority of anatomy classes, and to realize random anatomy reconstruction in a single shape completion framework. To achieve this goal, we derived a 3D anatomical shape dataset from a fully-segmented CT dataset and trained a 3D convolutional denoising auto-encoder on the dataset to learn a mapping relationship between the incomplete instances and the corresponding targets, i.e., the full segmentations or the missing anatomies. Both quantitative and qualitative evaluations have demonstrated the effectiveness of our proposed method towards solving the anatomical shape reconstruction problem. ## 2 Methods ### Problem Formulation Reconstructing random missing anatomies is formulated as a shape completion problem, where the goal is to learn a mapping \(\mathcal{F}\) between the incomplete instances from \(N\) subjects \(\mathcal{X}=\left\{x_{n}^{m}\right\}_{n=1,...,N}^{m=1,...,M}\) and the corresponding complete ground truth \(\mathcal{Y}=\left\{y_{n}\right\}_{n=1}^{N}\) derived from whole-body anatomy segmentations. For subject \(x_{n}\), there exist \(M\) instances i.e., \(x_{n}^{1},x_{n}^{2},...,x_{n}^{m},...,x_{n}^{M}\) with different degrees of incompleteness, where one or multiple random anatomies are missing. Therefore, \(\mathcal{F}\) is supposed to be a _many-to-one_ mapping, i.e., \[\mathcal{F}:\left\{x_{n}^{m}\right\}_{m=1}^{M}\to y_{n},\,n=1,2,...,N \tag{1}\] We use binary voxel grids to represent 3D anatomies, such that \(x_{n}^{m},y_{n}\in R^{L\times W\times H}\). The value of a voxel in \(x_{n}^{m}\), \(y_{n}\) is '1' if the voxel belongs to an anatomy and '0' otherwise. Such a formulation extends existing medical shape completion methods that target only a single, pre-defined anatomy to random anatomies. ### Denoising Auto-encoder with Residual Connections Given the notations in Section 2.1, the missing anatomies for subject \(x_{n}\) can be conveniently expressed in a residual form: \(\left\{y_{n}-x_{n}^{m}\right\}_{m=1}^{M}\). Therefore, apart from learning the full mapping \(\mathcal{F}\), we can instead learn a residual mapping \(\mathcal{F}_{res}\): \[\mathcal{F}_{res}:\left\{x_{n}^{m}\right\}_{m=1}^{M}\rightarrow\left\{y_{n}-x _{n}^{m}\right\}_{m=1}^{M},\,n=1,2,...,N \tag{2}\] Unlike \(\mathcal{F}\), the residual mapping \(\mathcal{F}_{res}\) is obviously _one-to-one_, which can be straightforwardly realized based on deep residual learning [7]. Motivated by this observation, we propose to solve the shape completion problem using a 3D denoising auto-encoder (DAE) with a residual connection between the input and the output. The input \(x_{n}^{m}\) is treated as a corrupted version of \(y_{n}\) with random noise. The DAE denoises the input by restoring the anatomies missing in \(x_{n}^{m}\). 
The DAE is trained in a supervised fashion, with the input being \(\mathcal{X}\) and the ground truth being \(\mathcal{Y}\). Even though both mappings are learnable by the DAE, we presume that a _one-to-one_ mapping relationship is easier to learn than a _many-to-one_ mapping, so that the DAE can reach a superior reconstructive performance by learning \(\mathcal{F}_{res}\). Figure 1: Illustration of the pre-processed dataset. (A, B): the full anatomy segmentations from two subjects. (A-1, A-2, A-3) and (B-1, B-2, B-3): three incomplete instances with random missing anatomies (shown in red). (C): the skeleton in a CT scan. ### Loss Aggregation for Random Anatomy Completion To learn the _many-to-one_ mapping \(\mathcal{F}\), we train the DAE by optimizing a Dice loss function \(\mathcal{L}_{dice}\) aggregated over \(M\) versions of incomplete instances with random missing anatomies: \[\mathcal{L}_{\mathcal{F}}=\sum_{m=1}^{M}\sum_{n=1}^{N}\mathcal{L}_{dice}(y_{n}, \tilde{y}_{n}^{m}) \tag{3}\] where \(\mathcal{L}_{dice}=\frac{2\sum(y_{n}\bigcirc\tilde{y}_{n}^{m})}{\sum(y_{n} \bigcirc y_{n})+\sum(\tilde{y}_{n}^{m}\bigcirc\tilde{y}_{n}^{m})}\) is the standard Dice loss [20]. \(\hat{y}_{n}^{m}\) denotes the prediction for \(x_{n}^{m}\) given the mapping \(\mathcal{F}\), and \(\bigcirc\) denotes the Hadamard product (i.e., element-wise multiplication between two matrices). \(\sum\) denotes the summation of all the elements of a matrix. Optimizing such an aggregated loss function \(\mathcal{L}_{\mathcal{F}}\) ensures that the DAE learns to reconstruct a complete set of anatomies regardless of the class and/or number of anatomies that are absent in the input. Similarly, to learn the _one-to-one_ residual mapping \(\mathcal{F}_{res}\), the following loss function is optimized: \[\mathcal{L}_{\mathcal{F}_{res}}=\sum_{m=1}^{M}\sum_{n=1}^{N}\mathcal{L}_{dice} (y_{n},\tilde{x}_{n}^{m}+x_{n}^{m}) \tag{4}\] where \(\tilde{x}_{n}^{m}\) denotes the reconstructed missing anatomies for \(x_{n}\). Depending on the mapping to be learned, the respective loss function (\(\mathcal{L}_{\mathcal{F}}\) or \(\mathcal{L}_{\mathcal{F}_{res}}\)) is used. ### Multi-class Anatomy Completion For the multi-anatomy completion task, compared to representing \(x_{n}^{m}\) and \(y^{n}\) as binary voxel grids in which different anatomies are not differentiated (Section 2.1), it is more desirable to assign a unique label to each anatomy in \(x_{n}^{m}\) and \(y_{n}\). This extension can be easily achieved by setting the number of output channels of the penultimate layer of the DAE network to the number of anatomy classes. Each channel predicts the probability of occupancy of the voxel grids for an anatomy. The same Dice loss \(\mathcal{L}_{dice}\) can be calculated between the output and the ground truth in one-hot encoding. ## 3 Experiments and Results ### Dataset and Pre-processing We validate our method using a public CT dataset with whole-body anatomy segmentations, which is publicly available at [https://zenodo.org/record/6802614#.Y_YMwXbMIQ8](https://zenodo.org/record/6802614#.Y_YMwXbMIQ8). The dataset comprises 1024 CT images, each accompanied by a set of segmentation masks of 104 anatomies (organs, bones, muscles, vessels) [30]. After screening (discarding images with corrupted segmentations), 737 sets of segmentations are included in this work, which are further randomly split into a training (451) and test set (286). 
For each set of segmentations, we randomly remove anatomies accounting for at least 10%, 20% and 40% of the entire segmentation's volume to create the incomplete instances \(\mathcal{X}\). The original segmentations serve as the ground truth \(\mathcal{Y}\). Considering that anatomy ratios are subject-specific, different type and/or number of anatomies could have been removed for different subjects given the same threshold, as can be seen from Figure 1. Thus, anatomy removal is analogous to inserting random noise to \(\mathcal{Y}\). In general, using a 10% threshold (Figure 1, A-2, B-2) removes more anatomies than using higher thresholds (20% and 40%), and using a threshold of 40% removes only large anatomies, such as the aorta and the autochthonous back muscles (Figure 1, A-3, B-3). The small bones such as the individual ribs and vertebrae that form the skeleton (Figure 1, C) enclosing the internal anatomies are generally not removed, providing a natural constraint for anatomy reconstruction. We use the ratio-based method to remove anatomies, so that each full segmentation yields three instances with random incompleteness in the training and test set. We denote the three test sets as \(D_{test1}\) (10%), \(D_{test2}\) (20%) and \(D_{test3}\) (40%). Besides random anatomy removal, we create another test set \(D_{test4}\) by removing only one specific anatomy from the full segmentations randomly selected from the test set. All the images are re-scaled to a uniform size of \(128^{3}\) (\(L,W,H=128\)). We made the anatomical shape dataset used in this study publicly available through _MedShapeNet_[15]. ### Implementation Details The DAE is comprised of four two-strided 3D convolutional (conv3D) and transposed convolutional (t_conv3D) layers for downsampling and upsampling. To increase the learning capacity, we add a single-strided conv3D layer after each t_conv3D layer, and further append four single-strided conv3D layers at the end of the DAE. We use _ReLu_ activations and a kernel size of three for all layers, amounting to around 22M trainable parameters. The residual connection is implemented as an addition between the input and the output of the penultimate layer. The DAE is implemented using TensorFlow [1] and trained on an NVIDIA RTX 3090 GPU using the ADAM optimizer [9]. The learning rate is set to 0.0001 and the exponential decay rate for the first moment estimates is set to 0.3 for the ADAM optimizer. ### Experimental Setup Since, to our knowledge, our paper is the first to investigate random anatomy reconstruction, we adhere to the following steps to validate our methods: (i) A baseline is established by training the DAE without residual connection using a conventional Dice loss from existing single anatomy completion studies [18, 16]; (ii) On top of the baseline, we train the DAE using the aggregated Dice loss (Equation 3); (iii) We train the DAE with residual connection (Equation 2) using a conventional Dice loss; (iv) We train the DAE with residual connection using the aggregated Dice loss (Equation 4). For all experiments, the DAE is trained for 100 epochs. The baseline experiment evaluates the feasibility of realizing random anatomy reconstruction using a single shape completion framework, and experiments (ii-iv) verify the effectiveness of each proposed components (i.e., residual connection, loss aggregation) for the anatomy reconstruction task. 
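To make the experimental setup concrete, the following condensed TensorFlow sketch combines the aggregated Dice objective of Eqs. (3) and (4) with a DAE of the kind described in the Implementation Details above. The filter widths, the use of 1 - DSC as the minimised quantity, and the exact placement of the residual addition are illustrative assumptions; the paper fixes only the layer pattern, the kernel size of three, ReLU activations, roughly 22M parameters, and the ADAM settings.

```python
# Condensed sketch of the DAE and the (aggregated) Dice objective of Eqs. (3)-(4).
# Channel widths are placeholders; 1 - DSC is used so the objective can be minimised
# directly, which is an assumed sign convention.

import tensorflow as tf
from tensorflow.keras import layers

def dice_loss(y_true, y_pred, eps=1e-6):
    """1 - DSC between two voxel grids (binary masks given as float tensors)."""
    inter = tf.reduce_sum(y_true * y_pred)
    denom = tf.reduce_sum(y_true * y_true) + tf.reduce_sum(y_pred * y_pred)
    return 1.0 - 2.0 * inter / (denom + eps)

def aggregated_loss(model, x_versions, y_full, residual=False):
    """Sum the Dice loss over the M incomplete versions of one subject (Eq. 3);
    with residual=True the prediction is added to the input first (Eq. 4)."""
    total = 0.0
    for x_m in x_versions:
        pred = model(x_m)
        if residual:
            pred = pred + x_m                # the network only reconstructs the missing anatomies
        total += dice_loss(y_full, pred)
    return total

def build_dae(input_shape=(128, 128, 128, 1), residual=True):
    x_in = layers.Input(shape=input_shape)
    x = x_in
    for f in (32, 64, 128, 256):             # four two-strided Conv3D layers (downsampling)
        x = layers.Conv3D(f, 3, strides=2, padding="same", activation="relu")(x)
    for f in (256, 128, 64, 32):             # four two-strided transposed Conv3D layers (upsampling),
        x = layers.Conv3DTranspose(f, 3, strides=2, padding="same", activation="relu")(x)
        x = layers.Conv3D(f, 3, padding="same", activation="relu")(x)   # each followed by a stride-1 Conv3D
    for _ in range(3):                       # trailing stride-1 Conv3D layers
        x = layers.Conv3D(32, 3, padding="same", activation="relu")(x)
    out = layers.Conv3D(1, 3, padding="same")(x)   # single-channel output voxel grid
    if residual:
        out = layers.Add()([out, x_in])      # residual connection: add the input back
    return tf.keras.Model(x_in, out)

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4, beta_1=0.3)
```

For the multi-class variant described above, the final Conv3D layer would output one channel per anatomy class and the same Dice loss would be evaluated against the one-hot encoded ground truth.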
We denote the trained DAE models from experiment (i-iv) as \(DAE_{b}\), \(DAE_{agg}\), \(DAE_{res}\) and \(DAE_{agg+res}\), respectively. Dice similarity coefficient (DSC) is used for quantitative evaluation of the results on test set \(D_{test1}\), \(D_{test2}\), and \(D_{test3}\). The output of the DAE is interpolated to the original size to calculate the DSC against the ground truth. On \(D_{test4}\), we perform an empirical evaluation of our method in reconstructing one specific anatomy. ### Results #### 3.4.1 Quantitative Evaluation and Statistical Comparison Table 1 presents the quantitative results of the ablation experiments, where the mean and standard deviations (SD) of DSC on test set \(D_{test1}\), \(D_{test2}\) and \(D_{test3}\) are reported. The quantitative comparisons show that both loss aggregation (\(DAE_{agg}\)) and residual connection (\(DAE_{res}\)) help improve the anatomy reconstruction performance compared to the baseline (\(DAE_{b}\)). Furthermore, the comparison between \(DAE_{agg}\) and \(DAE_{res}\) demonstrates that the DAE is significantly better at learning the residual (Equation 2) than the full anatomy (Equation 1). Combining both components (\(DAE_{agg+res}\)) further improves the reconstructive performance of the DAE compared to using each component individually. Furthermore, \(DAE_{agg}\), \(DAE_{res}\) and \(DAE_{agg+res}\) also perform more stably across test instances (smaller SD) than the baseline, on all three test sets. Compared with \(D_{test1}\) and \(D_{test2}\), we notice an obvious drop of mean DSC on \(D_{test3}\) for the baseline model, suggesting that \(DAE_{b}\) tends to perform worse when the combined ratio of all missing anatomies becomes smaller. The combined ratio of all missing anatomies in \(D_{test3}\) is likely to be lower, since fewer anatomies can be removed due to the higher ratio threshold. Higher sensitivity is required to detect and reconstruct smaller anatomies. Applying loss aggregation (\(DAE_{agg}\)) enforces the _many-to-one_ mapping and therefore mitigates the low sensitivity issue. The residual mapping (\(DAE_{res}\)) overcomes the low-sensitivity issue even \begin{table} \begin{tabular}{c c c c} \hline Methods & \(D_{test1}\) & \(D_{test2}\) & \(D_{test3}\) \\ \hline \(DAE_{b}\) & 0.783 (0.075) & 0.778 (0.061) & 0.757 (0.058) \\ \(DAE_{agg}\) & 0.789 (0.073) & 0.803 (0.059) & 0.812 (0.053) \\ \(DAE_{res}\) & **0.865** (0.069) & 0.885 (0.046) & 0.887 (0.047) \\ \(DAE_{agg+res}\) & **0.865** (0.074) & **0.904** (0.039) & **0.931** (0.030) \\ \hline \end{tabular} \end{table} Table 1: Mean (Standard Deviation) of DSC on \(D_{test1}\), \(D_{test2}\), \(D_{test3}\) without loss aggregation. A statistical comparison of the DSC between different models on the three test sets is also performed based on a t-test, and the \(p\) values are reported in Table 2. \(p<0.05\) indicates a statistically significant improvement. Based on Table 1 and the statistical comparisons of \(DAE_{agg}\leftrightarrow DAE_{b}\) and \(DAE_{agg+res}\leftrightarrow DAE_{res}\), we can also conclude that loss aggregation does not significantly improve the results on \(D_{test1}\), which has a very high combined ratio of missing anatomies. 
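The pairwise comparisons reported in Table 2 can be reproduced with a short scipy sketch; since the paper does not state whether a paired or an unpaired t-test was used, the paired variant is assumed here on the grounds that both models score the same test instances.

```python
# Sketch of the pairwise model comparison reported in Table 2: a t-test over the
# per-instance DSC values of two models on the same test set (paired variant assumed).

from scipy import stats

def compare_models(dsc_a, dsc_b, alpha=0.05):
    """dsc_a, dsc_b: sequences of per-instance DSC values for two models."""
    t_stat, p_value = stats.ttest_rel(dsc_a, dsc_b)
    return p_value, p_value < alpha    # p < 0.05 read as a statistically significant difference
```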
\begin{table} \begin{tabular}{c c c c} \hline Methods & \(D_{test1}\) & \(D_{test2}\) & \(D_{test3}\) \\ \hline \(DAE_{agg}\leftrightarrow DAE_{b}\) & 0.328 & 4.301e-07 & 8.176e-29 \\ \(DAE_{res}\leftrightarrow DAE_{b}\) & 2.839e-35 & 2.147e-86 & 7.427e-114 \\ \(DAE_{agg+res}\leftrightarrow DAE_{b}\) & 2.295e-33 & 1.437e-110 & 2.985e-163 \\ \(DAE_{res}\leftrightarrow DAE_{agg}\) & 9.931e-32 & 3.051e-59 & 5.644e-57 \\ \(DAE_{agg+res}\leftrightarrow DAE_{res}\) & 0.989 & 2.866e-07 & 1.797e-34 \\ \hline \end{tabular} \end{table} Table 2: Statistical Comparison of DSC on Test Set \(D_{test1}\), \(D_{test2}\), \(D_{test3}\) Between Different Methods. The Table Reports the \(p\) Values From a T-test. Figure 2: Qualitative comparison of anatomy reconstruction performance. indicates the overlap between the reconstruction and the input, and indicates the reconstructed missing anatomies. Small white blocks in the reconstructions indicate false negative predictions. #### 3.2.3 Qualitative Evaluation Figure 2 illustrates the reconstruction results in 2D coronal planes. Multiple test instances with different degrees of incompleteness are presented. As seen from the ground truth (Figure 2, second column), an ideal reconstruction covers 100% of the input and does not extend beyond the region enclosed by the ribs (Figure 2, first column). The qualitative comparison shows that the DAE models trained for full anatomy reconstruction (\(DAE_{b}\) and \(DAE_{agg}\), Equation 1) have a tendency to produce false negatives, i.e., they fail to fully reconstruct existing anatomies, as shown by the small white blocks in the third and fourth column of Figure 2, as well as false positives, i.e, they generate a reconstruction beyond the missing anatomies. Resorting to residual learning (\(DAE_{res}\) and \(DAE_{agg+res}\), Equation 2) obviously mitigates the false prediction issue. Figure 3 shows the reconstruction results from the best Figure 3: The first to last row show the reconstructed aorta, autochthonous back muscles, liver and lung by \(DAE_{agg+res}\). Two test instances are presented for each anatomy class. performing model \(DAE_{agg+res}\) for a single anatomy, specifically the aorta, the autochthonous back muscles, liver and lung. For single anatomy reconstruction, only one random anatomy is missing in the input (\(D_{test4}\)). For smaller anatomies like the kidney and spleen, these models are not sufficiently sensitive to detect their absence and produce a reasonable reconstruction. Only for relatively large anatomies, such as livers and lungs, single anatomy reconstruction is feasible (Figure 3). Increasing the loss aggregation scope (i.e., the \(M\) in Equation 3, 4) to explicitly cover the individual small anatomies during the training process is a promising solution to the low sensitivity problem. Appendix (A) provides preliminary results that support this observation regarding the reconstruction of small missing anatomies. In Appendix (B), we show that it is feasible to reconstruct the whole anatomies given only the skeleton (rib cage + spine). These findings are potentially useful for (semi-)supervised whole-body segmentations, in which a human annotator provides manual segmentations for only a few of the anatomies, while the anatomy complotor generates the segmentation masks in 3D for the rest. Even though the quality of the generated segmentations might not be sufficient to serve as the ground truth, they could be used as the initial pseudo labels that can be iteratively refined [27]. 
Appendix (B) gives an extreme example where only the skeleton is given or annotated. It should be noted that the current results for such examples are not optimal, and serve only as a proof of concept. #### 4.1.1 Multi-class Anatomy Completion For the multi-class experiment, we choose 12 anatomies, including the lung, heart, spleen, stomach, pancreas, spine, rib cage, liver, kidney, aorta, a pair of autochthonous muscles, and the pulmonary artery (Figure 4 (A)). We extract the 12 above-mentioned anatomy segmentations from 18 whole-body segmentations randomly chosen from the training set. We create 10 incomplete instances for each case by randomly removing some of the 12 anatomies (e.g., Figure 4 (B-D)), resulting in \(18\times 10=180\) training samples. Figure 4: Dataset for the multi-class anatomy completor. (A) the 12 anatomy segmentations. (B-D) three incomplete instances where some of the 12 anatomies are missing. Images are resized to \(256\times 256\times 128\) (\(L,W=256,H=128\)) and the \(DAE_{agg}\) method is used for the experiment (i.e., to learn a _many-to-one mapping_). Figure 5 presents the multi-class anatomy completion results on test samples that are not involved during training. It is noticeable from the reconstructions that the long thin structures i.e., the ribs, are not well reconstructed (e.g., the last row of Figure 5). Terracing artifacts are also obvious on the reconstructed anatomical shapes compared to the ground truth, which can be partly attributed to downsampling. ## 4 Discussion and Conclusion In this paper, we demonstrated that multi-class anatomy reconstruction can be realized in a single shape completion framework. Given an incomplete instance with random missing anatomies, a DAE network reconstructs the missing anatomies specific to the instance, so that the new reconstructions geometrically align with existing anatomies. We further verified that residual learning and loss aggregation can significantly boost the performance of the DAE for the reconstructive task, and mitigate the low sensitivity and false prediction issues. Besides the baseline DAE, residual connection and loss aggregation can be easily Figure 5: Qualitative results of multi-class anatomy completion. The first and second column show three incomplete instances from the same subject in 3D and coronal views. The last two columns show the corresponding reconstruction results. implemented on top of more complicated network architectures. The models can not only reconstruct multiple missing anatomies simultaneously (Figure 2) but also a specific anatomy, despite their sizes (Figure 3 and Appendix A). There are several known limitations remaining to be addressed in future work: (i) Not all anatomy classes are covered by the segmentations of the CT dataset, such as the skull, full limb, brain, skins and soft tissues (e.g., facial soft tissues and most of the muscles); (ii) A quantitative evaluation for each specific anatomy is lacking (only qualitative results are provided in Figure 3 and Appendix A); (iii) The reconstructions from the multi-class anatomy completor suffer from terracing artifacts and discontinuous ribs. A super-resolution procedure can be applied to refine the initial reconstructions using sparse convolutional neural networks [12]. An interesting direction for future work is to use the multi-class anatomy completor in whole-body segmentation, where it can be used to generate the initial pseudo labels of the organs given only skeletal annotations (e.g., the rib case and spine. 
See Appendix B). ## Acknowledgement The work is supported by the Plattform fur KI-Translation Essen (KITE) from the REACT-EU initiative (EFRE-0801977, [https://kite.ikim.nrw/](https://kite.ikim.nrw/)) and "NUM 2.0" (FKZ: 01KX2121). The anatomical shape dataset used in this paper can be accessed through _MedShapeNet_ at [https://medshapenet.ikim.nrw/](https://medshapenet.ikim.nrw/). ## Appendix A Reconstructing Small Anatomies ## Appendix B Anatomy Completion from Skeletons (rib cage + spine) Figure 1: Reconstruction results of individual, small anatomies by \(DAE_{agg+res}\) trained with an increased loss aggregation scope (\(M\)). From the top: heart (2.4%), spine (4.3%), kidney (1.7%) and spleen (1.2%). The percentages in the brackets are the approximate volume ratios of the anatomies to the corresponding whole-body segmentations. The preliminary results demonstrate that increasing \(M\) (in Equations 3 and 4 in the main manuscript) also increases the sensitivity of the reconstructive model, which helps the model identify and reconstruct very small anatomies. Two test instances are presented for each anatomy class.
2309.13594
Probing Schwarzschild-like Black Holes in Metric-Affine Bumblebee Gravity with Accretion Disk, Deflection Angle, Greybody Bounds, and Neutrino Propagation
In this paper, we investigate Schwarzschild-like black holes within the framework of metric-affine bumblebee gravity. We explore the implications of such a gravitational setup on various astrophysical phenomena, including the presence of an accretion disk, the deflection angle of light rays, the establishment of greybody bounds, and the propagation of neutrinos. The metric-affine bumblebee gravity theory offers a unique perspective on gravitational interactions by introducing a vector field that couples to spacetime curvature. We analyze the behavior of accretion disks around Schwarzschild-like black holes in this modified gravity scenario, considering the effects of the bumblebee field on the accretion process. Furthermore, we scrutinize the deflection angle of light rays as they traverse the gravitational field, highlighting potential deviations from standard predictions due to the underlying metric-affine structure. Investigating greybody bounds in this context sheds light on the thermal radiation emitted by black holes and how the modified gravity framework influences this phenomenon. Moreover, we explore neutrino propagation around Schwarzschild-like black holes within metric-affine bumblebee gravity, examining alterations in neutrino trajectories and interactions compared to conventional general relativity. By comprehensively probing these aspects, we aim to unravel the distinctive features and consequences of Schwarzschild-like black holes in the context of metric-affine bumblebee gravity, offering new insights into the nature of gravitational interactions and their observable signatures.
G. Lambiase, L. Mastrototaro, Reggie C. Pantig, Ali Ovgun
2023-09-24T09:38:27Z
http://arxiv.org/abs/2309.13594v1
Probing Schwarzschild-like Black Holes in Metric-Affine Bumblebee Gravity with Accretion Disk, Deflection Angle, Greybody Bounds, and Neutrino Propagation ###### Abstract In this paper, we investigate Schwarzschild-like black holes within the framework of metric-affine bumblebee gravity. We explore the implications of such a gravitational setup on various astrophysical phenomena, including the presence of an accretion disk, the deflection angle of light rays, the establishment of greybody bounds, and the propagation of neutrinos. The metric-affine bumblebee gravity theory offers a unique perspective on gravitational interactions by introducing a vector field that couples to spacetime curvature. We analyze the behavior of accretion disks around Schwarzschild-like black holes in this modified gravity scenario, considering the effects of the bumblebee field on the accretion process. Furthermore, we scrutinize the deflection angle of light rays as they traverse the gravitational field, highlighting potential deviations from standard predictions due to the underlying metric-affine structure. Investigating greybody bounds in this context sheds light on the thermal radiation emitted by black holes and how the modified gravity framework influences this phenomenon. Moreover, we explore neutrino propagation around Schwarzschild-like black holes within metric-affine bumblebee gravity, examining alterations in neutrino trajectories and interactions compared to conventional general relativity. By comprehensively probing these aspects, we aim to unravel the distinctive features and consequences of Schwarzschild-like black holes in the context of metric-affine bumblebee gravity, offering new insights into the nature of gravitational interactions and their observable signatures. Black hole; Lorentz symmetry breaking; Metricaffine; Bumblebee gravity; Shadow; Quasinormal modes; Greybody; Neutrino oscillation pacs: 95.30.Sf, 04.70.-s, 97.60.Lf, 04.50.+h ## I Introduction A significant hurdle in the field of theoretical physics involves the harmonization of Einstein's widely accepted theory of gravitation, known as general relativity (GR), with the standard model of particle physics (SM), which adeptly brings together all other fundamental forces. One potential avenue for addressing this quandary lies in the concept of spontaneous symmetry disruption, a pivotal factor in the realm of elementary particle physics. During the initial stages of the universe, it's plausible that the temperature reached a point where such symmetry disruption could have been activated. One form of symmetry disruption that arises while attempting to quantize general relativity is the breakdown of Lorentz symmetry. Studies reveal that this symmetry might experience significant violations at the Planck scale (around \(10^{19}GeV\)) in various approaches to quantum gravity (QG), indicating its potential non-fundamental nature in the natural order. Additionally, this Lorentz symmetry breakdown (LSB) could furnish conceivable indicators of the underlying quantum gravity framework at lower energy levels. However, implementing consistent Lorentz symmetry breakdown (LSB) within the gravitational context presents distinct challenges when compared to incorporating Lorentz-breaking extensions into non-gravitational field theories. In the context of flat spacetimes, it's feasible to introduce additive terms that break Lorentz symmetry, such as the Carroll-Field-Jackiw term [1], either time [2], and other analogous terms (as seen in [3]). 
These terms can be rooted in a constant vector (tensor) multiplied by functions of fields and their derivatives. Nevertheless, when dealing with curved spacetimes, these features cannot be suitably adapted. Nevertheless, effects stemming from the underlying quantum gravity theory may manifest themselves at lower energy scales, offering glimpses of this grand unification. In 1989, Kostelecky and Samuel pioneered a simple model for spontaneous Lorentz violation known as bumblebee gravity [4]. In this model, a bumblebee field with a vacuum expectation value disrupts Lorentz symmetry, and Lorentz violation emerges from the dynamics of a single vector field, denoted as \(B_{\mu}\)[5; 6; 7; 8]. Hence one avenue to explore the Planck-scale signals and the potential breaking of relativity is through the violation of Lorentz symmetry [9]. Theories that violate Lorentz symmetry at the Planck scale, while incorporating elements of both GR and the SM, are encompassed by effective field theories known as the Standard Model Extension (SME) [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. Einstein's theory of general relativity has proven its mettle through numerous experimental tests, including the groundbreaking technique of gravitational lensing [36]. Gravitational lensing not only helps us comprehend galaxies, dark matter, dark energy, and the universe but also plays a crucial role in understanding black holes, wormholes, global monopoles, and other celestial objects [37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57]. A novel approach to calculate the deflection angle of light has been introduced by Gibbons and Werner. This method allows for the computation of light deflection in non-rotating asymptotically flat spacetimes. It leverages the Gauss-Bonnet theorem within the context of the optical geometry surrounding a black hole [58]. Furthermore, this pioneering technique has been extended to encompass stationary spacetimes by Werner [59]. Einstein's theory of gravity has yielded one of its most profound predictions: the existence of black holes. The existence of black holes has been firmly established through a wealth of astrophysical observations. Notably, the detection of gravitational waves (GWs) by LIGO/VIRGO collaborations has provided compelling evidence for black holes [60]. Additionally, the Event Horizon Telescope (EHT) has made history by capturing the first images of the shadow cast by supermassive black holes at the centers of galaxies, including M87* and Sgr A* [61]. These remarkable achievements have not only confirmed the existence of black holes but have also opened new avenues for the study of their properties and the nature of gravity in the strong-field regime. Main aim of this paper is to examine the characteristics of Schwarzschild-like black holes within the framework of metric-affine bumblebee gravity [62]. We seek to unravel the far-reaching consequences of this gravitational framework on a wide range of astrophysical phenomena. Specifically, we investigate its impact on the formation and behavior of accretion disks, the deflection patterns of light rays, the establishment of greybody bounds, and the propagation behaviors of neutrinos. Through these investigations, we aim to deepen our understanding of the unique properties and effects associated with black holes in metric-affine bumblebee gravity. The organization of this manuscript is outlined as follows. 
In Section II, we investigate the weak deflection angle of the spherically symmetric metric of black holes in metric-affine bumblebee gravity. In the subsequent Section III, we undertake an analysis of the spherically infalling accretion disk exhibited by these spherical black holes in metric-affine bumblebee gravity featuring an LSV parameter. In Section IV, we study the greybody factors of the black hole, and we investigate the neutrino energy deposition in Section V. A summary of our investigation is presented in Section VI. ## II Weak deflection angle of black holes in a metric-affine bumblebee gravity In this section, we study the weak deflection angle of the spherically symmetric metric of black holes in metric-affine bumblebee gravity [62] \[ds_{(g)}^{2}=-\frac{\left(1-\frac{2M}{r}\right)}{\sqrt{\left(1+\frac{3X}{4}\right)\left(1-\frac{X}{4}\right)}}dt^{2}+\frac{dr^{2}}{\left(1-\frac{2M}{r}\right)}\sqrt{\frac{\left(1+\frac{3X}{4}\right)}{\left(1-\frac{X}{4}\right)}}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right). \tag{1}\] In our analysis, we have employed a condensed symbol, denoted as \(X\), which succinctly signifies the Lorentz-violating parameter, and this symbol is defined as \(X=\xi b^{2}\). To compute the weak deflection angle using the Gauss-Bonnet theorem (GBT), we follow Li et al. [21], who demonstrated that for a non-asymptotically flat black hole spacetime the GBT can be expressed as follows: \[\hat{\alpha}=\iint_{D}KdS+\phi_{\text{RS}}, \tag{2}\] Here, let us define some key terms: \(r_{\text{ps}}\) represents the radius of the particle's circular orbit, while S and R indicate the radial positions of the source and receiver, respectively. These positions delimit the integration domain. It is important to note that the infinitesimal curved surface element \(dS\) can be expressed as: \[dS=\sqrt{g}\,dr\,d\phi. \tag{3}\] Additionally, \(\phi_{\text{RS}}\) denotes the coordinate position angle between the source and the receiver, defined as \(\phi_{\text{RS}}=\phi_{\text{R}}-\phi_{\text{S}}\). This angle can be determined through an iterative solution of the equation: \[F(u)=\left(\frac{du}{d\phi}\right)^{2}=\frac{C(u)^{2}u^{4}}{A(u)B(u)}\Bigg{[}\left(\frac{E}{J}\right)^{2}-A(u)\left(\frac{1}{J^{2}}+\frac{1}{C(u)}\right)\Bigg{]}. \tag{4}\] Then we apply the substitution \(r=1/u\) and derive the angular momentum and energy of the massive particle based on the given impact parameter \(b\), \[J=\frac{\mu vb}{\sqrt{1-v^{2}}},\quad E=\frac{\mu}{\sqrt{1-v^{2}}}. \tag{5}\] With Eq. (1), we find \[F(u)=\frac{1}{b^{2}}-u^{2}-\frac{\left[1+\left(3b^{2}u^{2}-3\right)v^{2}\right]X}{4v^{2}b^{2}}-\frac{3\left[1+\left(b^{2}u^{2}-1\right)v^{2}\right]MXu}{2v^{2}b^{2}}+\frac{2\left[1+\left(b^{2}u^{2}-1\right)v^{2}\right]Mu}{v^{2}b^{2}}. \tag{6}\] The above enables one to solve for the azimuthal separation angle \(\phi\) as \[\phi=\arcsin(bu)+\frac{M\left[v^{2}\left(b^{2}u^{2}-1\right)-1\right]}{bv^{2}\sqrt{1-b^{2}u^{2}}}-\frac{X}{8v^{2}\sqrt{-b^{2}u^{2}+1}}+\frac{\left[b^{3}u^{3}v^{2}+2u^{2}v^{2}b^{2}+\left(-v^{2}+1\right)ub-2v^{2}\right]XM}{8\left(-b^{2}u^{2}+1\right)^{\frac{3}{2}}b\,v^{4}}, \tag{7}\] which is also the direct expression for \(\phi_{S}\) once \(u\) is replaced by \(u_{S}\). Meanwhile, the expression for the receiver is \(\phi_{R}=\pi-\phi_{S}\), where \(u_{S}\) should be replaced by \(u_{R}\).
Leaving the angle \(\phi\) for a while, the Gaussian curvature \(K\) in terms of connection coefficients can be calculated as \[K=\frac{1}{\sqrt{g}}\left[\frac{\partial}{\partial\phi}\left(\frac{\sqrt{g}}{g_{rr}}\Gamma_{rr}^{\phi}\right)-\frac{\partial}{\partial r}\left(\frac{\sqrt{g}}{g_{rr}}\Gamma_{r\phi}^{\phi}\right)\right]=-\frac{1}{\sqrt{g}}\left[\frac{\partial}{\partial r}\left(\frac{\sqrt{g}}{g_{rr}}\Gamma_{r\phi}^{\phi}\right)\right] \tag{8}\] since \(\Gamma_{rr}^{\phi}=0\). If there exists an analytical solution for \(r_{\text{ps}}\) within a particular spacetime, then we can establish the following relationship: \[\int_{r_{\text{ps}}}^{r(\phi)}K\sqrt{g}dr=-\frac{A(r)\left(E^{2}-A(r)\right)C^{\prime}-E^{2}C(r)A(r)^{\prime}}{2A(r)\left(E^{2}-A(r)\right)\sqrt{B(r)C(r)}}\bigg{|}_{r=r(\phi)} \tag{9}\] then \[\left[\int K\sqrt{g}dr\right]\bigg{|}_{r=r_{\text{ps}}}=0. \tag{10}\] The prime notation signifies differentiation with respect to the radial coordinate, \(r\). Consequently, the weak deflection angle [21] is given by: \[\hat{\alpha}=\int_{\phi_{\text{S}}}^{\phi_{\text{R}}}\left[-\frac{A(r)\left(E^{2}-A(r)\right)C^{\prime}-E^{2}C(r)A(r)^{\prime}}{2A(r)\left(E^{2}-A(r)\right)\sqrt{B(r)C(r)}}\bigg{|}_{r=r(\phi)}\right]d\phi+\phi_{\text{RS}}. \tag{11}\] Then we find \[\left[\int K\sqrt{g}dr\right]\bigg{|}_{r=r(\phi)}=-\phi_{\text{RS}}-\frac{\left(\cos\phi_{R}-\cos\phi_{S}\right)\left(v^{2}+1\right)M}{v^{2}b}+\frac{3X\phi_{\text{RS}}}{8}\] \[+\frac{XM}{8bv^{4}}\left[\phi_{\text{RS}}(1+v^{2})+(\cos\phi_{R}-\cos\phi_{S})(v^{2}+3v^{4}+2)\right]. \tag{12}\] To evaluate the above expression, Eq. (7) is needed. One should note that if \(\phi_{S}\) is given, \(\phi_{\text{RS}}=\pi-2\phi_{S}\). The cosine of \(\phi\) is then \[\cos\phi=\sqrt{1-b^{2}u^{2}}-\frac{Mu\left[v^{2}\left(b^{2}u^{2}-1\right)-1\right]}{\sqrt{v^{2}\left(1-b^{2}u^{2}\right)}}+\frac{buX}{8v^{2}\sqrt{-b^{2}u^{2}+1}}-\frac{\left[1+\left(2b^{4}u^{4}+2b^{3}u^{3}-3b^{2}u^{2}-2bu+1\right)v^{2}\right]XM}{8\left(-b^{2}u^{2}+1\right)^{\frac{3}{2}}v^{4}b} \tag{13}\] which should be applied to the source and the receiver. Using the above expression in Eq. (12), we get the final analytic expression for the weak deflection angle that accommodates both time-like particles and finite distance as \[\alpha =\frac{2M\left(v^{2}+1\right)}{bv^{2}}\left(\sqrt{1-b^{2}u_{\text{S}}^{2}}+\sqrt{1-b^{2}u_{\text{R}}^{2}}\right)+\frac{3}{8}X\left[\pi-2(\sin^{-1}(bu_{\text{S}})+\sin^{-1}(bu_{\text{R}}))\right]\] \[+\frac{MX}{8bv^{4}}\left\{\left(v^{2}+1\right)\left[\pi-2(\sin^{-1}(bu_{\text{S}})+\sin^{-1}(bu_{\text{R}}))\right]-2\left(3v^{4}+v^{2}+2\right)\left(\sqrt{1-b^{2}u_{\text{S}}^{2}}+\sqrt{1-b^{2}u_{\text{R}}^{2}}\right)\right\}. \tag{14}\] Assuming that \(u_{R}=u_{S}\), and that both are distant from the black hole (\(u\to 0\)), \[\alpha=\frac{2\left(v^{2}+1\right)M}{v^{2}b}+\frac{3X\pi}{8}+\frac{XM}{8v^{4}b}\left[-6v^{4}+\left(\pi-2\right)v^{2}+\pi-4\right]. \tag{15}\] Finally, when \(v=1\), \[\alpha=\frac{4M}{b}+\frac{3X\pi}{8}+\frac{\left(\pi-6\right)XM}{4b}. \tag{16}\] We saw that the weak deflection angle is sensitive to the metric-affine bumblebee parameter \(X\), in contrast to the shadow, which cannot detect the effects of the bumblebee parameter in the strong field regime. To visualize the derived equation, we plot it numerically, and the results are shown in Fig. 1.
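As a quick numerical cross-check of Eqs. (15) and (16), the following sketch (hypothetical Python, not the Mathematica notebook used later in the text) evaluates the far-distance deflection angle in geometrized units \(G=c=1\); the M87* mass matches the value quoted in the next paragraph, while the \(X\) values and impact parameters are purely illustrative.

```python
import numpy as np

# Geometrized units (G = c = 1); masses and lengths in metres.
M_SUN = 1.476e3                 # GM_sun / c^2 in metres
M = 6.5e9 * M_SUN               # M87* mass, as used in Fig. 1
RAD_TO_MUAS = 180.0 / np.pi * 3600.0 * 1e6   # radians -> micro-arcseconds

def deflection_photon(b, X):
    """Eq. (16): photon deflection, 4M/b + 3*pi*X/8 + (pi-6)*X*M/(4b)."""
    return 4.0 * M / b + 3.0 * np.pi * X / 8.0 + (np.pi - 6.0) * X * M / (4.0 * b)

def deflection_massive(b, X, v):
    """Eq. (15): far-source/receiver limit for a massive particle of speed v."""
    return (2.0 * (v**2 + 1.0) * M / (v**2 * b)
            + 3.0 * np.pi * X / 8.0
            + X * M / (8.0 * v**4 * b) * (-6.0 * v**4 + (np.pi - 2.0) * v**2 + np.pi - 4.0))

b = np.logspace(3, 10, 8) * M    # impact parameters well inside the weak-field regime
for X in (0.0, 1e-10):           # X = 0 recovers the Schwarzschild result
    print(X, deflection_photon(b, X) * RAD_TO_MUAS)
```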
Here, we used some of the M87* SMBH parameters as an example, such as its mass \(M=6.5\times 10^{9}M_{\odot}\) and \(D=16.8\) Mpc, which is the distance between the SMBH and both the receiver and the source. It is also useful to plot the deflection behavior in a log-log plot because of these enormous numbers. In the left plot, we observe that time-like particles give a higher value for \(\hat{\alpha}\) as the impact parameter of the trajectory lessens. The time-like and null particles give the same \(\hat{\alpha}\) value as \(b\) becomes comparable to \(D\). Next, the bumblebee parameter \(X\) increases the value of \(\hat{\alpha}\) relative to the Schwarzschild case, but it has a peculiar effect of leveling off this value as \(b\) changes. At lower impact parameters, \(X\)'s effect seems to diminish as it merely follows and coincides with the Schwarzschild behavior. The sensitivity of the weak deflection angle to \(X\) is therefore strongest at large values of \(b\), leading to its potential detection. Finally, we also plot the comparison between the finite-distance correction Eq. (14) and the approximated case Eq. (16). We saw that there was no distinction upon the use of these expressions except when \(b\) is nearly the same as \(D\). Figure 1: Weak deflection angle (in \(\mu\)as) using M87* parameters. The left figure compares the behavior of the deflection angle between a massive particle with speed \(v=0.75\) (broken lines) and photons where \(v=1\) (solid lines). The right plot compares the effect of finite distance to the deflection angle of photons, where the solid lines represent Eq. (14), and the broken lines represent Eq. (16). These are all for different theoretical values for the bumblebee parameter \(X\). The vertical dotted line represents our distance from the M87* SMBH, which is \(16.8\) Mpc. ## III Shadows with infalling accretions In this section, we use the techniques from references [63] and [64] to explore a realistic model of the shadow cast by a spherical accretion disk around a black hole. This method accounts for the dynamic nature of accretion disks and their synchrotron emission, which are important factors for obtaining an accurate image of the shadow. We begin by considering the specific intensity of light observed at frequency \(\nu_{\text{obs}}\). This is achieved by solving the integral along the path of the light ray: \[I(\nu_{\text{obs}},b_{\gamma})=\int_{\gamma}g^{3}j(\nu_{e})dl_{\text{prop}}. \tag{17}\] The redshift factor accounts for the fact that photons emitted from a free-falling accretion disk are redshifted due to the strong gravitational field of the black hole. The amount of redshift depends on the impact parameter, which is the distance between the photon's trajectory and the center of the black hole. The redshift factor is important for calculating the observed spectrum of an accreting black hole. By taking the redshift factor into account, astronomers can more accurately model the accretion process and learn more about the properties of the black hole. The redshift factor for accretion in free fall is defined as: \[g=\frac{k_{\mu}u_{o}^{\mu}}{k_{\mu}u_{e}^{\mu}}. \tag{18}\] In these expressions, \(b_{\gamma}\) is the impact parameter, \(j(\nu_{e})\) is the emissivity per unit volume, \(dl_{\text{prop}}\) is the infinitesimal proper length, and \(\nu_{e}\) is the frequency of the emitted photon.
In this context, the 4-velocity of the photon is denoted as \(k^{\mu}\), which corresponds to \(\dot{x}_{\mu}\), while the 4-velocity of the distant observer is represented by \(u_{o}^{\mu}\) and can be written as \((1,0,0,0)\). Additionally, \(u_{e}^{\mu}\) represents the 4-velocity of the accretion in free fall \[u_{e}^{t}=\frac{1}{A(r)},\quad u_{e}^{r}=-\sqrt{\frac{1-A(r)}{A(r)B(r)}},\quad u _{e}^{\theta}=u_{e}^{\phi}=0. \tag{19}\] By employing the relation \(k_{\alpha}k^{\alpha}=0\), we can derive constants of motion for photons, namely \(k_{r}\) and \(k_{t}\). \[k_{r}=\pm k_{t}\sqrt{B(r)\left(\frac{1}{A(r)}-\frac{b^{2}}{r^{2}}\right)}. \tag{20}\] The redshift factor \(g\) tells us how much the photon's frequency is redshifted or blueshifted due to the gravitational field of the black hole. \[g=\Big{(}u_{e}^{t}+\frac{k_{r}}{k_{t}}u_{e}^{r}\Big{)}^{-1}, \tag{21}\] The proper distance \(dl_{\gamma}\) is the distance traveled by the photon along its trajectory, taking into account the curvature of spacetime. When a photon approaches the black hole \(+\), it is redshifted. This is because the photon has to lose energy in order to overcome the gravitational pull of the black hole. When a photon moves away \(-\) from the black hole, it is blueshifted. This is because the photon gains energy as it escapes the gravitational pull of the black hole. \[dl_{\gamma}=k_{\mu}u_{e}^{\mu}d\lambda=\frac{k^{t}}{g|k_{r}|}dr. \tag{22}\] To focus exclusively on monochromatic emission, we can use the specific emissivity with a rest-frame frequency \(\nu_{*}\): \[j(\nu_{e})\propto\frac{\delta(\nu_{e}-\nu_{*})}{r^{2}}. \tag{23}\] The intensity equation presented in (17) transforms into the following form for monochromatic emission: \[F(b_{\gamma})\propto\int_{\gamma}\frac{g^{3}}{r^{2}}\frac{k_{e}^{t}}{k_{e}^{ r}}dr. \tag{24}\] We delve into the shadow produced by the thin-accretion disk of Schwarzschild-like black hole in metric-affine Bumblebee gravity framework. To start, we numerically solve the above equation using the _Mathematica_ notebook package [55], which has also been utilized in previous works [65; 66; 67; 68; 69; 70; 27; 48; 52; 67; 68; 69; 71; 46; 72; 73; 74; 75; 76; 77; 78; 79; 80]. This integration of the flux demonstrates how the metric-affine Bumblebee gravity parameter \(X\) affects the specific intensity observed by a distant observer for an infalling accretion, as illustrated in Figs. (2, and 3). These plots in Figs. (2, and 3) depict specific intensities for various \(X\) values versus the impact parameter \(b\) as observed by a distant observer. We observe that increasing the value of \(X\) leads to a rise in intensity. Subsequently, the intensity sharply peaks when photons are swiftly captured by the black hole (at the photon sphere). Beyond this peak, intensity gradually diminishes. The plots (2, and 3) also show that the intensity peaks when photons are swiftly captured by the black hole (at the photon sphere). This is because the photon sphere is the closest distance to the black hole at which a photon can orbit without falling in. Photons that orbit at the photon sphere are very tightly bound, and therefore emit a lot of light. Beyond the peak, the intensity gradually diminishes. This is because photons that are farther away from the black hole are less tightly bound, and therefore emit less light. The plots also show that the intensity is higher for smaller impact parameters. 
This is because photons with smaller impact parameters have to travel a shorter distance to escape the black hole, and therefore lose less energy. As a result, they are more likely to be observed. The plots in Figs. (2, and 3) are important because they provide us with a better understanding of the light emitted by accreting black holes. This information can be used to study the physics of accretion and to constrain the parameters of black holes. Note that the metric-affine bumblebee gravity parameters cannot be constrained through the black hole shadow. That is, we get the final result as \(R_{\text{sh}}=3\sqrt{3}M\). Figure 2: Observational appearance of a spherically free-falling accretion emission near a black hole of mass \(M=1\), with variable \(X=0.3\) and \(X=0.8\); the first panel is for the Schwarzschild black hole. It is observed that as the value of \(X\) increases, the intensity of the emission also increases. Figure 3: The specific intensity \(I_{\text{obs}}\) seen by a distant observer for an infalling accretion around a black hole at fixed \(M=1\), and variable \(X=0.3\) (orange), \(X=0.5\) (blue), \(X=0.8\) (red), \(X=1\) (gray), and Schwarzschild black hole (black). ## IV Greybody factors The greybody factor (GF) is a parameter that characterizes the probability of a quantum field escaping from a black hole. It is defined as the ratio of the outgoing flux of Hawking radiation to the total flux of Hawking radiation. A high GF indicates a greater likelihood that Hawking radiation can reach infinity. The GF is important for estimating the intensity of Hawking radiation, which can be used to learn more about the properties of black holes. For example, the GF can be used to constrain the mass and spin of a black hole. The idea of the rigorous Greybody bound was originally introduced in [71; 72], offering a qualitative characterization of a black hole. In this section, we will investigate the greybody factor associated with the Schwarzschild-like black hole within the framework of metric-affine Bumblebee gravity. The greybody factor for a massless scalar field propagating around a Schwarzschild-like black hole in metric-affine Bumblebee gravity can be calculated using the Klein-Gordon equation, which is given by: \[\square\Phi=\frac{1}{\sqrt{-g}}\partial_{\mu}(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\Phi)=0. \tag{25}\] By disregarding the impact of the field on the spacetime (back-reaction), we can focus solely on Eq. (1) at the zeroth order: \[ds^{2}=-|g_{tt}|dt^{2}+g_{rr}dr^{2}+r^{2}d\Omega_{2}^{2} \tag{26}\] The scalar field can be traditionally decomposed using spherical harmonics as follows: \[\Phi(t,r,\theta,\phi)=\frac{1}{r}\sum_{l,m}\psi_{l}(t,r)Y_{lm}(\theta,\phi), \tag{27}\] Here, \(\psi_{l}(t,r)\) represents the time-dependent radial wave function, with \(l\) and \(m\) serving as indices for the spherical harmonics \(Y_{lm}\). The spherical harmonics are a complete set of basis functions for representing scalar fields on a sphere. They are also eigenfunctions of the angular momentum operators. The radial functions \(\psi_{l}(t,r)\) are determined by solving the Klein-Gordon equation in the radial direction. The solution depends on the angular momentum quantum numbers \(l\) and \(m\), as well as the mass of the scalar field and the effective potential \(V(r)\). Once the radial functions have been calculated, the scalar field can be reconstructed using the equation above.
The decomposition of the scalar field using spherical harmonics is useful for studying the behavior of scalar fields around black holes. For example, the greybody factor for a massless scalar field propagating around a Schwarzschild-like black hole can be calculated using the spherical harmonic decomposition. The spherical harmonic decomposition is also useful for studying other types of physical systems, such as atoms and molecules. Substituting these into Eq. (25), we obtain the following expression: \[\partial_{r_{*}}^{2}\psi(r_{*})_{l}+\omega^{2}\psi(r_{*})_{l}=V(r)\psi(r_{*})_{l}, \tag{28}\] Here, we introduce the tortoise coordinate \(r_{*}\), which is defined as: \[\frac{dr_{*}}{dr}=\sqrt{g_{rr}\left|g_{tt}^{-1}\right|} \tag{29}\] Furthermore, the effective potential of the field, denoted as \(V(r)\), is defined as follows: \[V(r)=|g_{tt}|\left(\frac{l(l+1)}{r^{2}}+\frac{1}{r\sqrt{|g_{tt}|g_{rr}}}\frac{d}{dr}\sqrt{|g_{tt}|g_{rr}^{-1}}\right). \tag{30}\] Figure 4 displays the effective potential \(V\) as a function of \(r\). Notably, the effective potentials exhibit distinct behaviors for variable \(X\), and it is evident that they approach the Schwarzschild potentials as \(X\) approaches zero. Figure 4: Effective potentials for \(M=1\), \(l=0\) (top) and \(l=1\) (bottom), and variable \(X\). Consequently, we proceed to calculate the bound on the greybody factor \[T\geq\mathrm{sech}^{2}\left(\int_{-\infty}^{\infty}\vartheta dr_{*}\right), \tag{31}\] where \[\vartheta=\frac{\sqrt{\left[h^{\prime}\left(r_{*}\right)\right]^{2}+\left[\omega^{2}-V\left(r_{*}\right)-h^{2}\left(r_{*}\right)\right]^{2}}}{2h\left(r_{*}\right)}. \tag{32}\] It is worth noting that the function \(h(r_{*})\) fulfills the condition \(h(-\infty)=h(\infty)=\omega,\) as specified in [71]. By choosing \(h=\omega\) and substituting the tortoise coordinate \(r_{*}\), we can express it as follows: \[T_{b}\geq\mathrm{sech}^{2}\left(\frac{1}{2\omega}\int_{-\infty}^{\infty}\left|V\right|dr\sqrt{g_{rr}\left|g_{tt}^{-1}\right|}\right). \tag{33}\] Using the effective potential \(V\) for the massless scalar field, we can compute the bound in the following manner: \[T\geq T_{b}=\mathrm{sech}^{2}\left(\frac{\sqrt{4-X}\sqrt{-\frac{1}{(X-4)^{3}}}\left(8l^{2}+8l+\sqrt{\frac{(4-X)^{9/2}\sqrt{-\frac{1}{(X-4)^{3}}}}{3X+4}}\right)}{8M\sqrt{4-X}\omega}\right). \tag{34}\] The bound simplifies to the Schwarzschild case when \(X\to 0\), resulting in \(T_{\mathrm{Sch}}\geq\mathrm{sech}^{2}\left(\frac{2l(l+1)+1}{8M\omega}\right)\). We demonstrate the impact of the screening parameter \(X\) on the greybody bound for a scalar field in a Schwarzschild-like black hole within metric-affine Bumblebee gravity in Figures 5 and 6. Indeed, for \(l=0\), it is clear that as the value of the parameter \(X\) increases, the greybody bound \(T_{b}\) also increases. Conversely, for \(l=1\), as the value of the parameter \(X\) increases, the greybody bound \(T_{b}\) decreases. ## V Neutrino energy deposition The energy deposition rate from the \(\nu\bar{\nu}\to e^{+}e^{-}\) process has been studied to justify the GRB emission. The reference scenario is the final stage of neutron star (NS) merging, conceptualized as a black hole (BH) with an accretion disk. Salmonson and Wilson, as outlined in Ref. [73; 74], were the first to consider the effects of strong gravitational field regimes.
They demonstrated that, for a Schwarzschild spacetime and for neutrinos emitted from the central core, the efficiency of the annihilation \(\nu\bar{\nu}\to e^{+}e^{-}\) gets amplified, with respect to the Newtonian counterpart, by a factor \(\sim 30\) for collapsing neutron stars (NS). In [75; 76], the authors investigated the effects of general relativity on neutrino pair annihilation near the neutrinosphere and in the vicinity of a thin accretion disk (assuming an isothermal profile), with the gravitational background described by the Schwarzschild and Kerr geometries. We consider a black hole (BH) surrounded by a thin accretion disk that emits neutrinos, as discussed in [76]. We focus on an idealized model which does not depend on the specifics of disk formation and neglects self-gravitational effects. This disk has defined inner and outer edges, corresponding to radii denoted as \(R_{\rm in}\) and \(R_{\rm out}\), respectively. The general metric, exhibiting spherical symmetry, is given by: \[g_{\mu\nu}=\left(g_{00},g_{11},-r^{2},-r^{2}\sin^{2}\theta\right)\,. \tag{35}\] The Hamiltonian can be used to study the motion of the test particle in spacetime. For example, the Hamiltonian allows us to calculate the energy and angular momentum of the test particle, as well as its equations of motion. For a test particle propagating in a curved background, the Hamiltonian is given by \[2\mathcal{H}=-E\dot{t}+L\dot{\phi}+g_{11}\dot{r}^{2}=0\,, \tag{36}\] Figure 5: (Scalar Field) The Greybody Bound \(T_{b}\) versus \(\omega\) for different values of the \(X\) parameter, with \(M=1\), \(l=0\). Figure 6: (Scalar Field) The Greybody Bound \(T_{b}\) versus \(\omega\) for different values of the \(X\) parameter, with \(M=1\), \(l=1\). In the given context, where \(E\) represents the energy and \(L\) signifies the angular momentum of the test particles, the non-vanishing components of the 4-velocity can be derived as follows [77]: \[U^{3} =\dot{\phi}=-\frac{L}{r^{2}}\ \ ; \tag{37}\] \[U^{0} =\dot{t}=-\frac{E}{g_{00}}\ \ ;\] (38) \[\dot{r}^{2} =\frac{E\dot{t}-L\dot{\phi}}{g_{11}}\ \, \tag{39}\] Our focus lies in determining the energy deposition rate in close proximity to the axis, which is perpendicular to the disk, specifically at \(\theta=0^{\circ}\). To evaluate the energy emitted within a half cone with an angular extent of approximately \(\Delta\theta\sim 10^{\circ}\), we need to consider the scalar product of the momenta of a neutrino and an antineutrino at \(\theta=0^{\circ}\). This scalar product can be expressed as follows: \[p_{\nu}\cdot p_{\bar{\nu}}=E_{\nu}E_{\bar{\nu}}\left[1-\sin\theta_{\nu}\sin\theta_{\bar{\nu}}\cos\left(\phi_{\nu}-\phi_{\bar{\nu}}\right)-\cos\theta_{\nu}\cos\theta_{\bar{\nu}}\right]\ \, \tag{40}\] In this context, the term \(E_{\nu}\) is defined as the energy of the neutrino, which is given by \(E_{0\nu}/\sqrt{g_{00}}\), where \(E_{0\nu}\) represents the observed energy of the neutrino at infinity \[\sin\theta_{\nu}=\frac{\rho_{\nu}}{r}\sqrt{g_{00}(r)}\ \, \tag{41}\] Furthermore, it is worth noting that \(\rho_{\nu}\) is defined as the ratio of the angular momentum \(L_{\nu}\) to the observed energy \(E_{0\nu}\). Additionally, due to geometric considerations, there are both a minimum and a maximum value, denoted as \(\theta_{m}\) and \(\theta_{M}\) respectively, for a neutrino originating from \(R_{\rm in}=2R_{\rm ph}\) and \(R_{\rm out}=30M\), where \(R_{\rm ph}\) is the photosphere radius.
In addition, it can be demonstrated that the following relationship holds, as outlined in [76]: \[\rho_{\nu}=\frac{r_{0}}{\sqrt{g_{00}(r_{0})}}\ \, \tag{42}\] in which \(r_{0}\) is the nearest position between the particle and the centre before arriving at \(\theta=0\). The final component is the trajectory equation, which is presented in [76] as follows: \[\frac{\pi}{2}=\int_{C}\frac{dr^{\prime}}{r^{\prime}\sqrt{(r^{\prime}/\rho_{ \nu})^{2}-g_{00}(r^{\prime})}}\,. \tag{43}\] Equation (43) considers that the neutrinos are emitted from the position \((R,\pi/2)\), where \(R\) is within the range \([R_{\rm in},R_{\rm out}]\), and then they arrive at \((r,0)\). Consequently, the energy deposition rate resulting from neutrino pair annihilation is given by [76]: \[\frac{dE_{0}(r)}{dtdV}=\frac{21\pi^{4}}{4}\zeta(5)KG_{F}^{2}k^{9}T_{\rm eff}^{ 9}(2R_{ph})F(r)\ \, \tag{44}\] In the given expression, \(G_{F}\) represents the Fermi constant, \(k\) stands for the Boltzmann constant, and \(T_{\rm eff}(2R_{\rm ph})\) denotes the effective temperature at a radius of \(2R_{\rm ph}\) (the temperature observed in the comoving frame) \[K=\frac{1\pm 4\sin^{2}\omega_{W}+8\sin^{4}\theta_{W}}{6\pi}\ \, \tag{45}\] in this context, for \(\nu_{e}\), the positive sign is used, while for \(\nu_{\mu/\tau}\), the negative sign is applied. Additionally, \(\sin^{2}\theta_{W}\) represents the Weinberg angle and is equal to 0.23 \[F(r)=\frac{2\pi^{2}}{T_{\rm eff}^{9}(2R_{ph})}\frac{1}{g_{00}(r) ^{4}}\Bigg{(}2\int_{\theta_{m}}^{\theta_{M}}d\theta_{\nu}T_{0}^{5}(\theta_{ \nu})\sin\theta_{\nu}\int_{\theta_{m}}^{\theta_{M}}d\theta_{\bar{\nu}}T_{0}^{ 4}(\theta_{\bar{\nu}})\sin\theta_{\bar{\nu}}+ \tag{46}\] \[+\int_{\theta_{m}}^{\theta_{M}}d\theta_{\nu}T_{0}^{5}(\theta_{ \nu})\sin^{3}\theta_{\nu}\int_{\theta_{m}}^{\theta_{M}}d\theta_{\bar{\nu}}T_{0} ^{4}(\theta_{\bar{\nu}})\sin^{3}\theta_{\bar{\nu}}+\] \[+2\int_{\theta_{m}}^{\theta_{M}}d\theta_{\nu}T_{0}^{5}(\theta_{ \nu})\cos^{2}\theta_{\nu}\sin\theta_{\nu}\int_{\theta_{m}}^{\theta_{M}}d \theta_{\bar{\nu}}T_{0}^{4}(\theta_{\bar{\nu}})\cos^{2}\theta_{\bar{\nu}}\sin \theta_{\bar{\nu}}-\] \[-4\int_{\theta_{m}}^{\theta_{M}}d\theta_{\nu}T_{0}^{5}(\theta_{ \nu})\cos\theta_{\nu}\sin\theta_{\nu}\int_{\theta_{m}}^{\theta_{M}}d\theta_{ \bar{\nu}}T_{0}^{4}(\theta_{\bar{\nu}})\cos\theta_{\bar{\nu}}\sin\theta_{\bar {\nu}}\Bigg{)}\ \,\] The term \(T_{0}\) represents the temperature observed at infinity \[T_{0}(R) =\frac{T_{\rm eff}(R)}{\gamma}\sqrt{g_{00}(R)}\enspace, \tag{47}\] \[\gamma =\frac{1}{\sqrt{1-v^{2}/c^{2}}}\enspace,\] (48) \[\frac{v^{2}}{c^{2}} =\frac{g_{33}}{g_{00}}\frac{g_{00,r}}{2r}\enspace. \tag{49}\] \(T_{\rm eff}\) is the effective temperature as measured by a local observer. All quantities are evaluated at \(\theta=\pi/2\). In the analysis, the effects of the reabsorption of the deposited energy by the black hole are not considered. We therefore focus on a scenario with a simple temperature gradient, as described in [76]: \[T_{\rm eff}(r)\propto\frac{2R_{ph}}{r}\enspace. \tag{50}\] The assumptions regarding the temperature values and the shape of the gradient model are consistent with recent findings from neutrino-cooled accretion disk models, as exemplified in references such as [78; 79; 80]. Typically, it is anticipated that the effective maximum temperature, denoted as \(T_{\rm eff}\), falls in the order of \(\mathcal{O}(10~{}{\rm MeV})\). This order of magnitude is crucial for achieving the observed neutrino disk luminosity. 
Consequently, the disk luminosity is not expected to differ significantly across various models. Additionally, since we are not conducting numerical simulations, we assume \(T_{\rm eff}\sim\mathcal{O}(10~{\rm MeV})\) to facilitate the comparison of the effects of different gravitational models under the same conditions. It is important to emphasize that despite these theoretical assumptions, the precise temperature profile can only be determined through a disk simulation originating from neutron star (NS) merging with an explicitly defined geometry. In this Section, we have evaluated the energy deposition rate for neutrino annihilation in the affine bumblebee metric described in Eq. (1). In Fig. 7, we plot the function \(G(r)=F(r)r^{2}/4M^{2}\) versus the radial coordinate \(r/M\). It is evident how the parameter \(X\) impacts the energy deposition rate at small radii, \(r<10~M\), while \(G(r)\) has a similar value to GR at larger radii, \(r\sim 20~M\). The function \(G(r)\) plays a pivotal role in computing the energy deposition rate (EDR) and, consequently, in determining the energy available for a GRB (Gamma-Ray Burst) explosion. We calculate the EDR within an infinitesimal angle \(d\theta\), considering a characteristic angle of \(10^{\circ}\) and a temperature of \(10~{\rm MeV}\), as outlined in [76]: \[\frac{dE_{0}}{dt}\simeq 4.41\times 10^{48}\left(\frac{\Delta\theta}{10^{\circ}}\right)^{2}\left(\frac{kT_{\rm eff}(R_{\rm in})}{10~{\rm MeV}}\right)^{9}\left(\frac{2M}{10~{\rm km}}\right)\int_{R_{\rm in}}^{R_{\rm out}}\frac{G(r)}{2M}dr~{\rm erg~s}^{-1}\enspace. \tag{51}\] Using Eq. (51), one obtains that the energy available for a GRB emission is \[\frac{dE_{0}^{GR}}{dt}\simeq 4.5\times 10^{49}~{\rm erg~s}^{-1}\enspace, \tag{52}\] while the maximum value obtained in the metric-affine bumblebee background is \[\max\left(\frac{dE_{0}^{B}}{dt}\right)\simeq 3.5\times 10^{50}\ {\rm erg\ s^{-1}}\ . \tag{53}\] Figure 7: Plot of \(G(r)\) vs \(r/M\) for different values of \(X\) as shown in the legend. Many recent time-dependent General Relativity simulations, which include pair-annihilation and its dynamical impact self-consistently during the evolution [81; 82; 83; 84], have shown that the process needs an enhancement of at least one order of magnitude to be competitive with the Blandford-Znajek (BZ) process. The usual energy extraction for the BZ process is, indeed, of the order \(\sim 6\times 10^{50}\ {\rm erg/s}\)[85]. It is hence relevant that the geometrical background provides an enhancement of the energy deposition of almost one order of magnitude with respect to GR, so that the \(\nu\bar{\nu}\) annihilation process might be important for the GRB emission. We finally point out that a simulation of the disk from NS-NS merging with assigned geometry is needed in order to give the exact energy deposition value for the processes of GRB emission. ## VI Conclusions In this paper, we have investigated Schwarzschild-like black holes within the framework of metric-affine bumblebee gravity. The metric-affine bumblebee gravity theory offers a unique perspective on gravitational interactions by introducing a vector field that couples to spacetime curvature, which can lead to Lorentz symmetry breaking at the Planck scale level. In conclusion, our investigation into the effects of metric-affine bumblebee gravity on various astrophysical phenomena has yielded several noteworthy findings.
We have demonstrated that the weak deflection angle is sensitive to the metric-affine bumblebee parameter \(X\), while the shadow of the Schwarzschild-like black hole remains unaffected by this parameter in the strong field regime. Through numerical analysis, we have visualized the behavior of the deflection angle in response to different parameters. By using representative values for the M87* supermassive black hole, we observed that time-like particles exhibit higher values of \(\hat{\alpha}\) as the impact parameter decreases. The bumblebee parameter \(X\) influences \(\hat{\alpha}\), particularly at larger impact parameters, highlighting its potential detectability. We have compared the finite distance effect correction equation with the approximated case. In most scenarios, these expressions yield similar results, except when the impact parameter approaches the distance to the observer. We delved into the specific intensity observed by a distant observer for an infalling accretion and found that increasing the value of \(X\) leads to a rise in intensity, peaking at the photon sphere and gradually decreasing thereafter. The impact of the screening parameter \(X\) on the greybody bound for a scalar field within a Schwarzschild-like black hole has also been explored. Notably, for different angular momentum quantum numbers (\(l=0\) and \(l=1\)), the behavior of the greybody bound varies with changing \(X\) values. Analyzing the behaviour of accretion disks around Schwarzschild-like black holes in this modified gravity scenario, we explore the disk neutrino energy deposition reaction \(\nu\bar{\nu}\to e^{+}e^{-}\). We have computed the energy deposition rate with a link to the GRBs. The release of enormous energy into \(e^{+}e^{-}\) pairs and their subsequent annihilation powers highly energetic photons. Using the idealized models of the accretion disk with \(T_{\rm eff}\sim r^{-1}\), we have found that the metric-affine bumblebee gravity enhances the energy deposition up to one order of magnitude, with respect to General Relativity, for values of the free parameter of the theory \(X\) such that \(X<0.4\). This enhancement is relevant for the gamma-ray burst emission because it is of the same order as the BZ process, and therefore the neutrino EDR can justify, or contribute as a relevant source to, the observed luminosity of ultra-long GRBs. In summary, our study has provided valuable insights into the intricate interplay between metric-affine bumblebee gravity and astrophysical phenomena, shedding light on the potential detectability of bumblebee parameters and their impact on observables such as deflection angles, intensity profiles, and greybody bounds. These findings contribute to our broader understanding of gravity in the context of modified theories and its implications for black hole astrophysics. ###### Acknowledgements. The work of G.L. and L.M. is supported by the Italian Istituto Nazionale di Fisica Nucleare (INFN) through the "QGSKY" project and by Ministero dell'Istruzione, Universita e Ricerca (MIUR). G.L., A. O. and R. P. would like to acknowledge networking support by the COST Action CA18108 - Quantum gravity phenomenology in the multi-messenger approach (QG-MM). A. O. would like to acknowledge networking support by the COST Action CA21106 - COSMIC WISPers in the Dark Universe: Theory, astrophysics and experiments (CosmicWISPers).
2303.17971
Rule Enforcing Through Ordering
In many real world situations, like minor traffic offenses in big cities, a central authority is tasked with periodically administering punishments to a large number of individuals. Common practice is to give each individual a chance to pay a smaller fine and be guaranteed to avoid the legal process with its probable, considerably larger punishment. However, thanks to the large number of offenders and a limited capacity of the central authority, the individual risk is typically small and a rational individual will not choose to pay the fine. Here we show that if the central authority processes the offenders in a publicly known order, it properly incentivizes the offenders to pay the fine. We show analytically and in realistic experiments that our mechanism promotes non-cooperation and incentivizes individuals to pay. Moreover, the same holds for an arbitrary coalition. We quantify the expected total payment the central authority receives, and show it increases considerably.
David Sychrovský, Sameer Desai, Martin Loebl
2023-03-31T11:14:59Z
http://arxiv.org/abs/2303.17971v2
# Promoting Non-Cooperation Through Ordering ###### Abstract. In many real world situations, like minor traffic offenses in big cities, a central authority is tasked with periodically administering punishments to a large number of individuals. Common practice is to give each individual a chance to pay a smaller _fine_ and be guaranteed to avoid the legal process with its probable, considerably larger punishment. However, thanks to the large number of offenders and a limited capacity of the central authority, the individual risk is typically small and a rational individual will _not_ choose to pay the fine. Here we show that if the central authority processes the offenders in a publicly known order, it properly incentivizes the offenders to pay the fine. We show analytically and in realistic experiments that our mechanism promotes non-cooperation and incentivizes individuals to pay. Moreover, the same holds for an arbitrary coalition. We quantify the expected total payment the central authority receives, and show it increases considerably. mechanism design; non-cooperation + Footnote †: journal: Information Systems ## 1. Introduction In this work, we study a special case of a classic dilemma: how to effectively enforce a rule in a large population with only a very small number of enforcing agents. This task is impossible if the large population cooperates and thus a critical aspect of any suggested mechanism is the promotion of non-cooperation. A well-known Count Dracula way is to make the punishment for breaking the rule extremely severe. We suggest an alternative mechanism, for a special case of the dilemma motivated by collecting fines for traffic violations. In many large cities, there is a huge number of traffic offences, highly exceeding the capacity of state employees assigned to manage them. The assigned state employees should primarily concentrate on serious and repetitive offenders. However, a large number of minor offences are still to be settled, which makes the former considerably harder. A common practice is that a smaller _fine_ is assigned in an almost automated way and if an offender settles this fine then the legal process does not start. Otherwise, the legal process should start, with a considerably larger cost for the offender. The offence is also forgotten after a certain _judiciary period_. However, thanks to the limited capacity of state employees, legal processes for non-repetitive minor traffic offenses are typically enforced in a small number of cases1. The individual risk is thus small and a large fraction of the offenders _choose_ to ignore the fine. In this paper, we propose a simple mechanism which properly incentivizes the offenders to pay the fine even under these conditions. Footnote 1: For instance, in the city of Prague considerably more than 100 000 such offenses are dismissed every year because the judiciary period expires. ### 1.1 Main Contribution In our proposed mechanism, the central authority processes the offenders in a given order. Each offender is aware of his position in this 'queue of offenders' and has the option of publicly donating money to a fund for traffic infrastructure or a charity predetermined by the central authority. If their total donations amount to at least the fine, they are used to settle the offence. After the judiciary period expires, or if the legal process is started, the fund retains the individual donation.
The central authority periodically sorts the offenders in ascending order of their average donation, and starts the legal process with those who paid the least on average. Compared to processing the offenders in random order, this mechanism increases the individual risk of some offenders. This incentivizes them to pay the fine, which in turn puts others in danger. We show both analytically and in realistic experiments that under the proposed mechanism, the strategic behaviour of the offenders is to engage with the mechanism, and quantify the expected revenue of the charity. Moreover, we show it is not beneficial for any group of offenders to ignore the mechanism and share the cost of those who enter the legal process. Finally, we study how the central authority can most efficiently use its limited capacity to maximize the revenue of the charity. ### 1.2 Related Work To the best of our knowledge, the field of non-cooperative mechanism design has not been studied extensively yet. Our approach is somewhat similar to that of (Brandrand, 2005), where the authors consider a variation of the elimination game which includes bids. Our model can also be viewed as a generalization of the stopping games (Brand, 2005), where participants choose a time to stop bidding and trade off their gain from outlasting other players for the cost accumulated over time in the game. In our case, the "prize" won by the lowest paying participant is the cost of entering the legal process. However, neither approach considered the ranking of players, which is at the core of our mechanism. ## 2. Problem Definition Informally, we model the interaction of agents as a game we call _Queue_. Queue consists of a finite sequence of _Round_, in which each agent can choose to pay; however, with some probability they forget and pay nothing. Those who paid at least the fine in total, or spent enough time in Queue, are removed. The rest are ordered according to the amount they paid on average. A fixed number of those at the start are then forced to pay a large penalty, and leave Queue. Let us now define the interaction formally, starting with how Round is realized. ### 2.1 Round: One Step in Queue Round is a parametric game \(\mathbb{O}(\mathcal{N})=\mathbb{O}(\mathcal{N},F,Q,T,k,p)\), where \(\mathcal{N}\) is an ordered subset of agents2, \(F\in\mathbb{N}\) is the fine, \(Q>F\) is the cost associated with entering the legal process, \(T\in\mathbb{N}\) is the judiciary period, i.e., the number of Round instances after which agents are removed, \(k\in\mathbb{N}\) is the number of agents forced to pay \(Q\) in each Round, \(p\in[0,1]\) is the probability of ignorance. Footnote 2: The agents are ordered according to their average payment in ascending order, i.e. those who paid the least on average are sorted to the front of \(\mathcal{N}\). Each \(a\in\mathcal{N}\) is characterized by a triplet \((n_{a},t_{a},m_{a})\) and his strategy \(\pi_{a}\). The triplet corresponds to his _observations_ -- his position \(n_{a}\) in \(\mathcal{N}\), the number \(t_{a}\) of past Round games he participated in, and his total individual payment \(m_{a}\) in the past Round games. Round proceeds in three phases: 1. Each agent \(a\in\mathcal{N}\), based on his observation, declares his strategy for this Round \(\pi_{a}\in\Lambda^{F+1}\), where \(\Lambda\) is the probability simplex. His payment \(\mu_{a}\) is then sampled from3 Footnote 3: This simulates that with probability \(p\), the agent forgot to act in this Round.
\[\mu_{a}\sim p\sigma^{0}+(1-p)\pi_{a}(n_{a},t_{a},m_{a}),\] (1) where \(\sigma^{v}\) is the pure strategy of paying \(v\). 2. Each agent's total payment and time are updated \[m_{a} \gets m_{a}+\mu_{a},\] (2) \[t_{a} \gets t_{a}+1,\] (3) and \(\mathcal{N}\) is sorted4 according to the ratio of current total payment and time \(m_{a}/t_{a}\). Footnote 4: We use stable sort, i.e. whenever there is a tie, the original order is preserved. 3. Some agents are removed from \(\mathcal{N}\), which is done in three sub-phases. We call such agents _terminal_ and denote the set of terminal agents in this Round as \(\mathcal{T}\). 1. All agents \(a\in\mathcal{N}\) with \(m_{a}\geq F\) are removed. 2. The first \(k\) agents in \(\mathcal{N}\) have their \(m_{a}\) increased by \(Q\) and are removed. 3. All agents \(a\in\mathcal{N}\) with \(t_{a}\geq T\) are removed. The result of each Round is the ordered set of agents \(\mathcal{N}\setminus\mathcal{T}\), and the set of terminal agents \(\mathcal{T}\). Only the terminal agents are assigned their final utility. **Definition 2.1** (Utility).: The utility of each agent \(a\in\mathcal{T}\) is the negative amount he paid \[u_{a}=-m_{a}. \tag{4}\] ### 2.2 Queue: A Game on Updating Sequences Formally, Queue is \(\mathbb{G}=\mathbb{G}(F,Q,T,k,p,x,x_{0},w)\), where \(F,Q,T,k\) and \(p\) have the same meaning as in Section 2.1, \(x\) is the number of entering agents after each Round, \(x_{0}\) is the initial size of \(\mathcal{N}\) and \(w\) is the horizon, i.e. the number of repetitions of Round. Queue aggregates Round in the following two simple phases, starting with \(\mathcal{N}^{1}\) s.t. \(|\mathcal{N}^{1}|=x_{0}\) and \(m_{a},t_{a}=0\) for each \(a\in\mathcal{N}^{1}\); we repeat them \(w\) times. 1. The agents in \(\mathcal{N}^{t}\) play Round and non-terminal agents proceed to the next iteration. \[\mathcal{N}^{t+1},\mathcal{T}^{t+1}\leftarrow\mathbb{O}(\mathcal{N}^{t}).\] (5) 2. \(x\) new agents enter the game \[\mathcal{N}^{t+1}\leftarrow\mathcal{N}^{t+1}\cup X,\] (6) where \(X\) is a set of agents with \(m_{a},t_{a}=0\), and \(|X|=x\). These new agents are sorted to the end of \(\mathcal{N}^{t+1}\). In the last Round, all agents terminate, \(\mathcal{T}^{w}\leftarrow\mathcal{T}^{w}\cup\mathcal{N}^{w}\). The new agents come from a universe \(U\). The strategy of all agents is then given as \(\pi=\times_{a\in U}\pi_{a}\). We denote the space of all such strategies as \(\Pi\). Each agent wants to choose a strategy \(\pi_{a}\) which maximizes his utility in \(\mathbb{G}\) given the strategies of the other agents \(\pi_{-a}\). A strategy profile \(\pi\in\Pi\) is an equilibrium if no agent can increase his utility. Formally, **Definition 2.2** (\(\epsilon\)-Equilibrium).: \(\pi\in\Pi\) is an \(\epsilon\)-equilibrium of \(\mathbb{G}\) if \(\forall\ \overline{\pi}\in\Pi\), \(\forall\ t\in\{1,\ldots,w\}\) and \(\forall a\in\mathcal{T}^{t}\), \[\mathbb{E}_{\pi}\left[u_{a}(\pi)\right]\geq\mathbb{E}_{(\overline{\pi}_{a}\pi_{-a})}\left[u_{a}(\overline{\pi}_{a},\pi_{-a})\right]-\epsilon. \tag{7}\] We note that the equilibrium always exists, which can be shown by a standard transformation to a normal form game. ### 2.3 Avalanche Effect Intuitively, every agent wants to pay as little as possible, while avoiding paying \(Q\). This translates to paying more than the others. However, if all agents adopt this reasoning, the only option to avoid paying \(Q\) is to pay \(F\).
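Before turning to the analysis, the Round dynamics of Section 2.1 can be made concrete with a minimal simulation sketch (hypothetical Python, not the authors' implementation): each agent forgets with probability \(p\) or pays according to its strategy, the queue is re-sorted by average payment (Python's stable sort mirrors footnote 4), and terminal agents are removed in the three sub-phases.

```python
import random

def play_round(agents, F, Q, T, k, p, rng=random):
    """One Round as in Section 2.1: sample payments, re-sort, remove terminal agents."""
    for a in agents:
        pay = 0 if rng.random() < p else a["strategy"](a)   # agent forgets w.p. p
        a["m"] += pay
        a["t"] += 1
    agents.sort(key=lambda a: a["m"] / a["t"])               # stable sort, ascending m_a/t_a
    # (a) agents who paid at least the fine are done
    remaining = [a for a in agents if a["m"] < F]
    terminal = [a for a in agents if a["m"] >= F]
    # (b) the first k remaining agents enter the legal process and pay Q
    for a in remaining[:k]:
        a["m"] += Q
    terminal += remaining[:k]
    remaining = remaining[k:]
    # (c) agents whose judiciary period expired are removed
    terminal += [a for a in remaining if a["t"] >= T]
    remaining = [a for a in remaining if a["t"] < T]
    return remaining, terminal

# Toy example: 100 agents who always try to pay the full fine F = 1.
agents = [{"m": 0, "t": 0, "strategy": lambda a: 1} for _ in range(100)]
remaining, terminal = play_round(agents, F=1, Q=10, T=3, k=2, p=0.1)
# Of the ~10 forgetful agents, k = 2 pay Q and the rest stay in the queue at risk.
print(len(remaining), len(terminal))
```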
We formally show this intuition in Section 3.1. Crucially, not all other agents can use this reasoning, thanks to the probability of ignorance. But as that vanishes, the agents should be incentivised to pay more. Similarly, if the number of entering agents increases, so should the total payment. We formally capture this in the _avalanche effect_. **Definition 2.3** (Avalanche effect).: We say that Queue exhibits the _avalanche effect_ if at least one of the following holds in equilibrium when changing \(p\) or \(x\): 1. The expected terminal payment of all agents is increasing as \(p\to 0^{+}\) \[\lim_{p\to 0^{+}}\frac{\mathrm{d}}{\mathrm{d}p}\sum_{a\in\mathcal{T}}m_{a}>0.\] (8) 2. The expected terminal payment of all agents decreases slower than \(1/x\) as \(x\rightarrow\infty\) \[\lim_{x\rightarrow\infty}\sum_{a\in\mathcal{T}}m_{a}=\infty.\] (9) ### 2.4 Division Problem In our model, the judiciary period is split into \(T\) equal time intervals and the offenders are sorted at the start of each interval. The central authority can process \(kT\) offenders over the judiciary period, and \(xT\) will enter the system. The central authority can influence the system in two ways. 1. it can choose how often the sorting takes place, and 2. it can virtually split the entering offenders into \(g\) groups of size \(x/g\), and process \(k/g\) offenders in each. The _Division problem_ is how to set \(T\) and \(g\) to maximize the expected revenue the central authority receives. We refer to the two cases as _Time-Division problem_ and _Group-Division problem_ respectively. ## 3. Analytic Solution As described in Section 1, the individual risk when the central authority processes the agents in random order is typically small, i.e. \(kQ/|\mathcal{N}|\ll F\). Each agent is also guaranteed to pay \(kQ/|\mathcal{N}|\) if everyone cooperates and shares the costs of those entering the legal process. Let us begin by showing that this is not the case in our proposed system. That is, no coalition can benefit from choosing to pay nothing and sharing the cost of those forced to pay \(Q\). In our setting, this is analogous to coalition proofness. Proposition 3.1.: _Let \(\mathcal{A}\) be a set of agents using strategy \(\pi_{a}=\sigma^{0}\ \forall a\in\mathcal{A}\), and sharing the cost, i.e. their utility becomes_ \[\tilde{u}_{a}=-\frac{1}{|\mathcal{A}|}\sum_{i=1}^{w}\sum_{a\in\mathcal{A}\cap\mathcal{N}}m_{a},\quad\forall a\in\mathcal{A}.\] _If \(\tilde{u}_{a}>0\), then \(\exists a^{\prime}\in\mathcal{A}\) s.t. \(a^{\prime}\) can deviate and increase his utility._ Proof.: We split the proof into two parts according to how much an individual needs to contribute. 1. \(0<\tilde{u}_{a}<Q\): In this situation, not all agents of \(\mathcal{A}\) were forced to pay \(Q\). Consider the agent \(a^{\prime}\in\mathcal{A}\) who terminated last. Then, since \(a^{\prime}\) paid zero, his original utility is zero and \(\tilde{u}_{a}>u_{a}\). Therefore, \(a^{\prime}\) would benefit from leaving \(\mathcal{A}\). 2. \(\tilde{u}_{a}=Q\): In this case, all agents were forced to enter the legal process. Any \(a\in\mathcal{A}\) would therefore benefit from paying the fine, since then \(u_{a}=F<Q=\tilde{u}_{a}\). While the existence of an analytic solution of Queue in general remains an open question, we can find it in certain special cases. ### 3.1 Active participants Let us first focus on a situation when no agent forgets to participate in Round, i.e. \(p=0\). Then it is easy to see that \(\pi_{a}=\sigma^{F}\) is the unique equilibrium.
Consider the first agent \(a\in\mathcal{N}\) in the first Round, who chose to pay \(\mu_{a}<F\). Then he is forced to pay \(Q\), resulting in utility \(u_{a}=-Q-\mu_{a}<-F\). Therefore, switching to paying \(F\) is beneficial and the strategy of paying \(\mu_{a}<F\) is not an equilibrium. This means all agents will pay \(F\) in the first Round, and the situation thus repeats in the following Round. ### \(w\)-Fines: Special Case of Queue Let us focus on the system without the option to donate a portion of the fine. Thus, after rescaling the currency, we can let \(F=1\), and there are only two pure strategies \(\sigma^{0},\sigma^{F}\) the agents can take. If now \(T=w\) and no agents are added after each Round (\(x=0\)), we call the game \(w\)_-Fines_. Definition 3.2 (\(w\)-Fines).: Let \(w\in\mathbb{N}\); then we refer to the reduced Queue \(\mathbb{F}(w,F,Q,k,p,x_{0})=\mathbb{G}(F,Q,w,k,p,0,x_{0},w)\) as \(w\)-Fines. We begin by showing a crucial property of \(w\)-Fines. Lemma 3.3 ().: _In the \(w\)-Fines, the expected payment of every \(a\in\mathcal{N}\) depends only on the actions of agents in front of \(a\)._ Proof.: If \(a\) pays zero, he remains in the Queue and is sorted in front of agents who were behind him. He is potentially forced to pay \(Q\), depending on the actions of agents in front of him. If he pays \(F=1\), he is removed. In either case, the actions of agents behind \(a\) have no impact on his payment. In each Round, \(a\in\mathcal{N}\) has \(n_{a}-1\) agents in front of him. Due to the probability of ignorance, even if all the agents decide to pay, \(a\) can estimate the probability that at most \(k-1\) will forget. If that happens, \(a\) will be forced to pay \(Q\) in this Round. Formally, Definition 3.4 ().: Let \(n\) be a positive integer. We denote by \(\alpha(p,n,k)\) the probability that in \(n-1\) independent coin tosses with the head probability \(p\), the number of heads is less than \(k\). Since \(\alpha\) will be important in the following discussion, we briefly mention some of its properties. Lemma 3.5 ().: _Let \(k<np\), then \(\alpha(p,n+1,k)\leq e^{-\frac{(np-k)^{2}}{2np}}\)._ Proof.: Let \(\xi_{i}\) denote the random variable such that \[\xi_{i}=\begin{cases}1&\text{w.p. $p$,}\\ 0&\text{otherwise}\end{cases},\] and \(\xi^{n}=\sum_{i=1}^{n}\xi_{i}\). Thus, \(\mathbb{E}[\xi_{i}]=p\) and \(\mathbb{E}[\xi^{n}]=np\). As per the Chernoff bounds, \(\mathbb{P}[\xi^{n}\leq(1-\delta)np]\leq e^{-\delta^{2}np/2}\), for all \(0<\delta<1\). Thus, choosing \(\delta=1-\frac{k}{np}\), \(\alpha(p,n+1,k)=\mathbb{P}[\xi^{n}\leq k]\leq e^{-\left(1-\frac{k}{np}\right)^{2}np/2}=e^{-\frac{(np-k)^{2}}{2np}}\). Proposition 3.6 ().: _If \(\alpha(p,n,k)\leq F/Q\leq\frac{1}{4}\) then \(np>k\). Moreover, for large enough \(n\), \(\alpha(p,n,k)\geq\alpha(p,2n,2k)\)._ Proof.: For \(\gamma\sim B(n,p)\), if \(p<1-\frac{1}{n}\), then \(\frac{1}{4}<\Pr(\gamma\leq np)\)[4]. Therefore, when \(\frac{1}{4}\geq\frac{F}{Q}\), then \(k<np\). Further, we note that Lemma 3.5 is tight for large enough \(np\). Hence, it suffices to prove the proposition for the upper bound \(e^{-\frac{(np-k)^{2}}{2np}}\), for which the statement clearly holds. Finally, we formulate a conjecture that, if true, would allow us to extend the analytic study of the division problem presented below. Conjecture 3.7 ().: _For \(pn>k\), \(\alpha(p,n,k)\geq\alpha(p,n+n/p,2k)\)._ #### 3.2.1. Single Sorting Instance We start by analysing the 1-Fines game, which is equivalent to one Round.
In this case, when an agent is sufficiently far from the start of \(\mathcal{N}\), it is beneficial to pay nothing, while near the start it is beneficial to pay and avoid paying \(Q\). The boundary between the two will prove important. Definition 3.8 (Critical strategy).: Let \(r\in\mathbb{N}\) be the smallest integer such that \(\alpha(p,r,k)Q\leq F\). Then \(r\) is called the _critical position_. The _critical strategy_ is \[\pi_{a}^{\mathrm{crit}}(n_{a},t_{a},m_{a})=\begin{cases}\sigma^{F}&\text{if } \alpha(p,n_{a},k)Q>F,\\ \sigma^{0}&\text{otherwise}.\end{cases} \tag{10}\] We note that \(t_{a}=1\) and \(m_{a}=0\ \forall a\in\mathcal{N}\) for 1-Fines. We will show that \(\pi_{a}^{\mathrm{crit}}\) is the only equilibrium of the 1-Fines. First, we define \(\alpha^{\mathrm{crit}}\) as the probability with which an agent is forced to pay \(Q\) when all agents follow \(\pi_{a}^{\mathrm{crit}}\). **Proposition 3.9**.: _Let \(r\) be the critical position. Then, if all agents follow \(\pi_{a}^{\mathrm{crit}}\), every \(a\in\mathcal{N}\) is forced to pay \(Q\) w.p._ \[\alpha^{\mathrm{crit}}(p,r,n_{a},k)=\begin{cases}\alpha(p,n_{a},k)&\mathrm{if}\ n_{a}<r,\\ \alpha(p,r,k-(n_{a}-r))&\mathrm{otherwise}.\end{cases} \tag{11}\] Proof.: Fix \(a\in\mathcal{N}\). When \(\alpha(p,n_{a},k)>F/Q\) (i.e. \(n_{a}<r\)), the agents in front of \(a\) pay \(F\), and thus \(a\) avoids paying \(Q\) only if enough of them forget. If \(n_{a}\geq r\), then \(n_{a}-r\) agents choose not to pay. Therefore, \(a\) only needs \(k-(n_{a}-r)\) of the \(r\) agents to forget. Observe that \(\alpha^{\mathrm{crit}}\leq\alpha\), since some agents may choose to pay zero. Also, by Definition 3.4, \(\alpha^{\mathrm{crit}}=0\) for \(n_{a}>r+k\). **Proposition 3.10**.: _Let \(r\) be the critical position and let all agents follow \(\pi_{a}^{\mathrm{crit}}\), except for \(a\in\mathcal{N}\), whose strategy is \(\pi_{a}=(q,1-q)\). Then the expected payment of \(a\) is_ \[(1-p-q)F+(p+q)\alpha^{\mathrm{crit}}(p,r,n_{a},k)Q. \tag{12}\] Proof.: By definition of \(\pi_{a}\), \(a\) pays \(F\) w.p. \(1-p-q\), provided he does not forget. If he forgets (w.p. \(p\)) or pays zero (w.p. \(q\)), he is forced to pay \(Q\) w.p. \(\alpha^{\mathrm{crit}}(p,r,n_{a},k)\). \(\Box\) **Corollary 3.11**.: _Let \(r\) be the critical position and let all agents follow \(\pi_{a}^{\mathrm{crit}}\). Then the expected payment of \(a\in\mathcal{N}\) is_ \[G_{a}(p,n_{a},k)=\begin{cases}(1-p)F+p\alpha^{\mathrm{crit}}(p,r,n_{a},k)Q,&\mathrm{if}\ n_{a}<r,\\ \alpha^{\mathrm{crit}}(p,r,n_{a},k)Q,&\mathrm{otherwise}.\end{cases} \tag{13}\] **Theorem 3.12**.: _The strategy \(\pi_{a}^{\mathrm{crit}}\) is the unique equilibrium of \(1\)-Fines._ Proof.: Consider \(a\in\mathcal{N}\) in the sorted order. We will show by induction that \(\pi_{a}^{\mathrm{crit}}\) is the unique best response to the strategies of the agents in front of a given agent. For the first agent, \(\pi_{a}^{\mathrm{crit}}\) clearly maximizes the utility \(-G_{a}\) of \(a\). In the induction step, we assume all agents \(a^{\prime}\) in front of \(a\) follow \(\pi_{a^{\prime}}^{\mathrm{crit}}\). Following Lemma 3.3, the actions of the others can be arbitrary. Observe that \(\pi_{a}^{\mathrm{crit}}\) minimizes the expected payment (12). Thus \(a\) wants to follow \(\pi_{a}^{\mathrm{crit}}\). #### 3.2.2. Two Sorting Instances In this section we present an analytic solution of the \(2\)-Fines game. We start by defining an extension of \(\pi_{a}^{\mathrm{crit}}\) and showing that no agent can benefit by deviating from it. Later, we discuss some properties of this analytic solution. In \(2\)-Fines, no agents are added after sorting.
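The quantities from the \(1\)-Fines analysis above reappear in the two-Round case, so a short numerical sketch of them may be useful. It assumes SciPy's binomial distribution for \(\alpha\) (Definition 3.4) and evaluates the critical position \(r\) (Definition 3.8), \(\alpha^{\mathrm{crit}}\) (Proposition 3.9), and the expected payment \(G_{a}\) (Corollary 3.11); the finite search bound in `critical_position` is an implementation convenience.

```python
from scipy.stats import binom

def alpha(p, n, k):
    """Definition 3.4: probability that fewer than k of n-1 coin tosses (head prob. p) are heads."""
    return binom.cdf(k - 1, n - 1, p)

def critical_position(p, k, F, Q, n_max=10_000):
    """Definition 3.8: smallest r such that alpha(p, r, k) * Q <= F."""
    for r in range(1, n_max + 1):
        if alpha(p, r, k) * Q <= F:
            return r
    return None  # no critical position found below the search bound

def alpha_crit(p, r, n_a, k):
    """Proposition 3.9: probability of being forced to pay Q under the critical strategy."""
    if n_a < r:
        return alpha(p, n_a, k)
    return alpha(p, r, k - (n_a - r)) if n_a <= r + k else 0.0

def expected_payment(p, r, n_a, k, F, Q):
    """Corollary 3.11: expected payment at position n_a when all agents follow the critical strategy."""
    ac = alpha_crit(p, r, n_a, k)
    return (1 - p) * F + p * ac * Q if n_a < r else ac * Q
```

Such a routine makes it easy to check how quickly the critical position grows as \(p\) decreases for fixed \(F,Q,k\), which is the mechanism behind the avalanche effect established in Theorem 3.16 below.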
After the first Round the game is thus identical to \(1\)-Fines. This recursive relation motivates us to introduce the analogues of the variables used in the previous section recursively. We use upper indices to denote the game length \(w\) and the Round number, i.e. in the previous section we would write \(r^{1,1}\) for the critical position \(r\). We extend the critical strategy of Definition 3.8 to pay \(F\) if \(a\)'s position is in front of some critical position \(r^{2,t}\), defined below. Note that since the second Round corresponds to \(1\)-Fines, \(r^{2,2}=r^{1,1}=r\). **Definition 3.13** (\(2\)-Critical strategy).: The _\(2\)-critical strategy_ is \[\pi_{a}^{\mathrm{crit},2}(n_{a},t_{a},m_{a})=\begin{cases}\sigma^{F}&\mathrm{if}\ n_{a}<r^{2,t_{a}},\\ \sigma^{0}&\mathrm{otherwise}.\end{cases} \tag{14}\] Let all agents follow \(\pi_{a}^{\mathrm{crit},2}\). Then, if \(a\in\mathcal{N}^{1}\) with \(n_{a}<r^{2,1}\) does not terminate in the first Round, his expected payment in the second Round is \[\mathcal{G}_{a}^{2}(p,n_{a},k)=\mathbb{E}_{\gamma\sim B(n_{a}-1,1-p)}\left[G_{a}(p,n_{a}-\gamma-k,k)\right], \tag{15}\] where \(G_{a}\) is the expected payment given in Corollary 3.11. In words, since all agents in front of \(a\) want to pay \(F\), \(a\)'s position decreases by \(\gamma+k\), where \(\gamma\sim B(n_{a}-1,1-p)\). At the new position, he is expected to pay \(G_{a}\). Similarly to Definition 3.8, we define the critical position in the first Round as the smallest \(r^{2,1}\in\mathbb{N}\) such that \(\alpha(p,r^{2,1},k)Q+(1-\alpha(p,r^{2,1},k))\mathcal{G}_{a}^{2}(p,r^{2,1},k)\leq F\). In words, assume all agents in front of \(a\) want to pay \(F\). In the first Round, if \(a\) pays zero he risks paying \(Q\) w.p. \(\alpha\) and the expected payment in the second Round w.p. \(1-\alpha\). The critical position \(r^{2,1}\) is the smallest position \(n_{a}\) at which, assuming all agents in front of that position try to pay \(F\), it is beneficial to pay zero. **Lemma 3.14**.: _Let \(r^{2,t}\) be the critical position in Round \(t\in\{1,2\}\). Then \(r^{2,1}\geq r^{2,2}+k\)._ Proof.: By definition, \(r^{2,1}\) is the smallest such that \(\alpha(p,r^{2,1},k)Q+(1-\alpha(p,r^{2,1},k))\mathcal{G}_{a}^{2}(p,r^{2,1},k)\leq F\). For a contradiction, we assume that \(r^{2,1}<r^{2,2}+k\). It suffices to show that \(\mathcal{G}_{a}^{2}(p,r^{2,1},k)>F\), since this inequality along with \(Q>F\) violates the defining property of \(r^{2,1}\). If \(r^{2,1}-k<r^{2,2}=r^{1,1}\) then for each \(\gamma\geq 0\), \[G_{a}(p,r^{2,1}-\gamma-k,k)=(1-p)F+p\alpha(p,r^{2,1}-\gamma-k,k)Q,\] see Corollary 3.11 and Proposition 3.9. Moreover, by the definition of \(r^{1,1}=r\), \[\alpha(p,r^{2,1}-\gamma-k,k)Q>F.\] Hence for each \(\gamma\geq 0\), \[G_{a}(p,r^{2,1}-\gamma-k,k)>F\] and thus \(\mathcal{G}_{a}^{2}(p,r^{2,1},k)>F\). We are now ready to show the main result of this section. **Theorem 3.15** (Equilibrium of \(2\)-Fines).: \(\pi_{a}^{\mathrm{crit},2}\) _is the unique equilibrium of \(2\)-Fines._ Proof.: The second Round, corresponding to \(1\)-Fines, has a unique equilibrium (10). In the first Round, we can use a modification of the proof of Theorem 3.12. Consider now agents in the sorted order. Using Lemma 3.3, we can again use induction over agents. For each \(a\in\mathcal{N}^{1}\) let his strategy be \(\pi_{a}=(q,1-q)\) and let all agents in front of him follow \(\pi_{a}^{\mathrm{crit},1}\).
Then his expected payment in a Round is \[G_{a}^{1}(p,n_{a},k)=(1-p-q)F+(p+q)\left(\alpha^{\mathrm{crit}}Q+(1-\alpha^{ \mathrm{crit}})\mathcal{G}_{a}^{2}\right), \tag{16}\] where we drop the arguments from \(\alpha^{\mathrm{crit}}(p,r^{2,1},n_{a},k)\) for brevity. This is because w.p. \(1-p-q\) be pays \(F\) and leaves. Otherwise, since all agents in front of him follow \(\pi_{a}^{\mathrm{crit},1}\), he is forced to pay \(Q\) w.p. \(\alpha^{\mathrm{crit}}\). The remaining option is that he proceeds to the next Round, where his expected payment is \(\mathcal{G}_{a}^{2}\). Strategy \(\pi_{a}^{\mathrm{crit},1}\) is chosen to minimize \(a\)'s expected payment (16), since \(\alpha^{\mathrm{crit}}\leq\alpha\). Therefore, \(a\) will follow it even in the first Round. **Theorem 3.16**.: _The equilibrium strategies of both \(1\)-Fines and \(2\)-Fines exhibit the avalanche effect._ Proof.: Since \(\lim_{p\to 0^{+}}\alpha(p,n,k)=1\) and \(Q>F\), the critical position in the last Round \(r^{1,1},r^{2,2}\to\infty\). Using Lemma 3.14, the equilibrium strategies of both \(t\)-Fines satisfy \(\pi_{a}^{\text{crit,}t}\to\sigma^{F}\ \forall t=1,2\). Thus, \(\pi_{a}^{\text{crit,}t}\) satisfies Definition 2.3. In this simplified model, decreasing the probability of ignorance virtually increases the number of state employees assigned to processing the fines. This allows the central authority to increase the total payment through advertising, rather than hiring additional employees, which may be much cheaper. We show in Section 4 that these results translate well to a more general case where non-zero number of agents enter the system in each Round. #### 3.2.3. Division problem To give a partial answer to the Division problem in this setting, we will compare the total expected payment of \(2\)-Fines with \(k\), and \(1\)-Fines with \(2k\). **Proposition 3.17**.: _Let \(r^{2,2}\) be the critical position in the second Round of \(2\)-Fines, and let all players follow \(\pi_{a}^{\text{crit,}2}\). Then the total expected payment is at least_ \[2F(1-p)(r^{2,2}-1)+2kQ. \tag{17}\] Proof.: In the first Round, \((1-p)(r^{2,1}-1)\) agents are expected to pay \(F\), and \(k\) are forced to pay \(Q\). In the second, the situation is analogous, so the total expected payment is \[F(1-p)(r^{2,1}+r^{2,2}-2)+2kQ.\] The statement follows from Lemma 3.14. **Theorem 3.18**.: _Let \(Q\gg F\). Then the equilibrium strategy of \(\mathbb{F}(2,F,Q,k,p,x_{0})\) achieves a higher total payment than the equilibrium of \(\mathbb{F}(1,F,Q,2k,p,x_{0})\) in expectation._ Proof.: Let us denote the critical position in \(1\)-Fines with \(2k\) by \(r^{1,1}(2k)\) and in \(2\)-Fines with \(k\) by \(r^{2,1}(k),r^{2,2}(k)\) respectively. In the former, \(r^{1,1}(2k)-1\) agents will pay \(F\) w.p. \((1-p)\), and \(2k\) are forced to pay \(Q\). Therefore, the total expected payment is \[F(1-p)(r^{1,1}(2k)-1)+2kQ. \tag{18}\] By comparing this to Eq. (17), all we need to show is that \(r^{1,1}(2k)<2r^{2,2}(k)\). By Definition 3.8 this is equivalent to \(\alpha(p,2n,2k)<\alpha(p,n,k)\). Since \(Q\gg F\), this is true by Proposition 3.6. **Question.** Is \(n_{1}(2k)<n_{1}(K)+n_{1}(K)/p\)? ## 4. Experiments We investigate two approaches based on how the agents choose their payments. In Section 4.1, we define a simple strategy based on how the agent's position changes over the course of the Queue. In Section 4.2, we use reinforcement learning to obtain a strategy which approximates equilibrium. 
In both cases we simplify the model by assuming the function \(\pi_{a}\) is the same for all agents. ### Basic Rational Strategy To model the behaviour of real decision makers, we introduce the _basic rational strategy_ (BRS). Informally, each agent keeps track of a quantity he is willing to pay in each Round. If, based on his shift in the Queue since the last Round, he determines he will reach the beginning before \(T\) steps, his willingness to pay increases. Formally, **Definition 4.1** (basic rational strategy).: Let \(a\in\mathcal{N}\), let \((n_{a}^{\prime},t_{a}^{\prime},m_{a}^{\prime})\) be the observation of \(a\) in the previous Round, and \((n_{a},t_{a},m_{a})\) his current observation. We call \(\omega_{a}\) the willingness to pay of \(a\). In the first Round \(a\) participates in, i.e. when \(t_{a}=0\), his willingness to pay is \(\omega_{a}=0\). In subsequent Rounds, the willingness to pay is updated before declaring \(\pi_{a}\) according to \[\omega_{a}\leftarrow\begin{cases}\min(F-m_{a},\omega_{a}+1),&n_{a}<(n_{a}^{\prime}-n_{a})(T-t_{a}),\\ \max(0,\omega_{a}-1),&\text{otherwise}.\end{cases} \tag{19}\] The strategy of \(a\) is to pay \(\omega_{a}\), i.e. \(\pi_{a}=\sigma^{\omega_{a}}\). Note that this is a generalization of the approach introduced in Section 2.1, as \(\pi_{a}\) is not a function of only the observation in the current Round, but also depends on history. This makes this strategy non-Markovian. As such, Definition 2.2 does not directly apply. However, in our experiments we simply assess the effect of agents using BRS, and make no claims regarding its optimality. ### Reinforcement Learning In order to approximate an equilibrium of Queue, we employ an iterative algorithm. In each iteration, the algorithm approximates \(\overline{\pi}_{a}\) such that \[\overline{\pi}_{a}\in\operatorname{argmax}\mathbb{E}_{(\overline{\pi}_{a},\pi_{-a})}\left[u_{a}(\overline{\pi}_{a},\pi_{-a})\right]. \tag{20}\] In words, we find \(\overline{\pi}_{a}\) such that it maximizes the utility of \(a\), assuming \(\mathcal{N}\setminus\{a\}\) follow \(\pi\). We denote as \(\tau\) the iteration of the learning algorithm and \(\pi^{\tau}\) the strategy the algorithm approximates the best-response against in iteration \(\tau\). Figure 1. Evolution of NashConv during training, averaged over ten random seeds. We use PPO (PPO, 2017) to find \(\overline{\pi}\), utilizing trajectories of all terminal agents for the update. For details on our implementation, see Appendix A. This approach is not guaranteed to converge in general, but if it does converge, the resulting strategy is an equilibrium (Bradley et al., 2017). A similar approach was successfully used before (Bradley et al., 2017). #### 4.2.1. NashConv In order to quantify the quality of the learned solution, we adapt the notion of NashConv (Kolmogorov, 1954). NashConv measures the difference between the utility agents are expected to receive under the approximate best-response \(\pi^{\tau+1}\) and under \(\pi^{\tau}\). We approximate the former by having a fraction of agents \(\rho\) follow \(\pi^{\tau+1}\) while the rest follows \(\pi^{\tau}\). Formally, Definition 4.2 (NashConv).: Let each agent added to Queue follow \(\pi^{\tau+1}\) w.p. \(\rho\) and \(\pi^{\tau}\) otherwise.
Let \(\overline{\mathcal{N}}\) be the set of agents following \(\pi^{\tau+1}\), and let their expected utility be \[\mathcal{E}\mathcal{U}(\rho,\pi^{\tau+1},\pi^{\tau})=\mathbb{E}_{\left(\pi^{\tau+1}_{\overline{\mathcal{N}}},\,\pi^{\tau}_{-\overline{\mathcal{N}}}\right)}\left[u_{a}(\pi^{\tau+1}_{a},\pi^{\tau}_{-a})\,|\,a\in\overline{\mathcal{N}}\right]\,.\] Then \[\mathrm{NashConv}^{\tau}(\rho)=\mathcal{E}\mathcal{U}(\rho,\pi^{\tau+1},\pi^{\tau})-\mathbb{E}_{\pi^{\tau}}\left[u_{a}(\pi^{\tau})\right]\,. \tag{21}\] NashConv and \(\epsilon\)-equilibrium are closely connected. If \(\rho\) is small enough such that \(|\overline{\mathcal{N}}|\ll|\mathcal{N}|\), then \(\mathrm{NashConv}\approx\epsilon\). In Figure 1 we present a representative example of the evolution of NashConv during learning. We averaged the results over ten random seeds, and also show the standard error. The results suggest that, although there is a considerable amount of noise, the algorithm was able to reach a sufficiently close approximation of the equilibrium. Moreover, we verified this trend translates to other experiments presented below. ### Results In this section, we numerically demonstrate the Avalanche effect and the Division problem. Specifically, we show the total expected revenue, which is given as \[\mathbb{E}_{\pi}\left[\sum_{a\in\mathcal{T}}m_{a}\right]\,. \tag{22}\] Unless stated otherwise, we use \(F=T=4\), \(Q=6\), \(x=x_{0}=32\), \(k=2\) and \(p=1/2\) in all our experiments. #### 4.3.1. Avalanche Effect In Figure 2 we show the total expected payment as a function of the probability of ignorance \(p\), and the number of entering agents \(x\). The results suggest that the Queue exhibits the Avalanche effect in a general setting. In fact, it exhibits both properties of Definition 2.3. Interestingly, the learned solution achieves a considerably lower total payment compared to BRS. #### 4.3.2. Division problem In this section we numerically study the Division problem introduced in Section 2.4. Results for both the Time- and Group-Division problem are presented in Figure 3. For the Time-Division problem, BRS seems to drastically overpay compared to the learned strategy if the sorting is frequent, i.e. \(T\) is large. On the other hand, when \(T\) is small the willingness to pay doesn't increase. This leads to paying only \(kQ=48\) for \(T=1\), while the learned strategy prefers to pay more. When the game is sorted more often, the learned strategy seems to favor lower total payments. In the Group-Division scenario, both BRS and the learned strategy pay less in a larger system. Splitting the game into several smaller ones thus increases the total payment of the offenders. This is in agreement with the analytic solution presented in Section 3.2.2, suggesting the incoming agents don't impact Queue much. ## 5. Conclusion In this work, we suggest a simple mechanism for collecting fines for traffic violations in large cities by a small number of administrators. We show analytically and in realistic experiments that this simple mechanism exhibits the Avalanche effect and thus supports non-cooperation of offenders. We quantify the collected fines in expectation. Finally, we present some initial results towards understanding the effective use of the administrators, i.e., the Division problem. Future work: Further study of the Division problem, in particular a possible strengthening of Lemma 3.14 and a proof of Conjecture 3.7, is our work in progress.
We see a limitation of our numerical approach in that we limit ourselves to scenarios where all agents share the same strategy \(\pi_{a}\). We would like to improve on our results by having each agent follow a leader, and training each leader separately. In this framework, we could also access the exploitability of BRS, by having the learned agents in the same system. ## Appendix A Learning Algorithm The shared strategy \(\pi_{a}\) is represented by a neural network and trained from trajectories of all terminal agents. When selecting the strategy for a Round, we mask all actions which would lead to \(m_{a}+\mu_{a}>F\). This makes the agents unable to overpay the fine \(F\). We use fully-connected networks for both the actor and the critic. Both accept the scaled observation of \(a\) in Round, i.e. \((n_{a},t_{a},m_{a})\). The actor network has two hidden layers with four hidden units, and the critic has three hidden layers with 32 units each, all using the ReLU activation function. The rest of the hyperparameters are given in Table 1. \begin{table} \begin{tabular}{|c|c|c|} \hline Parameter & Value & Description \\ \hline \hline \(\varepsilon\) & \(0.05\) & Policy update clipping \\ \hline \(\gamma\) & \(1\) & Reward discounting \\ \hline \(\lambda\) & \(0.95\) & Advantage decay factor \\ \hline \(N_{\text{train}}\) & \(16\) & Number of training updates per cycle \\ \hline \(N_{\text{epochs}}\) & \(512\) & Number of training epochs \\ \hline \(N_{\text{train}}\) & \(10^{4}\) & Train buffer size \\ \hline \(\alpha_{\text{actor}}\) & \(3\cdot 10^{-4}\) & Actor learning rate \\ \hline \(\alpha_{\text{critic}}\) & \(10^{-3}\) & Critic learning rate \\ \hline \(c_{\text{H}}\) & \(10^{-3}\) & Entropy regularization weight \\ \hline \(\overline{c}\) & \(0.1\) & Gradient norm clipping \\ \hline \end{tabular} \end{table} Table 1. Hyperparameters of the learning algorithm. Figure 3: Expected total payment of terminal agents for varying number of sortings \(T\) (left) and number of splits \(g\) (right). The figures investigate the Division problem defined in Section 2.4. Figure 2: Expected total payment of terminal agents for varying probability of ignorance \(p\) (left) and number of incoming offenders \(x\) (right). The figures demonstrate the Avalanche effect defined in Section 2.3. ## Acknowledgments This work has been supported by the E-POKUTY TACR project no. TL05000450. Computational resources were supplied by the project e-Infrastruktura CZ (e-INFRA CZ LM2018140) supported by the Ministry of Education, Youth and Sports of the Czech Republic.
2309.11383
Shadows of a generic class of spherically symmetric, static spacetimes
We explore the characteristics of shadows for a general class of spherically symmetric, static spacetimes, which may arise in general relativity or in modified theories of gravity. The chosen line element involves a sum (with constant but different coefficients) of integer powers of $\frac{1}{\text{r}}$ in $\text{g}_\text{tt}$ and $\text{g}_\text{rr}$, in the Schwarzschild gauge. We begin our discussion by motivating the line element through a study of the energy conditions (null and weak) and the extent to which they are satisfied/violated for diverse choices of the parameters appearing in the metric functions. Subsequently, we construct the circular shadows and analyse the dependence of the shadow radius on the metric parameters. We find that with specific choices of the metric parameters (within the ranges allowed by the energy conditions) one can, in principle, obtain values that conform with recent observations on shadows, as available in the literature. We also mention where such metrics may arise (i.e., in which theory of gravity and the physical scenario therein), thereby proposing that the observed shadows may be representative signatures of different theoretical contexts.
Md. Golam Mafuz, Rishank Diwan, Soumya Jana, Sayan Kar
2023-09-20T15:10:02Z
http://arxiv.org/abs/2309.11383v3
# Shadows of a generic class of spherically symmetric, static spacetimes ###### Abstract We explore the characteristics of shadows for a general class of spherically symmetric, static spacetimes, which may arise in general relativity or in modified theories of gravity. The chosen line element involves a sum (with constant but different coefficients) of integer powers of \(\frac{1}{r}\) in \(\mathrm{g_{tt}}\) and \(\mathrm{g_{rr}}\), in the Schwarzschild gauge. We begin our discussion by motivating the line element through a study of the energy conditions (null and weak) and the extent to which they are satisfied/violated for diverse choices of the parameters appearing in the metric functions. Subsequently, we construct the circular shadows and analyse the dependence of the shadow radius on the metric parameters. We find that with specific choices of the metric parameters (within the ranges allowed by the energy conditions) one can, in principle, obtain values that conform with recent observations on shadows, as available in the literature. We also mention where such metrics may arise (i.e., in which theory of gravity and the physical scenario therein), thereby proposing that the observed shadows may be representative signatures of different theoretical contexts.
2306.00184
Data-scarce surrogate modeling of shock-induced pore collapse process
Understanding the mechanisms of shock-induced pore collapse is of great interest in various disciplines in sciences and engineering, including materials science, biological sciences, and geophysics. However, numerical modeling of the complex pore collapse processes can be costly. To this end, a strong need exists to develop surrogate models for generating economic predictions of pore collapse processes. In this work, we study the use of a data-driven reduced order model, namely dynamic mode decomposition, and a deep generative model, namely conditional generative adversarial networks, to resemble the numerical simulations of the pore collapse process at representative training shock pressures. Since the simulations are expensive, the training data are scarce, which makes training an accurate surrogate model challenging. To overcome the difficulties posed by the complex physics phenomena, we make several crucial treatments to the plain original form of the methods to increase the capability of approximating and predicting the dynamics. In particular, physics information is used as indicators or conditional inputs to guide the prediction. In realizing these methods, the training of each dynamic mode composition model takes only around 30 seconds on CPU. In contrast, training a generative adversarial network model takes 8 hours on GPU. Moreover, using dynamic mode decomposition, the final-time relative error is around 0.3% in the reproductive cases. We also demonstrate the predictive power of the methods at unseen testing shock pressures, where the error ranges from 1.3% to 5% in the interpolatory cases and 8% to 9% in extrapolatory cases.
Siu Wun Cheung, Youngsoo Choi, H. Keo Springer, Teeratorn Kadeethum
2023-05-31T21:01:59Z
http://arxiv.org/abs/2306.00184v2
# Data-scarce surrogate modeling of shock-induced pore collapse process ###### Abstract Understanding the mechanisms of shock-induced pore collapse is of great interest in various disciplines in sciences and engineering, including materials science, biological sciences, and geophysics. However, numerical modeling of the complex pore collapse processes can be costly. To this end, a strong need exists to develop surrogate models for generating economic predictions of pore collapse processes. In this work, we study the use of a data-driven reduced order model, namely dynamic mode decomposition, and a deep generative model, namely conditional generative adversarial networks, to resemble the numerical simulations of the pore collapse process at representative training shock pressures. Since the simulations are expensive, the training data are scarce, which makes training an accurate surrogate model challenging. To overcome the difficulties posed by the complex physics phenomena, we make several crucial treatments to the plain original form of the methods to increase the capability of approximating and predicting the dynamics. In particular, physics information is used as indicators or conditional inputs to guide the prediction. In realizing these methods, the training of each dynamic mode composition model takes only around 30 seconds on CPU. In contrast, training a generative adversarial network model takes 8 hours on GPU. Moreover, using dynamic mode decomposition, the final-time relative error is around 0.3% in the reproductive cases. We also demonstrate the predictive power of the methods at unseen testing shock pressures, where the error ranges from 1.3% to 5% in the interpolatory cases and 8% to 9% in extrapolatory cases. ## 1 Introduction Shock-induced pore collapse is a phenomenon that occurs when a shock wave passes through a porous material, causing the pores to collapse or deform. Figure 1 illustrates a shock-induced pore collapse process. At first, the shock approaches and travels through the pore. The pore eventually deforms and develops into a high-temperature profile after the interaction with the shock. This phenomenon has been observed and studied in a variety of materials, including viscoelastic materials [1], nanoporous metals [2], sedimentary rocks [3], biological cells [4], and polymers [5]. The collapse of pores can have a significant impact on the mechanical properties of the material, including its strength, stiffness, and ductility. For example, in metals, shock-induced pore collapse can lead to a reduction in ductility and toughness, which can make the material more prone to brittle failure. In geological materials, pore collapse can affect the permeability and porosity of the material, which can have implications for groundwater flow and oil recovery. Understanding the mechanisms of shock-induced pore collapse is therefore of great interest in various disciplines in sciences and engineering, including materials science, biological sciences and geophysics. However, accurately analyzing the pore collapse dynamics is challenging due to the complex and nonlinear nature of the deformation process. Traditional analytical models, which rely on simplified assumptions about Figure 1: Schematic diagram for illustration of shock-induced pore collapse process. At first, the shock approaches and travels through the pore. The pore eventually deforms and develops into a high-temperature profile after the interaction with the shock. 
the material properties and pore geometry, often fail to capture the true behavior of the system. Numerical methods is a powerful alternative to obtain approximate solutions through computer simulation in this scenario. For instance, the pore collapse processes can be accurately simulated by the multi-physics hydrocode, ALE3D [6]. However, a single simulation takes up to 1 week on 1024 cores. It is therefore desirable to develop efficient techniques for resembling the dynamics in these computationally expensive simulations and predicting the dynamics in unseen generic shock pressures. Obtaining computationally economical prediction of complex physics phenomena remains a demanding and challenging task in many applications in engineering and science. In recent years, numerous research efforts have been devoted to develop surrogate models, which work as simplified representation of the underlying physical process and reduce the computational cost of simulating or analyzing the original system. One important class of these surrogate models is the projection-based reduced order models (ROMs), which aims to reduce the dimensionality by projecting high-fidelity physics-based models onto low-dimensional structures, which are constructed from compression of the representative snapshot solution data. The data compression techniques include linear approaches such as proper orthogonal decomposition (POD) [7], balanced truncation [8], and reduced basis method [9], or nonlinear compression approaches such as autoencoders (AE) [10, 11, 12]. Projection-based ROMs are intrusive in the sense that involve incorporating the reduced solution representation into the governing equations, physics laws, and numerical discretization methods, such as finite element, finite volume, and finite difference methods. As a result, these approaches are data-driven but also constrained by physics, requiring less data to achieve the same level of accuracy. Linear subspace ROMs had been applied to different applications with great success, including nonlinear diffusion equations [13, 14], Burgers equation and Euler equations in small-scale [15, 16, 17], convection-diffusion equations [18, 19], Navier-Stokes equations [20, 21], Lagrangian hydrodynamics [22, 23]. porous media flow [24, 25], reservoir simulations [26, 27], computational electro-cardiology [28], shallow water equations [29, 30], Boltzmann transport problems [31], wave equations [32, 33, 34], computing electromyography [35], spatio-temporal dynamics of a predator-prey system [36], acoustic wave-driven microfluidic biochips [37], rocket nozzle shape design [38], flutter avoidance wing shape optimization [39], topology optimization of wind turbine blades [40], and lattice structure design [41, 42]. Survey papers for the projection-based ROMs can be found in [43, 44]. It is noteworthy that in spite of the successes of the classical linear subspace projection-based ROMs in many applications, these approaches are limited to the assumption that the intrinsic solution space falls into a subspace with a small dimension, i.e., the solution space with a Kolmogorov \(n\)-width decays fast. This assumption is violated in advection-dominated problems, due to features such as sharp gradients, moving shock fronts, and turbulence, which prevent these model reduction schemes from being practical. A way to overcome this challenge is to build small and accurate projection-based reduced-order models by decomposing the solution manifold into submanifolds. 
These reduced-order models are local in the sense that each of them is valid only over a certain subset of the parameter-time domain. The appropriate local reduced order model is chosen based on the current state of the system, and all the local reduced order models cover the whole time marching in the online phase. The concept of a local reduced order model was introduced in [45, 46], where unsupervised clustering is used for the solution manifold decomposition. In [47, 48], windowed ROM approaches were introduced to construct temporally-local ROMs which are small but accurate within a short period in advection-dominated problems. In [22, 23], windowed ROM approaches were developed for Lagrangian hydrodynamics by decomposing the solution manifold based on physical time or more generally a suitably defined physics-based indicator. A drawback of projection-based ROMs is that the implementation requires knowledge of the underlying numerical methods used in the high-fidelity simulation. Conversely, the class of non-intrusive surrogate models does not require access to the source code of the high-fidelity physics solver, and they are solely based on data. With the growing availability of data, there has been extensive research on non-intrusive surrogate models of discrete dynamics, using different dimensionality reduction and machine learning techniques. Similar to the projection-based ROMs, many non-intrusive surrogate models construct low-dimensional structures for approximating the solution manifold and approximate the dynamics in the low-dimensional latent code. While the projection-based ROMs use the governing equations to derive the dynamics in the low-dimensional latent space, non-intrusive surrogate models are purely data-driven. For example, several approaches use linear compression techniques to construct a reduced subspace from snapshots, such as dynamic mode decomposition (DMD) [49, 50, 51, 52] which seeks the best-fit linear model, operator inference (OpInf) [53, 54, 55] which seeks the best-fit polynomial model, and sparse identification of nonlinear dynamics (SINDy) [56, 57] which seeks the best-fit sparse regression. The idea of identifying the best reduced discrete dynamic model within a certain family of functions can be extended to nonlinear compression techniques by AE, for example, using SINDy [58], parametric Latent Space Dynamics Identification (LaSDI) [59, 60], and DeepFluids [61]. Besides dimensionality reduction techniques, neural networks can also be used to approximate the nonlinear operator in the dynamical system as non-intrusive surrogate models, such as Fourier neural operator (FNO) [62, 63], deep operator network (DeepONet) [64], and other relevant works [65, 66, 67, 68, 69]. In this work, we employ and compare two data-driven and machine-learning based methods, namely dynamic mode decomposition (DMD) and U-Net generative adversarial networks (GAN), to serve as efficient non-intrusive surrogate models of the discrete dynamics. As illustrated in Figure 2, these methods are used to model the discrete dynamics and snapshot data from selected training shock pressures are used to train the model. Composition of the trained model is used to perform sequential prediction of the discrete dynamics of the pore collapse process at a general shock pressure. We remark that, since the simulations are expensive, the training data are scarce.
To the best of our knowledge, this is the first work using data-driven non-intrusive surrogate modeling methods for the pore collapse process. We make several crucial treatments to the plain original form of the methods in order to increase the capability of approximating and predicting the dynamics. For enhancing DMD, we combine the idea of physics-indicated local ROM in [22, 23] and parametric DMD with matrix manifold interpolation in [39, 70, 71]. On the other hand, for enhancing GAN, we combine the improved architecture with conditional continuous input in [72] and the residual network structure for approximating discrete dynamics (cf. [69]). The rest of the paper is organized as follows. In Section 2, we describe the phenomenon of the pore collapse process and the physics-based high-fidelity simulations. Next, in Section 3 and Section 4, we discuss the details of surrogate modeling by DMD and GAN, respectively. In Section 5, we present some numerical results to test and compare the performance of the proposed methods. Finally, a conclusion is given in Section 6. Figure 2: Schematics of non-intrusive surrogate models of the discrete dynamics of pore collapse. In the offline phase, the snapshot data from training shock pressures are used as the input and the output of the recurrence relation in the discrete dynamics, and dynamic mode decomposition or U-Net generative adversarial networks are employed as functional approximation to model the relation. In the online phase, composition of the trained model is used to perform sequential prediction of the discrete dynamics of pore collapse process at a general shock pressure. ## 2 Physics-based simulations of pore collapse We perform pore collapse simulations using the multi-physics arbitrary Lagrangian Eulerian finite element hydrocode, ALE3D [6]. Our simulations consist of a 10 \(\mu\)m by 10 \(\mu\)m domain with a central circular pore whose diameter is 1 \(\mu\)m. The applied shock pressure ranges from 10 to 20 GPa. Simulations are performed under 2D plane strain conditions. Symmetry conditions are imposed on the upper and lower boundaries of the domain. Figure 3 depicts some selected representative snapshots of temperature fields at different shock pressures ranging from 11 to 15 GPa, and time instances
It is important to note that, since the dynamics is advective and transport in nature and the traveling speed of the shock varies with shock pressure, in order to capture the corresponding physics phenomena, the initial time \(t^{(0)}(P)\) must be adjusted depending on the shock pressure \(P\). We end this section by describing the simulation data used for constructing reduced order models. The samples of temperature fields \(\mathbf{T}_{i}^{k}=\mathbf{T}(t_{i}^{(k)};P_{i})\) are measured at training shock pressures \(\mathsf{D}_{\text{train}}=\{P_{i}\}_{i=1}^{N_{P}}\subset\mathsf{D}\) and time instances \(t_{i}^{(k)}=t^{(0)}(P_{i})+k\Delta t\) for \(0\leq k\leq m\) within the time interval \(\mathcal{T}_{i}=\mathcal{T}(P_{i})\). Our goal is to construct reduced order models from the training samples to resemble the numerical simulations of pore collapse process, and make predictions of the temperature fields \(\widetilde{\mathbf{T}}(t;P)\) in the time interval of query \(t\in\widetilde{\mathcal{T}}(P)\), given the initial condition \(\mathbf{T}^{(0)}(P)=\mathbf{T}(t^{(0)}(P);P)\), at generic shock pressures \(P\in\mathsf{D}\setminus\mathsf{D}_{\text{train}}\). In the rest Figure 3: Selected representative snapshots of temperature fields at different shock pressures (11–15 GPa, row-wise) and time instances (0.8–1.4 \(\mu\)s, column-wise). With higher shock pressure, the pore collapse takes place at an earlier time. of this paper, we will introduce techniques to overcome the difficulties posed to surrogate modeling by the advective and transport nature of the dynamics. ## 3 Dynamic mode decomposition Dynamic mode decomposition (DMD) was introduced in [49] as a numerical technique for extracting discrete dynamical features from a sequence of sample data and further studied in [50, 51]. We will given a brief overview of DMD in Section 3.1 in the context of numerical simulation data. Next, in Section 3.2, we will discuss a specific approach of modifying DMD to tackle the challenges from the nature of advective and transport of the shock front. In Section 3.3, we will introduce the predictive procedure of DMD on generic shock pressure \(P\in\mathsf{D}\), which is in general unseen in the training samples. ### Offline stage: serial DMD We start the offline procedure in DMD with the sequence of samples \(\{\mathbf{T}_{i}^{(k)}\}_{k=0}^{m}\) at a particular training shock pressure \(P_{i}\in\mathsf{D}_{\text{train}}\). The samples \(\{\mathbf{T}_{i}^{(k)}\}_{k=0}^{m}\) are represented as vectors in \(\mathbb{R}^{N_{x}^{2}}\). 
DMD seeks a linear transformation \(\mathbf{A}_{i}\in\mathbb{R}^{N_{x}^{2}\times N_{x}^{2}}\) which approximates the discrete dynamics \[\mathbf{T}_{i}^{(k+1)}\approx\mathbf{A}_{i}\mathbf{T}_{i}^{(k)}\text{ for all }0\leq k<m.\] The input snapshot matrix \(\mathbf{S}_{i}^{-}\) and the output snapshot matrix \(\mathbf{S}_{i}^{+}\) of the linear recurrence relation are \[\mathbf{S}_{i}^{-} =\left[\mathbf{T}_{i}^{(0)},\mathbf{T}_{i}^{(1)},\cdots,\mathbf{T}_{i}^{(m-1) }\right]\in\mathbb{R}^{N_{x}^{2}\times m}, \tag{1}\] \[\mathbf{S}_{i}^{+} =\left[\mathbf{T}_{i}^{(1)},\mathbf{T}_{i}^{(2)},\cdots,\mathbf{T}_{i}^{(m)} \right]\in\mathbb{R}^{N_{x}^{2}\times m}.\] Performing rank-\(r\) truncated singular value decomposition (SVD) on \(\mathbf{S}_{i}^{-}\) yields \[\mathbf{S}_{i}^{-}=\mathbf{U}_{i}\mathbf{\Sigma}_{i}\mathbf{V}_{i}^{\top},\] where \(\mathbf{U}_{i}\in\mathbb{R}^{N_{x}^{2}\times r},\mathbf{\Sigma}_{i}\in\mathbb{R}^ {r\times r},\mathbf{V}_{i}\in\mathbb{R}^{m\times r}\), and \(r\leq\text{rank}(\mathbf{S}^{-})\leq\min\{m,N_{x}^{2}\}\). We remark that the reduced dimension \(r\) is assumed to be identical for all training parameters in \(\mathsf{D}_{\text{train}}\). Then we define the reduced discrete dynamical system by \[\widehat{\mathbf{A}}_{i}=\mathbf{U}_{i}^{\top}\mathbf{S}_{i}^{+}\mathbf{V}_{ i}\mathbf{\Sigma}_{i}^{-1}\in\mathbb{R}^{r\times r},\] and perform the spectral decomposition on \(\widehat{\mathbf{A}}_{i}\), i.e. \[\widehat{\mathbf{A}}_{i}\mathbf{X}_{i}=\mathbf{X}_{i}\mathbf{\Lambda}_{i},\] where \(\mathbf{X}_{i}\in\mathbb{C}^{r\times r}\) consists of the eigenvectors of \(\widehat{\mathbf{A}}_{i}\) and \(\mathbf{\Lambda}_{i}\in\mathbb{C}^{r\times r}\) is the diagonal matrix containing the DMD eigenvalues. The DMD basis is then given by \(\mathbf{\Phi}_{i}=\mathbf{U}_{i}\mathbf{X}_{i}\in\mathbb{C}^{N_{x}^{2}\times r}\). Then the DMD modes \((\mathbf{\Phi}_{i},\mathbf{\Lambda}_{i})\) are used for reproductive approximation \(\widetilde{\mathbf{T}}_{\text{DMD}}(t;P_{i})\) of the dynamics at the shock pressure \(P_{i}\), which is given by: for \(t\in\mathcal{T}_{i}=[t_{i}^{(0)},t_{i}^{(m)}]\), \[\widetilde{\mathbf{T}}_{\text{DMD}}(t;P_{i})=\mathbf{\Phi}_{i}\mathbf{\Lambda}_{i}^{\frac{ t-t_{i}^{(0)}}{\Delta t}}\mathbf{\Phi}_{i}^{\dagger}\mathbf{T}_{i}^{(0)}.\] ### Offline stage: windowed DMD Section 3.1 presented a serial DMD, in which the high-fidelity temperature fields are represented by ROM subspaces. However, the advective-dominated nature of the temperature field implies the weak linear dependence among the snapshots. As a result, there is no intrinsic low-dimensional subspace that can universally approximate the solution manifold comprised of all the solutions over the temporal domain. In mathematical terms, the solution manifold has slow decay in Kolmogorov \(n\)-width. In order to maintain accuracy with longer simulation time, the dimension of the reduced subspaces becomes large if we use the serial DMD. Furthermore, the large number of high-fidelity snapshot samples also imposes a heavy burden in storage and computational cost for the SVD computations. To this end, we employ multiple reduced order models in time to overcome these difficulties. The main idea is to construct windowed DMDs in the parameter-time domain using a suitable indicator for clustering and classification. In the offline phase, we construct each of these reduced order models from a small subset of the snapshot samples to ensure low dimension. 
In the online phase, each of these reduced order models are used in a certain subset of the parameter-time domain where they are supposed to provide good approximation. Local ROMs have been well studied in the literature [22, 47, 48, 23]. Following [23], the windowed DMD framework in this paper involves a decomposition of the solution manifold and relies on an indicator which is used to classify the snapshot samples and assign the reduced-order models. The rationale is to decompose the solution manifold into submanifolds where the Kolmogorov \(n\)-width decays fast with respect to the subspace dimension, within which we can collect snapshots with strong linear dependence. This enables us to build accurate multiple low-dimensional subspaces. We describe the general framework of indicator-based decomposition of the solution manifold from which we will derive two practical examples later in this section. Let \(\Psi:\mathbb{R}^{N_{x}^{2}}\times\mathbb{R}^{+}\times\mathsf{D}\to\mathbb{R}\) be an indicator which maps the triplet \((\mathbf{T},t,P)\) to a real value in the range \([\Psi_{\min},\Psi_{\max}]\). For any \(P\in\mathsf{D}\), we assume \(\Psi(\mathbf{T}^{(0)}(P),t^{(0)}(P),P)=\Psi_{\min}\), and \(\Psi(\mathbf{T}(t,P),t,P)\) is increasing with time \(t\). The range of the indicator is partitioned into \(J\) subintervals, i.e. \[\Psi_{\min}=\Psi_{0}<\Psi_{1}<\cdots<\Psi_{J-1}<\Psi_{J}=\Psi_{\max}. \tag{3.2}\] In the training phase, at a given training parameter \(P_{i}\in\mathsf{D}\), instead of directly assembling all the snapshot samples into huge snapshot matrices as in (3.1), the FOM states are first classified into \(J\) groups. Given the samples \(\{\mathbf{T}_{i}^{(k)}\}_{k=0}^{m}\) at a shock pressure \(P_{i}\in\mathsf{D}_{\text{train}}\) and a group index \(1\leq j\leq J\), we denote by \(\mathcal{G}_{i}^{(j)}\) the subset of temporal indices whose corresponding snapshot belongs to the \(j\)-th group, i.e. \[\mathcal{G}_{i}^{(j)}=\left\{0\leq k<m:\Psi\left(\mathbf{T}_{i}^{(k)},t_{i}^{(k)}, P_{i}\right)\in[\Psi_{j-1},\Psi_{j})\right\},\] and denote \(K_{i}^{(j-1)}=\min\mathcal{G}_{i}^{(j)}\) and \(m_{i}^{(j)}=|\mathcal{G}_{i}^{(j)}|\). Then \(m=\sum_{j=1}^{J}m^{(j)}\). Consequently, by extending \(K_{i}^{(J)}=m\) and taking \(\tau_{i}^{(j)}=t_{K_{i}^{(j)}}\) for \(0\leq j\leq J\), the time interval \(\mathcal{T}_{i}=[t_{i}^{(0)},t_{i}^{(m)}]\) at the shock pressure \(P_{i}\) is partitioned into \(J\) subintervals, i.e. \[t_{i}^{(0)}=\tau_{i}^{(0)}<\tau_{i}^{(1)}<\cdots<\tau_{i}^{(J-1)}<\tau_{i}^{(J )}=t_{i}^{(m)}. 
\tag{3.3}\] For \(1\leq i<N_{P}\) and \(1\leq j\leq J\), we define the snapshot submatrices by \[\mathbf{S}_{i}^{(j),-} =\left[\mathbf{T}_{i}^{(k)}\right]_{k\in\mathcal{G}_{i}^{(j)}}\in \mathbb{R}^{N_{x}^{2}\times m_{i}^{(j)}},\] \[\mathbf{S}_{i}^{(j),+} =\left[\mathbf{T}_{i}^{(k+1)}\right]_{k\in\mathcal{G}_{i}^{(j)}}\in \mathbb{R}^{N_{x}^{2}\times m_{i}^{(j)}}.\] By carrying out the truncated SVD as discussed in Section 3.1 with the pair of snapshot matrices \((\mathbf{S}_{i}^{(j),-},\mathbf{S}_{i}^{(j),+})\), we obtain the modal discrete dynamical system \((\mathbf{U}_{i}^{(j)},\widehat{\mathbf{A}}_{i}^{(j)})\in\mathbb{R}^{N_{x}^{2} \times r_{j}}\times\mathbb{R}^{r_{j}\times r_{j}}\) by \[\mathbf{S}_{i}^{(j),-} =\mathbf{U}_{i}^{(j)}\mathbf{\Sigma}_{i}^{(j)}\left[\mathbf{V}_{i}^{ (j)}\right]^{\top},\] \[\widehat{\mathbf{A}}_{i}^{(j)} =\left[\mathbf{U}_{i}^{(j)}\right]^{\top}\mathbf{S}_{i}^{(j),+} \mathbf{V}_{i}^{(j)}\left[\mathbf{\Sigma}_{i}^{(j)}\right]^{-1}.\] Again, it is assumed that the reduced dimension \(r_{j}\) is identical for all training parameters in \(\mathsf{D}_{\text{train}}\). Then we perform eigenvalue decomposition as in Section 3.1 and obtain the DMD modes \((\mathbf{\Phi}_{i}^{(j)},\mathbf{\Lambda}_{i}^{(j)})\in\mathbb{C}^{N_{x}^{2} \times r_{j}}\times\mathbb{C}^{r_{j}\times r_{j}}\) by \[\widehat{\mathbf{A}}_{i}^{(j)}\mathbf{X}_{i}^{(j)} =\mathbf{X}_{i}^{(j)}\mathbf{\Lambda}_{i}^{(j)},\] \[\mathbf{\Phi}_{i}^{(j)} =\mathbf{U}_{i}^{(j)}\mathbf{X}_{i}^{(j)},\] which are used for the DMD reproductive approximation \(\widetilde{\mathbf{T}}_{\mathrm{DMD}}(t;P_{i})\) at the shock pressure \(P_{i}\) given by: iteratively for \(1\leq j\leq J\), for \(t\in[\tau_{i}^{(j-1)},\tau_{i}^{(j)}]\), \[\widetilde{\mathbf{T}}_{\mathrm{DMD}}(t;P_{i})=\mathbf{\Phi}_{i}^{(j)}\left[\mathbf{\Lambda }_{i}^{(j)}\right]^{\frac{\iota-\tau_{i}^{(j)}}{\Delta t}}\left[\mathbf{\Phi}_{i}^ {(j)}\right]^{\dagger}\widetilde{\mathbf{T}}_{\mathrm{DMD}}(\tau_{i}^{(j-1)};P_{i}), \tag{10}\] where \(\widetilde{\mathbf{T}}_{\mathrm{DMD}}(\tau_{i}^{(j-1)};P_{i})\) is set to be the initial state \(\mathbf{T}_{i}^{(0)}\) if \(j=1\), and is obtained from DMD approximation in the previous time subinterval for \(j>1\). We remark that if \(J=1\), it reduces to the serial DMD as discussed in Section 3.1. We end this subsection with two practical choices of the indicator \(\Psi\) for the decomposition of solution manifold. One natural choice is the time windowing (TW) DMD, where we use the physical time as the indicator, i.e. \(\Psi(\mathbf{T},t,P)=(t-t^{(0)}(P))/\Delta t\). In this case, \(\Psi_{\min}=0\) and \(\Psi_{\max}=m\), and the temporal partition (11) is actually an affine transformation of indicator range partition (10), i.e. \(\tau_{i}^{(j)}=t_{i}^{(0)}+\Psi_{j}\Delta t\), for all \(1\leq j\leq J\). Inspired by [23], another choice of indicator-based decomposition of solution manifold that is applicable to pore collapse process is the distance windowing (DW) DMD, where we use the horizontal translation distance of the primary shock as the indicator. Among the \(N_{x}^{2}\) sub-zones, we select \(N_{x}\) sub-zones on the bottom boundary \(x_{2}=x_{\min}\) as markers, and collect their indices into a subset \(\mathcal{I}\). 
Then the indicator of shock distance is defined as the number of markers whose temperature values exceed the temperature threshold \(T_{\mathrm{threshold}}=300\), which is the critical value distinguishing between the dark background temperature and the bright hot temperature as illustrated in Figure 1, i.e. \[\Psi(\mathbf{T},t,P)=\left|\{s\in\mathcal{I}:\mathbf{e}_{s}^{\top}\mathbf{T}>T_{ \mathrm{threshold}}\}\right|.\] In this case, \(\Psi_{\min}\geq 0\) and \(\Psi_{\max}=N_{x}\). ### Prediction stage For parametric DMD prediction at a generic shock pressure \(P\in\mathsf{D}\), we construct an appropriate temporal partition and use corresponding DMD models for approximation in each of temporal subintervals. More precisely, for \(1\leq j\leq J\), we need to determine the temporal subinterval endpoint \(\tau^{(j)}(P)\in\mathbb{R}\) by scalar-valued interpolation, and the modal discrete dynamical system \((\mathbf{U}^{(j)}(P),\widehat{\mathbf{A}}^{(j)}(P))\in\mathbb{R}^{N_{x}^{2} \times r_{j}}\times\mathbb{R}^{r_{j}\times r_{j}}\) by matrix-valued interpolation, with the interpolating points as the training shock pressures in \(\mathsf{D}_{\mathrm{train}}\) and the interpolating values in the database obtained at the training shock pressures as described in Section 3.2, i.e. \[\mathcal{D}\mathcal{B}^{(j)}=\left\{\left(P_{i},\tau_{i}^{(j)},\mathbf{U}_{i}^ {(j)},\widehat{\mathbf{A}}_{i}^{(j)}\right)\right\}_{i=1}^{N_{P}}\subset \mathsf{D}_{\mathrm{train}}\times\mathbb{R}\times\mathbb{R}^{N_{x}^{2}\times r _{j}}\times\mathbb{R}^{r_{j}\times r_{j}}.\] We adopt the radial basis functions (RBF) interpolation method. We choose an infinitely smooth radial basis function \(\varphi:[0,\infty)\to[0,\infty)\), and define the interpolation matrix \(\mathbf{B}\in\mathbb{R}^{N_{P}\times N_{P}}\) by \[\mathbf{B}_{i,i^{\prime}}=\varphi\left(\left\|P_{i}-P_{i^{\prime}}\right\| \right)\text{ for all }1\leq i,i^{\prime}\leq N_{P}.\] The scalar-valued interpolant of the temporal subinterval endpoint \(\tau^{(j)}(P)\in\mathbb{R}\) is given by the linear combination \[\tau^{(j)}(P)=\sum_{i=1}^{N_{P}}\omega_{i}^{(j)}\varphi\left(\left\|P-P_{i} \right\|\right),\] where the weights \(\mathbf{\omega}^{(j)}=(\omega_{1}^{(j)},\omega_{2}^{(j)},\ldots,\omega_{N_{P}}^{(j )})^{\top}\in\mathbb{R}^{N_{P}}\) are defined by solving \(\mathbf{B}\mathbf{\omega}^{(j)}=\mathbf{\tau}^{(j)}=(\tau_{1}^{(j)},\tau_{2}^{(j)}, \ldots,\tau_{N_{P}}^{(j)})^{\top}\in\mathbb{R}^{N_{P}}\), which is derived from \[\tau^{(j)}(P_{i})=\tau_{i}^{(j)}\text{ for all }1\leq i\leq N_{P}. \tag{11}\] The interpolated values form a partition for the time interval of query \(\widetilde{\mathcal{T}}(P)=[\tau^{(0)}(P),\tau^{(J)}(P)]\subseteq\mathcal{T}(P)\), i.e. \[\tau^{(0)}(P)<\tau^{(1)}(P)<\cdots<\tau^{(J-1)}(P)<\tau^{(J)}(P).\] It remains to describe the matrix-valued interpolation. For a comprehensive discussion on the theory and practice of interpolation on a matrix manifold in the context of linear subspace reduced order models, the reader is referred to [39, 70, 71]. Here, we present only the necessary details of RBF interpolation of DMD matrix components at a generic shock pressure \(P\in\mathsf{D}\). The first step is identify a reference training shock pressure index \(1\leq i_{\text{ref}}(P)\leq N_{P}\) by \[i_{\text{ref}}(P)=\operatorname*{arg\,min}_{1\leq i\leq N_{P}}|P-P_{i}|.\] Next, we rotate the reduced order operator to enforce the consistency in the generalized coordinate system. 
For \(1\leq i\leq N_{P}\), we perform SVD of the matrix product \(\left[\mathbf{U}_{i}^{(j)}\right]^{\top}\mathbf{U}_{i_{\text{ref}}(P)}^{(j)}\), i.e. \[\left[\mathbf{U}_{i}^{(j)}\right]^{\top}\mathbf{U}_{i_{\text{ref}}(P)}^{(j)} =\left[\mathbf{Y}_{i}^{(j)}(P)\right]^{\top}\mathbf{\Gamma}_{i}^{(j)}(P) \mathbf{Z}_{i}^{(j)}(P).\] Then we define the rotation matrix \(\mathbf{Q}_{i}^{(j)}(P)\in\mathbb{R}^{r_{j}\times r_{j}}\) by \[\mathbf{Q}_{i}^{(j)}(P)=\left[\mathbf{Y}_{i}^{(j)}(P)\right]^{\top}\mathbf{Z} _{i}^{(j)}(P),\] which is the solution to the classical orthogonal Procrustes problem. The matrix-valued interpolant of the modal discrete dynamical system \((\mathbf{U}^{(j)}(P),\mathbf{\widehat{A}}^{(j)}(P))\in\mathbb{R}^{N_{x}^{2} \times r_{j}}\times\mathbb{R}^{r_{j}\times r_{j}}\) is then given by the linear combination \[\mathbf{U}^{(j)}(P) =\mathbf{U}_{i_{\text{ref}}(P)}^{(j)}+\sum_{i=1}^{N_{P}}\mathbf{ F}_{i}^{(j)}(P)\varphi\left(\left\|P-P_{i}\right\|\right),\] \[\mathbf{\widehat{A}}^{(j)}(P) =\mathbf{\widehat{A}}_{i_{\text{ref}}(P)}^{(j)}+\sum_{i=1}^{N_{P }}\mathbf{G}_{i}^{(j)}(P)\varphi\left(\left\|P-P_{i}\right\|\right).\] Here, for \(1\leq\ell_{1}\leq N_{x}^{2}\) and \(1\leq\ell_{2}\leq r_{j}\), the \((\ell_{1},\ell_{2})\)-entry of the weights \(\mathbf{F}_{i}^{(j)}(P)\in\mathbb{R}^{N_{x}^{2}\times r_{j}}\), denoted by \(\mathbf{f}_{\ell_{1},\ell_{2}}^{(j)}(P)=\left(\left[\mathbf{F}_{i}^{(j)}(P) \right]_{\ell_{1},\ell_{2}}\right)_{i=1}^{N_{P}}\in\mathbb{R}^{N_{P}}\), are defined by solving \[\mathbf{B}\mathbf{f}_{\ell_{1},\ell_{2}}^{(j)}(P)=\left(\left[\mathbf{U}_{i}^ {(j)}\mathbf{Q}_{i}^{(j)}(P)-\mathbf{U}_{i_{\text{ref}}(P)}^{(j)}\right]_{ \ell_{1},\ell_{2}}\right)_{i=1}^{N_{P}}\in\mathbb{R}^{N_{P}}.\] Similarly, for \(1\leq\ell_{1},\ell_{2}\leq r_{j}\), the \((\ell_{1},\ell_{2})\)-entry of the weights \(\mathbf{G}_{i}^{(j)}(P)\in\mathbb{R}^{r_{j}\times r_{j}}\), denoted by \(\mathbf{g}_{\ell_{1},\ell_{2}}^{(j)}(P)=\left([\mathbf{G}_{i}^{(j)}(P)]_{\ell _{1},\ell_{2}}\right)_{i=1}^{N_{P}}\in\mathbb{R}^{N_{P}}\), are defined by solving \[\mathbf{B}\mathbf{g}_{\ell_{1},\ell_{2}}^{(j)}(P)=\left(\left[\mathbf{Q}_{i}^ {(j)}(P)^{\top}\mathbf{\widehat{A}}_{i}^{(j)}\mathbf{Q}_{i}^{(j)}(P)-\mathbf{ \widehat{A}}_{i_{\text{ref}}(P)}^{(j)}\right]_{\ell_{1},\ell_{2}}\right)_{i=1 }^{N_{P}}\in\mathbb{R}^{N_{P}}.\] As in Section 3.1, we perform eigenvalue decomposition and obtain the DMD modes \((\mathbf{\Phi}^{(j)}(P),\mathbf{\Lambda}^{(j)}(P)\in\mathbb{C}^{N_{x}^{2} \times r_{j}}\times\mathbb{C}^{r_{j}\times r_{j}}\) by \[\mathbf{\widehat{A}}^{(j)}(P)\mathbf{X}^{(j)}(P) =\mathbf{X}^{(j)}(P)\mathbf{\Lambda}^{(j)}(P),\] \[\mathbf{\Phi}^{(j)}(P) =\mathbf{U}^{(j)}(P)\mathbf{X}^{(j)}(P).\] With the initial condition \(\widetilde{\boldsymbol{T}}_{\text{DMD}}(t^{(0)}(P);P)=\boldsymbol{T}^{(0)}(P)\), the DMD prediction \(\widetilde{\boldsymbol{T}}_{\text{DMD}}(t;P)\) is then given by: iteratively for \(1\leq j\leq J\), for \(t\in[\tau^{(j-1)}(P),\tau^{(j)}(P)]\), \[\widetilde{\boldsymbol{T}}_{\text{DMD}}(t;P)=\mathbf{\Phi}^{(j)}(P)\left[ \mathbf{\Lambda}^{(j)}(P)\right]^{\frac{t-\tau^{(j)}(P)}{\Delta t}}\left[ \mathbf{\Phi}^{(j)}(P)\right]^{\dagger}\widetilde{\boldsymbol{T}}_{\text{DMD }}(\tau^{(j-1)}(P);P), \tag{3.6}\] where \(\mathbf{\tilde{T}}_{\text{DMD}}(\tau^{(j-1)};P)\) is set to be the initial state \(\mathbf{T}^{(0)}(P)\) if \(j=1\), and is obtained from DMD approximation in the previous time subinterval for \(j>1\). 
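The rotation-then-interpolation step for the reduced bases and operators can be sketched as follows, assuming the Gaussian kernel and interpolation matrix \(\mathbf{B}\) from the scalar case; this is a schematic rendering of the equations above, not the reference code.

```python
import numpy as np

def procrustes_rotation(U_i, U_ref):
    """Orthogonal Procrustes rotation aligning the basis U_i with the reference basis."""
    Y, _, Zt = np.linalg.svd(U_i.T @ U_ref)
    return Y @ Zt

def interpolate_operators(P_train, P_query, U_list, A_list, B, phi):
    """Entrywise RBF interpolation of (U, A_hat) at P_query, after rotating every
    training model into the frame of the nearest training pressure (the reference)."""
    i_ref = int(np.argmin(np.abs(P_train - P_query)))
    U_ref, A_ref = U_list[i_ref], A_list[i_ref]
    Q = [procrustes_rotation(U, U_ref) for U in U_list]       # per-pressure rotations
    dU = np.stack([U_list[i] @ Q[i] - U_ref for i in range(len(P_train))])
    dA = np.stack([Q[i].T @ A_list[i] @ Q[i] - A_ref for i in range(len(P_train))])
    W_U = np.linalg.solve(B, dU.reshape(len(P_train), -1))    # RBF weights, all entries at once
    W_A = np.linalg.solve(B, dA.reshape(len(P_train), -1))
    k = phi(np.abs(P_query - P_train))                        # kernel vector at P_query
    return (U_ref + (k @ W_U).reshape(U_ref.shape),
            A_ref + (k @ W_A).reshape(A_ref.shape))
```

Evaluating this sketch at a training pressure returns the stored \((\mathbf{U}_{i}^{(j)},\widehat{\mathbf{A}}_{i}^{(j)})\), consistent with the remark that follows.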
As a final remark, for all \(1\leq i\leq N_{P}\), we have \(i_{\text{ref}}(P_{i})=i\), which implies \(\mathbf{Q}_{i}^{(j)}(P_{i})=\mathbf{I}_{r_{j}}\). Thanks to the interpolation conditions, we have \(\tau^{(j)}(P_{i})=\tau_{i}^{(j)}\) for all \(0\leq j\leq J\), and \(\mathbf{U}^{(j)}(P_{i})=\mathbf{U}_{i}^{(j)}\) and \(\widehat{\mathbf{A}}^{(j)}(P_{i})=\widehat{\mathbf{A}}_{i}^{(j)}\) for all \(1\leq j\leq J\). Therefore, the prediction (3.6) actually reproduces the reproductive DMD approximation at the training shock pressures \(P_{i}\in\mathsf{D}_{\text{train}}\). ## 4 Continuous conditional generative adversarial network Generative adversarial network (GAN) was introduced in [73] as a deep learning method that learns a parametrized representation, indexed by random latent codes, for a set of training data in an unsupervised manner and allows fast sampling from the distribution represented by the dataset. In the original work [73], GAN formulates a two-player minimax game with a binary classification score as an optimization problem, and trains two artificial neural networks, the discriminator and the generator, simultaneously to optimize the objective function in opposing ways. These networks compete with each other, with one aiming to maximize the objective function and the other aiming to minimize it. In [74], the deep convolutional generative adversarial network (DCGAN) is developed for image generation tasks by utilizing deep convolutional architectures in GAN. In this section, we introduce a GAN-based dynamical prediction scheme for the numerical simulation data. Our method is based on a residual network structure and is modified from [72], which adopts several recent improvements to GAN, including the batch-based critic architecture [75] and U-Net generator architecture [76] in pix2pix [77] for the image-to-image translation task, the earth mover distance as loss function in Wasserstein GAN [78], and continuous conditional generator input [79]. In Section 4.1, we will discuss the details of the neural network. In Section 4.2, we will introduce the predictive procedure of GAN at a generic shock pressure \(P\in\mathsf{D}\), which is in general unseen in the training samples. ### Offline stage We begin the discussion of the offline procedure in the continuous conditional generative adversarial network (CcGAN) approach with data preprocessing.
We represent the sampled data of the temperature fields \(\mathbf{T}_{i}^{(k)}\) as matrices in \(\mathbb{R}^{N_{x}\times N_{x}}\), and define the residual as \[\mathbf{R}_{i}^{(k)}=\mathbf{T}_{i}^{(k+1)}-\mathbf{T}_{i}^{(k)}\in\mathbb{R}^{N_{x}\times N_{x}}.\] The training data are normalized by: for \(1\leq i\leq N_{P}\) and \(0\leq k<m\), \[\overline{t}^{(k)} =k/(m-1)\in[0,1],\] \[\overline{P}_{i} =(P_{i}-P_{\min})/(P_{\max}-P_{\min})\in[0,1],\] \[\overline{\mathbf{T}}_{i}^{(k)} =\mathbf{T}_{i}^{(k)}/(T_{\max}-T_{\min})\in[0,1]^{N_{x}\times N_{x}},\] \[\overline{\mathbf{R}}_{i}^{(k)} =\mathbf{R}_{i}^{(k)}/(R_{\max}-R_{\min})\in[0,1]^{N_{x}\times N_{x}},\] where \[T_{\max} =\max_{1\leq i\leq N_{P},0\leq k<m}\mathbf{T}_{i}^{(k)},\] \[T_{\min} =\min_{1\leq i\leq N_{P},0\leq k<m}\mathbf{T}_{i}^{(k)},\] \[R_{\max} =\max_{1\leq i\leq N_{P},0\leq k<m}\mathbf{R}_{i}^{(k)},\] \[R_{\min} =\min_{1\leq i\leq N_{P},0\leq k<m}\mathbf{R}_{i}^{(k)}.\] Then the labelled paired training dataset is given by \[\mathcal{S}_{\text{in}} =\left\{\left(\overline{t}^{(k)},\overline{P}_{i},\overline{\mathbf{T}}_{i}^{(k)}\right):1\leq i\leq N_{P}\text{ and }0\leq k<m\right\}\subset[0,1]\times[0,1]\times[0,1]^{N_{x}\times N_{x}},\] \[\mathcal{S}_{\text{out}} =\left\{\overline{\mathbf{R}}_{i}^{(k)}:1\leq i\leq N_{P}\text{ and }0\leq k<m\right\}\subset[0,1]^{N_{x}\times N_{x}}.\] Given the normalized datasets \((\mathcal{S}_{\text{in}},\mathcal{S}_{\text{out}})\), the goal is to learn a generator \(G^{\star}:\mathbb{R}\times\mathbb{R}\times\mathbb{R}^{N_{x}\times N_{x}}\to[0,1]^{N_{x}\times N_{x}}\) which approximates the discrete dynamics \[\overline{\boldsymbol{R}}_{i}^{(k)}\approx G^{\star}\left(\overline{t}^{(k)},\overline{P}_{i},\overline{\boldsymbol{T}}_{i}^{(k)}\right)\text{ for all }1\leq i\leq N_{P}\text{ and }0\leq k<m.\] In the GAN framework, the generator \(G^{\star}\) is learnt by optimizing the function \(G\) to minimize an objective functional which measures the distance between the generator distribution and the groundtruth distribution in a certain metric. The generator \(G\) is set to compete with another neural network \(D:\mathbb{R}\times\mathbb{R}\times\mathbb{R}^{N_{x}\times N_{x}}\to\mathbb{R}\), called the critic. The two functions have opposite objectives, as the critic aims to distinguish the generator distribution from the groundtruth distribution, while the generator aims to fool the critic. In our work, the overall objective has three components. First, we use the earth mover distance in [78] as the competing objective, which is formally defined as \[\mathcal{L}_{\text{WGAN}}(D,G)=\sum_{i=1}^{N_{P}}\sum_{k=0}^{m-1}D\left(\overline{t}^{(k)},\overline{P}_{i},\overline{\boldsymbol{R}}_{i}^{(k)}\right)-D\left(\overline{t}^{(k)},\overline{P}_{i},G\left(\overline{t}^{(k)},\overline{P}_{i},\overline{\boldsymbol{T}}_{i}^{(k)}\right)\right).\] Second, we use the gradient penalty in [80] as a regularizer to weakly enforce the 1-Lipschitz continuity of the critic, which is given by \[\mathcal{L}_{\text{Lip}}(D)=\sum_{i=1}^{N_{P}}\sum_{k=0}^{m-1}\left(\left\|\nabla_{\overline{\boldsymbol{T}}}D\left(\overline{t}^{(k)},\overline{P}_{i},\varepsilon_{i}^{(k)}\overline{\boldsymbol{R}}_{i}^{(k)}+(1-\varepsilon_{i}^{(k)})G\left(\overline{t}^{(k)},\overline{P}_{i},\overline{\boldsymbol{T}}_{i}^{(k)}\right)\right)\right\|_{2}-1\right)^{2},\] where \(\varepsilon_{i}^{(k)}\sim\mathcal{U}(0,1)\) is independent and identically distributed.
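For concreteness, the critic-side part of this objective can be written in PyTorch roughly as below; the tensor shapes (fields as \((\text{batch},1,N_{x},N_{x})\) tensors), the batch-mean reduction and the call signatures of `G` and `D` are assumptions made for illustration only.

```python
import torch

def critic_loss(D, G, t, p, T_in, R_real, mu_lip=10.0):
    """Wasserstein term plus gradient penalty for one batch of normalised samples:
    t, p are time/pressure labels, T_in the input field, R_real the true residual."""
    R_fake = G(t, p, T_in).detach()
    wgan = D(t, p, R_real).mean() - D(t, p, R_fake).mean()        # earth mover term
    eps = torch.rand(R_real.size(0), 1, 1, 1, device=R_real.device)
    R_mix = (eps * R_real + (1 - eps) * R_fake).requires_grad_(True)
    grads = torch.autograd.grad(D(t, p, R_mix).sum(), R_mix, create_graph=True)[0]
    gp = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()    # 1-Lipschitz penalty
    # the critic maximises the Wasserstein term, so its loss is the negated term plus penalty
    return -wgan + mu_lip * gp
```

The generator step combines the negated critic score with the reconstruction objective introduced next.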
Third, we use the absolute distance as the reconstruction objective, which is defined as \[\mathcal{L}_{\text{recon}}(G)=\sum_{i=1}^{N_{P}}\sum_{k=0}^{m-1}\left| \overline{\boldsymbol{R}}_{i}^{(k)}-G\left(\overline{t}^{(k)},\overline{P}_{ i},\overline{\boldsymbol{T}}_{i}^{(k)}\right)\right|.\] The optimization problem is then formulated as \[\min_{G\in\mathcal{G}}\max_{D\in\mathcal{D}}\mathcal{L}_{\text{WGAN}}(D,G)+ \mu_{\text{Lip}}\mathcal{L}_{\text{Lip}}(D)+\mu_{\text{recon}}\mathcal{L}_{ \text{recon}}(G), \tag{12}\] where \(\mu_{\text{Lip}}>0\) and \(\mu_{\text{recon}}>0\) are regularization parameters which control the tradeoff between the three components in the overall objective, \(\mathcal{G}\) is a class of neural networks with the U-Net architecture, and \(\mathcal{D}\) is a class of convolutional neural networks. The generator and the critic are trained simultaneously and the objective functional is dynamic to each of them in the training process. In our work, we use the adaptive moment estimation (ADAM) method [81] to update the critic \(D\) and the generator \(G\) in alternating direction. ### Prediction stage After sufficient training, the generator \(G^{\star}\) can serve as a global surrogate model for predicting the temperature field. At a generic shock pressure \(P\in\mathsf{D}\), with the initial condition \(\widetilde{\boldsymbol{T}}_{\text{GAN}}(t^{(0)}(P);P)=\boldsymbol{T}^{(0)}(P)\), for \(0\leq k<m\), the GAN prediction \(\widetilde{\boldsymbol{T}}_{\text{GAN}}(t^{(k+1)}(P);P)\in\mathbb{R}^{N_{x} \times N_{x}}\) is iteratively given by \[\widetilde{\boldsymbol{T}}_{\text{GAN}}(t^{(k+1)}(P);P)=\widetilde{\boldsymbol {T}}_{\text{GAN}}(t^{(k)}(P);P)+(R_{\text{max}}-R_{\text{min}})G^{\star} \left(\overline{t}^{(k)},\overline{P},\overline{\boldsymbol{T}}_{\text{GAN}}^ {(k)}(P)\right),\] where \[\overline{P} =(P-P_{\text{min}})/(P_{\text{max}}-P_{\text{min}}),\] \[\overline{\boldsymbol{T}}_{\text{GAN}}^{(k)}(P) =\widetilde{\boldsymbol{T}}_{\text{GAN}}(t^{(k)}(P);P)/(T_{\text{ max}}-T_{\text{min}}).\] ## 5 Numerical experiments In this section, we present some numerical results to test the performance of our proposed methods when applied to the numerical simulation data for the pore collapse process. ### Problem specification In our numerical experiments, the bounds of the range \(\mathsf{D}\) of applied shock pressure are \(P_{\min}=11\) and \(P_{\max}=15\), and the spatial region of interest \(\Omega\) is a square which is partitioned into \(N_{x}^{2}=128^{2}\) square sub-zones with equal length \(h_{x}=250\). In order to depict the pore collapse process, explained in Figure 1, we choose \(m=180\), \(\Delta t=0.0025\), and \(t^{(0)}(P)=0.9875-0.0125P\), as the initial time of the time interval of interest \(\mathcal{T}(P)\) for the shock pressure \(P\in\mathsf{D}\). Figure 3 depicts some selected representative snapshots of temperature fields at different shock pressures ranging from 11 to 15 GPa, in the corresponding time interval of interest. Each row corresponds to the same shock pressure. Unlike Figure 3, the snapshots in the same column do not correspond to the same time instance. ### Methodology specification In this subsection, we discuss the details of the surrogate modeling approaches in performing the numerical experiments. We first discuss the details about DMD in Section 3. We use the DW-DMD approach described in Section 3.2. with \(J=20\) and \(r_{j}\equiv 9\), and we use Gaussian functions in RBF interpolation. 
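The prediction stage of Section 4.2 amounts to an autoregressive rollout of the trained generator; a minimal sketch is given below, where the call signature of `G` (accepting scalar conditioning inputs) and the variable names are assumptions.

```python
import torch

@torch.no_grad()
def gan_rollout(G, T0, P, m, T_rng, R_rng, P_min=11.0, P_max=15.0):
    """Iteratively predict m snapshots at shock pressure P from the initial field T0.
    T_rng = T_max - T_min and R_rng = R_max - R_min are the normalisation ranges."""
    p_bar = (P - P_min) / (P_max - P_min)
    T, out = T0.clone(), []
    for k in range(m):
        t_bar = k / (m - 1)
        R = R_rng * G(t_bar, p_bar, T / T_rng)   # de-normalised residual
        T = T + R                                # T^(k+1) = T^(k) + R^(k)
        out.append(T.clone())
    return torch.stack(out)
```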
All the DMD results are generated using the implementation in libROM 1 on Quartz in Livermore Computing Center2, on Intel Xeon CPUs with 128 GB memory, peak TFLOPS of 3251.4, and peak single CPU memory bandwidth of 77 GB/s. The training of each local DMD model takes around 30 seconds on CPU. Figure 4: Selected representative snapshots of temperature fields at different shock pressures (11–15 GPa, row-wise) in the corresponding time interval of interest, which is adjusted to depict the pore collapse process. Unlike Figure 3, the snapshots in the same column do not correspond to same time instance. Next, we discuss the details about CeGAN in Section 4. The U-Net generator architecture is presented in Figure 5. Following [72], we take \(\mu_{\text{Lip}}=10\) and \(\mu_{\text{recon}}=500\) in the objective (11). All the CeGAN results are generated on Lassen in Livermore Computing Center3, on Intel Power9 CPUs with 256 GB memory and NVIDIA V100 GPUs, peak TFLOPS of 23,047.20, and peak single CPU memory bandwidth of 170 GB/s. With a batch size 6 and 2000 epoches, the training of global CeGAN model takes 8 hours on GPU. Footnote 3: High performance computing at LLNL, [https://hpc.llnl.gov/hardware/platforms/lassen](https://hpc.llnl.gov/hardware/platforms/lassen) ### Prediction and performance evaluation In the remaining of this section, we will present numerical results with various training combinations of surrogate modeling approaches and training shock pressures \(\mathsf{D}\). In Figure 6, we show the comparison of some selected groundtruth snapshots and the corresponding surrogate model approximations at \(P=12\), with each row corresponds to: 1. groundtruth snapshots from simulation data, 2. reproductive predictions with local DW-DMD and \(\mathsf{D}_{\text{train}}=\{12\}\), 3. interpolatory predictions with parametric DW-DMD and \(\mathsf{D}_{\text{train}}=\{11,13,15\}\), 4. extrapolatory predictions with local DW-DMD and \(\mathsf{D}_{\text{train}}=\{13\}\), 5. reproductive predictions with local CeGAN and \(\mathsf{D}_{\text{train}}=\{12\}\), 6. reproductive predictions with global CeGAN and \(\mathsf{D}_{\text{train}}=\{12,14\}\), 7. interpolatory predictions with global CeGAN and \(\mathsf{D}_{\text{train}}=\{11,13,15\}\), and 8. extrapolatory predictions with local CeGAN and \(\mathsf{D}_{\text{train}}=\{13\}\), and each column corresponds to a time instance, with \(k\in\{10,50,90,130,170\}\), in the time interval of query \(\widetilde{\mathcal{T}}(P)\), The reproductive cases will be further explained in Section 5.4, and the interpolatory and extrapolatory cases will be further explained in Section 5.5. It can be seen that the approximations from DMD, in the second row to the fourth row, in general better captures the pore collapse process and resembles the simulation data in the first row. Next, we will introduce some performance metric which allows us to investigate and compare the methods and training combination further. To evaluate the accuracy of the prediction, we compute the relative error between the high-fidelity simulation data \(\mathbf{T}\) and the reduced order model approximation \(\widetilde{\mathbf{T}}\), i.e. 
\(\widetilde{\mathbf{T}}_{\text{DMD}}\) or \(\widetilde{\mathbf{T}}_{\text{GAN}}\), at testing shock pressure \(P\in\mathsf{D}\) and time instance \(t\in\widetilde{\mathcal{T}}(P)\) by: \[\varepsilon(t;P)=\frac{\|\mathbf{T}(t;P)-\widetilde{\mathbf{T}}(t;P)\|}{\|\mathbf{T}(t;P) \|},\] where \(\|\cdot\|\) denotes the vector Euclidean norm in \(\mathbb{R}^{N_{x}}\), or equivalently the matrix Frobenius norm in \(\mathbb{R}^{N_{x}\times N_{x}}\). ### Reproductive cases As a first experiment, we test the accuracy of surrogate modeling approaches in reproductive cases, where the testing shock pressure is identical to that used in one of the training shock pressures, i.e. \(P\in\mathsf{D}_{\text{train}}\). Figure 7 shows the comparison of reproductive accuracy using local DW-DMD and local CeGAN, in terms of the evolution of relative error (in logarithmic scale), with \(\mathsf{D}_{\text{train}}=\{12\}\) and \(\mathsf{D}_{\text{train}}=\{13\}\) respectively. In both cases, local DW-DMD produces more stable reproductive results, where the relative error stays below 1.2% in the whole time interval of query, and terminates at around 0.3% at final time. On the other hand, Figure 5: U-Net generator architecture used in the examples presented in Section 5. Figure 6: Selected snapshots and predictions of temperature fields at 12GPa. Each row corresponds to: 1. groundtruth snapshots from simulation data, 2. local DW-DMD and \(\mathsf{D}_{\text{train}}=\{12\}\), 3. parametric DW-DMD and \(\mathsf{D}_{\text{train}}=\{11,13,15\}\), 4. local DW-DMD and \(\mathsf{D}_{\text{train}}=\{13\}\), 5. local CeGAN and \(\mathsf{D}_{\text{train}}=\{12\}\), 6. global CeGAN and \(\mathsf{D}_{\text{train}}=\{12,14\}\), 7. global CeGAN and \(\mathsf{D}_{\text{train}}=\{11,13,15\}\), and 8. local CeGAN and \(\mathsf{D}_{\text{train}}=\{13\}\). although local CeGAN is able to produce around \(0.2\%\) error in each time step, the error accumulates quickly and rises to \(32\%\) and \(22\%\) at the final time of query with \(\mathsf{D}_{\text{train}}=\{12\}\) and \(\mathsf{D}_{\text{train}}=\{13\}\) respectively. Figure 8 shows a similar comparison with \(\mathsf{D}_{\text{train}}=\{12,14\}\) and \(\mathsf{D}_{\text{train}}=\{11,13,15\}\) respectively. We remark that the final-time error of DMD at the reproductive cases remains unchanged at around \(0.3\%\) when adding more training shock pressures, as explained in Section 3.3. On the other hand, the final-time error of global CeGAN improves to \(16\%\) with \(\mathsf{D}_{\text{train}}=\{12,14\}\) and remains at \(22\%\) with \(\mathsf{D}_{\text{train}}=\{11,13,15\}\) respectively. ### Predictive cases In this subsection, we test the accuracy of surrogate modeling approaches in predictive cases, where the testing shock pressure is not one of the training shock pressures, i.e. \(P\in\mathsf{D}\setminus\mathsf{D}_{\text{train}}\). We begin with some results in the interpolatory cases, i.e. \(P\in(\min\mathsf{D}_{\text{train}},\max\mathsf{D}_{\text{train}})\setminus \mathsf{D}_{\text{train}}\). Similar to Figure 8, we compare the relative error at \(P=12\) with \(\mathsf{D}_{\text{train}}=\{11,13,15\}\), and at \(P=13\) with \(\mathsf{D}_{\text{train}}=\{11,13,15\}\) respectively. In the former case, the relative error of parametric DW-DMD is higher than that of global CeGAN in an earlier stage, but eventually becomes lower. 
Throughout the whole time interval of query, the relative error of parametric DW-DMD stays below \(9\%\) and \(4.3\%\) and terminates at around \(4.7\%\) and \(1.3\%\) at final time, in the former and the latter case respectively. Meanwhile, the relative error of global CeGAN accumulates to \(20\%\) at the final time of query in both cases. Figure 8: Relative error (in logarithmic scale) of reproductive case with \(\mathsf{D}_{\text{train}}=\{12,14\}\) (left) and \(\mathsf{D}_{\text{train}}=\{11,13,15\}\) (right), using parametric DW-DMD (in blue) and global CeGAN (in red). Figure 7: Relative error (in logarithmic scale) of reproductive case with \(\mathsf{D}_{\text{train}}=\{12\}\) (left) and \(\mathsf{D}_{\text{train}}=\{13\}\) (right), using local DW-DMD (in blue) and local CeGAN (in red). Next, we will present some results in extrapolatory cases, i.e. \(P\in\mathsf{D}\setminus(\min\mathsf{D}_{\text{train}},\max\mathsf{D}_{\text{train}})\). Figure 10 shows the comparison of extrapolatory accuracy at \(P=15\) using local DW-DMD and local CeGAN, in terms of the relative error of the temperature field over time, with \(\mathsf{D}_{\text{train}}=\{12\}\) and \(\mathsf{D}_{\text{train}}=\{13\}\) respectively. The relative error of DW-DMD attains a maximum of \(15\%\) and \(12\%\) over time and terminates at \(10\%\) and \(7\%\) at final time with \(\mathsf{D}_{\text{train}}=\{12\}\) and \(\mathsf{D}_{\text{train}}=\{13\}\) respectively. Meanwhile, the relative error of CeGAN attains the maximum \(15\%\) and \(23\%\) at the final time, with \(\mathsf{D}_{\text{train}}=\{12\}\) and \(\mathsf{D}_{\text{train}}=\{13\}\) respectively. Unlike the DW-DMD results which shows the extrapolatory accuracy deteriorates as the testing shock pressure is farther away from the training shock pressure, the extrapolatory accuracy of CeGAN is unstable with the distance between testing shock pressure \(P\) and the training shock pressure \(P_{1}\). Figure 11 shows the comparison of reproductive and extrapolatory accuracy at various testing shock pressure \(P\in\{11,12,13,14,15\}\) in terms of the relative error of the temperature field at the final time of query, using local DW-DMD and local CeGAN with respect to different training shock pressure \(P_{1}\in\mathsf{D}_{\text{train}}\). It can be observed that with DW-DMD, the relative error at the reproductive case is always around \(0.3\%\), while the error at the extrapolatory case increases as the testing shock pressure is farther away from the training shock pressure, which is a common phenomenon for parametric reduced order models. The relative error attains the maximum of \(12\%\), when \(|P-P_{1}|=4\), in our testing cases. Meanwhile, the error with CeGAN is always above \(10\%\) and unstable with the distance between testing shock pressure \(P\) and the training shock pressure \(P_{1}\). With \(\mathsf{D}_{\text{train}}=\{12\}\), the relative error goes up to \(45\%\) at \(P=14\). Figure 10: Relative error (in logarithmic scale) of extrapolatory case at \(P=15\) with \(\mathsf{D}_{\text{train}}=\{12\}\) (left) and \(\mathsf{D}_{\text{train}}=\{13\}\) (right), using local DW-DMD (in blue) and CeGAN (in red). Figure 12 shows the comparison of reproductive, interpolatory and extrapolatory accuracy at various testing shock pressure \(P\in\{11,12,13,14,15\}\) in terms of the relative error of the temperature field at the final time of query, using parametric DW-DMD with \(12\in\mathsf{D}_{\text{train}}\) and \(13\in\mathsf{D}_{\text{train}}\) respectively. 
The error at the newly added training shock pressures is also reduced to around \(0.3\%\), and the error at the predictive cases is also reduced in general, ranging from \(1.3\%\) to \(5\%\) in the interpolatory cases and \(8\%\) to \(9\%\) in the extrapolatory cases. Figure 13 shows a similar comparison using global CcGAN. While adding more training shock pressures and enriching the training datasets in global CcGAN improves the overall solution accuracy, the error is always around \(20\%\), which is still much higher than that of parametric DW-DMD in the corresponding cases in Figure 12. Figure 11: Relative error at various testing shock pressures, using local DW-DMD (left) and CcGAN (right) with various training shock pressure in \(\mathsf{D}_{\text{train}}\). Figure 12: Relative error at various testing shock pressures, using parametric DW-DMD with \(12\in\mathsf{D}_{\text{train}}\) (left) and \(13\in\mathsf{D}_{\text{train}}\) (right). ## 6 Conclusion In this paper, we propose two data-driven surrogate modeling approaches for computationally economical prediction of complex physics phenomena in shock-induced pore collapse processes. The surrogate models are built based on dynamic mode decomposition and U-Net generative adversarial networks, and modified to overcome the challenges of data scarcity and pressure-dependent advective and transport dynamics. The shock pressure is incorporated in the construction of the surrogate models, by means of parametric interpolation in dynamic mode decomposition and conditional input in generative adversarial networks, respectively. Moreover, windowing is used in dynamic mode decomposition for efficient dimensionality reduction by further localizing reduced order models in time. In our numerical realization of these surrogate models, the training of dynamic mode decomposition is much more efficient than that of the generative adversarial network. Moreover, dynamic mode decomposition produces more stable approximations and accurate predictions for the whole pore collapse process at unseen shock pressures. It will be interesting to see how improvements in efficiency and accuracy can be made to neural network approaches for dynamic surrogate modeling of data-scarce large-scale applications with advective and transport phenomena like pore collapse processes. In the meantime, physics-guided data-driven approaches with simpler machine learning methods, like the local distance windowing dynamic mode decomposition, will serve as a powerful tool for these applications. ## Acknowledgments This work was performed at Lawrence Livermore National Laboratory. Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security, LLC, for the U.S. Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344 and LLNL-JRNL-849281. ## Disclaimer This document was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights.
Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes. This article has been authored by an employee of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns all right, title and interest in and to the article and is solely responsible for its contents. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
2302.14406
Instruction Clarification Requests in Multimodal Collaborative Dialogue Games: Tasks, and an Analysis of the CoDraw Dataset
In visual instruction-following dialogue games, players can engage in repair mechanisms in face of an ambiguous or underspecified instruction that cannot be fully mapped to actions in the world. In this work, we annotate Instruction Clarification Requests (iCRs) in CoDraw, an existing dataset of interactions in a multimodal collaborative dialogue game. We show that it contains lexically and semantically diverse iCRs being produced self-motivatedly by players deciding to clarify in order to solve the task successfully. With 8.8k iCRs found in 9.9k dialogues, CoDraw-iCR (v1) is a large spontaneous iCR corpus, making it a valuable resource for data-driven research on clarification in dialogue. We then formalise and provide baseline models for two tasks: Determining when to make an iCR and how to recognise them, in order to investigate to what extent these tasks are learnable from data.
Brielen Madureira, David Schlangen
2023-02-28T08:41:53Z
http://arxiv.org/abs/2302.14406v1
Instruction Clarification Requests in Multimodal Collaborative Dialogue Games: Tasks, and an Analysis of the CoDraw Dataset ###### Abstract In visual instruction-following dialogue games, players can engage in repair mechanisms in face of an ambiguous or underspecified instruction that cannot be fully mapped to actions in the world. In this work, we annotate Instruction Clarification Requests (iCRs) in CoDraw, an existing dataset of interactions in a multimodal collaborative dialogue game. We show that it contains lexically and semantically diverse iCRs being produced self-motivated by players deciding to clarify in order to solve the task successfully. With 8.8k iCRs found in 9.9k dialogues, CoDraw-iCR (v1) is a large spontaneous iCR corpus, making it a valuable resource for data-driven research on clarification in dialogue. We then formalise and provide baseline models for two tasks: Determining when to make an iCR and how to recognise them, in order to investigate to what extent these tasks are learnable from data. ## 1 Introduction Somewhere in interstellar space are the Voyager Golden Records1, which left Earth in spacecrafts in 1977 carrying a message about humanity to extraterrestrial civilizations. The committee in charge of designing the message, chaired by Carl Sagan, was careful to include symbolic instructions on how to play the records. But what if these instructions turn out to be incomprehensible to the aliens? Footnote 1: [https://voyager.jpl.nasa.gov/golden-record/](https://voyager.jpl.nasa.gov/golden-record/) In human dialogue, Clarification Requests (CRs), such as those highlighted in Figure 1, are a common and indispensable mechanism to signal misunderstandings and to negotiate meaning, as recently stressed _e.g._ by Benotti and Blackburn (2017). This utterance-anaphoric conversational move can be realized with various forms, functions/readings and contents (Purver et al., 2003; Ginzburg, 2012) and can trigger responses that may or not be satisfactory (Rodriguez and Schlangen, 2004). In addition to the scientific motivation to comprehend CRs as a linguistic phenomenon, timely producing and understanding the vast range of CRs is also a desirable property for dialogue systems (Schlangen, 2004). This ability is especially relevant in scenarios where building common ground is necessary to act and collaboratively achieve a goal. Instructional interactions are a particular instance where an instruction follower (_IF_) often needs to ask for clarification in order to execute actions according to an instruction giver's (_IG_) instructions. Instruction Clarification Requests (iCRs), as we will refer to them, are a type of CRs originating at Clark (1996)'s 4th level of communication, the level of uptake (Schloder and Fernandez, 2014). They are elicited when an instruction utterance is generally understood (_e.g._ acoustically, syntactically, semantically) but some underspecification or ambiguity prevents the _IF_ to carry out an action with enough certainty, as shown in Figure 1. Learning clarification mechanisms from data is still an understudied research problem (Benotti and Blackburn, 2021). We envision the following desiderata for a dataset suitable for data-driven research on iCRs: Figure 1: Instruction Clarification Requests identified in a portion of a CoDraw dialogue (ID 8906, CC BY-NC 4.0), with a scene from Zitnick and Parikh (2013). 
* **Naturalness**: iCRs should occur by the spontaneous decision process of the _IF_ in real interaction while trying to act and solve a task, ideally not being induced by external incentives in the data collection and also not synthetically generated. * **Specificity**: the annotation should pin down iCRs as a single category, not subsumed within other CRs and dialogue acts. * **Frequency**: relative and absolute occurrence of iCRs should be large enough for data-driven methods and statistical purposes. * **Diversity**: iCRs should occur with various forms and content, being grounded in the game actions and parameters. * **Relevance**: iCRs should be pertinent for players to decide on actions and solve the task successfully. * **Regularity**: iCRs should emerge from underlying strategies of the players and not be the result of random or idiosyncratic behaviour. Our research questions are: i) Can _IF_ dialogue models trained on data learn to recognise when they would profit from receiving more information in order to execute an action, and thus generate an iCR? ii) Can _IG_ dialogue models trained on data learn to recognise when the _IF_ is making an iCR and respond to it? In this work, our contribution to begin addressing these questions is threefold. We (a) perform annotation of naturally occurring iCRs in a collaborative and multimodal dialogue game, namely the CoDraw dataset (Kim et al., 2019), showing that it is a valuable resource for data-driven research on clarification in dialogue; (b) analyse the corpus and provide insights relating iCRs to the game dynamics; and (c) discuss two subtasks and models that can be explored with CoDraw-iCR (v1) and may serve as components of _IF_ and _IG_ dialogue models capable of handling iCRs. ## 2 Related Literature It is a common practice to map CRs to the level of communication (Clark, 1996; Allwood, 2000) where the misunderstanding occurs (Gabsdil, 2003; Schlangen, 2004; Redriguez and Schlangen, 2004; Rieser and Moore, 2005; Rieser et al., 2005; Bohus and Rudnicky, 2005; Benotti, 2009; Koulouri and Lauria, 2009; Benotti and Blackburn, 2021). When ASR used to be a bottleneck for dialogue processing, several works focused on CRs elicited by problems at levels 2 and 3 - perception and understanding (Healey et al., 2003; Schlangen and Fernandez, 2007, 2007, 2013, 2014, _inter alia_). Comparatively less research exists focusing on CRs at level 4, namely intention, uptake or task-level clarifications (Benotti, 2009; Schloder and Fernandez, 2014). We thus contribute to filling this gap, building upon the existing literature we now turn to discuss in more detail. Schloder and Fernandez (2015) perform a corpus-based study splitting level 4 CRs into two types of intention-related conversational problems: recognition and adoption. Instruction-following dialogues, where utterances are intertwined with actions, is one setting where level 4 CRs play a fundamental role in negotiating meaning. Benotti and Blackburn (2017) discuss the relation between instruction, CRs and contexts in such settings and how conversational implicatures are a rich source of CRs. Task-level reformulations, a clarification strategy where the initiator rephrases an utterance with respect to its effects on the task, are typically used to confirm more complex actions in instruction giving dialogues (Gabsdil, 2003) and happen very frequently (Benotti, 2009). Multimodality, _e.g._ gestures, also play a role in instruction-following CRs (Ginzburg and Luecking, 2021). 
Benotti (2009) proposes using planning to infer and generate the task-level clarification potential of instructions and identify level 4 CRs in one dialogue of a corpus of 15 instruction giving dialogues. Benotti and Blackburn (2021) analyse the same corpus and identify six characteristics that may account for the larger proportion of level 4 CRs found in it: task-oriented dialogues, asymmetry in dialogue participant roles (_IF_ and _IG_), immediate world validation by the informational or physical actions, shared view and consequent verification of the actions, long dialogues that enable more shared background, and irreversible actions that require more certainty. Other corpus studies exist in small datasets. Rodriguez and Schlangen (2004) find that 22.17% of the CRs are level 4 CRs in an instruction-following setting. Similarly, Gervits et al. (2021) collect and annotate 22 dialogues with a human-controlled virtual robot that followed high-level or low-level instructions. They propose a very detailed annotation schema for the content of CRs, but there is no clear distinction of level 4 CRs. A larger dialogue game dataset, the Minecraft Dialogue Corpus Narayan-Chen et al. (2019) with 509 games, has been annotated with CRs. Lambert et al. (2019) annotate the _IF_ utterances with eight dialogue acts, one of which, clarification questions, comprises requests for clarification to a given instruction or statement (26.36% of all utterances). Shi et al. (2022) perform a similar annotation with a category instruction-level questions to request clarification for a previous instruction that was not clear or ambiguous (18.64%). The TEACh dataset Padmakumar et al. (2022) contains 3k dialogues annotated with dialogue acts Gella et al. (2022), of which the 675 RequestOtherInfo spans under the Instruction category relate to iCRs. Kiseleva et al. (2021) extend the Minecraft Dialogue Corpus with 47 games containing 126 CRs for an interactive agent building challenge, but concentrate on the task of modelling a "silent _IF_" that cannot ask questions. The second edition of their challenge, which happened recently Kiseleva et al. (2022); Mohanty et al. (2022), focuses on when the _IF_ should ask for clarification and what it should ask about, similar to Aliannejadi et al. (2021). The dataset for the second challenge is not collected through real, synchronous interaction. Instead, one player builds a structure and generates instructions _a posteriori_, and, in a separate step, another player follows these instructions, deciding whether to make a CR. Similarly, Aliannejadi et al. (2021) collects a large dataset of CRs to user requests, augmented synthetically, in a multiple-step process without interaction. Another large-scale dataset with 53k task-relevant questions and answers about an instruction was constructed Gao et al. (2022). However, the data is created by an annotator that does not have to act, but only watches execution videos, asking a question they think would be helpful and then answering their own question. Although these strategies facilitate data collection, they abstract away the decision-making and repair processes that emerge when humans collaborate to solve a task jointly, which are present in CoDraw. Our work and the existing literature converge in addressing CRs for ambiguous instructions, but CoDraw-iCR (v1) maintains the interactive aspect of _sequential_ rounds and the spontaneous initiative of _IF_ to ask. 
It is large in absolute number of iCRs and dialogues, with short games that have a relatively constrained action space. Moreover, our annotation pins down iCRs among other types of CRs. A dataset that can be further explored for iCRs is Thomason et al. (2020). It instantiates a navigation task where the _IF_ gets an ambiguous or underspecified command about where to navigate to, and can ask questions to an oracle during the trajectory. In HRI, following commands is a central task. Koulouri and Lauria (2009) investigate miscommunication management mechanisms in robots performing collaborative tasks, in which task-level reformulation is a challenging type of CR that requires identification of the effects of all possible executions of an instruction. Deits et al. (2013) evaluate various clarification question strategies for robots that receive instructions with an ambiguous phrase. Marge and Rudnicky (2015) examine recovery strategies in situated grounding problems, when an agent has to deal with requests containing referential ambiguity or that are impossible to execute. Interestingly, Jackson and Williams (2018) and Jackson and Williams (2019) raise awareness of the fact that merely posing a CR can already imply willingness to follow a command, which is undesirable in morally delicate situations. Other tangential research areas study clarification edits to solve underspecified phrases in instructional texts Roth et al. (2022) and clarification responses in community forum questions or search queries Braslavski et al. (2017); Rao and Daume III (2018); Aliannejadi et al. (2019); Kumar and Black (2020); Hu et al. (2020); Majumder et al. (2021), scenarios with only minimal or no interaction. **Tasks**. Deciding when to initiate a CR in various contexts is a task classically discussed in the CR literature (Rieser and Lemon (2006); Stoyanchev et al. (2012); Narayan-Chen et al. (2019); Aliannejadi et al. (2021); Shi et al. (2022); Kiseleva et al. (2022), _inter alia_). Fewer works exist specifically about detecting if a CR was made. Identification of CRs in corpora carries out a similar task, although this is not done from the perspective of an agent knowing that it needs to respond to the CR, of which De Boni and Manandhar (2003) is an example. More generally, this task can be subsumed by dialogue act classification, as in, for instance, Gella et al. (2022). ## 3 Motivation and Problem Statement CRs occur naturally in human-human interaction and thus also in visual dialogue games. Neural network-based dialogue models trained on such datasets need to properly handle this phenomenon, which comprises various component tasks for identifying, interpreting, generating and responding to CRs. In this section, we formalise the setting and two of these tasks.
The final state of a completed game contains all filled rounds. A scene similarity metric \(M\) computes how close the reconstructions are to the original image at each round, and the goal is to maximize similarity of the final reconstruction \(M(S,s_{n})\). The dialogue acts by the _IF_ include acknowledgements and clarification requests, whereas the dialogue acts by the _IG_ include instructions and responses to clarifications. Two variations are possible: the state \(s_{i}\) can be accessible for the _IG_ or not. The incremental scenes can be regarded either as the common ground between players (if both can see it) or as what the _IF_ considers to be their common ground (when it is private), akin to what is proposed by Mitsuda et al. (2022). Following Clark (1996), we assume that a pair of equally competent players, committed to the game's goal of maximizing \(M(S,s_{n})\), seek to minimize joint effort. It is acceptable for the _IG_ to produce an underspecified instruction if producing a fully specified instruction would cost more than answering an iCR. Instruction CRs require an extra effort by the _IF_, so they should occur when repair is necessary and the cost of asking is lower than the potential information gain. ### Tasks We propose to use CoDraw-iCR (v1) to advance research in iCRs by modelling two CRs subtasks in an instruction-following dialogue game grounded in a visual modality. Both subtasks can be regarded as a binary decision step happening right before each player's next utterance generation. **Task 1: Ask iCR?** From the _IF_'s perspective as the CR initiator, decide when to initiate a CR. More specifically, after each _IG_ utterance, given the dialogue context \(D_{0:(i-1)}\) (that is, all previous utterances), the current utterance \(g_{i}\) by \(IG\), and the current state2 of the scene \(s_{i}\), the _IF_ must decide on the type of their utterance \(f_{i}\), namely whether to consider the action completed and signal willingness to receive further instructions (_e.g._, produce something like "OK"), or to ask for clarification on some aspect of a previous instruction. That is, this formulation of the task focuses on the dialogue act to perform, abstracting away from the concrete realisation. It deals with the problem of automatically determining what is a good instruction and what is not, on its context. This task relates to slot filling in the sense that an instruction containing all the needed parameters for the mentioned objects should not require clarification. Footnote 2: Under the assumption that the _IF_ has manipulated the scene in response to _IG_ already. For CoDraw, the exact point when the _IF_ types the message has not been preserved. **Task 2: Was this an iCR?** From the _IG_'s perspective as the CR recipient, identify whether an iCR has been made. At each round \(i\), given the dialogue context \(D_{0:i}\) (in which the last utterance, \(f_{i}\), is possibly an iCR) and the original scene \(S\), the _IG_ must decide whether to give further instructions or to (also) respond to an iCR. ## 4 Data and Annotation CoDraw Kim et al. (2019) is a collaborative instruction-following dialogue game, in which a "teller" (in our terminology, the _IG_) observes a clipart scene and instructs a "drawer" (_IF_), who has no access to it, on how to reconstruct it, _i.e._ place cliparts in a canvas with the correct size, direction and position. The corresponding crowdsourced dataset contains 9,993 dialogues in English and has been released under a CC BY-NC 4.0 license. 
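To make the round structure and the two decision points of Section 3.2 explicit, a schematic sketch is given below; the class and function names, and the `model.prob_icr` interface, are illustrative placeholders rather than part of any released code.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Round:
    """One game round r_i = (g_i, a_i, f_i)."""
    g: str                                               # instruction giver (teller) utterance
    actions: List[Tuple] = field(default_factory=list)   # clipart edits performed this round
    f: Optional[str] = None                              # instruction follower (drawer) utterance

def should_ask_icr(history: List[Round], g_i: str, scene_state, model, thr=0.5) -> bool:
    """Task 1 (IF side): should f_i be an iCR rather than an acknowledgement?"""
    return model.prob_icr(history, g_i, scene_state) > thr

def was_icr(history: List[Round], f_i: str, original_scene, model, thr=0.5) -> bool:
    """Task 2 (IG side): is the drawer's last utterance f_i an iCR to be responded to?"""
    return model.prob_icr(history, f_i, original_scene) > thr
```

Section 6 instantiates both decisions with a shared classifier architecture over pretrained scene and text embeddings.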
This dataset instantiates the formalisation proposed in Section 3, but adds an additional signal: The teller is allowed to peek at the drawer's canvas once during the game whenever they want, _i.e._ the teller can get access to \(s_{i}\) and thus judge how it differs from \(S\). Players exchange messages of up to 140 characters through a chat interface and must alternate turns. We will use round to refer to a pair of consecutive utterances by teller and drawer with the corresponding actions. The drawer's performance is evaluated with a scene similarity score that ranges from 0 to 5, where 5 is a perfect match. Table 1 summarizes quantitative aspects of the dataset. Each game is about a different abstract scene3 composed of between 6 and 17 out of a set of 58 clipart types (Zitnick and Parikh, 2013; Zitnick et al., 2013), among which the boy and the girl can have 5 facial expressions and 7 body poses, so the resulting clipart set contains 126 elements and the default background. Multiple types of trees, hats, clouds, glasses and balls can introduce the need for ambiguity resolution in the games. As the individual components can be placed freely, the space of possible resulting scene images is practically unlimited in size. Footnote 3: [http://optimus.cc.gatech.edu/clipart/](http://optimus.cc.gatech.edu/clipart/) In the baseline models proposed in the original paper, the authors introduce a simplifying assumption which removes the drawer's utterances from the dialogue history (they call this condition the _silent drawer_). The authors leave the tasks of identifying when a CR is necessary and of generating it for future work. Subsequent works with this dataset have focused on text-to-image generation (El-Nouby et al., 2019; Matsumori et al., 2021; Zhang et al., 2021; Lee et al., 2021; Liu et al., 2020; Fu et al., 2020) but, to the best of our knowledge, no other work has examined CRs in CoDraw. We thus take up this idea to bring back the dialogue modality to this dialogue game. **Identification of Instruction CRs**. We observe that a good portion of the drawer's utterances belongs to one of two dialogue act types: _acknowledgements_, signaling that the teller may proceed with the next instruction, and _clarification requests_, initiating repair on aspects necessary to solve the task. We thus consider CoDraw to be a potentially interesting source of iCRs. The first step we take is identifying instruction-level CRs in this dataset. To achieve that, we perform a binary decision over the drawer's utterances. For our purposes, an utterance is an iCR if the following assertion is likely true: "_This utterance indicates that the drawer is requesting further information about one or more instruction(s) previously given by the teller in order to perform an action accordingly, likely because part of the instruction was underspecified, ambiguous or not clear._" To reduce the annotation workload, we annotate utterance _types_; forms that occur only once (88.97% of the types) are presented with a one-utterance context window around it. All occurrences of each of the other utterance forms are collapsed into a single datum, presented to the annotators without context. 
## 5 Corpus Analysis In this section, we present an analysis of iCRs in the CoDraw dataset and their relation to the game dynamics, establishing connections to the items in our desiderata and showing that CoDraw-iCRs (v1) is a promising resource to study the phenomenon and to model dialogue agents that learn what to do in face of unclear instructions, complementing existing initiatives.4 Footnote 4: The dataset is available for the community upon request. ### Descriptive Statistics The 13,727 _IF_'s utterance types have been annotated by two annotators, with a Cohen's \(\kappa\)(Cohen, 1960) of 0.92. Table 2 presents the main descriptive statistics of the annotated corpus.5 8,807 (11.36%) of all drawer's utterances in CoDraw are iCRs. 59.45% of the dialogues contain no iCRs. For the purpose of analysis, we also compute numbers relative to the subset of dialogues that contain at least \begin{table} \begin{tabular}{l r r r} \hline \hline & **train** & **val** & **test** \\ \hline dialogues & 7,989 & 1,002 & 1,002 \\ with peek & 7,315 & 923 & 913 \\ avr. final score & 4.20 & 4.19 & 4.17 \\ before peek & 3.97 & 3.95 & 3.96 \\ avr. rounds/dialogue & 7.76 & 7.69 & 7.70 \\ avr. utterance len teller & 14.36 & 14.48 & 14.31 \\ avr. utterance len drawer & 2.58 & 2.67 & 2.58 \\ \hline vocab size _IG_ & & 4,506 & \\ vocab size _IF_ & & 2,200 & \\ \hline \hline \end{tabular} \end{table} Table 1: Descriptive statistics: CoDraw dataset. one iCR; the idea here is that this excludes players who may not have been willing to use the opportunity to ask iCRs. In this subset, the percentage of iCRs is 24.36%. We also separate out numbers computed from the dialogues up the "peek" action described above, as from that move on, the state of the common ground changes. Figure 2 presents the most frequent iCR utterance types, ordered by rank. 7,260 (94.13% of the types) are _hapax legomena_. Types occupying the highest ranks relate to size, position and orientation, which directly map to the possible actions on cliparts, and to disambiguation of _e.g._ facial expression and body pose. Few types occur more than 5 times, which is evidence that the dataset contains a rich diversity of iCR surface forms. Figure 3 aggregates iCRs by initial bigrams, after removing punctuation and initial _ok_ and _okay_ tokens (which realise a different dialogue act). Common iCR forms are polar questions and wh-questions also related to the main actions (placement, resize, flip, disambiguation). The drawer's vocabulary contains 2,200 token types, out of which 1,468 occur in iCRs. Figure 4 shows an overview of the 100 most common tokens. The frequent iCR vocabulary contains many nouns relating to cliparts (slide, table, bear, dog), in particular those that refer to nouns involving ambiguity (boy, girl, cloud, tree, ball). Question words occur frequently (what, how, where, which) as well as words about object placement (horizon, facing, size, top, touching, edge). Non-iCR utterances commonly contain words related to the task (scenery, picture, image, check, next), greetings and thanks, and acknowledgement words (ok, ready, done). ### Relations to Game Dynamics We now turn to examining how the occurrence of iCRs relate to the overall game dynamics. To analyse CRs, three positions in a dialogue are particularly relevant: the source utterance in which the communication problem occurs, the CR utterance where repair is initiated, and the response utterance where the problem should ideally be dealt with. 
Since the dialogue is organized into a sequence of rounds with pairs of utterances \((g_{i},f_{i})\), if an iCR occurs at round \(i\), then \(f_{i}\) is an iCR, \(g_{i}\) is the likely source utterance, and \(g_{i+1}\) is possibly the response utterance. In Figure 1, turns 1, 5 and 11 are sources, 2, 6 and 12 are iCRs and 3, 7 and 13 are responses. However, these events do not necessarily occur in immediate sequence. Here, we investigate how the game dynamics change at two positions: iCR rounds and rounds immediately following an iCR. We look at the mean number of actions per round and the difference in \begin{table} \begin{tabular}{r r r r} \hline \hline & all & w/ iCRs & until peek \\ \hline dialogues & 9,993 & 4,052 & - \\ rounds & 77,502 & 36,149 & 61,829 \\ iCR utterances & 8,807 & 8,807 & 7,803 \\ \% iCR utterances & 11.36 & 24.36 & 12.62 \\ mean iCRs/dialogue & 0.88 & 2.17 & 0.78 \\ std iCRs/dialogue & 1.53 & 1.73 & 1.36 \\ \hline \hline \end{tabular} \end{table} Table 2: Descriptive statistics: Annotation. Figure 3: 50 most frequent iCRs initial bigrams in the CoDraw dataset. Figure 2: 50 most frequent Instruction CRs in the CoDraw dataset ordered by rank. the score metric with respect to the previous state, as shown in Table 3. On average, more actions occur at iCR rounds than at non-CR rounds. The difference is even larger in post-iCR rounds, where necessary edits can be occurring. iCR rounds also cause an average higher improvement in the metric than other rounds and the same occurs for rounds after iCRs in dialogues containing iCRs. To conclude this section, we refer back to our desiderata. The **naturalness** of iCRs is a consequence of the data being produced by synchronous human-human interaction in a setting that does not directly induce players to ask for clarification; indeed, almost 60% of the games do not contain iCRs, which we take to be evidence that they are a result of the private decision making of the _IF_ and not due to them following instructions on which dialogue acts to produce. **Specificity** is guaranteed by the annotation process which had a definition to distinguish iCRs from other utterances. In terms of **frequency**, iCRs are a common phenomenon in CoDraw-iCR (v1), which contains 8,807 (11.36%) iCR utterances, a sample larger than existing annotated datasets. We have gathered evidence that **diversity** is present, given that iCRs occur in various forms and exhibit lexical and semantic variety on content related to the game. When it comes to **relevance** to the task, we have shown that there are statistically significant differences in number of actions and score differences at turns realising and following iCRs, which is a sign that agents need to process iCRs in order to act accordingly throughout the game. **Regularity** is addressed in the experiments in the next section. ## 6 Models and Experiments In this section, we present the models for the two tasks discussed in Section 3.2 as well as the evaluation metrics. Both are binary classification tasks using regression to predict the probability of the positive label (iCR) on imbalanced datasets, whose distribution is shown in Table 4. \begin{table} \begin{tabular}{c c c c} \hline \hline & **train** & **val** & **test** \\ \hline datapoints & 62,067 & 7,714 & 7,721 \\ \% iCR & 11.30 & 11.92 & 11.28 \\ \% not iCR & 88.69 & 88.07 & 88.71 \\ \hline \hline \end{tabular} \end{table} Table 4: Distribution of labels. Figure 4: Most common tokens weighted by frequency. 
### Models We model the two prediction subtasks as a function \(f:(s,c,u)\mapsto P(l=1)\) where \(s\) is the representation of the scene, \(c\) is the representation of the dialogue context, \(u\) is the representation of the last utterance and \(l\) is the label. This function is approximated with a neural network that takes each input embedding, encodes them, and maps them to a concatenated representation which is fed into a two-layer classifier that outputs the probability of the positive label by applying the sigmoid function to the logit output, as illustrated in Figure 5.6 Footnote 6: Details about the implementation, setup and experiments are in the Appendix and the code is available at [https://github.com/briemadu/codraw-icr-v1/](https://github.com/briemadu/codraw-icr-v1/). ### Evaluation Although the area under the ROC curve is a standard evaluation metric for binary classification, it can be deceptive in imbalanced datasets due to the interpretation of specificity, in which case Precision-Recall curves are more suitable (Saito and Rehmsmeier, 2015). The Average Precision (AP) summarizes this curve into one metric that ranges from 0 to 1, where 1 is the best performance, and the theoretical random is the fraction of positive labels. To facilitate comparison to existing literature, we also report macro-average F1 Score. As trivial baselines, we perform logistic regression on basic features of the utterances and on the input representation vectors. For Task 1, the features are the length of the last teller's utterance and its boolean bag-of-words representation. For Task 2, we use the length of the last drawer's utterance and a binary variable indicating whether a content word occurs in it. The list of content words was extracted manually from a sample of dialogues. ### Embeddings The pretrained embeddings for texts are generated with SentenceTransformers (Reimers and Gurevych, 2019) and for images with ResNet101 (He et al., 2016). In order to probe whether the pretrained sentence encoders minimally capture the necessary information for our task, we use the dialogue context representation at the turn before the peek action to predict whether iCRs occurred in the dialogue so far. Using a logistic regression model on dialogues that contain a peek turn, we achieve AP\(=0.91\) and macro F1 Score\(=0.86\) in the validation set. This provides evidence that, despite they having been optimized for other tasks, the occurrence of iCRs is, to some extent, encoded in the representations. ## 7 Results Table 5 presents the main results of our models on the two tasks. The feature-based baselines provide some gain over the random performance for Task 1, and a considerable improvement for Task 2. The logistic regression baseline is enough to produce good results for Task 2, whereas Task 1 remains very challenging even for the neural network model. Figure 5: Illustration of the classifier architecture, with an example dialogue from CoDraw (ID 3454). 
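A minimal PyTorch sketch of this classifier (as illustrated in Figure 5) is shown below; the embedding and hidden dimensions are assumptions for illustration and need not match the released implementation.

```python
import torch
import torch.nn as nn

class ICRClassifier(nn.Module):
    """Encode scene, dialogue context and last utterance embeddings, concatenate,
    and output P(l = 1), i.e. the probability of the iCR label."""
    def __init__(self, d_scene=2048, d_text=768, d_hidden=256):
        super().__init__()
        self.enc_scene = nn.Linear(d_scene, d_hidden)
        self.enc_context = nn.Linear(d_text, d_hidden)
        self.enc_utterance = nn.Linear(d_text, d_hidden)
        self.head = nn.Sequential(nn.Linear(3 * d_hidden, d_hidden), nn.ReLU(),
                                  nn.Linear(d_hidden, 1))

    def forward(self, scene, context, utterance):
        z = torch.cat([torch.relu(self.enc_scene(scene)),
                       torch.relu(self.enc_context(context)),
                       torch.relu(self.enc_utterance(utterance))], dim=-1)
        return torch.sigmoid(self.head(z)).squeeze(-1)
```

The same interface serves both tasks, with the scene input being the drawer's incremental canvas for Task 1 and the original scene for Task 2.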
\begin{table} \begin{tabular}{l l c c c c} \hline \hline & \multicolumn{3}{c}{**Task 1: _IF_**} & \multicolumn{2}{c}{**Task 2: _IG_**} \\ \cline{3-6} & & AP & mF1 & AP & mF1 \\ \hline \multirow{3}{*}{random} & val &.117 &.489 &.117 &.489 \\ & test &.113 &.503 &.113 &.503 \\ \cline{2-6} & val &.206 &.531 &.687 &.858 \\ \cline{2-6} & test &.195 &.518 &.687 &.855 \\ \cline{2-6} & val &.324 &.587 &.984 &.962 \\ \cline{2-6} & test &.287 &.576 &.978 &.961 \\ \cline{2-6} & val &.399 &.662 &.991 &.969 \\ \cline{2-6} & test & **.347** & **.645** & **.988** & **.968** \\ \hline \hline \end{tabular} \end{table} Table 5: Main results. Average Precision and macro-average F1 Score on the validation and test sets. **Ablation.** We remove each component of the input to the neural network model in order to understand what information is more relevant for this task. Table 6 shows the differences with respect to the performance in the validation set. The image representation does not seem to be fully exploited by the model. While in Task 2 the image is expected to be superfluous to detect the dialogue act, it should play a role for Task 1, as it imposes constraints on possible actions. It is possible that the off-the-shelf pretrained model is not adequate to encode cliparts and further investigation with other models and fine-tuning is required. The last message is the most relevant signal for Task 2, as expected, given that it is the iCR being classified. Without it, the task is almost equivalent to Task 1 and the performance is indeed similar. Interestingly, the most relevant signal for Task 1 is the context and not the last utterance, which is evidence that the model fails to distinguish well which instructions require an iCR. To further investigate this, we remove the teller's utterances and the drawer's utterances from the context embeddings. While removing the teller's utterances causes little change, removing the drawer's utterances is almost as detrimental as removing the whole context. We thus conclude that the model is likely exploring patterns in the drawer's behavior to make decisions. ## 8 Discussion Our findings are aligned with the recent conclusions by Aliannejadi et al. (2021) and Shi et al. (2022) that the task of predicting when a CR should be made is rather difficult with data-driven models. Techniques to deal with the class imbalance (downsampling, upsampling and varying the cost-sensitive loss function) and variations of the models (_e.g._ Transformer-based architectures) so far led us to similar results. On the other hand, the task of identifying iCR utterances is uncomplicated even for a simpler logistic regression model. The results reached by our model in Task 1 do not quite allow us to see desideratum **regularity** as satisfied at this point, but we are confident that there is much room for interesting further research with this dataset. On their own, these tasks model an overhearer that predicts what the agent should do. What is of interest in reality is having them integrated as subcomponents, implicitly or explicitly, in the models that also make the instruction-giving/following decisions, because these capabilities are not detached in the agents _de facto_. We expect that the decision to ask for clarification should emerge more easily in representations of models that are also making actions. 
The fact that the drawer's utterances seem to be informative in the dialogue representations for the task speaks against the "silent drawer assumption" in the original models (Kim et al., 2019). Removing the drawer's utterances from the dialogue likely causes the loss of relevant dialogue phenomena that are pertinent to the game.

## 9 Conclusion

We have shown that CoDraw-iCR (v1), the CoDraw dataset augmented with our iCR annotation, is a valuable resource for investigating instruction-level CRs at scale. Through the corpus analysis, we have also concluded that iCR turns and post-iCR turns imply different game dynamics, which is relevant for models trained to play this game successfully. Therefore, in order to succeed in this type of task, agents need to know how to handle iCRs, as they influence not only the dialogue acts but also the game moves. Our models perform well on detecting iCRs and lay the groundwork for further research on predicting when an iCR should be made. The research roadmap is to integrate iCRs into the full \(IF\) agent, so that the decision to ask for clarification is learnt together with the actions in the game. The second annotation phase will provide fine-grained categories of iCRs' form and content and ground them to the game objects, opening the possibility to explore other tasks like generation.

## 10 Limitations

In this section, we discuss some limitations that we inherit from the CoDraw dataset, and then some limitations of our task setup and baseline model.

\begin{table} \begin{tabular}{r r r r r} \hline \hline & \multicolumn{2}{c}{**Task 1: _IF_**} & \multicolumn{2}{c}{**Task 2: _IG_**} \\ \cline{2-5} & AP & mF1 & AP & mF1 \\ \hline no image & -.032 & -.012 & .001 & .005 \\ no message & -.050 & -.021 & -.652 & -.328 \\ no context & -.109 & -.054 & .001 & .007 \\ context w/o teller & -.001 & -.000 & -.001 & -.000 \\ context w/o drawer & -.087 & -.054 & -.000 & .007 \\ \hline \hline \end{tabular} \end{table} Table 6: Results of ablation in the input components. Differences in relation to the main result in the val set.

CoDraw is a simplified but representative instance of instruction giving/following dialogue games and we show that iCRs are frequent and play an important role in it. Since modelling CRs is still an open problem, using abstract scenes is a reasonable strategy to simplify the underlying task while still giving room for iCRs to occur. Limitations are inherent to data collections in controlled environments. We aim for our annotations to add to other recent efforts, which are limited in other ways. CoDraw-iCR (v1) thus aims to move one step forward towards modelling iCRs, but general conclusions depend on various resources and further collaborative efforts in our field. Actions were not irreversible in CoDraw games. The introduction of the peek action for the teller can be an incentive both for the teller to not give exhaustive instructions and for the drawer to build only an approximation, knowing it could be refined after the peek. We have no access to what the performance would have been if they could not make CRs at all. Meta-data about crowdworker ID is not available.7 Because of that, we cannot investigate the effects of individual CR strategies by players. Players that play multiple games get to know what to expect of the game and should both have more practice in identifying underspecified instructions that require repair and be able to make better guesses about the cliparts.
Experienced tellers probably anticipate common problems and adapt their instructions to avoid them (_e.g._ they know that multiple cliparts of trees exist and would likely describe it in their instruction, avoiding unnecessary communication problems). Besides, we cannot draw conclusions on whether dialogues without iCRs indeed did not require repair or some players were personally less inclined to make the effort to ask for clarification. Footnote 7: Personal communication with the authors. Although CRs annotation should take into account the full context [1], the decision to annotate utterance types instead of full dialogues, as discussed in Section 4, is due to the limited resources given the size of the dataset and to the nature of the game setting. We avoided the need to go over multiple non-iCR utterances that occur very often. The plan for the second step of the annotation is to provide fine-grained annotation for each identified iCR within its own context. Our models do not take into account the gallery of cliparts available to the drawer, which is informative (as it limits the choices of cliparts per game) and could be part of the input. Preliminary experiments did not lead to better results. Building a suitable representation of the gallery is left for future research. ## Acknowledgements We are thankful for the anonymous reviewers for their feedback. We thank our student assistants, Sebastiano Gigliobianco and Sophia Rauh, for performing the annotation and Philipp Sadler for generating the step-by-step scenes.
2309.06370
Padding-free Convolution based on Preservation of Differential Characteristics of Kernels
Convolution is a fundamental operation in image processing and machine learning. Aimed primarily at maintaining image size, padding is a key ingredient of convolution, which, however, can introduce undesirable boundary effects. We present a non-padding-based method for size-keeping convolution based on the preservation of differential characteristics of kernels. The main idea is to make convolution over an incomplete sliding window "collapse" to a linear differential operator evaluated locally at its central pixel, which no longer requires information from the neighbouring missing pixels. While the underlying theory is rigorous, our final formula turns out to be simple: the convolution over an incomplete window is achieved by convolving its nearest complete window with a transformed kernel. This formula is computationally lightweight, involving neither interpolation or extrapolation nor restrictions on image and kernel sizes. Our method favours data with smooth boundaries, such as high-resolution images and fields from physics. Our experiments include: i) filtering analytical and non-analytical fields from computational physics and, ii) training convolutional neural networks (CNNs) for the tasks of image classification, semantic segmentation and super-resolution reconstruction. In all these experiments, our method has exhibited visible superiority over the compared ones.
Kuangdai Leng, Jeyan Thiyagalingam
2023-09-12T16:36:12Z
http://arxiv.org/abs/2309.06370v1
# Padding-free Convolution based on Preservation of Differential Characteristics of Kernels ###### Abstract Convolution is a fundamental operation in image processing and machine learning. Aimed primarily at maintaining image size, padding is a key ingredient of convolution, which, however, can introduce undesirable boundary effects. We present a non-padding-based method for size-keeping convolution based on the preservation of differential characteristics of kernels. The main idea is to make convolution over an incomplete sliding window "collapse" to a linear differential operator evaluated locally at its central pixel, which no longer requires information from the neighbouring missing pixels. While the underlying theory is rigorous, our final formula turns out to be simple: the convolution over an incomplete window is achieved by convolving its nearest complete window with a transformed kernel. This formula is computationally lightweight, involving neither interpolation or extrapolation nor restrictions on image and kernel sizes. Our method favours data with smooth boundaries, such as high-resolution images and fields from physics. Our experiments include: i) filtering analytical and non-analytical fields from computational physics and, ii) training convolutional neural networks (CNNs) for the tasks of image classification, semantic segmentation and super-resolution reconstruction. In all these experiments, our method has exhibited visible superiority over the compared ones. machine learning, computer vision, convolutional neural network, padding, differential operator ## I Introduction Convolution is a basic operation in image processing. By convolving an image with certain kernels, one can achieve various effects such as blurring, sharpening, and edge detection. The establishment of modern convolutional neural networks (CNNs) [1, 2, 3, 4] has unprecedentedly highlighted the significance of convolution. CNN-based network architectures have been thriving, such as variational autoencoders (VAEs) [5], generative adversarial networks (GANs) [6] and more recently the diffusion models [7, 8], which spawn numerous derivations for applications. Meanwhile, many techniques have been developed to improve CNNs at a lower level, such as batch normalisation [9], depth-wise separable convolution [10], and skip connections [2, 4], many of which have become common practice in CNN tasks. This paper concerns boundary handling, a key ingredient of convolution for maintaining feature map size. Padding is the routine at present. Despite the great success of CNNs with simple padding (e.g., zero padding), previous studies have shown undesirable boundary effects caused by padding, such as artefacts in features [11, 12, 13] and the spatial bias [14, 15]. A number of techniques have been developed for alleviating padding-induced boundary effects, such as explicit boundary handling (EBH) [11], partial convolution [12], avoiding uneven application of padding [14], and quantifiable position-information encoding [16]. Though they have been proven or supposed to improve CNNs on different aspects, two major drawbacks persist. First, these techniques only work with back propagation, hence unavailable for image filtering with given kernels. Second, most of these techniques, along with simple padding, are empirically motivated, lacking some rigorous connection between the near-boundary and the interior parts whereby boundary handling can be rendered more interpretable and controllable. 
In this paper, we propose a padding-free method for size-keeping convolution, available for both image filtering and CNN training. By introducing a continuous presentation of image over a sliding window, we establish an equivalence between window-wise convolution and pixel-wise differentiation. The latter can be conducted locally at a boundary pixel so that padding is no longer needed. Our final formula is elegantly simple and computationally lightweight: the convolution over an incomplete window is achieved by convolving its nearest complete window with a transformed kernel. Boundary handling essentially addresses the issue of missing information, so there cannot exist a method that always prevails for all kinds of data. The reduction from convolution to differentiation makes our method more efficient for images with smoother or more predictable boundaries, such as continuous fields from mathematics and physics, and high-resolution images. Concerning fields, a highly relevant application is CNN-based physics-informed learning [17, 18, 19, 20, 21, 22], aimed at simulating or inverting partial differential equations (PDEs) with physics-embedding loss functions. In a physics-informed CNN, kernels are employed as a discrete representation of differential operators, so preserving the differential characteristics of such kernels at the boundary becomes imperative. First, any artefacts from padding will act as secondary sources to the boundary-value problem, generating fake energies to propagate across the entire domain; second, padding also means imposing a Dirichlet boundary condition [22], which can be incompatible with the given PDE system. Owing to its rich content, we have to discuss physics-informed learning in another paper; here we propose our method for general-purpose image filtering and machine learning. The remainder of this paper is organised as follows. In the next section, we cover the related work, followed by the description of our method in Section III. We then describe our experiments in Section IV, covering both forward filtering and CNN training. The paper is then concluded in Section V. ## II Related Work Most existing boundary handling techniques are motivated by CNNs, and thus forward-incompatible. We found two methods available for forward image filtering: padding by algebraic extrapolation [23] and a discrete Fourier transformation-based method that involves padding by reflection and circular deconvolution [24]. We will compare our method with the former, as we could not find a reliable implementation of the latter. For CNNs, studies have shown that simple padding may not only cause artefacts in features [11, 12, 13] but also introduce a spatial bias that impairs the translation invariance of CNNs [14, 15, 25, 26]. Here translation invariance means that CNNs are expected to extract the relevant features regardless of the absolute positions of entities in images. A metric of this bias has recently been proposed in [16]. Existing remedies can be largely divided into two categories: advanced padding and relative position encoding, the former motivated more by artefact suppression and the latter by mitigating the spatial bias. Advanced padding includes randomly-valued padding [27], randomly-positioned padding [28], training an auxiliary CNN for padding [29], symmetric padding with even-sized kernels [30], and some non-generic algorithms for domain-specific data [13, 31, 32]. 
Relative position encoding includes eliminating uneven application of padding by constraining the image and kernel sizes [14], partial convolution [12, 33], and EBH [11]. Partial convolution is a non-trainable method that first conducts convolution with zero padding and then divides the result by the number fraction of existent pixels in the sliding window. Statistically, it is similar to the randomly-valued padding [27] where the padded values are sampled from a probability distribution determined by a selected boundary vicinity. EBH is the most expensive method, which introduces \((K^{2}-1)\) duplicates of kernels (where \(K\) is the kernel size) to be trained exclusively on the near-boundary pixels grouped by their positions relative to the image boundary. In theory, these \((K^{2}-1)\) duplicated kernels should maximally reduce boundary effects, but such an extra cost is prohibitively high, especially for a large kernel size (such as \(K=7\)); besides, these duplicated kernels can only see a small fraction of data near the boundary, so they can converge much slower than the main kernel trained for the bulk interior. Empirically, we do not observe an outstanding advantage of EBH from our CNN experiments. In Section IV, we will compare our method to the simple padding schemes (zeros, reflect, replicate and circular), along with padding by extrapolation [23], padding by distribution [27], partial convolution [12], and EBH [11].

## III Method

We describe our method in this section. Einstein summation convention is adopted for both superscript and subscript indices (in lower case letters) unless they are parenthesised.

### _Forward convolution_

Let \(\mathbf{\omega}=\{\omega_{ij}\}\), \(i,j\in\{0,1,\cdots,K-1\}\) denote the input kernel, which has size \(K\times K\), with \(K\) being an odd number. Here we assume a square-shaped kernel only to simplify the notations. Consider a pixel "a" centring a complete sliding window "A", as illustrated in Fig. 1(a), meaning that pixel a is "valid" for convolution. Let \(\mathbf{u^{\texttt{A}}}=\{u^{\texttt{A}}_{ij}\}\), \(i,j\in\{0,1,\cdots,K-1\}\) denote the input image given at the \(K\times K\) pixels in A (after stride and dilation if required). The convolution over A can then be written as \[z^{\texttt{a}}:=\omega_{ij}u^{\texttt{A}}_{ij}=\mathbf{\omega}:\mathbf{u^{ \texttt{A}}}. \tag{1}\] We aim for a non-padding-based method to accomplish such convolution at the "invalid" pixels that centre incomplete sliding windows, such as "b" centring "B" in Fig. 1(a). A continuous, sub-pixel image can be formed in A using the Lagrange interpolating polynomial, as denoted by \(\tilde{u}^{\texttt{A}}(h,w)\) with \(h\) and \(w\) being the spatial coordinates: \[\tilde{u}^{\texttt{A}}(h,w):=l_{i}(h)l_{j}(w)u^{\texttt{A}}_{ij},\quad h,w\in[ 0,K-1], \tag{2}\] where \(l_{i}(x)\) is the Lagrange basis simplified for a uniform grid with a unit interval, \[l_{i}(x)=\prod_{\begin{subarray}{c}0\leq k\leq K-1\\ k\neq i\end{subarray}}\frac{x-k}{i-k},\quad i\in\{0,1,\cdots,K-1\}. \tag{3}\] The interpolation in eq. (2) leads to a 2D polynomial of degree \((K-1)\) that preserves the pixel values, i.e., \(\tilde{u}^{\texttt{A}}(i,j)=u^{\texttt{A}}_{ij}\).
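For concreteness, the Lagrange basis of eq. (3) and its exact derivatives, which underlie the differential kernels defined next, can be generated symbolically. This is only an illustrative sketch, assuming SymPy and \(K=3\); evaluated at the window centre it recovers the familiar finite-difference stencils.

```python
import sympy as sp

K = 3
x = sp.symbols("x")

# Lagrange basis l_i(x) on the uniform unit grid {0, ..., K-1} (eq. 3).
def lagrange_basis(i):
    expr = sp.Integer(1)
    for k in range(K):
        if k != i:
            expr *= (x - k) / (i - k)
    return sp.expand(expr)

basis = [lagrange_basis(i) for i in range(K)]

# m-th derivatives l_i^m evaluated at the window centre p = (K-1)/2 = 1:
# m=0 -> [0, 1, 0], m=1 -> [-1/2, 0, 1/2], m=2 -> [1, -2, 1].
for m in range(K):
    print(m, [sp.diff(b, x, m).subs(x, (K - 1) // 2) for b in basis])
```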
Spatial derivatives of \(\tilde{u}^{\texttt{A}}(h,w)\) can then be conducted up to order \((K-1)\) in each direction, formulated as \[\mathcal{D}^{mn}\tilde{u}^{\texttt{A}}\big{|}_{(h,w)}:=\left.\frac{\partial^{ (m+n)}\tilde{u}^{\texttt{A}}}{\partial h^{(m)}\partial w^{(n)}}\right|_{(h,w)} =l^{m}_{i}(h)l^{n}_{j}(w)u^{\texttt{A}}_{ij}, \tag{4}\] for \(m,n\in\{0,1,\cdots,K-1\}\), where \(l^{m}_{i}(x)\) is the \(m\)-th derivative of \(l_{i}(x)\), available in exact form given \(K\) (as they are all rational numbers), and \(\mathcal{D}^{mn}\) denotes the partial differential operator of order \((m,n)\), e.g., \(\mathcal{D}^{12}=\frac{\partial^{3}}{\partial h\partial w^{2}}\). Evaluated at a pixel with location \((p,q)\), for \(p,q\in\{0,1,\cdots,K-1\}\), eq. (4) yields \[\mathcal{D}^{mn}\tilde{u}^{\texttt{A}}\big{|}_{(p,q)}=\delta^{mn}_{pq,ij}u^{ \texttt{A}}_{ij}=\boldsymbol{\delta}^{mn}_{pq}:\mathbf{u^{\texttt{A}}}, \tag{5}\] based on the definition that \[\delta^{mn}_{pq,ij}:=l^{m}_{i}(p)l^{n}_{j}(q). \tag{6}\] Equation (5) states that \(\mathcal{D}^{mn}\tilde{u}^{\texttt{A}}|_{(p,q)}\), the \((m,n)\)-th order derivative of our continuous image evaluated at pixel \((p,q)\), can be obtained by convolving the image \(\mathbf{u^{\texttt{A}}}\) with the above-defined kernel \(\boldsymbol{\delta}^{mn}_{pq}\). Therefore, we call \(\boldsymbol{\delta}\) defined by eq. (6) the _differential kernels_, which depend only on the kernel size \(K\). An example for \(K=3\) is provided in Fig. 1(b).

Our central idea is to _represent the wanted convolution or eq._ (1) _over a generic window \(\texttt{B}\), complete or incomplete, as a unique linear differential operator \(\mathcal{L}\) applied on the continuous image \(\tilde{u}(h,w)\) and evaluated locally at the centre \(\texttt{b}\)_. Formally, we prescribe the following equivalence: \[\boldsymbol{\omega}:\mathbf{u}^{\texttt{B}}\equiv\,\mathcal{L}\tilde{u}_{ \texttt{b}}\,,\quad\mathcal{L}:=\alpha^{mn}\mathcal{D}^{mn}, \tag{7}\] where \(\alpha^{mn}\), for \(m,n\in\{0,1,\cdots,K-1\}\), are the real coefficients of the linear differential operator \(\mathcal{L}\). It must be emphasised that eq. (7) does not specify how \(\tilde{u}(h,w)\) is determined, implying that the window for its interpolation does not need to be our target window \(\texttt{B}\). This will eventually enable convolution over the incomplete windows. The coefficients \(\alpha^{mn}\) are determined such that eq. (7) holds _at every valid pixel_ given \(\tilde{u}(h,w)\) interpolated by its centred window. A generic example is our pixel \(\texttt{a}\) that centres window \(\texttt{A}\). The local coordinates of \(\texttt{a}\) in \(\texttt{A}\) are \((M,M)\), with \(M=\frac{K-1}{2}\). With \(\tilde{u}(h,w)\) interpolated by \(\texttt{A}\), eq. (7) becomes \[\begin{split}\boldsymbol{\omega}:\mathbf{u}^{\texttt{A}}& \equiv\,\mathcal{L}\tilde{u}_{\texttt{a}}\equiv\,\mathcal{L} \tilde{u}^{\texttt{A}}\big{|}_{\texttt{a}}\\ &=\,\alpha^{mn}\mathcal{D}^{mn}\tilde{u}^{\texttt{A}}\big{|}_{(M,M)}=\alpha^{mn}\boldsymbol{\delta}_{MM}^{mn}:\mathbf{u}^{\texttt{A}},\end{split} \tag{8}\] the last part using our definition of \(\boldsymbol{\delta}_{pq}^{mn}\) in eq. (6). To make eq. (8) hold regardless of data \(\mathbf{u^{A}}\), \(\alpha^{mn}\) must be the solution of the following \(K^{2}\times K^{2}\) linear system: \[\alpha^{mn}\,\boldsymbol{\delta}_{MM}^{mn}=\boldsymbol{\omega}. \tag{9}\]
If we denote the vectorisation of a generic \(K\times K\) matrix \(\mathbf{a}\) by \(\mathbf{\vec{a}}\) such that \(\vec{a}_{iK+j}=a_{ij}\), the above linear system can be recast to the following standard form: \[\underbrace{\begin{pmatrix}\vec{\delta}_{MM,0}^{0}&\vec{\delta}_{MM,0}^{1}& \cdots&\vec{\delta}_{MM,0}^{K^{2}-1}\\ \vec{\delta}_{MM,1}^{0}&\vec{\delta}_{MM,1}^{1}&\cdots&\vec{\delta}_{MM,1}^{K^{2}-1}\\ \vdots&\vdots&\ddots&\vdots\\ \vec{\delta}_{MM,K^{2}-1}^{0}&\vec{\delta}_{MM,K^{2}-1}^{1}&\cdots&\vec{\delta}_{MM,K^{2}-1}^{K^{2}-1}\end{pmatrix}}_{\mathbf{\vec{D}}_{MM}}\underbrace{\begin{pmatrix}\alpha^{0}\\ \alpha^{1}\\ \vdots\\ \alpha^{K^{2}-1}\end{pmatrix}}_{\mathbf{\vec{\alpha}}}=\underbrace{\begin{pmatrix}\vec{\omega}_{0}&\vec{\omega}_{1}&\cdots&\vec{\omega}_{K^{2}-1}\end{pmatrix}^{\top}}_{\mathbf{\vec{\omega}}}. \tag{10}\] The assembled matrix \(\mathbf{\vec{D}}_{MM}\) is always invertible because the \(K^{2}\) differential kernels, \(\vec{\boldsymbol{\delta}}_{MM}^{k}\) for \(k\in\{0,1,\cdots,K^{2}-1\}\) (i.e., each column in \(\mathbf{\vec{D}}_{MM}\)), are linearly independent given that \(l_{i}(x)\) is a complete polynomial of degree \((K-1)\). The inverse of \(\mathbf{\vec{D}}_{MM}\) can be exactly shown for a given kernel size \(K\), so computing \(\mathbf{\vec{\alpha}}=\mathbf{\vec{D}}_{MM}^{-1}\cdot\mathbf{\vec{\omega}}\) is trivial.

Having uniquely determined the differential operator \(\mathcal{L}\) as \(\alpha^{mn}\mathcal{D}^{mn}\), we can evaluate \(\mathcal{L}\) at any invalid pixel once a continuous image in its neighbourhood is provided. Here we choose to determine such a continuous image by its nearest complete window. Let b be an invalid pixel centring an incomplete window B, and A be the complete window nearest to b, such as Fig. 1(a). It is straightforward to show that b lies in A (but not at its centre). Assume that the local coordinates of b in A are \((R,S)\), with \(R,S\in\{0,1,\cdots,K-1\}\) and \((R,S)\neq(M,M)\). We use the R.H.S. of eq. (7) to compute the convolution over B, however, with the continuous image \(\tilde{u}(h,w)\) interpolated from A: \[\begin{split}\mathbf{\omega}:\mathbf{u^{B}}&\equiv\mathcal{L}\tilde{u}\big{|}_{\text{b}}\approx\mathcal{L} \tilde{u}^{A}\big{|}_{\text{b}}\\ &=\alpha^{mn}\mathcal{D}^{mn}\tilde{u}^{A}\big{|}_{(R,S)}=\alpha ^{mn}\boldsymbol{\delta}_{RS}^{mn}:\mathbf{u}^{A}.\end{split} \tag{11}\] We colour \(\mathbf{u^{B}}\) in red to indicate that it is partially unavailable. Note that the "\(\approx\)" sign in the above equation indicates the only approximation we have introduced: the window by which the continuous image \(\tilde{u}\big{|}_{\text{b}}\) is interpolated. More readably, using \(\mathbf{\vec{\alpha}}=\mathbf{\vec{D}}_{MM}^{-1}\cdot\mathbf{\vec{\omega}}\), eq. (11) can be simplified as \[\mathbf{\omega}:\mathbf{u^{B}}\approx\vec{\tilde{\omega}}\cdot\mathbf{\vec{u}}^{A},\quad\vec{\tilde{\omega}}:=\mathbf{\vec{D}}_{RS}\cdot\mathbf{\vec{D}}_{MM}^{-1}\cdot\mathbf{\vec{\omega}}. \tag{12}\] We refer to eq. (12) as the _differential kernel transformation_, with \(\tilde{\boldsymbol{\omega}}\) denoting the transformed kernel. Clearly, it is compatible with the valid pixels, for which \(R=S=M\) and thus \(\tilde{\boldsymbol{\omega}}=\boldsymbol{\omega}\). Refer to Fig. 1 for the example of \(K=3\). In summary, _the convolution with \(\mathbf{\omega}\) over an incomplete window B is conducted by a "shifted" convolution with the transformed kernel \(\tilde{\boldsymbol{\omega}}\) over its nearest complete window A, under a window shift of \((M-R,M-S)\)_.
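The transformation in eq. (12) can be prototyped in a few lines. The sketch below (NumPy; the function and variable names are ours) assembles the matrices \(\mathbf{\vec{D}}_{pq}\) from the differential kernels of eq. (6) and applies the transformation to an arbitrary kernel; it is meant to illustrate the formula, not to reproduce the optimised implementation released with the paper.

```python
import numpy as np

K = 3
M = (K - 1) // 2  # centre index of the window

def lagrange_deriv(i, m, x):
    """m-th derivative of the Lagrange basis l_i (eq. 3) evaluated at x."""
    roots = [k for k in range(K) if k != i]
    denom = np.prod([i - k for k in roots])
    coeffs = np.poly(roots)              # monomial coefficients of prod_k (x - k)
    return np.polyval(np.polyder(coeffs, m), x) / denom

def D_matrix(p, q):
    """Column (m*K + n) holds the vectorised differential kernel delta^{mn}_{pq} (eq. 6)."""
    D = np.empty((K * K, K * K))
    for m in range(K):
        for n in range(K):
            delta = np.array([[lagrange_deriv(i, m, p) * lagrange_deriv(j, n, q)
                               for j in range(K)] for i in range(K)])
            D[:, m * K + n] = delta.ravel()
    return D

D_MM_inv = np.linalg.inv(D_matrix(M, M))

def transformed_kernel(omega, R, S):
    """Eq. (12): kernel to convolve with the nearest complete window of an invalid pixel."""
    return (D_matrix(R, S) @ D_MM_inv @ omega.ravel()).reshape(K, K)

omega = np.random.randn(K, K)
print(transformed_kernel(omega, 0, 1))                       # pixel one row above the window centre
assert np.allclose(transformed_kernel(omega, M, M), omega)   # valid pixels leave the kernel unchanged
```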
### _Method properties_ Our method has the following key properties: #### Iii-B1 Theoretical soundness Our method preserves the differential characteristics of kernels at the invalid pixels by making the window-wise convolution collapse to a pixel-wise differential operator, or eq. (7). Such a connection between the "invalid" near-boundary part and the "valid" interior part is theoretically sound and self-contained. In contrast, most previous methods are empirically motivated by CNNs. #### Iii-B2 Only using original pixel values Though we have introduced a continuous image conceptually, our final formula for convolution at the invalid pixels, eq. (12), operates on the original pixels from the input image. Without introducing extra information outside the image boundary (such as by padding or extrapolation) or between pixels (such as by interpolation), our method avoids these sources of artefacts. This property also makes our method compatible with stride and dilation for CNN training. #### Iii-B3 Low overhead The transformation matrices in eq. (12), \(\mathbf{\vec{D}}_{RS}\cdot\mathbf{\vec{D}}_{MM}^{-1}\), are constants given the kernel size \(K\), which can be precomputed for frequently-used \(K\)'s, such as 3, 5 and 7. Therefore, eq. (12) demands a low computational cost in both forward and back propagation. #### Iii-B4 Favouring data with smooth boundaries Let \(\tilde{u}^{*}(h,w)\) denote the true image function, which can be discontinuous or even inexpressible. At the boundary pixels, our method preserves the convolution-associated differential operator up to order \((K-1)\). Therefore, it becomes exact when the local Taylor expansion of \(\tilde{u}^{*}(h,w)\) on the boundary is of order \((K-1)\) or lower, i.e., when \(\tilde{u}^{*}(h,w)\) is sufficiently smooth on the boundary. The error of our method increases as \(\tilde{u}^{*}(h,w)\) becomes more non-smooth or unpredictable near the boundary. Datasets underpinned by physical or mathematical processes, such as solutions of partial differential equations and tomographic images, tend to benefit from our method. Furthermore, our method tends to work better with higher-resolution images because \(\tilde{u}^{*}(h,w)\) becomes smoother as the sliding windows shrink with respect to image contents. ## IV Experiments We evaluate our method with two types of experiments, the former on image filtering with given kernels, as reported in Section IV-A, and the latter on CNN-based computer vision tasks, as reported in Section IV-B. Einstein summation convention is not used in this section. ### _Image filtering_ We consider three synthetic datasets. The first two are analytical 2D functions, respectively generated from the Chebyshev polynomials and the spherical harmonics. Both are popular basis functions in computational physics and mathematics. Therefore, the accuracy of our method as tested on these basis functions can reasonably indicate its versatility for handling extensive continuous fields. Our Chebyshev-based functions are given by \[C_{n}(h,w)=U_{n}(h)U_{n}(w)\sin\left(n(h+w)\right), \tag{13}\] where the \(U_{n}\)'s, for \(n\in\{0,1,2,\cdots\}\), are the Chebyshev polynomials of the second kind, and the rotation by \(\sin\left(n(h+w)\right)\) makes the 2D patterns non-parallel to the axes. 
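For reference, a minimal sketch for sampling the Chebyshev-based field \(C_{n}(h,w)\) of eq. (13) on a pixel grid (assuming SciPy; the resolution and the \([-1,1]^{2}\) domain are our illustrative choices, since the text does not fix them here):

```python
import numpy as np
from scipy.special import eval_chebyu  # Chebyshev polynomial of the second kind, U_n

def chebyshev_field(n, size=256):
    """C_n(h, w) = U_n(h) U_n(w) sin(n (h + w)) on a [-1, 1]^2 grid (eq. 13)."""
    x = np.linspace(-1.0, 1.0, size)
    h, w = np.meshgrid(x, x, indexing="ij")
    return eval_chebyu(n, h) * eval_chebyu(n, w) * np.sin(n * (h + w))

field = chebyshev_field(n=20)
print(field.shape, float(field.min()), float(field.max()))
```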
The spherical harmonic-based functions are given by \[S_{n}(h,w)=Y_{2n}^{n}(h,w)\sin\left(n(h+w)\right), \tag{14}\] where \(Y_{l}^{m}\) is the spherical harmonic function of degree \(l\) and order \(m\), composed of trigonometric functions and the associated Legendre polynomials. As the function order \(n\) grows, both \(C_{n}\) and \(S_{n}\) becomes more oscillating or non-smooth, as shown in Fig. 1(a) and 1(b). Our third dataset is non-analytical, including numerical solutions of the Navier-Stokes equations for turbulence, as shown in 1(c), borrowed from the physics-informed neural operators [34]. We apply random filters to these datasets and compare the accuracy of the following eight methods: padding respectively by zeros (zero), reflection (Refl), replicate (Repl), circular (Circ), extrapolation (Extr) [23], and distribution (Rand) [27], along with partial convolution (Part) [12] and our differentiation-based method (Diff). For Extr, we use linear, quadratic and cubic respectively for \(K=3,5\) and \(7\). For Rand, the padded values are sampled from the four normal distributions determined respectively for the top, bottom, left and right edges with a thickness of \((K+1)/2\). Requiring back-propagation, EBH [11] is not applicable here. The superiority of our method against the others is visible from Fig. 2. The second row shows the \(L^{1}\) errors of each method for 100 random filters applied to the three datasets. Our method proves to be remarkably more accurate than the others (note that the y-axes are logarithmic). For Chebyshev and spherical harmonics, the errors increase with the function order \(n\), but our method prevails across all the orders. The bottom row of Fig. 2 zooms into the boundary artefacts caused by the Laplace filters with different kernel sizes. It is shown that our method is visually artefact-free even at \(n=100\); Extr also works reasonably well, but its induced errors are still visible and 1\(\sim\)2 orders of magnitude larger than ours. The other padding schemes and partial convolution will cause strong artefacts irrespective of kernel size and function order, so they are unsuitable for the task of image filtering. ### _Learning with CNNs_ In this section, we consider three common CNN tasks: image classification, semantic segmentation and super-resolution reconstruction. For all these experiments we use real-world datasets (instead of e.g. analytical fields) to avoid favouritism towards our method by means of forward filtering. The original U-Net architecture [35] is adopted, with its Conv2d layers varying among nine different boundary handling methods. The first eight are those from the previous experiment: Zero, Refl, Repl, Circ, Extr, Rand, Part and Diff (ours), all non-trainable, and the last one is EBH [11], involving eight duplicates of kernels (as \(K=3\) in a U-Net). The relative wall-times for training the nine U-Nets are reported in Table I, which shows that our method runs as fast as circular padding (PyTorch built-in). For each problem, the nine U-Nets are initialised with the same weights, and we loop over five random seeds to obtain the reported metric scores. It must be emphasised that _boundary handling is a low-level operation in CNNs, so we use a simple network architecture and loss functions to isolate its influences_ instead of pursuing the state of the art of the considered problems (datasets) with any advanced yet irrelevant techniques. #### Iii-B1 Classification We use the Caltech-101 dataset [36] for this experiment. 
The latent (bottom layer) of the U-Net is connected to a two-layer fully-connected network to predict the soft labels and then the classification (cross-entropy) loss. The total loss is the sum of the classification loss and 1% of the reconstruction loss (which accelerates convergence). The images are all reshaped to \(448\times 448\). The accuracy of the models on the test set (20% of data) is reported in Table I. It is shown that Part, EBH and Diff have achieved a much higher accuracy (\(>50\%\)) than the other six padding-based methods (\(<30\%\)), with our Diff attaining the highest. The padding-based U-Nets have mostly failed to learn (with many hyperparameters tested), as can be seen from their low accuracy and training history (not shown here for brevity). This can be a good example of padding-free boundary handling (Part, EBH and Diff) significantly enhancing the learnability of a CNN. #### Iii-B2 Semantic Segmentation In this experiment, we use the Cityscapes dataset [36] for end-to-end supervised learning of semantic segmentation with a U-Net. The category identities (8 classes) instead of the fine identities (34 classes) are used as the labels because our simple architecture and loss function (cross entropy) could not well handle a high degree of class imbalance in the latter. The original images (\(1024\times 2048\)) are decimated by a factor of two due to our device capacity. The segmentation metric scores on the validation set are shown in Table I. It can be seen that the scores from Zero, Refl, Repl, Circ and Rand are mostly identical, implying that none of them have facilitated segmentation from the perspective of boundary handling. Extr and Part have led to a small (yet visible) improvement, and EBH has advanced further. Our method Diff has yielded the best results. Note that all the methods have attained a high baseline (accuracy \(>97\%\)) in an absolute sense, from which even a small improvement is not easy to achieve. #### Iii-B3 Super-resolution Reconstruction In this experiment, we train a U-Net to reconstruct a world topographic map from a low to a high resolution. The data come from the _ETOPO 2022 15 Arc-Second Global Relief Model_[37], a large image containing \(43200\times 86400\) pixels, each spanning a central angle of \(15^{\prime\prime}\) (or 0.464 km on Earth's surface) in the latitudinal and longitudinal directions, as displayed in Fig. 3(a) and 3(b). We train the U-Net with non-overlapping small patches sampled from this large image, each with size \(192\times 192\) (geographically \(0.8^{\circ}\times 0.8^{\circ}\)). We generate the low-resolution input by a Gaussian filter (\(\sigma=3\)), from which we attempt to recover the high-resolution output, as shown in Fig. 3(c). Mean squared error (MSE, denoted \(\varepsilon^{2}\)) is used as the loss function. To make the boundary effects more visible, we divide each test patch into two parts, interior and frame, with a frame width of eight pixels. The reconstruction error is computed separately over these two parts. For our patches of size \(192\times 192\), \(\varepsilon_{\text{INTR}}^{2}\) is computed over the central part of size \(176\times 176\) (\(176=192-2\times 8\)), and \(\varepsilon_{\text{FRAME}}^{2}\) over the cropped frame. The MSEs are summarised in Table I, which shows that our method (Diff) has not only achieved the highest accuracy for both interior and frame but also maximally reduced the error gap between interior and frame.
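A small sketch of the interior/frame error split used for \(\varepsilon_{\text{INTR}}^{2}\) and \(\varepsilon_{\text{FRAME}}^{2}\) (NumPy; the random patches merely stand in for U-Net outputs and ground truth):

```python
import numpy as np

def interior_frame_mse(pred, target, frame=8):
    """Split the squared error of an (H, W) patch into interior and frame parts."""
    err2 = (pred - target) ** 2
    mask = np.zeros_like(err2, dtype=bool)
    mask[frame:-frame, frame:-frame] = True   # central (H-2f) x (W-2f) region, e.g. 176 x 176
    return err2[mask].mean(), err2[~mask].mean()

rng = np.random.default_rng(0)
pred, target = rng.random((192, 192)), rng.random((192, 192))
e_in, e_fr = interior_frame_mse(pred, target)
print(f"interior={e_in:.4f}  frame={e_fr:.4f}  ratio={e_fr / e_in:.2%}")
```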
We visualise the error maps over a randomly picked region (near Caspian Sea), as shown in Fig. 3(e)\(\sim\)(m). These error maps are obtained in three steps: patch reconstruction by the U-Net, assembling the \(4\times 4\) non-overlapping error maps, and applying the Farid transform [38] to detect the horizontal and vertical edges. It is shown that the boundary artefacts are visible in all the error maps except the one delivered by Diff. Table I and Fig. 3 make it evident that Diff performs significantly better than the other methods for this super-resolution task.

## V Conclusions

We have presented a new padding-free method for size-keeping image convolution. The central idea is to establish an equivalence between window-wise convolution over a discrete image and pixel-wise differentiation over a continuous representation of that image. Convolution within an incomplete sliding window can then be achieved by differentiation at its centre, with the continuous image parameterised from the nearest complete window. As such, our method preserves the differential characteristics of kernels. Our final formula is simple and computationally lightweight, available for both image filtering and CNN-based machine learning. The preservation of the differential operator at the boundary pixels makes our method more accurate for processing images with smoother boundaries, such as mathematical or physical fields and high resolution images. Our experiments have shown visible superiority of our method on both image filtering and CNN-based computer vision tasks.

Fig. 2: Filtering three datasets by convolution: Chebyshev, spherical harmonics and solutions of Navier-Stokes equations [34]. The top row shows our target functions. The second row shows the \(L^{1}\) errors (\(\varepsilon^{1}\)) for 100 random filters applied to the datasets. We vary the function order \(n\) for Chebyshev and spherical harmonics, and the batch index for the Navier-Stokes. In the bottom row, we apply the Laplace filters (\(\Delta=\partial_{hh}+\partial_{ww}\)) of sizes 3, 5 and 7 to \(C_{100}\): (g) shows the ground truth of \(\Delta C_{100}\) based on analytical evaluation, and (h)\(\sim\)(j) the results by convolution, zooming into the annotated box in (g).
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{ \begin{tabular}{c} **Classification** \\ **Caltech-101** \\ \end{tabular} } & \multicolumn{3}{c|}{**Semantic Segmentation**} & \multicolumn{3}{c|}{**Super-resolution Reconstruction**} & \multicolumn{2}{c|}{**Computational cost**} \\ & & \multicolumn{3}{c|}{**Cityscapes**} & \multicolumn{3}{c|}{**ETOPO-15\({}^{\prime\prime}\)**} & \multicolumn{2}{c|}{**Original U-Net**} \\ \cline{2-10} & _Accuracy\({}^{\mathrm{a}}\)_ & _IoU_ & _F1_ & _Accuracy_ & \(\varepsilon_{\mathrm{INTR}}^{2}\times 10^{4}\) & \(\varepsilon_{\mathrm{FRAME}}^{2}\times 10^{4}\) & \(\varepsilon_{\mathrm{FRAME}}^{2}/\varepsilon_{\mathrm{INTR}}^{2}\) & _# kernels\({}^{\mathrm{b}}\)_ & _Wall-time\({}^{\mathrm{c}}\)_ \\ \hline Zero & 30.7\% & 84.8\% & 91.8\% & 97.9\% & 7.0 & 10.2 & 144.5\% & 1 & 1.0 \\ Refl & 28.8\% & 84.6\% & 91.6\% & 97.7\% & 7.2 & 10.0 & 139.9\% & 1 & \(\sim\)1.1 \\ Repl & 22.9\% & 85.1\% & 92.0\% & 97.9\% & 7.0 & _9.2_ & _132.0\%_ & 1 & \(\sim\)1.1 \\ Circ & 29.1\% & 85.0\% & 91.9\% & 97.9\% & 7.2 & 11.6 & 157.9\% & 1 & \(\sim\)1.5 \\ Extr [23] & 28.8\% & 85.3\% & 92.1\% & 98.0\% & 7.4 & 10.9 & 147.5\% & 1 & \(\sim\)5.8 \\ Rand [27] & 27.9\% & 85.1\% & 91.9\% & 97.9\% & 7.7 & 15.5 & 200.2\% & 1 & \(\sim\)2.6 \\ Part [12] & **53.7\%** & 85.5\% & 92.3\% & 98.1\% & 7.1 & 10.3 & 145.0\% & 1 & \(\sim\)1.2 \\ EBH [11] & 53.2\% & **85.9\%** & **92.6\%** & **98.3\%** & _6.8_ & 9.3 & 135.9\% & 9 & \(\sim\)12.5 \\ Diff (ours) & **55.8\%** & **86.1\%** & **92.8\%** & **98.4\%** & **6.6** & **8.0** & **120.7\%** & 1 & \(\sim\)1.6 \\ \hline STD Diff\({}^{\mathrm{d}}\) & \(\pm\)0.92\% & \(\pm\)0.10\% & \(\pm\)0.06\% & \(\pm\)0.02\% & \(\pm\)0.05 & \(\pm\)0.12 & \(\pm\)1.8\% & & \\ \hline \end{tabular}

* The best and second-best scores in each column are respectively printed in boldface and italic-boldface.
* This column shows the total number of kernels to be trained for one target kernel; only EBH introduces eight duplicates.
* This column shows the approximate wall-time (relative to Zero) required to train a U-Net with input shape [64, 3, 224, 224], measured for forward and back propagation on an Nvidia A100. The first four methods are PyTorch built-ins while the rest are based on our implementation.
* For brevity, we only show the standard deviations of the metrics yielded by Diff. The metrics are stable with respect to model initialisation.

Fig. 3: Super-resolution reconstruction of a world topographic map using a U-Net. The top row shows the data at three scales, (a) for global, (b) zooming into the region marked by the red-coloured cross in (a), and (c) zooming into the red-coloured box in (b). (c) shows a pair of input and output of the U-Net. (d) shows the reconstruction of (b) by model Zero; the reconstructions by the other models look similar. (e)\(\sim\)(m) display the error maps over (b), obtained by patch reconstruction, assembling the \(4\times 4\) error maps (non-overlapping), and the Farid transform (both horizontal and vertical) to highlight the artefacts near the patch boundaries (the horizontal and vertical stripes).
We provide an optimised implementation of our method, including both forward filtering (to replace torch.nn.functional.conv2d) and a convolutional layer class (to replace torch.nn.Conv2d), available open-source from [https://github.com/stfc-sciml/DifferentialConv2d](https://github.com/stfc-sciml/DifferentialConv2d) (with all experiments included). ## Acknowledgements This work is supported by the EPSRC grant, Blueprinting for AI for Science at Exascale (BASE-II, EP/X019918/1), which is Phase II of the Benchmarking for AI for Science at Exascale (BASE) grant.
2309.12305
Gamma-ray burst precursors from tidally resonant neutron star oceans: potential implications for GRB 211211A
Precursor emission has been observed seconds to minutes before some short gamma-ray bursts. While the origins of these precursors remain unknown, one potential explanation relies on the resonance of neutron star pulsational modes with the tidal forces during the inspiral phase of a compact binary merger. In this paper, we present a model for short gamma-ray burst precursors which relies on tidally resonant neutron star oceans. In this scenario, the onset of tidal resonance in the crust-ocean interface mode corresponds to the ignition of the precursor flare, possibly through the interaction between the excited neutron star ocean and the surface magnetic fields. From just the precursor total energy, the time before the main event, and a detected quasi-periodic oscillation frequency, we may constrain the binary parameters and neutron star ocean properties as never before. Our model can immediately distinguish neutron star-black hole mergers from binary neutron star mergers without gravitational wave detection. We apply our model to GRB 211211A, the recently detected long duration short gamma-ray burst with a quasi-periodic precursor, and explore the parameters of this system within its context. The precursor of GRB 211211A is consistent with a tidally resonant neutron star ocean explanation that requires an extreme-mass ratio NSBH merger and a high mass neutron star. While difficult to reconcile with the gamma-ray burst main emission and associated kilonova, our results constrain the possible precursor generating mechanisms in this system. A systematic study of short gamma-ray burst precursors with the model presented here can test precursor origin and could probe the possible connection between gamma-ray bursts and neutron star-black hole mergers.
Andrew G. Sullivan, Lucas M. B. Alves, Zsuzsa Márka, Imre Bartos, Szabolcs Márka
2023-09-21T17:59:09Z
http://arxiv.org/abs/2309.12305v2
Gamma-ray burst precursors from tidally resonant neutron star oceans: potential implications for GRB 211211A ###### Abstract Precursors have been observed seconds to minutes before some short gamma-ray bursts. While the precursor origins remain unknown, one explanation relies on the resonance of neutron star pulsational modes with the tidal forces during the inspiral phase of a compact binary merger. In this paper, we present a model for short gamma-ray burst precursors which relies on tidally resonant neutron star oceans. In this scenario, the onset of tidal resonance in the crust-ocean interface mode ignites the precursor flare, possibly through the interaction between the excited neutron star ocean and the surface magnetic fields. From just the precursor total energy, the time before the main event, and a detected quasi-periodic oscillation frequency, we may constrain the binary parameters and neutron star ocean properties. Our model can immediately distinguish neutron star-black hole mergers from binary neutron star mergers without gravitational wave detection. We apply our model to GRB 211211A, the recently detected long duration short gamma-ray burst with a quasi-periodic precursor, and explore the parameters of this system. The precursor of GRB 211211A is consistent with a tidally resonant neutron star ocean explanation that requires an extreme-mass ratio neutron star-black hole merger and a high mass neutron star. While difficult to reconcile with the main gamma-ray burst and associated kilonova, our results constrain the possible precursor mechanisms in this system. A systematic study of short gamma-ray burst precursors with the model presented here can test precursor origin and probe the possible connection between gamma-ray bursts and neutron star-black hole mergers. keywords: (transients:) black hole - neutron star mergers - (transients:) neutron star mergers - stars: oscillations - gravitational waves - gamma-rays: bursts ## 1 Introduction Short gamma-ray bursts (sGRB) represent electromagnetic counterparts to compact binary mergers (Narayan et al., 1992; Abbott et al., 2017, 2017). In these events, the powerful dynamics in the binaries can form relativistic jets which produce gamma-ray emission. While the exact gamma-ray burst (GRB) central engine-whether a strongly magnetized proto-neutron star (Cilofi, 2020; Suvorov and Kokkotas, 2021, e.g.) or a black hole (Sarin and Laszlo, 2021)-is unknown, compact binary mergers should produce sGRBs if they contain a neutron star and relativistic jets form at the time of the merger (Sarin et al., 2022). Some sGRBs are followed by kilonovae (Rossi et al., 2020, e.g.), optical thermal emission from the radioactive decay of heavy elements produced by the merger (Li and Paczynski, 1998; Metzger et al., 2010; Metzger and Berger, 2012; Kasen et al., 2013; Tanvir et al., 2013; Ciolfi, 2018). With the additional detection of gravitational waves (GWs) (Cutler and Flanagan, 1994; Abbott et al., 2017) and possibly high energy neutrinos (Rosswog and Liebendorfer, 2003; Cusineto et al., 2021; Abbasi et al., 2023), multimessenger observations of compact binary mergers reveal the properties of their progenitors as well as the dynamical processes at work during the inspirals. Precursor electromagnetic emission has been associated with some observed GRBs, including \(\gtrsim 1\%\) of sGRBs (Troja et al., 2010; Zhong et al., 2019; Coppin et al., 2020; Wang et al., 2020; Li et al., 2021). 
Such precursors can be \(\sim 1-100\) s prior to the main sGRB event (Troja et al., 2010) and may indicate particular features of the systems from which they originate. Proposed mechanisms for producing sGRB precursors include an initial episode of the main GRB emission (Charisi et al., 2015), the interaction between neutron star magnetospheres (Ascenzi et al., 2021), the orbital motion of a weakly magnetized companion and a highly magnetized neutron star (Vietri, 1996; Hansen and Lyutikov, 2001; McWilliams and Levin, 2011; Lai, 2012; Piro, 2012; Sridhar et al., 2021), and the tidally induced shattering of a neutron star crust (Tsang et al., 2012; Suvorov and Kokkotas, 2020; Gittins et al., 2020; Passamonti et al., 2021; Kuan et al., 2021, 2021; Neill et al., 2022). Particularly, resonant tides, in which the binary orbital motion becomes resonant with internal neutron star pulsational modes, represent promising causes of crustal shattering and by extension electromagnetic emission (Tsang et al., 2012; Passamonti et al., 2021; Neill et al., 2022; Dichiara et al., 2023). Lower frequency modes in the surface layers of a neutron star have the potential to generate early precursors when resonant with orbital motion. A possible site of early resonance is the fluid outer layer, the neutron star's ocean. The ocean sustains its own set of low frequency pulsational modes associated with its lower density and separation from the rest of the neutron star by the elastic crust (Bildsten and Cutler, 1995; Lattimer and Prakash, 2001; Piro and Bildsten, 2005; Sullivan et al., 2023). Tidal resonance should occur in neutron star ocean modes early in compact binary inspirals and may deposit large amounts of energy into the modes (Sullivan et al., 2023). In fact, Sullivan et al. (2023) show that the energy deposited into the neutron star ocean during tidal resonance may be sufficient to produce a detectable electromagnetic flare. Motivated by the results of Sullivan et al. (2023), we advance a new model for sGRB precursors in this paper. We propose that the interface pulsational mode associated with the crust-ocean boundary may become resonant during the inspiral phase of a compact binary merger. The excited ocean consequently represents the site of a sGRB precursor. This model can be applied to sGRBs from both binary neutron star (BNS) mergers and neutron star-black hole (NSBH) mergers, thus admitting precursors from either scenario and distinguishing between them through electromagnetic emission alone. In this paper, we develop analytical formulae for precursor sGRB observables which can be used to estimate compact binary parameters in the context of this model. As a first application, we consider the precursor associated with the recently detected GRB 211211A, an especially unique sGRB due to its long length (Rastinejad et al., 2022) as well as the possible identification of quasi-periodic oscillations (QPOs) in its precursor emission (Xiao et al., 2022). In Section 2, we review tidal resonance in neutron star oceans, and present the theory relevant to our model. In Section 3, we provide a detailed discussion of our precursor model. In Section 4, we apply our model to GRB 211211A and constrain the parameters of the system. We also evaluate our model's applicability to this system and its potential consequences. In Section 5, we consider future prospects and conclude. 
## 2 Tidal resonances in neutron star oceans

In this section, we review the general properties of neutron star ocean tidal resonances (Sullivan et al., 2023). Neutron stars, like main sequence stars, possess a spectrum of excitable pulsational modes (McDermott et al., 1988; Reisenegger and Goldreich, 1994; Lai, 1994; Passamonti et al., 2006; Samuelsson et al., 2007; Passamonti and Andersson, 2012). A select few modes are localized almost entirely to the neutron star ocean, including surface \(g\)-modes (Bildsten and Cutler, 1995), as well as the crust-ocean interface mode or \(i\)-mode (Piro and Bildsten, 2005; Sullivan et al., 2023). The \(i\)-mode is the generalization of the shallow ocean surface mode to the case where the ocean floor is not completely rigid. Thus, it is associated with the discontinuity in shear modulus across the crust-ocean boundary. Tides in compact binary systems represent a promising mechanism for exciting these modes (Lai, 1994; Tsang et al., 2012; Tsang, 2013; Passamonti et al., 2021; Sullivan et al., 2023). Tidal resonance, which occurs in compact binary inspirals when a harmonic of the orbital frequency matches the mode frequency, can deposit significant energy into the crust-ocean \(i\)-mode, potentially sufficient to produce a flare (Sullivan et al., 2023).

### Neutron Star Ocean Modes

Quantitatively, the modes are fluid perturbations on the background neutron star. The principal equation of motion is the perturbed Euler equation \[\partial_{t}^{2}\vec{\xi}+\frac{\nabla\delta p}{\rho}-\frac{\delta\rho}{\rho^ {2}}\nabla p-\frac{1}{\rho}\nabla\cdot\mathbf{\sigma}=-\nabla\chi, \tag{1}\] where \(\vec{\xi}\) is the Lagrangian fluid displacement, \(\rho\) is the background fluid density, \(p\) is the background fluid pressure, \(\delta\rho\) is the Eulerian perturbation of the density, \(\delta p\) is the Eulerian perturbation of the pressure, \(\mathbf{\sigma}=\sigma_{ij}\) is the elastic stress tensor, and \(\chi\) is an external potential, which corresponds to the tidal potential in this case. The continuity equation combined with the definition of the Lagrangian perturbation gives an additional governing equation (Friedman and Schutz, 1978) \[\Delta\rho=\delta\rho+\vec{\xi}\cdot\nabla\rho=-\rho\nabla\cdot\vec{\xi}. \tag{2}\] Setting \(\chi=0\), the mode solutions have \(\vec{\xi}\propto e^{i\omega t}\), where \(\omega\) is the mode frequency. Eq. 1 simplifies to the eigenvalue equation \[(\mathcal{L}-\omega^{2}\rho)\vec{\xi}=0, \tag{3}\] where \(\mathcal{L}\) is the operator which contains the non-time derivative terms in eq. 1.

#### 2.1.1 Mode Frequency

The shallow ocean surface mode corresponds to an approximate analytical solution to eqs. 1 and 2. The shear modulus is \(\tilde{\mu}=0\) in the fluid ocean, so eq. 1 reduces to \[\partial_{t}^{2}\vec{\xi}+\frac{\nabla\delta p}{\rho}-\frac{\delta\rho}{\rho ^{2}}\nabla p=0. \tag{4}\] At the surface of the star, the traction vanishes, so \[\Delta p=\delta p+\vec{\xi}\cdot\nabla p=0 \tag{5}\] from the definition of the Lagrangian displacement. Therefore, in a shallow ocean \[\delta p\approx-\vec{\xi}\cdot\nabla p=\xi_{r}\rho g, \tag{6}\] where \(\xi_{r}\) is the radial component of \(\vec{\xi}\) and \(g=\frac{GM_{\star}}{R_{\star}^{2}}\) is the magnitude of the gravitational acceleration at the surface. The second equality arises from assuming the background ocean is in hydrostatic equilibrium. In a shallow ocean, we expect \(\delta\rho/\rho<<1\) (Gittins et al., 2023).
The Euler equation consequently simplifies to \[\partial_{t}^{2}\vec{\xi}+g\nabla\xi_{r}=0, \tag{7}\] while eq. 2 simplifies to \[\frac{d\rho}{dr}\xi_{r}=-\rho\nabla\cdot\vec{\xi}, \tag{8}\] where we have assumed that the background \(\rho\) is purely radial. In the ocean, we expect \(\frac{d\rho}{dr}\geq\frac{\rho}{h_{o}}\), where \(h_{o}\) is the ocean depth, so we obtain an expression for \(\xi_{r}\) \[\xi_{r}=-h_{o}\nabla\cdot\vec{\xi}. \tag{9}\] Substituting this value for \(\xi_{r}\) into eq. 7 gives \[\partial_{t}^{2}\vec{\xi}-gh_{o}\nabla(\nabla\cdot\vec{\xi})=0. \tag{10}\] Assuming \(\xi\) is curl free by construction gives a wave equation \[\partial_{t}^{2}\vec{\xi}-gh_{o}\nabla^{2}\vec{\xi}=0. \tag{11}\] Restricting to the value of \(\xi\) at the ocean surface and expanding in spherical harmonics yields the ordinary differential equation \[\partial_{t}^{2}\vec{\xi}+\frac{l(l+1)gh_{o}}{R_{\star}^{2}}\vec{\xi}=0, \tag{12}\] whose solution is a simple harmonic oscillator with frequency \[\omega=\sqrt{l(l+1)\frac{GM_{\star}}{R_{\star}^{3}}\frac{h_{o}}{R_{\star}}}. \tag{13}\] Therefore, the surface fluid layer of the neutron star naturally sustains periodic oscillations. In this analysis, we have assumed that the crust is completely rigid at the boundary between the neutron star ocean and crust. In general, the crust should be elastic with a non-infinite shear modulus \(\tilde{\mu}\) (Horowitz and Kadau, 2009; Baiko, 2011; Zemlyakov and Chugunov, 2023). Piro and Bildsten (2005) showed that when the neutron star crust's shear modulus is less than the pressure at the crust-ocean boundary \(p_{o}\), the shallow ocean frequency given by eq. 13 must be corrected by a factor of \(\sqrt{\tilde{\mu}/p_{o}}\). Therefore, the mode frequency is \[\omega=\sqrt{\frac{\tilde{\mu}}{p_{o}}\,l(l+1)\frac{GM_{\star}}{R_{\star}^{3}} \frac{h_{o}}{R_{\star}}}. \tag{14}\] Evidently, the shallow ocean surface mode will pulsate with frequency given by eq. 14, which depends on the parameters of the neutron star.

### Tidal Resonance

The tidal potential \(\chi\) induced by a companion object is (Press and Teukolsky, 1977; Lai, 1994) \[\chi=-\sum_{l=2}^{\infty}\sum_{m=-l}^{l}\frac{GMr^{l}}{D(t)^{l+1}}W_{lm}e^{-im \Phi(t)}Y_{lm}(\theta,\phi), \tag{15}\] where \(M\) is the companion mass, \(D(t)\) is the binary separation, \(\Phi(t)\) is the true anomaly, and \(W_{lm}\) is the numerical coefficient (Press and Teukolsky, 1977) \[W_{lm}=(-1)^{\frac{l+m}{2}}\frac{\left(\frac{4\pi}{2l+1}(l-m)!(l+m)!\right) ^{\frac{1}{2}}}{2^{l}\left(\frac{l-m}{2}\right)!\left(\frac{l+m}{2}\right)!}, \tag{16}\] where \(l+m\) must be even. When this potential is added, the Euler equation can be expressed as \[(\mathcal{L}+\rho\partial_{t}^{2})\vec{\xi}=-\rho\nabla\chi, \tag{17}\] where \(\mathcal{L}\) is defined by eq. 3. We assume this equation has the solution \[\vec{\xi}=\sum_{n}a_{n}(t)\vec{\xi}_{n}, \tag{18}\] where \(\vec{\xi}_{n}\) is the eigenvector solution to eq. 3. From eq. 17 and the orthogonality condition \(\int\rho\vec{\xi}_{n}^{*}\cdot\vec{\xi}_{m}dV=A_{n}^{2}\delta_{nm}\) (Sullivan et al., 2023), we obtain an equation for \(a_{n}(t)\) \[\ddot{a}_{n}(t)+\omega_{n}^{2}a_{n}(t)=\frac{GMW_{lm}}{D(t)^{l+1}}e^{-im\Phi( t)}\frac{Q_{nl}}{A_{n}^{2}}. \tag{19}\] where \(Q_{nl}\) is the overlap integral defined by \[Q_{nl}=\int\rho\vec{\xi}_{n}^{*}\cdot\nabla(r^{l}Y_{lm}(\theta,\phi))dV. \tag{20}\] The mode most likely to be tidally excited is the \(l=2\) mode because the driving tidal force is lowest order in \(1/D\).
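As a quick numerical check of eq. (16), the coefficients for the dominant \(l=2\) terms can be evaluated directly (a minimal sketch, assuming the convention written above):

```python
import math

def W_lm(l, m):
    """Numerical coefficient of eq. (16); defined only when l + m is even."""
    assert (l + m) % 2 == 0
    sign = (-1) ** ((l + m) // 2)
    num = math.sqrt(4 * math.pi / (2 * l + 1) * math.factorial(l - m) * math.factorial(l + m))
    den = 2**l * math.factorial((l - m) // 2) * math.factorial((l + m) // 2)
    return sign * num / den

print(W_lm(2, 2), W_lm(2, -2))   # both equal sqrt(3*pi/10) ~ 0.971
print(W_lm(2, 0))                # -sqrt(pi/5) ~ -0.793
```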
#### 2.2.1 Resonance Time

As is typical for driven harmonic oscillators, a resonant oscillation will occur when \(m\dot{\Phi}(t)=\omega_{n}\). In the case of an inspiraling compact binary, \(\dot{\Phi}(t)\) continuously increases, so the orbital tidal force will become resonant with the \(i\)-mode at some point before the merger. The time before merger at which resonance occurs can be computed by recalling
\[\dot{\Phi}(t)=\sqrt{\frac{G(M+M_{\star})}{D(t)^{3}}}, \tag{21}\]
for circular binaries. The orbital separation at the time of resonance is
\[D_{r}=\left(\frac{m^{2}G(M+M_{\star})}{\omega_{n}^{2}}\right)^{\frac{1}{3}}. \tag{22}\]
The time until merger due to gravitational wave emission for a given orbital separation \(D\) is (Peters, 1964)
\[t_{m}=\frac{5D^{4}c^{5}}{256G^{3}MM_{\star}(M+M_{\star})}, \tag{23}\]
where \(c\) is the speed of light. Therefore, the merger time when \(D=D_{r}\) is
\[t_{r}=\frac{5c^{5}(M+M_{\star})^{\frac{1}{3}}m^{\frac{8}{3}}}{256G^{\frac{5}{3}}MM_{\star}\omega_{n}^{\frac{8}{3}}}. \tag{24}\]
This expression for the mode resonance time is general and does not depend on which mode becomes resonant with the orbit. The time before merger when the crust-ocean \(i\)-mode resonance occurs can be computed simply by substituting eq. 14 for \(\omega_{n}\).

#### 2.2.2 Tidal Energy

When resonance occurs, the amplitude of the oscillation should be maximized. This is directly related to the amount of energy deposited into the ocean due to the tidal force. We can estimate the amplitude at the resonance time by assuming a solution for \(a(t)\) of the form \(a(t)=GMW_{lm}\frac{Q_{nl}}{A_{n}^{2}}c(t)e^{-is\omega_{n}t}\) (Lai, 1994), where \(c(t)\) is a complex valued function of a real variable and \(s=\pm 1\). In terms of \(c(t)\), equation 19 becomes (Lai, 1994)
\[\ddot{c}-2is\omega_{n}\dot{c}=D(t)^{-(l+1)}\exp\big[i(s\omega_{n}t-m\Phi(t))\big]. \tag{25}\]
Near resonance, numerical solutions have shown that the amplitude increases approximately linearly with time (Lai, 1994). Therefore, neglecting \(\ddot{c}\) and integrating with time gives an approximate expression for \(c(t)\)
\[c(t)\approx\frac{1}{2is\omega_{n}}\int D(t)^{-(l+1)}\exp\big[i(s\omega_{n}t-m\Phi(t))\big]dt. \tag{26}\]
Assuming \(\omega_{n}\gg 1/t_{r}\) (which should be the case as \(t_{r}\gtrsim 1\) s and \(\omega\gtrsim 1\) Hz for reasonable parameters (Sullivan et al., 2023)), the limits on this integral may be taken as infinite. In this case, the stationary phase approximation may be used to evaluate \(c(t)\) (Lai, 1994). The maximum value of \(c(t)\) will be
\[|c(t)|_{max}\simeq\frac{1}{2\omega_{n}D_{r}^{l+1}}\sqrt{\frac{2\pi}{m\ddot{\Phi}(t_{r})}}, \tag{27}\]
where we have evaluated the absolute value of eq. 26, and \(\ddot{\Phi}(t_{r})\) is evaluated at the resonance time. \(\ddot{\Phi}\) at the time of resonance is
\[\ddot{\Phi}=\frac{3}{2}\sqrt{\frac{G(M+M_{\star})}{D_{r}^{3}}}\frac{|\dot{D}_{r}|}{D_{r}}=\frac{3}{8m}\frac{\omega_{n}}{t_{r}}. \tag{28}\]
This allows us to write \(|c(t)|_{max}\) in terms of the parameters of the mode resonance
\[|c(t)|_{max}\simeq\frac{2}{D_{r}^{l+1}}\sqrt{\frac{\pi t_{r}}{3\omega_{n}^{3}}}. \tag{29}\]
After tidal resonance in binary inspirals, the energy of the mode should be that of a harmonic oscillator with frequency \(\omega_{n}\) and amplitude \(|a(t)|_{max}A_{n}\). Additionally, for the \(l=2\) mode, both the \(m=2\) and \(m=-2\) modes contribute to the energy equally.
Therefore, the tidal interaction will deposit the energy
\[E=\omega_{n}^{2}|a(t)|_{max}^{2}A_{n}^{2}=\omega_{n}^{2}G^{2}M^{2}W_{lm}^{2}\frac{Q_{nl}^{2}}{A_{n}^{2}}|c(t)|_{max}^{2} \tag{30}\]
into the mode (Lai, 1994; Sullivan et al., 2023). The normalization \(A_{n}^{2}\) and the \(l=2\) overlap integral of crust-ocean \(i\)-modes grow proportionately with the square of the stellar radius and the ocean depth, respectively (Sullivan et al., 2023). Their ratio satisfies \(Q/A_{n}^{2}\sim h_{o}/R_{\star}\) (Passamonti et al., 2021; Sullivan et al., 2023, e.g.). Hence, the normalization factor can be estimated as
\[A_{n}^{2}=M_{\star}R_{\star}^{2}, \tag{31}\]
while the \(l=2\) overlap integral is
\[Q\approx\frac{11}{10}M_{\star}R_{\star}^{2}\left(\frac{h_{o}}{R_{\star}}\right), \tag{32}\]
where we infer the prefactor from the results in table 1 of Sullivan et al. (2023). The exact value of the numerical factor is model dependent, but should remain order unity. The energy in terms of stellar and mode parameters is
\[E\simeq\frac{121\pi^{2}}{6400\times 2^{\frac{1}{3}}}\frac{c^{5}Mh_{o}^{2}\omega_{n}^{\frac{1}{3}}}{(G(M_{\star}+M))^{\frac{5}{3}}}, \tag{33}\]
where \(\omega_{n}\) is given by eq. 14. Like \(t_{r}\), the energy deposited into the mode by the tide directly depends on the masses of the objects as well as the depth of the neutron star ocean.

## 3 Tidal resonance as a source of GRB precursor flares

The model we outline in Sec. 2 explains how energy can be deposited into a neutron star ocean through the tidal interaction in the moments leading up to a compact binary merger. Sullivan et al. (2023) found that if the energy of the \(i\)-mode tidal resonance could be released electromagnetically, a detectable precursor could result. We now extend this picture and apply it to sGRB precursor events. We therefore suppose that gamma-ray precursors exhibiting QPOs could result from this \(i\)-mode tidal resonance in a neutron star ocean.

### Model Parameters

As we have shown, the ocean tidal resonance is principally described by three quantities: the energy deposited into the mode \(E_{tot}\) given by eq. 33, the time of resonance \(t_{r}\) given by eq. 24, and the mode frequency \(\omega_{n}\) given by eq. 14. In a sGRB precursor, these quantities correspond to emission properties. We propose that the energy of the precursor corresponds to the energy deposited into the mode, the time of ignition of the flare corresponds to the resonance time of the \(i\)-mode, and the QPO frequency is the \(i\)-mode frequency. In reality, the precursor energy is a lower limit on the actual energy deposited into the mode, as the radiation efficiency of the emission mechanism remains unknown. Nevertheless, the precursor energy usefully constrains the total energy deposited into the mode. Using these three observables, we may estimate the parameters of the astrophysical GRB source. The three main quantities of our model depend on five system parameters. Four of these parameters directly relate to the neutron star in the binary, while the remaining one relates to the companion. Our model is sensitive to the neutron star mass \(M_{\star}\), radius \(R_{\star}\), ocean depth \(h_{o}\), and crust shear modulus to ocean floor pressure ratio \(\tilde{\mu}/p_{o}\), as well as the mass of the companion \(M\). Excitingly, from the precursor alone, we may constrain parameters essential to understanding the dynamics of the compact binary as well as the interior structure of neutron stars.
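As an illustrative numerical cross-check (not part of the original derivation), the sketch below evaluates eqs. 14, 24, and 33 for an assumed \(l=m=2\) resonance in a fiducial NSBH system; the adopted values anticipate the example worked out in the Model Implications discussion below and reproduce its quoted \(f_{n}\approx 20\) Hz, \(t_{r}\approx 70\) s, and \(E\approx 4\times 10^{47}\) erg.

```python
import numpy as np

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30   # SI units

# Assumed fiducial NSBH system: M_* = 1.4 Msun, R_* = 10 km,
# h_o = 12.8 m, mu/p_o = 0.01, companion mass M = 5 Msun, l = m = 2 resonance
M_ns, R_ns, h_o, mu_over_p = 1.4 * M_SUN, 10e3, 12.8, 0.01
M_comp, l, m = 5 * M_SUN, 2, 2

# eq. 14: crust-ocean i-mode frequency
omega_n = np.sqrt(mu_over_p * l * (l + 1) * G * M_ns / R_ns**3 * h_o / R_ns)

# eq. 24: time before merger at which the orbit becomes resonant with the mode
t_r = (5 * C**5 * (M_comp + M_ns)**(1 / 3) * m**(8 / 3)
       / (256 * G**(5 / 3) * M_comp * M_ns * omega_n**(8 / 3)))

# eq. 33: energy deposited into the l = 2 crust-ocean i-mode
E = (121 * np.pi**2 / (6400 * 2**(1 / 3))
     * C**5 * M_comp * h_o**2 * omega_n**(1 / 3) / (G * (M_ns + M_comp))**(5 / 3))

print(f"f_n ~ {omega_n / (2 * np.pi):.0f} Hz, t_r ~ {t_r:.0f} s, "
      f"E ~ {E * 1e7:.1e} erg")   # roughly 20 Hz, 70 s, and 4e47 erg
```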
### Model Implications

This model for sGRB precursors can describe both binary neutron star and neutron star-black hole mergers. In fact, these two scenarios can be directly distinguished through the companion mass. The only requirement is that one of the component masses in the system be a neutron star, which is already required for sGRBs (Fong and Berger, 2013; Rosswog, 2015; Ascenzi et al., 2021). The properties of neutron star oceans have principally been probed by observations of X-ray bursts on accreting neutron stars (Bildsten and Cutler, 1995; Strohmayer and Mahmoodifar, 2014; Chambers and Watts, 2020). The ocean forms at the temperatures and densities at which the crust melts. To have sizable oceans, neutron star crusts must achieve temperatures of \(T\gtrsim 10^{7}\) K, hotter than expected for old neutron stars in compact binaries. The tides can heat neutron stars to temperatures \(\sim 10^{8}\) K during the inspiral (Lai, 1994), and potentially higher depending on the viscosity in the neutron star (Meszaros and Rees, 1992). With this model, we directly probe the depth of the ocean \(h_{o}\) via compact binary coalescence. \(h_{o}\) is very sensitive to the material that composes the neutron star crust, the neutron star crust temperature \(T\), as well as the equation of state at the neutron star surface (Farouki and Hamaguchi, 1993; Bildsten and Cutler, 1995; Haensel et al., 2007; Horowitz and Kadau, 2009; Baiko and Chugunov, 2018; Gittins et al., 2020) (although there is degeneracy between these three quantities). Constraints on the ocean depth for neutron stars in compact binaries can inform whether there is a difference in ocean structure between neutron stars in X-ray binaries and in compact binaries. While this model is sensitive to five extremely interesting properties of neutron stars and compact binaries, its reliance on only three main observables limits its parameter estimation ability. For certain reasonable choices of neutron star mass and radius \(M_{\star}\) and \(R_{\star}\), one can solve for the other three parameters with this model, and immediately distinguish an NSBH from a BNS based on the companion mass results. An alternative approach might be to solve for the neutron star mass and radius as well as the companion mass as a function of \(h_{o}\) and \(\tilde{\mu}/p_{o}\). Estimates of the neutron star mass \(M_{\star}\) and radius \(R_{\star}\) are particularly exciting as they can directly be used to constrain the neutron star equation of state. The degeneracy in parameters can nevertheless be broken in multiple ways. Most promising is a coincident GW detection from the merger. Chirp mass and total mass measurements, as well as tidal deformability limits from GWs (Abbott et al., 2019, e.g.) provide additional constraints on the system which can completely disentangle all parameters. In the case of a binary neutron star merger, the oceans of the two different neutron stars may become resonant at different times, causing two precursors with QPOs. In Fig. 1, we show the mode frequency \(f_{n}=\omega_{n}/2\pi\) and resonance time \(t_{r}\) as a function of neutron star mass for different values of \(h_{o}\) with a 1.4 M\({}_{\odot}\) companion. \(f_{n}\) and \(t_{r}\) are also shown for the companion with the same \(h_{o}\) values. Both \(f_{n}\) and \(t_{r}\) can differ by at least order unity between the two stars, so the two precursors can be distinguished if their durations are \(\lesssim 10\%\) of \(t_{r}\) (Troja et al., 2010).
This then provides six equations to disentangle eight parameters. Most interestingly, if the degeneracy between \(M_{\bullet}\) and \(R_{\bullet}\) can be broken with a GW detection, our model constrains the equation of state (Lattimer and Prakash, 2001; Abbott et al., 2018; Lattimer, 2021) by providing more data points to directly probe the neutron star mass-radius relationship. The precursors should be distinguishable from the main emission for sources of interest. We take an NSBH merger with a 1.4 M\({}_{\odot}\) neutron star and a 5 M\({}_{\odot}\) black hole, which should be a sGRB progenitor (Pannarale et al., 2011), as an example. The depth of a relativistic degenerate neutron star ocean is (Bildsten and Cutler, 1995) \[h_{o}\approx 12.8~{}\mathrm{m}\left(\frac{A}{12}\right)^{-1}\left(\frac{Z}{6 }\right)^{-\frac{1}{3}}\left(\frac{T}{10^{7}~{}\mathrm{K}}\right), \tag{34}\] where \(Z\) and \(A\) are the atomic number and mass of the ions that compose the crust, respectively. For fiducial ocean values of \(T=10^{7}\) K, \(A=12\), and \(Z=6\) as well as neutron star properties \(R_{\bullet}=10\) km and \(\tilde{\mu}/p_{o}=0.01\), the precursor parameters are \(f_{n}=\omega_{n}/2\pi=20\) Hz, \(t_{r}=70\) s, and \(E_{tot}=4\times 10^{47}\) erg. For a BNS where the companion is \(M=1.4\) M\({}_{\odot}\), the energy of the precursor would remain approximately unchanged while the time before merger would increase to 3 min, allowing the scenarios to be distinguished. The value inferred for the companion mass \(M\) is most sensitive to \(t_{r}\) while the inferred \(h_{o}\) value is sensitive to \(E_{tot}\) and \(\omega_{n}\). The other neutron star parameters may be constrained by \(\omega_{n}\). ### Electromagnetic Emission Mechanisms In its current form, our model remains agnostic to how precursor gamma-rays are generated. Our model also does not predict that the expected emission will necessarily be in the form of gamma-rays. We choose gamma-rays as the application of this model because Fermi-GBM's all-sky field of view makes the instrument particularly well-suited to detecting rapid transients (Meegan et al., 2009). In principle, the resultant electromagnetic emission from a neutron star ocean tidal resonance could be across the electromagnetic spectrum. The connection between our model and electromagnetic emission remains speculative at this stage. To actually ignite an electromagnetic flare, we envision a scenario in which the energy deposited into the neutron star ocean by the tide excites particles on the neutron star surface to high energies. The resultant high-energy electrons on the surface may synchrotron radiate in the presence of the strong surface magnetic field. The ocean Alfven frequency \(\omega_{A}\sim l(l+1)B^{2}/4\pi\rho_{o}R^{2}\) may be comparable to the crust-ocean \(i\)-mode frequency for \(B\sim 10^{12}\) G. Consequently, high magnetic field can modify the surface structure and \(i\)-mode properties as well as complicate connecting tidal energy deposition with emission. The energy deposited by the tide may nevertheless be comparable to the breaking energy of neutron star crusts, which ranges from \(10^{44}-10^{46}\) erg (Tsang et al., 2012; Baiko and Chuguov, 2018), causing the crust to crack or melt (Penner et al., 2012). 
While full crustal failure may be difficult to achieve initially since only 0.1% of the crust-ocean \(i\)-mode energy is deposited into the crust (Piro and Bildsten, 2005), the back reaction of the strongly deformed or even damaged crust may cause the crust-core \(i\)-mode frequency to increase as the mode penetrates deeper into the star, like the \(r\)-mode under the influence of a strong magnetic field (Andersson et al., 2018; Rezzolla et al., 2000, 2001a,b). This can allow for the extraction of more tidal energy as the overlap integral grows. If the neutron star crust breaks, subsequent reconnection of the liberated crustal magnetic fields may induce large-scale particle acceleration and consequently the emission of gamma-rays (Lander et al., 2015; Kaspi & Beloborodov, 2017, e.g.). Each of these mechanisms likely has a different radiation efficiency which can affect our energy estimate. We leave the details, including the effects of strong magnetic fields, for future work. Whatever the mechanism, the energetics of the resonant tide (Sullivan et al., 2023) coupled to the exotic conditions on neutron star surfaces make electromagnetic emission a plausible result of ocean-tidal resonances.

Figure 1: The crust-ocean \(i\)-mode frequency (left) and resonance time with companion mass \(M=1.4\) M\({}_{\odot}\) (right) as a function of neutron star mass for the ocean depths \(h_{o}\) shown in the legend. The dashed lines show the mode frequencies and resonance times for the \(M=1.4\) M\({}_{\odot}\) companion with the same \(h_{o}\). These results assume both neutron stars have \(\tilde{\mu}/p_{o}=0.01\) and \(R_{\star}=10\) km.

## 4 Application to GRB 211211A

GRB 211211A is one of the longest sGRB events detected, with a burst duration of 51.37 s (Rastinejad et al., 2022). The subsequent detection of an associated kilonova located at a distance of 350 Mpc suggests that the cause of this burst was a compact binary coalescence (Rastinejad et al., 2022; Troja et al., 2022; Yang et al., 2022; Mei et al., 2022; Zhang et al., 2022). A precursor flare 0.9 seconds prior to the initiation of GRB 211211A was also detected, possibly exhibiting QPOs with frequency 22 Hz (Xiao et al., 2022). The precursor flare had an isotropic equivalent energy of \(7.7\times 10^{48}\) erg while the isotropic equivalent energy of the main event and extended emission approached \(10^{52}\) erg (Xiao et al., 2022). Many questions about this system remain, as no model has conclusively determined the source of this event. Suggested sources of GRB 211211A include a NSBH merger (Gao et al., 2022; Zhu et al., 2022; Gottlieb et al., 2023), a BNS merger involving a magnetar (Xiao et al., 2022; Zhang et al., 2022), a white dwarf-neutron star merger (Zhong et al., 2023) and a collapsar (Barnes & Metzger, 2023). BNS magnetar models invoke a shattering flare induced by the tides of the companion and cracking the magnetar crust (Suvorov et al., 2022), while NSBH models invoke both the presence of a magnetar (Gao et al., 2022) and the presence of a rapidly spinning BH (Zhang et al., 2022) to enhance energy release from tides. Zhou et al. (2023) investigated the plausibility that the source of GRB 211211A was a strangeon star and invoked tidally induced crust fracturing to explain the energetics of the system. These models broadly rely on large tidal forces and strong magnetization to explain both the precursor and extended length of the main emission.
As more events like this one are observed (Dichiara et al., 2023, e.g.), models will likely converge on an explanation of GRB 211211A and other similar sources (Gottlieb et al., 2023). As possibly the first sGRB observed with QPOs during its precursor and without a conclusive description of this system, GRB 211211A represents an excellent test bed for our precursor model. Within the context of our model, the GRB 211211A precursor would be interpreted as a flare induced in a neutron star ocean by resonant tides. The pulsating ocean gives rise to the QPOs in the gamma-ray emission. This picture qualitatively agrees with that of Suvorov et al. (2022) in which tidal forces crack the neutron star crust and release energy for a flare. Suvorov et al. (2022) associate the QPOs with resonant torsional \(g\)-modes of a highly magnetized neutron star surface after crust cracking. High magnetization is required to explain the nonthermal spectrum of the precursor and provide sufficient energy to the flare. Suvorov et al. (2022) rely on the mass results of the kilonova modeling of Rastinejad et al. (2022) and therefore consider tidal resonances in a BNS only, despite ambiguity in the literature. Our model by contrast leaves open the possibility that the system is a NSBH and assumes tidal resonance of the crust-ocean \(i\)-mode. The observations of GRB 211211A provide a precursor ignition time, oscillation frequency, and total energy (Xiao et al., 2022), the exact parameters our model requires. The 22 Hz QPO frequency resembles magnetar crustal shear mode frequencies, and thus represents a plausible value for that of a surface ocean mode (Samuelsson & Andersson, 2007; Colaiuda & Kokkotas, 2011). While the QPO remains unconfirmed (Chirenti et al., 2023), we apply our model to this system as an example of how it can be used and check whether it reasonably explains phenomena of this sGRB. Without any GW emission to unambiguously measure the source masses, we cannot constrain all system parameters. Note, however, that \(t_{r}\) (eq. 24) is only a function of the two component masses. Consequently, to obtain \(M\), one only needs \(M_{\star}\). With \(M\) and \(M_{\star}\), \(h_{o}\) can immediately be solved for from the observed \(E\). Therefore, setting \(E=7.7\times 10^{48}\) erg, \(t_{r}=0.93\) s, and \(f_{n}=\omega_{n}/2\pi=22\) Hz, we solve eqs. 14, 24, and 33 for \(R_{\star}\), \(M\), and \(h_{o}\) for \(M_{\star}\in[1\) M\({}_{\odot}\), \(3\) M\({}_{\odot}]\)--viable masses for a neutron star--and choices of \(\tilde{\mu}/p_{o}\). We neglect the effects of gravitational and cosmic redshift on \(t_{r}\) and \(f_{n}\). This affects our results by no more than 30%, which is precise enough for assessing our model's implications. We show our parameter results in Fig. 2 as a function of \(M_{\star}\). The main prediction of our model is that the source of GRB 211211A is an extreme-mass ratio NSBH merger. This prediction comes directly from associating the precursor to a tidal resonance, particularly the resonance of a mode with the alleged QPO frequency, and is independent of the exact nature of the excited mode. Therefore, if the resonant neutron star mode which ignites the precursor has frequency \(\sim 20\) Hz (irrespective of which mode and how), the companion mass must exceed \(\sim 500\) M\({}_{\odot}\) simply by eq. 24. For an NSBH, resonance must occur before the neutron star crosses the horizon of the companion black hole.
This constrains the parameter space to companions with mass below 1000 M\({}_{\odot}\) and consequently neutron stars with mass \(M_{\star}\gtrsim 1.8\) M\({}_{\odot}\). The viable regions of parameter space are to the right of the red-dashed line in Fig. 2. Again, this constraint is independent of the nature of the mode, and only relies on associating the QPO with the resonant mode which causes the precursor. Associating the precursor with the crust-ocean \(i\)-mode subsequently informs the neutron star structure. Our model predicts a neutron star ocean with \(h_{o}\gtrsim 200\) m. Such a deep ocean ensures that the tidal overlap integral of the crust-ocean mode is large enough to garner sufficient energy for the precursor. This particularly deep ocean suggests that the temperature inside the neutron star should be very high: \(T\gtrsim 2\times 10^{8}\) K for a crust made of carbon and hotter for heavier elements (see eq. 34). This is comparable to surface temperatures reached during accretion (Fujimoto et al., 1984; Haensel & Zdunik, 1990, 2003, 2008), and would require extreme heating. Tidal heating by core \(g\)-mode resonances can produce these temperatures (Lai, 1994); however, the core \(g\)-mode frequencies are likely \(>20\) Hz and would be resonant after the crust-ocean \(i\)-mode. Alternatively, accretion onto the neutron star, if the binary is in a gaseous environment such as an active galactic nucleus (AGN) accretion disk, may heat the surface. The mode frequency gives the neutron star radius with certain choices of \(\tilde{\mu}/p_{o}\). We see that radii consistent with plausible neutron star equations of state (Dietrich et al., 2020) must have \(\tilde{\mu}/p_{o}\approx 0.0005-0.002\). These values imply a particularly weak crust compared to the internal pressure of the neutron star, and are broadly consistent with a deeper ocean whose ocean floor pressure is greater. The weak shear modulus implies that the mode penetrates deeper into the crust, which could account for the large value of \(h_{o}\) needed to provide the flare energy. A lower value of \(\tilde{\mu}\) also decreases the energy needed to fracture the crust (Baiko & Chugunov, 2018), making a shattering event much more likely. Such a large amount of energy tidally deposited into the neutron star surface, along with a low breaking energy, leaves a majority of the energy for the precursor, again showing that the model is self-consistent. If our model accurately describes the event, GRB 211211A would represent the first ever detection of an extreme mass ratio compact binary inspiral. Our model also suggests a high mass neutron star in the event, which has particular relevance to constraining the neutron star equation of state (Lattimer & Prakash, 2007; Steiner et al., 2013; Brandes et al., 2023, e.g.). For such a heavy black hole and a neutron star to form in binary, a dynamical environment such as an AGN or globular cluster would likely host the source (Gayathri et al., 2021, e.g.). This extremely exotic potential origin could explain the peculiarity of this event. To account for such a small fraction of GRBs, long sGRBs may require extreme tides and unique neutron star parameters. Furthermore, previous work has suggested an NSBH origin for this event (Gao et al., 2022; Zhang et al., 2022; Gottlieb et al., 2023), as the distinction between shorter and longer sGRBs may correspond to the difference between BNS and NSBH merger origin (Dimple et al., 2023).
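To illustrate how the companion-mass constraint follows from eq. 24 alone, the following sketch (an illustrative reconstruction, not the paper's code) numerically inverts the resonance-time relation for the observed \(t_{r}=0.93\) s and the claimed 22 Hz QPO frequency, and checks whether resonance occurs outside the companion's Schwarzschild radius; the sampled neutron star masses are arbitrary choices within the 1-3 M\({}_{\odot}\) range considered above.

```python
import numpy as np
from scipy.optimize import brentq

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30   # SI units

t_r_obs = 0.93                    # observed precursor lead time [s]
omega_n = 2 * np.pi * 22.0        # claimed QPO (= resonant mode) frequency [rad/s]
m = 2

def resonance_time(M_comp, M_ns):
    """Time before merger at which m * Phi_dot = omega_n (eq. 24)."""
    return (5 * C**5 * (M_comp + M_ns)**(1 / 3) * m**(8 / 3)
            / (256 * G**(5 / 3) * M_comp * M_ns * omega_n**(8 / 3)))

for M_ns in (1.4 * M_SUN, 1.8 * M_SUN, 2.2 * M_SUN):   # arbitrary sample masses
    # Solve t_r(M) = t_r_obs for the companion mass M (t_r decreases with M)
    M_comp = brentq(lambda M: resonance_time(M, M_ns) - t_r_obs,
                    1e-2 * M_SUN, 1e5 * M_SUN)
    # Resonance separation (eq. 22) versus the companion's Schwarzschild radius
    D_r = (m**2 * G * (M_comp + M_ns) / omega_n**2)**(1 / 3)
    outside_horizon = D_r > 2 * G * M_comp / C**2
    print(f"M_* = {M_ns / M_SUN:.1f} Msun -> M ~ {M_comp / M_SUN:.0f} Msun, "
          f"resonance outside horizon: {outside_horizon}")
```

For these assumed inputs the inferred companion masses fall in the several-hundred to \(\sim 10^{3}\) M\({}_{\odot}\) range, with the horizon condition failing for the lightest neutron star, consistent with the constraints described above.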
Such a large mass ratio inspiral is nevertheless extremely difficult to explain given the observed sGRB and kilonova. These transients require full tidal disruption of the neutron star, which intermediate mass black holes should fail to cause (Neill et al., 2022). In fact, \(M\sim 10\) M\({}_{\odot}\) represents an upper limit on the companion mass which can plausibly cause a sGRB (Pannarale et al., 2011; Neill et al., 2022). Furthermore, sGRBs from NSBH mergers should be rare, with event rates only as high as \(\sim 100\) Gpc\({}^{-3}\) yr\({}^{-1}\) (Abbott et al., 2023; Biscoveanu et al., 2023). Future numerical simulations of extreme mass ratio NSBH inspirals (Boschini et al., 2023, e.g.) will hopefully provide further insight into the feasibility of the scenario we consider. If tidal resonance actually ignites the precursor, a higher frequency resonant mode would provide a more plausible companion mass estimate. For example, a resonant mode with frequency \(\sim 50\) Hz (Suvorov et al., 2022, e.g.) would imply a companion mass \(\sim 10\) M\({}_{\odot}\), which could plausibly produce the sGRB. This leaves open the possibility that the 22 Hz QPOs represent the decaying aftershocks of a previously excited mode. The QPO may represent a previously excited crust-ocean \(l=0\) mode, so that the \(l=2\) crust-ocean \(i\)-mode has frequency \(\sim 50\) Hz. This would predict an ocean depth of \(h_{o}\sim 60\) m and a more modest crust temperature \(T<10^{8}\) K. Alternatively, the QPO could correspond to a much earlier tidally excited \(l=2\) crust-ocean \(i\)-mode, while a resonant crustal shear mode or the core-crust \(i\)-mode shatters the crust (Tsang et al., 2012). If this is the case, fainter precursors may have preceded the observed event by \(\gtrsim 1\) min (Sullivan et al., 2023). Searches for earlier faint emission among Fermi-GBM, _Swift_, and Insight-HXMT sub-threshold data may reveal further evidence of our model at work in this system.

## 5 Conclusion

We have presented a new model for sGRB precursors invoking tidal resonance of the surface mode of a neutron star in a compact binary coalescence. Our model posits that precursors to sGRBs can be ignited by the resonance of the neutron star crust-ocean \(i\)-mode with the orbitally modulated tidal forces from the inspiraling companion. In this picture, the energy fueling the flare is deposited by the tide and QPOs emerge as a natural consequence of the excitation of the mode. Thus our model can be applied to any precursor with QPOs. With three main observables, our model can provide constraints on compact binary parameters solely from information about the precursor. Companion mass constraints, and by extension the type of merger in question, can be obtained from just the observed time prior to the main sGRB and the frequency of the QPOs, which corresponds to the resonance frequency of the mode in our model. By associating the precursor with the crust-ocean \(i\)-mode specifically, we constrain the neutron star ocean depth, neutron star radius, and even the shear modulus of the neutron star crust. Our model provides an interesting, though likely inaccurate, explanation for some of the observable properties of GRB 211211A. If true, GRB 211211A would be associated with an NSBH merger with an intermediate mass black hole and a high mass neutron star.
Such a system would be the first of its kind, representing the discovery of an intermediate mass black hole as well as the largest black hole involved in a compact binary merger (excluding those potentially observed by pulsar timing arrays (Agazie et al., 2023)). For the excited \(i\)-mode to contain the energy, a deep neutron star ocean would be needed, suggesting that the crust must either be composed of lighter elements than previously considered or extremely hot (Chamel & Haensel, 2008, e.g.). Such a deep ocean also necessitates a small shear modulus for reasonable neutron star radii. Some distinctive features our model identifies may help explain the event's extraordinary status as a very long sGRB. The predictions of our model nevertheless remain difficult to reconcile with the sGRB main emission and the observed kilonova. As we have discussed, such a large mass companion would have difficulty tidally disrupting the neutron star to produce powerful electromagnetic emission during the GW-driven merger. We have found that the claimed QPOs are extremely unlikely to be the mode that caused the precursor, although it remains possible that a different resonant mode did. Because the emission mechanism is unknown, however, it remains just as likely that the QPOs originate from intrinsic GRB properties rather than pulsational modes. Unfortunately, the lack of observed GW emission keeps the origins of GRB 211211A nebulous (Sarin et al., 2023). The joint detection of GWs with this sGRB could have more definitively constrained the applicability of our model as well as other proposed models to this unique system. Applying our model to more sGRB precursors, especially those with long durations and other distinctive features (Veres et al., 2023), may yield interesting results. As detection techniques improve, more sGRBs will be identified (Kerr et al., 2023), hopefully providing more opportunities to test our model. Previous searches for QPOs in sGRB precursors have yet to reveal any additional candidates with \(>3\sigma\) significance (Xiao et al., 2022), but have been constrained by photon statistics. Continued observations, particularly in coincidence with detected GW events during the O4 run of LIGO-Virgo-KAGRA (Colombo et al., 2022), as well as improved targeted searches will hopefully reveal more such candidates. If more sGRB precursor QPOs can be identified, models of tidal resonance-induced precursor emission like the one presented in this paper can immediately be tested.

Figure 2: The parameters of the compact binary coalescence which produced GRB 211211A estimated by the precursor model presented in this paper. For the range of plausible neutron star masses, 1-3 M\({}_{\odot}\), we determine the mass of the companion \(M\) (middle), the depth of the neutron star ocean \(h_{o}\) (right), and the radius of the neutron star \(R_{\star}\) (left). Determining \(R_{\star}\) also requires choices of \(\tilde{\mu}/p_{o}\), the ratio of the neutron star crust shear modulus to the pressure at the crust-ocean boundary. The red dashed line on each panel corresponds to the neutron star mass \(M_{\star}=1.8\) M\({}_{\odot}\), below which the Schwarzschild radius of the companion exceeds the resonance binary separation.

## Acknowledgments

The authors are grateful to Nils Andersson, Roger Blandford, and Roger Romani for reading and providing feedback on this manuscript. The authors thank Isabella Leite for helpful discussions and reviewing the manuscript.
The authors thank Stanford University, Columbia University in the City of New York, and the University of Florida for their generous support. The Columbia Experimental Gravity group is grateful for the generous support of Columbia University. AS is grateful for the support of the Stanford University Physics Department Fellowship and the National Science Foundation Graduate Research Fellowship Program. LMBA is grateful for the Columbia Undergraduate Scholars Program Summer Enhancement Fellowship and the Columbia Center for Career Education Summer Funding Program. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2309.14774
BLIP-Adapter: Parameter-Efficient Transfer Learning for Mobile Screenshot Captioning
This study aims to explore efficient tuning methods for the screenshot captioning task. Recently, image captioning has seen significant advancements, but research in captioning tasks for mobile screens remains relatively scarce. Current datasets and use cases describing user behaviors within product screenshots are notably limited. Consequently, we sought to fine-tune pre-existing models for the screenshot captioning task. However, fine-tuning large pre-trained models can be resource-intensive, requiring considerable time, computational power, and storage due to the vast number of parameters in image captioning models. To tackle this challenge, this study proposes a combination of adapter methods, which necessitates tuning only the additional modules on the model. These methods are originally designed for vision or language tasks, and our intention is to apply them to address similar challenges in screenshot captioning. By freezing the parameters of the image caption models and training only the weights associated with the methods, performance comparable to fine-tuning the entire model can be achieved, while significantly reducing the number of parameters. This study represents the first comprehensive investigation into the effectiveness of combining adapters within the context of the screenshot captioning task. Through our experiments and analyses, this study aims to provide valuable insights into the application of adapters in vision-language models and contribute to the development of efficient tuning techniques for the screenshot captioning task. Our study is available at https://github.com/RainYuGG/BLIP-Adapter
Ching-Yu Chiang, I-Hua Chang, Shih-Wei Liao
2023-09-26T09:16:44Z
http://arxiv.org/abs/2309.14774v1
# BLIP-Adapter: Parameter-Efficient Transfer Learning for Mobile Screenshot Captioning ###### Abstract This study aims to explore efficient tuning methods for the screenshot captioning task. Recently, image captioning has seen significant advancements, but research in captioning tasks for mobile screens remains relatively scarce. Current datasets and use cases describing user behaviors within product screenshots are notably limited. Consequently, we sought to fine-tune pre-existing models for the screenshot captioning task. However, fine-tuning large pre-trained models can be resource-intensive, requiring considerable time, computational power, and storage due to the vast number of parameters in image captioning models. To tackle this challenge, this study proposes a combination of adapter methods, which necessitates tuning only the additional modules on the model. These methods are originally designed for vision or language tasks, and our intention is to apply them to address similar challenges in screenshot captioning. By freezing the parameters of the image caption models and training only the weights associated with the methods, performance comparable to fine-tuning the entire model can be achieved, while significantly reducing the number of parameters. This study represents the first comprehensive investigation into the effectiveness of combining adapters within the context of the screenshot captioning task. Through our experiments and analyses, this study aims to provide valuable insights into the application of adapters in vision-language models and contribute to the development of efficient tuning techniques for the screenshot captioning task. Our study is available at [https://github.com/RainYuGG/BLIP-Adapter](https://github.com/RainYuGG/BLIP-Adapter) ## Introduction Recently, in this era where everyone owns a smartphone, screenshot captioning has attracted increasing attention. This task is aimed at producing natural language descriptions of user behaviors captured within mobile screenshots. Without these screenshot captioning systems, users are burdened with the task of manually describing the UI of mobile applications whenever they need to report issues to developers, or when creating application tutorials, etc. This process can be both time-consuming and labor-intensive. Our objective is to investigate efficient tuning strategies tailored for the screenshot captioning task. Machine learning has achieved significant success in both vision and language tasks [14, 15, 16, 17, 18, 19]. Moreover, there have been notable advancements in vision-language tasks [13, 14, 15, 16, 17], such as image-text matching, visual question answering, and image captioning. In these frameworks, Vision-language models, which typically utilize both a visual model and a language model, have greatly improved due to enhanced architecture designs and the availability of large-scale high-quality datasets [13, 14, 15, 16, 17]. Despite advancements in architectural designs, the size of modern vision-language models is rapidly increasing, leading to substantial memory and storage requirements. Moreover, these models typically comprise a vast number of parameters, which poses a significant challenge for training from scratch and consumes considerable time and computational resources. 
Large-scale high-quality datasets in image captioning tasks primarily consist of general real-world scenes, with a lack of labeled datasets specifically tailored to special domains, such as medical images [15], earth observation images [16], and mobile screenshots [18]. However, collecting large-scale datasets for every visual task is labor-intensive and prohibitively expensive to scale. Moreover, ensuring high-quality datasets also requires human resources for data labeling. To overcome these problems, fine-tuning pre-trained models has become a widely adopted practice. Models pre-trained on large-scale datasets, such as ImageNet [14], COCO [13], and Visual Genome [15], are subsequently fine-tuned on smaller-scale, task-specific downstream datasets. This approach leverages the knowledge acquired during pre-training on large-scale data and adapts it to the specific task at hand. However, fine-tuning the entire model can still be resource-intensive, consuming considerable computation power, storage, and time, especially for large vision-language models. To address this, adapter-based fine-tuning [17, 18, 19, 16, 15] has emerged as a more parameter-efficient alternative. Here as well, models pre-trained on large-scale datasets are used as backbones, and adapters with a small number of learnable weights are inserted into the model. As shown in Figure 1, when using adapters, the weights of the pre-trained model are frozen and only the adapters are fine-tuned on the smaller-scale downstream dataset; training these additional adapter modules alone is sufficient to obtain the effect of fine-tuning. This approach enables the model to be adapted to specific tasks while preserving the knowledge acquired during pre-training. Additionally, it helps mitigate the limitations related to dataset availability, computational resources, and storage requirements that are often encountered when training models from scratch. To address the screenshot captioning task, this study explores various methods and techniques for implementing parameter-efficient tuning and evaluates their effectiveness within this specific context. Modifications of the state-of-the-art vision-language model, BLIP, are explored by employing parameter-efficient tuning methods for task-specific fine-tuning in the mobile user interface (UI) screenshot captioning domain. The impacts of various methods are assessed, providing a comprehensive evaluation of their respective effects. Additionally, a modification is incorporated into the BLIP model, similar to the implementation in previous work, by inserting an intermediate layer between the vision and language models. The effect of this alteration, coupled with the use of parameter-efficient methods on the language model, is analyzed. Furthermore, this study experiments with different combinations of parameter-efficient tuning strategies on BLIP, evaluating their efficacy for screenshot captioning tasks. Our contributions can be summarized as:

* The evaluation of various parameter-efficient tuning strategies is conducted, applied separately to the vision and language components, on the state-of-the-art captioning model, BLIP.
* This study presents comprehensive transfer learning research applied to Screen2Words, a dataset specifically tailored for image captioning tasks within the mobile UI domain.
* The demonstration that applying a combination of different parameter-efficient tuning methods can achieve performance comparable to full fine-tuning, but only requires updating 0.08% to 1.47% of the parameters.

Figure 1: Illustration of the adapter-based fine-tuning approach. Through the insertion of adapters, we can selectively fine-tune the lightweight components, which, in our case, comprise approximately 1.47% of the model's total parameters.

## Related Work

### Vision-Language Models

Vision-language models are a category of models that blend vision and language components to tackle tasks that cross both domains. These models' architectures may vary depending on the specific task. For instance, in image-text matching tasks, a Siamese network architecture, featuring both a visual encoder and a text encoder, is often preferred. Conversely, image captioning tasks typically employ an encoder-decoder architecture, which comprises a visual encoder and a text decoder. Over the years, a variety of vision-language models have emerged, reflecting advancements in architectural designs and pre-training strategies, especially in image captioning tasks. In 2015, Vinyals et al. (Vinyals et al., 2015) introduced an image caption generator that combines Convolutional Neural Networks (Krizhevsky, Sutskever, and Hinton, 2012) as a visual encoder with Recurrent Neural Networks as a text decoder. In 2018, Anderson et al. introduced the Bottom-Up and Top-Down Attention model (Anderson et al., 2018). Its primary innovation lies in the use of Faster R-CNN (Ren et al., 2015) for object detection, which realizes bottom-up attention by providing the detected objects and their corresponding labels. Further, Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) were leveraged in the decoder, dynamically adjusting the focus on the input image features according to the output language. This attention mechanism allows the model to pay greater attention to the more salient and important objects in the image, thus creating a better description. However, after transformer-architecture-based models (Vaswani et al., 2017; Devlin et al., 2019) demonstrated state-of-the-art performance in natural language processing tasks, their impact extended to the field of vision-language models. The introduction of the Vision Transformer by Dosovitskiy et al. in 2020 (Dosovitskiy et al., 2021) marked a significant milestone. The Vision Transformer applies the transformer architecture to visual data by treating images as sequences of patches. This approach achieved competitive results in image classification tasks and influenced the development of vision-language models. Building on these innovations, the BLIP model, introduced by Li et al. in 2022 (Li et al., 2022), has achieved state-of-the-art performance across multiple vision-language tasks. It achieves this through pre-training multimodal components such as visual and text encoders, as well as a text decoder, which are applicable to various vision-language tasks, including image-text matching, visual question answering, and image captioning. During pre-training, BLIP utilizes large-scale datasets, including COCO [11], Visual Genome [14], Conceptual Captions [15], Conceptual 12M [13], SBU captions [12], and LAION [15] for comprehensive pre-training. In image captioning tasks, the visual encoder and text decoder components are used, which are ViT [16] and BERT [17] respectively, followed by fine-tuning on the COCO Caption dataset [11].
This convergence of ViT and BERT underlines the transformative role of the transformer architecture in propelling advancements in vision-language models. Such progress highlights the importance of architectural innovation and refined pre-training strategies in further advancing the field of vision-language understanding and generation. ### Adapter Approaches Adapters, introduced by Houlsby et al. in 2019 [10], are a parameter-efficient tuning technique employed in transfer learning. This approach involves adding lightweight, task-specific layers or modules to a pre-trained model without altering the original parameters. Instead of fine-tuning the entire model, it's feasible to tune only the small-scale adapters to achieve the fine-tuning effect. By selectively updating these adapters while freezing the remaining parameters, parameter efficiency is achieved without significantly compromising the model's performance. Moreover, by only saving the additional weights of the adapters during training, greater storage efficiency is achieved compared to saving the weights of the entire model. These adapters learn task-specific information while keeping the pre-trained model's parameters intact, thus reducing computational and storage costs and allowing for better generalization to different tasks. Adapters have shown significant performance not only in natural language processing tasks [10, 19, 18, 19, 20, 21] but also in various vision tasks. [12, 13, 14, 15]. The bottleneck adapters, as proposed by Houlsby et al., are inserted into the transformer architecture, specifically after the feed-forward layers. These adapters comprise a down projection, followed by a GELU activation function, and then an up projection. Bapna et al. [14] modify the adapter architecture for translation tasks by incorporating an additional layer normalization and replacing the GELU activation function with a ReLU activation function. The Compacter architecture, proposed by Mahabadi et al. in 2021 [14], modifies the adapter structure by replacing the linear down- and up-projections with a parameterized hypercomplex multiplication layer. Distinct from the linear layer, this hypercomplex multiplication layer generates its weight matrix from two smaller matrices, consequently decreasing the total number of parameters. Additionally, these matrices can be factorized and shared across all adapter layers. Prefix Tuning, introduced by Li et al. in 2021 [11], innovates by incorporating new parameters within the multi-head attention blocks in each transformer layer. Specifically, it enhances the model by prepending trainable prefix vectors to the keys and values of the attention head input, which adds flexibility and adaptability to the model's attention mechanism. Low-Rank Adaptation (LoRA), introduced by Hu et al. in 2021 [18], incorporates trainable low-rank decomposition matrices into the layers of a pre-trained model. Specifically, LoRA targets the attention weights within the transformer's self-attention sub-layers. Instead of introducing additional parameters, BitFit [15] simply fine-tunes the bias terms within each module, allowing for task-specific adaptation with minimal changes to the pre-trained model. In object detection tasks, Liu et al. introduced Explicit Visual Prompting (EVP) in 2023 [11], which employs an architecture similar to Houlsby's adapters. EVP distinguishes itself by incorporating handcrafted features as input to the adapters and utilizing shared up-projection layers within the adapter structure. 
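To make the bottleneck and low-rank designs described above concrete, the following is a minimal PyTorch sketch (our illustration, not code from the cited papers) of a Houlsby-style adapter and a LoRA-augmented linear layer; the hidden size, reduction factor, rank, and scaling are arbitrary choices.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Houlsby-style adapter: down-projection -> GELU -> up-projection + skip."""

    def __init__(self, hidden_dim: int, reduction: int = 16):
        super().__init__()
        self.down = nn.Linear(hidden_dim, hidden_dim // reduction)
        self.act = nn.GELU()
        self.up = nn.Linear(hidden_dim // reduction, hidden_dim)
        nn.init.zeros_(self.up.weight)   # start as a near-identity module
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class LoRALinear(nn.Module):
    """Linear layer with a frozen base projection plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False      # keep the pre-trained weights frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # low-rank update starts at zero
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Example: adapt a 768-dimensional transformer hidden state
tokens = torch.randn(2, 32, 768)                   # (batch, sequence, hidden)
adapted = BottleneckAdapter(768)(tokens)           # same shape, adapter path trainable
projected = LoRALinear(nn.Linear(768, 768))(tokens)
print(adapted.shape, projected.shape)              # torch.Size([2, 32, 768]) twice
```

In both sketches the added path is initialized to zero so that, at the start of training, the frozen backbone's behavior is unchanged and only the new parameters need to adapt.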
In 2022, Sung et al. introduced VL-Adapter [14], wherein they experimented with integrating various adapters into vision-language models and evaluated their performance. They primarily applied these adapters to video question answering tasks. In contrast to our approach, their model features both a visual encoder and a text encoder, as well as a text decoder, while ours does not include a text encoder. Additionally, their models are pre-trained on general image datasets and NLP tasks, whereas our model is specifically pre-trained for the image captioning task and we further fine-tuned for a specialized screenshot captioning dataset. ### Mobile Screenshot Captioning Mobile screenshot captioning is a specialized subset of image captioning that concentrates on generating textual descriptions for mobile screenshots. This task is notably challenging due to the distinctive traits of mobile screenshots, including the presence of various UI elements, the absence of fixed layouts, and a wide range of UI elements and styles. Contrary to general image captioning, which emphasizes describing the overall scene and objects, mobile screenshot captioning focuses on articulating the functionality and content of the UI elements. Additionally, in mobile screenshot captioning, the layout of UI elements is more closely linked to captioning performance. Screen2Words, introduced by Wang et al. in 2021 [14], is the first open-source dataset for mobile screenshot captioning. This dataset is built upon the Rico dataset [15], which is a large-scale dataset containing mobile app interface images, and UI layouts. Screen2Words enhances the mobile screenshots in the Rico dataset [15] by including human-labeled textual descriptions that correspond to the screenshots. The dataset consists of 22,417 unique Android screenshots, each accompanied by five concise language descriptions that convey important content and functionality of the mobile screens. These descriptions are valuable for various language-based application scenarios. Additionally, they offer a model that includes a ResNet encoder and Transformer decoder, which serves to evaluate the performance of the dataset. ## Methodology We explored the effectiveness of various parameter-efficient tuning strategies as applied to screenshot captioning tasks. Our goal is to illustrate how employing a combination of parameter-efficient tuning methods can contribute to achieving parameter-efficient tuning for our caption models while maximizing the performance of screenshot captioning systems. The BLIP Caption model [11] is employed as our image caption model and is fine-tuned by the Screen2Words dataset [20] as our baseline. Subsequently, the following parameter-efficient tuning strategies were evaluated separately on the visual encoder and text decoder of the BLIP Caption model. Following this, different combinations of these strategies were employed to ascertain the most effective approach for optimizing the model architecture and fine-tuning the components of the model to achieve desired outcomes. The specifics of these parameter-efficient tuning approaches and the model architecture are detailed in the following sections. ### Parameter-Efficient Tuning approaches As our model is a vision-language model, we had explored the usage of adapters specifically designed for the vision and language components separately. 
These adapters, including the Houlsby adapter [13], BitFit [23], LoRA [12], and Explicit Visual Prompting [13], were examined both individually and in various combinations in the experiments. * **Houlsby adapter** is additional bottleneck module that consist of a down-projection, GELU activation, an up-projection, and a skip connection. In our implementation, Houlsby adapters were integrated into the three feed-forward layers of each transformer block in the text decoder as shown in Figure 2 (a). * **BitFit** stands out by not necessitating the introduction of additional modules to the model. Instead, it focuses on fine-tuning the biases of the existing modules within the model. In these experiments, the effectiveness of applying BitFit to either the visual encoder or the text decoder is examined. * **LoRA** consists of a down-projection and an up-projection that run in parallel with the existing linear-projection layers. In our experiments, LoRA was integrated into the attention modules of the transformer blocks in text decoder as shown in Figure 2 (b). The attention module typically consists of queries, keys, and values, with LoRA being employed in the queries and values. * **Explicit Visual Prompting** (EVP) adapters utilize a similar architecture to Houlsby adapter, comprising a down-projection, GELU activation, and an up-projection. However, there are notable differences in terms of input, weight sharing, and network connection. In the case of EVP adapters, the input includes both the projection of the input image and task-specific information. Furthermore, the weight of the up-projection is shared among all EVP adapters, allowing for efficient parameter sharing. These adapters are inserted in front of each transformer block within the visual encoder. ### Model Architecture Modification As we need to insert additional modules into the model and freeze the original weights except for those we want to fine-tune, there are several ways to modify the model architecture. First, experiments were conducted using these parameter-efficient tuning methods individually, either on the visual encoder or the text decoder, to observe the impact of each method on the vision-language model. Since EVP adapter is specifically tailored for object detection tasks and aligns well with the Vision Transformer architecture, it was deployed on the visual encoder. To this end, three different handcrafted features were experimented with as inputs to EVP adapter: the Fast Fourier Transform of the original image, as outlined in EVP paper, the original image itself, and its grayscale version. The inclusion of the grayscale version is particularly beneficial for screenshots, as the semantics of most screenshot elements are color-irrelated, which is described in the Screen2Words paper. On the other hand, given that the other methods are primarily engineered for NLP tasks, it was considered fitting to apply them to the text decoder. This was done to determine whether the method modified on the visual encoder or the text decoder is more effective for the vision-language model when choosing only one of them. The positions of insertions or modifications for each method within the model are depicted in Figure 2 (a). Second, a modification similar to the model implementation of the VL-adapter [20] was implemented, which involves inserting a visual projection between the visual encoder and the language model as shown in Figure 2 (a). 
A key difference between our model and that of the VL-adapter is the absence of a text encoder in the former. Therefore, the linear projection layer was inserted between our visual encoder and text decoder. In this strategy, only the visual projection and associated modules of the methods on the text decoder were fine-tuned. Given the freeze applied to the entire visual encoder, only the methods on the text decoder were employed. Moreover, an experiment was conducted using the Vision Transformer block as the visual projection instead of the linear projection layer, inspired by the implementation of the VL-adapter. This was done to determine whether employing a visual projection and solely fine-tuning the text decoder could outperform fine-tuning the entire model in this particular scenario. Finally, an attempt was made to combine the methods on both the visual encoder and text decoder to see if the combination could lead to enhanced results. EVP adapters were integrated into the visual encoder using various handcrafted features, and combined with the Houlsby adapter and LoRA, which were implemented on the text decoder. As the underlying principle of BitFit is not to introduce additional modules but rather to fine-tune the biases, it was additionally tested across the entire model. Through these experiments, insights were gained into how different adapters and methods affect the vision-language model's performance. Our findings provide guidance for selecting the optimal approach in fine-tuning and architecting the model to achieve the desired results in screenshot captioning systems.

Figure 2: Illustration of the adapter insertion points in the BLIP Caption model. (a) The modifications to the BLIP Caption model architecture. (b) The architecture of parameter-efficient method modules.

## Evaluation

We begin by introducing the experimental settings, including dataset selection, evaluation metrics, and training procedures. We then apply the parameter-efficient tuning methods and evaluate the various model structure modifications associated with them, as detailed in the Methodology section. Initially, the performance of the methods when applied to the visual and language components was evaluated individually. Thereafter, the visual encoder was frozen and only the text decoder and the inclusion of a visual projection were trained. Finally, the integration of the methods into both the visual and language components of the BLIP caption model was explored. In these three attempts, the entire model with approximately 223 million parameters, which underwent complete fine-tuning, served as a baseline for performance comparisons. This enables the assessment of how closely each modification and the methods can approach the baseline performance. Through these evaluations, the aim was to determine the impact of each modification and the methods on the performance and parameter efficiency of the model. This comprehensive analysis enabled the identification of the most effective strategies for enhancing the screenshot captioning system.

### Experiment Settings

In our experiments, the Screen2Words dataset Wang et al. (2021) is utilized for training, validation, and evaluation. The dataset was split according to the guidelines provided in its release. Each screenshot in the dataset is accompanied by five captions.
To match the number of captions and ensure sufficient training data, the screenshots were duplicated during the training phase, resulting in five training instances for each. During the evaluation phase, the five captions associated with each screenshot were used as references. To gauge the performance of the models, the BLEU-4 and CIDEr metrics Papineni et al. (2002); Vedantam et al. (2015) were employed, both being widely used benchmarks in the field of image captioning. All experiments were conducted using the PyTorch deep learning framework Paszke et al. (2019) and were performed on a single Nvidia RTX A6000 GPU. During the training process, the AdamW optimizer with a weight decay of 0.1 was utilized, and the learning rate was set to 5e-5. The batch size for training was 32, and the models were trained for a total of 30 epochs.

### Individual Tuning of Visual and Language Components

The parameter-efficient tuning approaches were employed for the visual encoder and text decoder separately to observe the individual effects of each approach on our captioning task. For the text decoder, methods such as BitFit, the Houlsby adapter, and LoRA were utilized. In this scenario, the entire visual encoder was frozen and training focused on the components of the method integrated into the text decoder. Conversely, for the visual encoder, EVP and BitFit were employed, and the entire text decoder was frozen while training focused exclusively on the components associated with the method on the visual encoder. Additionally, due to the use of handcrafted feature extraction in EVP, three different handcrafted features were employed: the Fast Fourier Transform of the original image, along with the original image and its grayscale version. In our notation, "FT" stands for fine-tuning, "EVP" represents Explicit Visual Prompting, "EVP-gs" indicates Explicit Visual Prompting with grayscale extraction, and "EVP-fft" signifies Explicit Visual Prompting with Fast Fourier Transform extraction. Additionally, "(A)" denotes the entire model, "(V)" corresponds to the visual encoder, and "(T)" signifies the text decoder. The "Parameters(%)" column in the table denotes the percentage of trainable parameters in the model. The results are shown in Table 1. We can observe that fine-tuning the entirety of a model component, provided it has a sufficient number of parameters, allows the performance to approach or even exceed the baseline. Fine-tuning either the entire visual encoder or the entire text decoder achieves more than 90% of the baseline BLEU-4 score. Notably, when solely fine-tuning the entire visual encoder, the performance exceeds expectations, with the CIDEr score surpassing that achieved by fine-tuning the whole model. We conjecture that this can be attributed to the frozen text decoder already having been pre-trained for the captioning task, thus proving sufficient in generating quality captions, while the large number of parameters in the visual encoder is conducive to learning task-specific information. When employing parameter-efficient tuning methods individually, the Houlsby adapter stands out by achieving the highest performance, reaching approximately 91.4% of the CIDEr score attained by fine-tuning the entire model and thereby surpassing all other methods. Additionally, a marked decline in performance was observed when these methods were utilized in isolation, especially if they were not applied to the text decoder.
This can be attributed to the smaller number of parameters being adjusted, which suggests that methods applied solely to the visual encoder may not possess sufficient parameters to learn task-specific information, whereas methods applied exclusively to the text decoder are more adept at learning how to generate captions. Therefore, it is more effective to apply the methods to the text decoder and fine-tune it to achieve satisfactory results. This enables the model to adjust and enhance its language generation capabilities for the specific task, bringing it close to the performance achieved by fine-tuning the entire model.

### Text Decoder and Visual Projection Tuning

The visual encoder was frozen and training focused exclusively on the text decoder and the visual projection layer, as illustrated in Figure 2 (a). BitFit, the Houlsby adapter, and LoRA were utilized with a visual projection layer to investigate the potential performance enhancements this combination might yield. The visual projection layer employed is a linear projection layer that maps the visual features to the same dimensionality as the original features. Moreover, an experiment was conducted using a Vision Transformer block (ViT block) as an alternative to the linear projection layer. The results are shown in Table 2. We observe that when using a visual projection layer and training both the projection and the entire text decoder, the performance declines significantly, with the CIDEr score dropping to 67.3, approximately 76.2% of the baseline. This decline can be attributed to the visual projection layer being initialized with zeros, which destabilizes the pre-trained weights of the text decoder during training. In contrast, because the pre-trained weights of the text decoder remain frozen, LoRA and the Houlsby adapter only fine-tune the additional parameters, making them more stable. In this case, both LoRA and the Houlsby adapter achieve more than 88% of the baseline CIDEr score. Moreover, utilizing ViT blocks leads to superior performance compared to using a linear projection layer. This improvement can be attributed to the ViT block's enhanced capability in capturing the specific information of the intermediate visual embeddings relative to the linear projection layer. The Houlsby adapter, in particular, attains a CIDEr score of 86.0, which represents approximately 97.4% of the baseline. Furthermore, LoRA exhibits the highest performance, achieving a CIDEr score of 88.1, equivalent to roughly 99.8% of the baseline. This experiment makes it evident that the use of a visual projection can effectively transfer task-specific features to a certain extent.

### Entire Model Tuning

Both types of parameter-efficient tuning methods were implemented simultaneously on the visual encoder and text decoder. For the text decoder, the Houlsby adapter and LoRA were utilized, while EVP was employed for the visual encoder. Additionally, BitFit was tested on the entire model on its own. Given that BitFit's fundamental concept does not involve the introduction of additional modules, but instead focuses on fine-tuning biases, it was deemed pertinent to evaluate its impact across the whole model. The results are shown in Table 3. By utilizing the parameter-efficient tuning methods on the whole model, where the trainable parameters are distributed sparsely throughout the model, we observe that the results are reasonably robust.
The lowest performance achieved in this configuration is a CIDEr score of approximately 79.2, which is about 89.7% of the baseline. However, the lowest BLEU-4 score is 15.4, which is only 74.8% of the baseline. This discrepancy can be attributed to the BLEU metric focusing solely on the n-gram overlap between the generated captions and the reference captions, whereas the CIDEr metric takes into account not only n-grams but also the Term Frequency Inverse Document Frequency (TF-IDF) [12], which measures the similarity between the generated captions and the reference captions. The combinations of EVP with LoRA may capture some implicit information but do not perform as well in representing explicit details, leading to lower BLEU scores. However, when using the combination of EVP with the Houlsby adapter, the BLEU-4 score reaches 18.2, which is 88.3% of the baseline, and the CIDEr score reaches 85.2, which is 96.5% of the baseline.

\begin{table} \begin{tabular}{c|c c c} \hline & BLEU-4 & CIDEr & Parameters(\%) \\ \hline FT (A) & 20.6 & 88.3 & 100.0 \\ FT (V) & 18.6 & **89.6** & 38.44 \\ FT (T) & **19.0** & 82.3 & 61.56 \\ \hline EVP (V) & 11.6 & 65.1 & 0.29 \\ EVP-gs (V) & 11.7 & 64.9 & 0.29 \\ EVP-fft (V) & 12.1 & 66.1 & 0.29 \\ BitFit (V) & 13.4 & 67.9 & **0.05** \\ BitFit (T) & 14.8 & 70.4 & 0.08 \\ Houlsby (T) & **18.0** & **80.7** & 1.18 \\ LoRA (T) & 14.5 & 74.2 & 0.26 \\ \hline \end{tabular} \end{table} Table 1: Performance of individual component tuning using parameter-efficient methods.

\begin{table} \begin{tabular}{c|c c c} \hline & BLEU-4 & CIDEr & Parameters(\%) \\ \hline FT (A) & 20.6 & 88.3 & 100.0 \\ linear \& text decoder & 15.7 & 67.3 & 61.66 \\ linear \& Houlsby & 17.9 & 81.3 & 1.44 \\ linear \& LoRA & 16.1 & 80.5 & **0.52** \\ ViT block \& text decoder & 16.6 & 77.9 & 62.74 \\ ViT block \& Houlsby & **18.2** & 86.0 & 4.18 \\ ViT block \& LoRA & 17.9 & **88.1** & 3.31 \\ \hline \end{tabular} \end{table} Table 2: Performance of text decoder tuning with visual projection layer using parameter-efficient methods.

### Discussion

From our experimental results, we can observe that the handcrafted features used in EVP do not contribute to the enhancement of the model's performance in our case. The performance of EVP in this scenario is inferior to the other methods when applied individually to the visual encoder. However, when EVP is combined with the Houlsby adapter or LoRA, effectively dispersing updatable parameters across the entire model, we observe a substantial improvement in performance. BitFit, when applied across the entirety of the model, can yield decent results without necessitating the insertion of additional modules. However, when employed exclusively on a partial component of the model, its effectiveness diminishes. LoRA, in most cases, can achieve satisfactory results, yet its performance is not on par with that of the Houlsby adapter. This can be attributed to the fact that LoRA is only applied to the attention module within the Transformer block, whereas the Houlsby adapter is implemented throughout the entirety of the Transformer block. However, when LoRA is applied with a ViT block, it achieves the highest performance, very close to the baseline. This may be because LoRA in the cross-attention modules directly captures the intermediate visual features during tuning, which are more task-specific, and the ViT block can capture more information than the linear projection layer.
The Houlsby adapter, whether used individually on the text decoder, applied with a visual projection layer, or combined with EVP, outperforms almost all other methods. Even when deployed independently, it comes close to matching the performance achieved by fine-tuning the entire text decoder. Moreover, when it is applied with EVP, its performance approximates the effectiveness of fine-tuning the entire model, while only requiring updates to a mere 1.47% of the model's parameters. This demonstrates its stability and flexibility across various scenarios. As demonstrated in Figure 4, we examine the CIDEr performance of our experimental approaches. Individually tuning the visual encoder and text decoder using parameter-efficient methods does not yield sufficiently satisfactory results. This could be due to an inadequate number of parameters available to learn task-specific information, or the inability to modify the other component of the model, thereby resulting in model instability. As depicted in Figure 3, the BLEU scores of our combined approaches and of those using a visual projection are all approximately 18, reflecting closely comparable performances. Implementing the VL-adapter's model modification with a linear visual projection layer does have some impact, but our application of the ViT block coupled with LoRA, with only 3.31% of the entire model's parameters, is more effective in capturing intermediate visual features, thus achieving a higher score. The ViT block, having a larger parameter count than the linear projection layer, could account for its superior performance. Notably, the ViT block's parameters constitute approximately 3% of the entire model's parameters, a quantity nearly ten-fold that of the EVP's parameters. The use of EVP with the Houlsby adapter, requiring only 1.47% of the model's parameters, yields a substantial score, reaching 96.5% of the baseline. We further delve into comparing the generated captions from these two top-performing methods, as outlined in the Generated Captions section.

Figure 3: Comparison of the overall BLEU score of parameter-efficient tuning methods.

\begin{table} \begin{tabular}{c|c c c} \hline \hline & BLEU-4 & CIDEr & Parameters(\%) \\ \hline FT (A) & 20.6 & 88.3 & 100.0 \\ BitFit (A) & 17.6 & 80.4 & 0.13 \\ EVP \& Houlsby & **18.2** & **85.2** & 1.47 \\ EVP-gs \& Houlsby & **18.2** & 84.7 & 1.47 \\ EVP-fft \& Houlsby & 18.1 & 84.0 & 1.47 \\ EVP \& LoRA & 15.4 & 79.2 & 0.55 \\ EVP-gs \& LoRA & 15.4 & 79.2 & 0.55 \\ EVP-fft \& LoRA & 15.8 & 80.7 & 0.55 \\ \hline \hline \end{tabular} \end{table} Table 3: Overall performance of parameter-efficient tuning methods on the entire model.

## Generated Captions

In this section, we present a collection of example captions generated by our model, compared with human labels and the outputs from the Screen2Words paper's model in the context of screenshot captioning tasks. These examples further elucidate our model's capabilities in generating captions, extending beyond the quantitative language scores covered in the main sections of the paper. These selections have been curated to showcase the diversity and quality of captions generated across varied scenarios and conditions. We also include a few examples where our model fails to generate captions that are as accurate as the human labels, and discuss the possible reasons for these failures.
The examples provided here encompass captions generated by our implementation of EVP with the Houlsby adapter, LoRA paired with a ViT block visual projection, and the fine-tuning of the entire model. For a comparative analysis, we also include the generated captions from the model implemented in the Screen2Words paper. It is worth noting that the model from the Screen2Words paper is a multi-modal model that uses not only images but also the text and layout information present in the screenshots as inputs. As depicted in Figure 5, our models are able to capture the global information but struggle to retrieve the detailed information, as evidenced in the two top examples. The failures observed could be attributed to the abundance of text on the screenshot and the fact that we solely use the image as input. In the middle-left example, our two adapter-based models exhibit a comprehensive understanding of the context, as does fine-tuning the entire model. In the middle-right example, the caption generated by LoRA with the ViT block visual projection aligns more closely with the human label than the captions produced by either our implementation of EVP with the Houlsby adapter or by fine-tuning the entire model. Interestingly, the latter two models mistakenly interpret the album as a music player. This difference could potentially be attributed to LoRA's insertion into the attention module, enabling the model to focus more on semantic features. In the bottom-left example, our models grasp half of the context: they can identify it as a login page but fail to recognize which app it is. This may be attributed to the simplicity of the login page, which contains only a few elements. Additionally, the models may struggle to comprehend the text within the page. In the bottom-right example, our models manage to discern that the app is a baby care app, but do not pick up on the text information displayed on the screen. Notably, the caption describes a baby in the app, which may be due to the pre-training of the model on diverse datasets, including classification datasets. This pre-training may predispose the model to concentrate on objects present in the image.

Figure 4: Comparison of the overall CIDEr score of parameter-efficient tuning methods.

Figure 5: Example of the generated captions.

## Conclusion

This study explores the effectiveness of various parameter-efficient tuning strategies, originally designed for both visual and language tasks, within the context of screenshot captioning, and evaluates various combinations of the methods to identify the most effective combination for the BLIP Caption model. Through our experiments, we find that using LoRA with a ViT block as a visual projection achieved the highest CIDEr performance, reaching about 99.8% of the performance achieved by fully fine-tuning the model, with only 3.31% of the model's parameters needing tuning. Similarly, the combination of EVP on the visual encoder and the Houlsby adapter on the text decoder reached about 96.5% of the performance of full fine-tuning, using only 1.47% of the model's parameters. This underscores that both EVP with the Houlsby adapter and LoRA with a ViT block are viable choices, depending on the parameter constraints. It also demonstrates the efficacy of parameter-efficient tuning strategies in enhancing screenshot captioning performance. These findings serve as an important benchmark for future research in this field.
Moreover, the captions generated by these models for the test set could be a valuable resource for further studies related to caption generation.

## Acknowledgments

This project is funded by Google's grant to National Taiwan University numbered "Google-NTU-112-6-00049".
2309.04567
Fluid-driven slow slip and earthquake nucleation on a slip-weakening circular fault
We investigate the propagation of fluid-driven fault slip on a slip-weakening frictional interface separating two identical half-spaces of a three-dimensional elastic solid. Our focus is on axisymmetric circular shear ruptures as they capture the most essential aspects of the dynamics of unbounded ruptures in three dimensions. In our model, fluid-driven aseismic slip occurs in two modes: as an interfacial rupture that is unconditionally stable, or as the quasi-static nucleation phase of an otherwise dynamic rupture. Unconditionally stable ruptures progress through four stages. Initially, ruptures are diffusively self-similar and the interface behaves as if it were governed by a constant friction coefficient equal to the static friction value. Slip then accelerates due to frictional weakening while the cohesive zone develops. Once the latter gets properly localized, a finite amount of fracture energy emerges along the interface and the rupture dynamics is governed by an energy balance of the Griffith type. In this stage, fault slip transitions from a large-toughness to a small-toughness regime. Ultimately, self-similarity is recovered and the fault behaves again as having a constant friction coefficient, but this time equal to the dynamic friction value. When slow slip is the result of a frustrated dynamic instability, slip also initiates self-similarly at a constant peak friction coefficient. The maximum aseismic rupture size varies from a critical nucleation radius (shear modulus divided by slip-weakening rate) to infinity near the limit that separates the two modes of aseismic sliding. We provide analytical and numerical solutions for the problem solved over its full dimensionless parameter space. Due to its three-dimensional nature, the model enables quantitative comparisons with field observations as well as preliminary engineering design of hydraulic stimulation operations.
Alexis Sáez, Brice Lecampion
2023-09-08T19:39:51Z
http://arxiv.org/abs/2309.04567v1
# Fluid-driven slow slip and earthquake nucleation on a slip-weakening circular fault ###### Abstract Following the work of Saez _et al_. [1] who examined the three-dimensional propagation of injection-induced stable frictional sliding on a constant-friction fault interface separating two identical elastic solids, we extend their model to account for a friction coefficient that weakens with slip. This enables the model to develop a proper cohesive zone besides incorporating a finite amount of fracture energy, both ingredients absent in the former model. To do so, we consider two friction laws characterized by a linear and an exponential weakening of friction respectively. We focus on the particular case of axisymmetric circular shear ruptures as they capture the most essential aspects of the dynamics of unbounded ruptures in three dimensions. It is shown that fluid-driven slow slip can occur in two distinct modes in this model: as an interfacial rupture that is unconditionally stable, or as the quasi-static nucleation phase of an otherwise dynamic rupture. Whether the interface slides in one way or the other depends primarily on the sign of the difference between the initial shear stress (\(\tau_{0}\)) and the in-situ residual strength (\(\tau_{r}^{0}\)) of the fault. For ruptures that are unconditionally stable (\(\tau_{0}<\tau_{r}^{0}\)), fault slip undergoes four distinct stages in time. Initially, ruptures are self-similar in a diffusive manner and the fault interface behaves as if it were governed by a constant friction coefficient equal to the peak (static) friction value. Slip then accelerates due to frictional weakening while the cohesive zone develops. Once the latter gets properly localized, a finite amount of fracture energy emerges along the interface and the rupture dynamics is governed by an energy balance of the Griffith's type. We show that in this stage, fault slip always transition from a large-toughness to a small-toughness regime due to the diminishing effect of the fracture energy in the near-front energy budget as the rupture grows. Moreover, while slip grows likely confined within the pressurized region in prior stages, here the rupture front can largely outpace the pressurization front if the fault is close to the stability limit (\(\tau_{0}\approx\tau_{r}^{0}\)). Ultimately, self-similarity is recovered and the fault behaves again as possessing a constant friction coefficient, but this time equal to the residual (dynamic) friction value. It is shown that in this ultimate regime, the fault interface operates to leading order with zero fracture energy. On the other hand, when slow slip propagates as the nucleation phase of a dynamic rupture (\(\tau_{0}>\tau_{r}^{0}\)), fault slip also initiates in a self-similar manner and the interface operates at a constant peak friction coefficient. The maximum size that aseismic ruptures can reach before becoming unstable (inertially dominated) can be as small as a critical nucleation radius equal to the shear modulus divided by the slip-weakening rate, and as large as infinity when faults are close to the stability limit (\(\tau_{0}\approx\tau_{r}^{0}\)). The former case corresponds to faults that are critically stressed before the injection starts, in which case ruptures always expand much further away than the pressurized region. The larger the critical nucleation radius is with regard to the cohesive zone size, the longer ruptures can accelerate aseismically before becoming unstable. 
When the nucleation radius is smaller than the cohesive zone size, aseismic ruptures accelerate upon departing from the self-similar response due to continuous frictional weakening over the entire slipping region, undergoing nucleation unaffected by the residual fault strength. Conversely, when the nucleation radius is (much) larger than the cohesive zone size, aseismic ruptures transition towards a stage controlled by a front-localized energy balance and undergo nucleation in a 'crack-like' manner. Our results include analytical and numerical solutions for the problem solved over its full dimensionless parameter space, as well as expressions for relevant length and time scales characterizing the transition between different stages and regimes. Due to its three-dimensional nature, the model enables quantitative comparisons with field observations as well as preliminary engineering design of hydraulic stimulation operations. Existing laboratory and in-situ experiments of fluid injection are briefly discussed in the light of our results. **Keywords:** Friction; Fracture; Instability; Geological material; Injection-induced fault slip. ## 1 Introduction Sudden pressurization of pore fluids in the Earth's crust has been widely acknowledged as a trigger for inducing slow slip on pre-existing fractures and faults [2, 3, 4, 5]. Sometimes referred to as injection-induced aseismic slip, this phenomenon is thought to play a significant role in various subsurface engineering technologies and natural earthquake-related phenomena. Notable examples of the natural source include seismic swarms and aftershock sequences, often attributed to be driven by the diffusion of pore pressure [6, 7] or the propagation of slow slip [8, 9], with recent studies suggesting that the interplay between both mechanisms may be indeed responsible for the occurrence of some seismic sequences [10, 11, 12]. Similarly, low-frequency earthquakes and tectonic tremors are commonly considered to be driven by slow slip events occurring downdip the seismogenic zone in subduction zones [13, 14], where systematic evidence of overpressurized fluids has been found [14, 15, 16], with recent works suggesting that the episodicity and some characteristics of slow slip events may be explained by fluid-driven processes [17, 18, 19]. Anthropogenic fluid injections are, on the other hand, known to induce both seismic and aseismic slip [3, 4, 5]. For instance, hydraulic stimulation techniques employed to engineer deep geothermal reservoirs aim to reactivate fractures through shear slip, thereby enhancing reservoir permeability by either dilating pre-existing fractures or creating new ones. The occurrence of predominantly aseismic rather than seismic slip, is considered a highly favorable outcome, as earthquakes of relatively large magnitudes can pose a significant risk to the success of these projects [20, 21]. Injection-induced aseismic slip can, however, play a rather detrimental role in some cases, as slow slip is accompanied by quasi-static changes of stress in the surrounding rock mass which, in turn, may induce failure of unstable fault patches that could sometimes lead to earthquakes of undesirably large magnitude [22]. Moreover, since injection-induced aseismic slip may propagate faster than pore pressure diffusion, this mechanism can potentially trigger seismic events in regions that are far from the zone affected by the pressurization of pore fluids [4, 22, 23]. 
Fluid-driven aseismic slip may play a similar role in subsurface engineering technologies other than deep geothermal energy, such as hydraulic fracturing of unconventional oil and gas reservoirs [22], oil wastewater disposal [24], and carbon dioxide sequestration [25]. The apparent relevance of injection-induced aseismic slip in the aforementioned phenomena has motivated the development of physical models that are contributing to a better comprehension of this hydro-mechanical problem. The first rigorous investigations on the mechanics of injection-induced aseismic slip focused, for the sake of simplicity, on idealized two-dimensional configurations. Specifically, they considered the propagation of fault slip under plane-strain conditions, with either in-plane shear (mode II) or anti-plane shear (mode III) ruptures and a fluid source of infinite extent along the out-of-plane direction. Although these studies have significantly contributed to establishing a fundamental qualitative understanding of how the initial state of stress, the fluid injection parameters, the fault hydraulic properties, and the fault frictional rheology affect the dynamics of fluid-driven aseismic slip transients [26, 27, 28, 29], the applicability of such models remains limited, as three-dimensional configurations are expected to prevail in nature. Recently, Saez _et al._[1] examined the propagation of injection-induced aseismic slip under mixed-mode (II+III) conditions, on a fault embedded in a fully three-dimensional domain. An important finding of this study is that for the same type of fluid source (either constant injection rate or constant pressure in [1]), the spatiotemporal patterns of fault slip differ even qualitatively between the three-dimensional model and its two-dimensional counterpart, highlighting the importance of resolving more realistic rupture configurations. In the three-dimensional model of Saez _et al._[1], perhaps the strongest assumption is the consideration of a constant friction coefficient at the fault interface. This friction model, known as Coulomb's friction, corresponds to the minimal physical ingredient that can produce unconditionally stable shear ruptures. As discussed by Saez _et al._[1], a model with Coulomb's friction represents a case in which the frictional fracture energy spent during rupture propagation is effectively zero, without the possibility of developing a process zone in the proximities of the rupture front. In this paper, we eliminate this assumption and therefore extend the model of Saez _et al._[1] to account for a friction coefficient that weakens upon the onset of fault slip. This incorporates into the model the proper growth and localization of a process zone, with the resulting finite amount of fracture energy. We do so by considering the simplest model of friction that can provide the sought physical ingredients, namely, a slip-weakening friction coefficient [30]. We consider the two most common types of slip-weakening friction: a linear and an exponential decay of friction with slip, from some peak (static) value towards a constant residual (dynamic) one. On the other hand, as shown by Saez _et al._[1] for the Coulomb's friction case, a Poisson's ratio different from zero mainly affects the aspect ratio of the resulting quasi-elliptical ruptures, which become more elongated for increasing values of \(\nu\).
The characteristic size of mixed-mode, quasi-elliptical ruptures is nevertheless determined primarily by the rupture radius of circular ruptures, which occur in the limit of \(\nu=0\) for such an axisymmetric problem. The case of a null Poisson's ratio is therefore particularly insightful and notably simpler since in that limit, we can leverage the axisymmetry property of the problem to compute more efficient numerical solutions [1, 23, 31] besides allowing the problem to be tractable analytically to some extent. We therefore focus in this paper on the case of mixed-mode circular ruptures alone. We also note that our model can be considered as an extension of the two-dimensional model of Garagash and Germanovich [32]. While Garagash and Germanovich focused their investigation on the problem of nucleation and arrest of dynamic slip, our work here is concerned primarily with a different phenomenon, namely, the propagation of aseismic slip. Nonetheless, since aseismic slip could correspond indeed to the nucleation phase of an ensuing dynamic rupture, we also examine the problem of nucleation of a dynamic instability under mixed-mode conditions. In fact, by pursuing this route, we provide an extension of the nucleation length of Uenishi and Rice [33] to the three-dimensional, axisymmetric case, for both tensile and shear ruptures. Similarly, we extend some other relevant nucleation lengths identified by Garagash and Germanovich [32]. We organize this paper as follows. In section 2, we introduce the mathematical formulation of our physical model. In section 3, we present two simplified models that will be later shown to be asymptotic and/or approximate solutions of the slip-weakening model under certain regimes. In section 4, we introduce the scaling of the problem and the map of possible rupture regimes. In section 5, we examine in detail the case of ruptures that are ultimately stable. In section 6, we focus on the case in which aseismic slip corresponds to the nucleation phase of an otherwise dynamic rupture. Finally, in section 7, we provide a brief discussion of recent laboratory and in-situ experiments of fault reactivation by fluid injection in light of our results. ## 2 Problem formulation ### Governing equations Fluid is injected into a poroelastic fault zone of width \(w\) that is characterized by an intrinsic permeability \(k\) and a storage coefficient \(S\), assumed to be constant and uniform (see figure 1b). The fault zone is confined within two linearly elastic half spaces of same elastic constants, namely, a shear modulus \(\mu\) and Poisson's ratio \(\nu\). The initial stress tensor is assumed to be uniform and is characterized by a resolved shear stress \(\tau_{0}\) and total normal stress \(\sigma_{0}\) acting along the \(x\) and \(z\) directions of the Cartesian reference system of figure 1a, respectively. We consider the injection of fluids via a line source that is located along the \(z\) axis and crosses the entire fault zone width. Under such conditions, fluid flow is axisymmetric with regard to the \(z\) axis and occurs only within the porous fault zone. Moreover, the displacement field induced by the fluid injection is irrotational and the pore pressure diffusion equation of poroelasticity reduces to its uncoupled version [34], \(\partial p/\partial t=\alpha\nabla^{2}p\), where \(\alpha=k/S\eta\) is the fault hydraulic diffusivity, with \(\eta\) the fluid dynamic viscosity. 
Solutions of the previous linear diffusion equation are known for a broad range of boundary and initial conditions [35]. Here, we focus on perhaps the most practical case in which the fluid injection is conducted at a constant volumetric rate \(Q\). For the following boundary conditions: \(2\pi rw(k/\eta)\partial p/\partial r=-Q\) when \(r\to 0\) and \(p=p_{0}\) when \(r\rightarrow\infty\), with \(p_{0}\) the initial pore pressure field assumed to be uniform, the solution of the diffusion equation in terms of the overpressure \(\Delta p(r,t)=p(r,t)-p_{0}\) reads as (section 10.4, eq. 5, [35]) \[\Delta p(r,t)=\Delta p_{*}E_{1}\left(\frac{r^{2}}{4\alpha t}\right),\;\text{ with}\;\Delta p_{*}=\frac{Q\eta}{4\pi kw}, \tag{1}\] where \(\Delta p_{*}\) is the intensity of the injection with units of pressure, and \(E_{1}\left(x\right)=\int_{x}^{\infty}\left(e^{-\xi}/\xi\right)\mathrm{d}\xi\) is the exponential integral function. Let us define the following characteristic overpressure, \[\Delta p_{c}=\frac{Q\eta}{kw}, \tag{2}\] which relates to the injection intensity as \(\Delta p_{c}=4\pi\Delta p_{*}\). A close examination of equation (1) for the large times in which the line-source approximation is valid, \(t\gg r_{s}^{2}/\alpha\) with \(r_{s}\) the characteristic size of the actual fluid source, shows that \(\Delta p_{c}\) is of the order of magnitude of the overpressure at the fluid source. Moreover, as discussed in Appendix C, the fluid-source overpressure, say \(\Delta p_{s}\), increases slowly (logarithmically) with time, such that for practical applications, one can consider \(\Delta p_{s}\) to be roughly constant and approximately equal to the characteristic overpressure \(\Delta p_{c}\). Because of its simplicity, we adopt the approximation \(\Delta p_{s}\approx\Delta p_{c}\) throughout this work, with the implications further discussed in Appendix C. Having established that, we note that in this work we consider exclusively injection scenarios in which the characteristic fluid-source overpressure satisfies \(\Delta p_{c}\lessapprox\sigma_{0}^{\prime}\), where \(\sigma_{0}^{\prime}=\sigma_{0}-p_{0}\) is the initial effective normal stress. In this way, we make sure that the walls of the fault always remain in contact, thus avoiding hydraulic fracturing. The latter scenario has been notably addressed in the context of slip instabilities by others [36].

Figure 1: Model schematics. (a, b) Fluid is injected into a permeable fault zone of width \(w\) via a line-source that crosses the entire fault zone width. The fault is planar and embedded in an unbounded, linearly elastic, impermeable host rock with the same elastic constants on either side of the fault. The initial stress tensor is uniform. The resulting mixed-mode (II+III) shear rupture is circular when Poisson's ratio \(\nu=0\). (c) Direction of shear stress \(\tau\) along the rupture front and the corresponding mode-II and mode-III components with regard to both the Cartesian and cylindrical coordinate systems.

Suppose now that the fault zone possesses a slip surface located at \(z=0\) where the totality of fault slip is accommodated (see figure 1b). The slip surface is assumed to obey a Mohr-Coulomb shear failure criterion without any cohesion such that the maximum shear stress \(\tau\) and fault strength \(\tau_{s}\) satisfy, at any position along the slip surface and any time, the following local relation \[|\tau|\leq\tau_{s}=f\times\left(\sigma_{0}^{\prime}-\Delta p\right), \tag{3}\]
where \(\Delta p\) is the overpressure given by equation (1) and \(f\) is a local friction coefficient that depends on fault slip \(\delta\) [30]. There are two common choices for the slip-weakening friction model, namely, a friction coefficient that decays linearly with slip, \[f(\delta)=\begin{cases}f_{p}-\left(f_{p}-f_{r}\right)|\delta|/\delta_{c}&\text{ if }|\delta|\leq\delta_{c}\\ f_{r}&\text{if }|\delta|>\delta_{c},\end{cases} \tag{4}\] and a friction coefficient that decays exponentially with it, \[f(\delta)=f_{r}+\left(f_{p}-f_{r}\right)e^{-|\delta|/\delta_{c}}. \tag{5}\] In the previous equations, \(f_{p}\) is the peak (or static) friction coefficient, \(f_{r}\) is the residual (or kinetic) friction coefficient, and \(\delta_{c}\) is the characteristic 'distance' over which the friction coefficient decays from \(f_{p}\) to \(f_{r}\), as displayed in figure 2.

Figure 2: Slip-weakening friction law for (black) linear weakening and (red) exponential decay with slip. If the friction coefficient \(f\) is multiplied by some constant effective normal stress that is approximately uniform and representative of the one acting along the process zone, then the gray areas times such effective normal stress represent the fracture energy \(G_{c}\). Note that \(G_{c}^{\text{exp}}=2\cdot G_{c}^{\text{lin}}\) when \(\delta_{c}\) is the same in both models.

Moreover, we assume that the slip surface is fully locked before the injection starts and, as such, the initial shear stress \(\tau_{0}\) must be lower than the in-situ static strength of the fault, \(\tau_{p}^{0}=f_{p}\sigma_{0}^{\prime}\). According to equation (3), the injection of fluid has the effect of reducing the fault strength owing to the increase of pore-fluid pressure, which decreases the effective normal stress locally. Such a pore pressure increase will eventually be sufficient to activate fault slip when the fault strength equals the pre-injection shear stress \(\tau_{0}\), which marks the onset of the interfacial frictional rupture. Indeed, owing to the line-source approximation of the fluid source, the activation of slip in our model occurs immediately upon the start of the injection as a consequence of the weak logarithmic singularity that the exponential integral function features near the origin (see Appendix C). In the three-dimensional axisymmetric configuration under consideration, the resulting shear rupture will propagate under mixed-mode II+III conditions, where II and III represent the in-plane shear and anti-plane shear deformation modes, respectively. The modes of deformation are schematized in figure 1c in terms of the near-front shear stress components. Moreover, as stated in the introduction, we restrict ourselves to the case of circular ruptures alone. Such an idealized case is exact for radial fluid flow when the Poisson's ratio \(\nu=0\) [1, 23]. Moreover, in some limiting regimes of the fault response, numerically-derived asymptotic expressions for the aspect ratio of elongated ruptures (\(\nu\neq 0\)) [1] may prove useful for constructing approximate solutions for the evolution of non-circular rupture fronts using the solution for circular ruptures, at least in the case of Coulomb's friction [1].
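As a brief numerical illustration of the injection and friction ingredients introduced above, the short Python sketch below evaluates the line-source overpressure field of equations (1)-(2) and the two slip-weakening friction laws (4)-(5); all parameter values are illustrative placeholders and are not meant to represent any specific fault or injection scenario.

```python
# A minimal numerical sketch of the line-source overpressure, eqs. (1)-(2), and of
# the slip-weakening friction laws, eqs. (4)-(5). Parameter values are illustrative
# placeholders only.
import numpy as np
from scipy.special import exp1  # exponential integral E1

# injection and fault-zone parameters (illustrative)
Q = 1e-5        # injection rate, m^3/s
eta = 1e-3      # fluid viscosity, Pa*s
k = 1e-14       # fault-zone permeability, m^2
w = 1.0         # fault-zone width, m
alpha = 1e-2    # hydraulic diffusivity alpha = k/(S*eta), m^2/s
dp_star = Q * eta / (4.0 * np.pi * k * w)   # injection intensity, eq. (1)

def overpressure(r, t):
    """Overpressure Delta p(r, t) of eq. (1)."""
    return dp_star * exp1(r**2 / (4.0 * alpha * t))

# slip-weakening friction coefficients, eqs. (4) and (5)
f_p, f_r, delta_c = 0.6, 0.45, 1e-3   # peak friction, residual friction, slip scale (m)

def friction_linear(delta):
    return np.where(np.abs(delta) <= delta_c,
                    f_p - (f_p - f_r) * np.abs(delta) / delta_c, f_r)

def friction_exponential(delta):
    return f_r + (f_p - f_r) * np.exp(-np.abs(delta) / delta_c)

# example: fault strength of eq. (3) at r = 1 m, one day after injection starts,
# for an (assumed) initial effective normal stress of 10 MPa and zero slip
tau_s = friction_linear(0.0) * (10e6 - overpressure(1.0, 86400.0))
```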
By neglecting any fault-zone poroelastic coupling upon the onset of the rupture, the quasi-static elastic equilibrium that relates fault slip \(\delta\) to the shear stress \(\tau\) acting along the fault, can be written as the following boundary integral equation along the \(x\) axis [23, 37], \[\tau(r,t)=\tau_{0}+\frac{\mu}{2\pi}\int_{0}^{R(t)}F\left(r,\xi\right)\frac{ \partial\delta(\xi,t)}{\partial\xi}\mathrm{d}\xi, \tag{6}\] where the kernel \(F\left(r,\xi\right)\) is given by \[F\left(r,\xi\right)=\frac{K\left(k(r/\xi)\right)}{\xi+r}+\frac{E\left(k(r/\xi) \right)}{\xi-r},\text{ with }\quad k(x)=\frac{2\sqrt{x}}{1+x}, \tag{7}\] and \(K\left(\cdot\right)\) and \(E\left(\cdot\right)\) are the complete elliptic integrals of the first and second kind, respectively. Note that in equation (6), the shear stress \(\tau\) can be written as a function in space of the radial coordinate only due to the aforementioned axisymmetry property when \(\nu=0\). Equations (1), (3), and (6), plus the corresponding constitutive friction law, either (4) or (5), provide a complete system of equations to solve for the spatio-temporal evolution of fault slip \(\delta(r,t)\) and the position of the rupture front \(R(t)\): our primary unknowns in the problem. ### Front-localized energy balance Under certain conditions and over a certain spatial range, the previous problem may be formulated equivalently through an energy balance of the Griffith type, that is, as a classical shear crack in the theory of Linear Elastic Fracture Mechanics (LEFM) [38]. The idea that frictional ruptures could be approximated by classical shear cracks is relatively old [30, 39]. Yet it has been just recently validated by modern experiments concern with the problem of frictional motion on both dry and lubricated interfaces [40]. Let us assume that there exists a localization length \(\ell_{*}\) near the rupture front such that the shear stress evolves from some peak value \(\tau_{p}\) at the rupture front \(r=R\) to some residual and approximately constant amount \(\tau_{r}\) at distances \(r<R-\ell_{*}\). The localization length \(\ell_{*}\) is sometimes called the process zone size or cohesive zone size for the similarity of the shear rupture problem to the case of tensile fractures [41]. Let us further assume that \(\ell_{*}\) is small in comparison to the rupture radius \(R\). Under such conditions, we can invoke the'small-scale yielding' approximation of LEFM to shear ruptures [39]. In particular, during rupture propagation, the influx of elastic energy into the edge region \(G\), also known as energy release rate, must equal the frictional fracture energy \(G_{c}\). The energy release rate for an axisymmetric, circular shear rupture is (Appendix B of [1]) \[G=\frac{2}{\pi\mu R(t)}\left[\int_{0}^{R(t)}\frac{\tau_{0}-\tau_{r}(r,t)}{ \sqrt{R(t)^{2}-r^{2}}}r\mathrm{d}r\right]^{2}. \tag{8}\] In the previous equation, it is assumed that the current shear stress acting on the 'crack' faces is the residual strength \(\tau_{r}\). This is consistent with the small-scale yielding approach where the details of the process zone are neglected for the calculation of \(G\)[39, 42]. 
On the other hand, the fracture energy \(G_{c}\) corresponds to the energy dissipated within the process zone per unit area of rupture growth, which in the case of a frictional shear crack is equal to the work done by the fault strength \(\tau_{s}\) against its residual part \(\tau_{r}\)[39], \[G_{c}=\int_{0}^{\delta_{*}}\left[\tau_{s}(\delta)-\tau_{r}(\delta_{*})\right] \mathrm{d}\delta, \tag{9}\] where \(\delta_{*}\) is the accrued slip throughout the process zone assuming, again, that there exists a proper localization length \(\ell_{*}\). Let us now recast the Griffith's energy balance, \(G=G_{c}\), in a way that will be more convenient for analytic derivations. For a mixed-mode shear rupture, the energy release rate can be expressed as \(G=K_{\mathrm{II}}^{2}(1-\nu)/2\mu+K_{\mathrm{III}}^{2}/2\mu\)[43], where \(K_{\mathrm{II}}\) and \(K_{\mathrm{III}}\) are the mode-II and mode-III stress intensity factors. Here, \(K_{\mathrm{II}}\) and \(K_{\mathrm{III}}\) are understood as the intensities of the singular fields of LEFM that emerge as intermediate asymptotics at distances \(\ell_{*}\ll r\ll R\). Moreover, since \(\nu=0\), we can conveniently define \[K^{2}=K_{II}^{2}+K_{III}^{2}=2\mu G,\quad\text{and}\quad K_{c}=\sqrt{2\mu G_{c}}, \tag{10}\] where \(K\) is an 'axisymmetric stress-intensity factor' and \(K_{c}\) an 'axisymmetric fracture toughness'. Note that if one considers the singular terms of the mode-II and mode-III shear stress components acting nearby and ahead of the rupture front, say \(\tau_{\mathrm{II}}\) and \(\tau_{\mathrm{III}}\) respectively (see figure 1c), then \(K\) represents the intensity of the square-root singularity associated with the absolute (maximum) shear stress \(\tau\) (acting along the \(x\)-direction of our Cartesian reference system) which relates to the in-plane shear and anti-plane shear stress components as \(\tau^{2}=\tau_{\mathrm{II}}^{2}+\tau_{\mathrm{III}}^{2}\). By combining equations (8) and (10), we can rewrite the Griffith's energy balance in the sought Irwin's form, \(K=K_{c}\), which yields \[\frac{2}{\sqrt{\pi R(t)}}\int_{0}^{R(t)}\frac{\tau_{0}-\tau_{r}(r,t)}{\sqrt{R (t)^{2}-r^{2}}}r\mathrm{d}r=K_{c}. \tag{11}\] Let us now consider some details about the calculation of \(G_{c}\) in equation (9). Upon the arrival of the rupture front at a certain location over the fault plane, the fault strength \(\tau_{s}\) weakens according to equation (3) due to both the decrease of the friction coefficient \(f\) and the increase of overpressure \(\Delta p\). However, the near-front processes associated with energy dissipation for rupture growth in equation (9) are for the most part related to the frictional process only. This can be readily seen after closely examining equation (1) for the spatio-temporal evolution of \(\Delta p\). Equation (1) introduces the well-known diffusion length scale \(L(t)=\sqrt{4\alpha t}\) in the problem, which is itself a proxy for the radius of the nominal area affected by the pressurization of pore fluid (see figure 1a). We note that if the overpressure front \(L\) is either in the order of or much greater than the rupture radius \(R\), then \(\Delta p\) varies smoothly over the process zone --whose length \(\ell_{*}\) is at this point by definition much smaller than \(R\). 
On the other hand, if \(L\) is much smaller than \(R\), then \(\Delta p\) varies abruptly over the slipping region but highly localized near the rupture center over a small zone that is far away from the process zone, such that the overpressure within the process zone is negligibly small. Hence, the overpressure has the simple role of approximately setting the current amount of effective normal stress within the process zone, which can be reasonably taken as uniform at a given time and evaluated at the rupture front, \(\Delta p(R,t)\). The fracture energy \(G_{c}\) can be thus approximated as \[G_{c}\approx\left[\sigma_{0}^{\prime}-\Delta p(R,t)\right]\int_{0}^{\delta_{*} }\left[f(\delta)-f_{r}\right]\mathrm{d}\delta. \tag{12}\] Note that in addition to the separation of scales between the diffusion of pore pressure and the frictional weakening process just discussed, the previous equation assumes that the friction coefficient itself must effectively evolve throughout \(\ell_{*}\) (or equivalently over \(\delta_{*}\)) towards an approximately constant residual value \(f_{r}(\delta_{*})=f_{r}\). This latter is guaranteed in the linear-weakening model (4) as \(\delta_{*}=\delta_{c}\), and it seems a good approximation for the exponential-weakening case (5) at some \(\delta_{*}>\delta_{c}\). Assuming this latter separation of scales too, the residual strength of the fault can be then written as \[\tau_{r}(r,t)=f_{r}\times\left[\sigma_{0}^{\prime}-\Delta p(r,t)\right]. \tag{13}\] Substituting the previous equation into (11) leads after some manipulations to an energy-based equation for the rupture front \(R(t)\), \[\underbrace{\frac{2}{\sqrt{\pi}}\frac{f_{r}\Delta p_{*}}{\sqrt{R}}\int_{0}^{R }\frac{E_{1}\left(r^{2}/L^{2}\right)}{\sqrt{R^{2}-r^{2}}}r\mathrm{d}r}_{K_{p}} +\overbrace{\frac{2}{\sqrt{\pi}}\left[\tau_{0}-f_{r}\sigma_{0}^{\prime}\right] \sqrt{R}}^{K_{r}}=K_{c}(R,t), \tag{14}\] where the explicit dependence of the rupture front \(R\) and overpressure front \(L\) on time \(t\) has been omitted for simplicity. Equation (14) is an insightful form of the near-front energy balance, equivalent to the one obtained by Garagash and Germanovich [32] in two dimensions. It shows that the instantaneous position of the rupture front is determined by the competition between three distinct 'crack' processes that are active during the propagation of the rupture. The first term on the left-hand side \(K_{p}\) is an axisymmetric stress intensity factor (SIF) associated with the equivalent shear load induced by the fluid injection alone, which continuously unclamps the fault. The second term on the left-hand side \(K_{\tau}\) is an axisymmetric SIF due to a uniform shear load that is equal to the difference between the initial shear stress \(\tau_{0}\) and the residual strength of the fault without any overpressure, \(f_{r}\sigma_{0}^{\prime}\), commonly known as stress drop. Note that this term may be either positive or negative, whereas the first term is always positive. Finally, the right-hand side of the near-front energy balance is associated with the fracture energy \(G_{c}\) that is dissipated within the process zone or, in the Irwin's form, the fracture toughness \(K_{c}\). Note that in our model, the fracture energy may generally depend on the rupture radius \(R\) and time \(t\) due to variations in overpressure (see equation (12)). 
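To illustrate how the three competing terms of the front-localized energy balance (14) can be evaluated in practice, the following sketch computes \(K_{p}\), \(K_{\tau}\), and \(K_{c}\) for a trial rupture radius and time, using the substitution \(r=R\sin\theta\) to remove the square-root singularity of the integrand at the rupture front; the rupture radius \(R(t)\) predicted by (14) is the value at which \(K_{p}+K_{\tau}=K_{c}\). All numerical values are illustrative assumptions.

```python
# A numerical sketch (not the solver used in this work) of the near-front energy
# balance, eq. (14): evaluate K_p, K_tau and K_c for a trial rupture radius R and
# time t. The rupture front predicted by eq. (14) satisfies K_p + K_tau = K_c.
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy.special import exp1
from scipy.integrate import quad

mu = 30e9                        # shear modulus, Pa
f_r = 0.45                       # residual friction coefficient
sigma0_eff = 10e6                # initial effective normal stress sigma_0', Pa
tau0 = 4.0e6                     # initial shear stress, Pa
dp_star = 8.0e4                  # injection intensity Delta p_*, Pa
alpha = 1e-2                     # fault hydraulic diffusivity, m^2/s
G_c = 10.0                       # constant fracture energy, J/m^2
K_c = np.sqrt(2.0 * mu * G_c)    # axisymmetric fracture toughness, eq. (10)

def K_terms(R, t):
    """Stress-intensity contributions K_p and K_tau of eq. (14)."""
    L = np.sqrt(4.0 * alpha * t)
    # substitute r = R*sin(theta): the 1/sqrt(R^2 - r^2) singularity disappears
    integrand = lambda th: exp1((R * np.sin(th) / L) ** 2) * np.sin(th)
    I, _ = quad(integrand, 0.0, np.pi / 2)
    K_p = 2.0 / np.sqrt(np.pi) * f_r * dp_star * np.sqrt(R) * I
    K_tau = 2.0 / np.sqrt(np.pi) * (tau0 - f_r * sigma0_eff) * np.sqrt(R)
    return K_p, K_tau

K_p, K_tau = K_terms(R=5.0, t=3600.0)
print(K_p, K_tau, K_c)   # the root in R of (K_p + K_tau - K_c) locates the front
```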
## 3 Two simplified models

### The constant friction model

The constant friction model is the simplest idealization of friction that produces fluid-driven stable frictional ruptures. It has been extensively studied in two-dimensional configurations for different injection scenarios [1, 28] and more recently in the fully three-dimensional case [1]. Here, we briefly summarize the main characteristics of the circular rupture model [1], whose results will later be put into a broader perspective in relation to the slip-weakening model. Saez _et al_. [1] showed that fault slip induced by injection at a constant volumetric rate is self-similar in a diffusive manner. The rupture radius \(R(t)\) thus evolves simply as \[R(t)=\lambda L(t), \tag{15}\] where \(L(t)=\sqrt{4\alpha t}\) is the diffusion length scale and nominal position of the overpressure front, and \(\lambda\) is the so-called amplification factor, for which an analytical solution was derived [1] from the condition that the rupture grows with no stress singularity at its front, \[\int_{0}^{1}\frac{E_{1}\left(\lambda^{2}\eta^{2}\right)}{\sqrt{1-\eta^{2}}}\eta\mathrm{d}\eta=\mathcal{T}, \tag{16}\] which leads, after evaluating the corresponding integral analytically, to \[2-\gamma+\frac{2}{3}\lambda^{2}{}_{2}F_{2}\left[\begin{array}{cc}1&1\\ 2&\nicefrac{{5}}{{2}}\end{array};-\lambda^{2}\right]-\ln(4\lambda^{2})=\mathcal{T}, \tag{17}\] where \(\gamma=0.577216...\) is the Euler-Mascheroni constant and \({}_{2}F_{2}\left[\cdot\right]\) is the generalized hypergeometric function. Note that \(\lambda\) is a function of a single dimensionless number, the so-called stress-injection parameter \[\mathcal{T}=\frac{f_{\mathrm{cons}}\sigma_{0}^{\prime}-\tau_{0}}{f_{\mathrm{cons}}\Delta p_{*}}, \tag{18}\] where \(f_{\mathrm{cons}}\) is the constant friction coefficient. The parameter \(\mathcal{T}\) is defined as the ratio between the amount of shear stress that is necessary to activate fault slip, \(f_{\mathrm{cons}}\sigma_{0}^{\prime}-\tau_{0}\), and \(f_{\mathrm{cons}}\Delta p_{*}\), which quantifies the intensity of the fluid injection. \(\mathcal{T}\) can in principle vary between \(0\) and \(+\infty\) [1]. However, for practical purposes \(\mathcal{T}\) is upper bounded. Indeed, as described previously, \(\Delta p_{*}=\Delta p_{c}/4\pi\) with \(\Delta p_{c}\) taken as approximately equal to the overpressure at the fluid source. Hence, one can estimate the minimum amount of overpressure that is required to activate fault slip as \(f_{\mathrm{cons}}\Delta p_{c}\approx f_{\mathrm{cons}}\sigma_{0}^{\prime}-\tau_{0}\). Substituting the previous relation into equation (18) leads to an approximate upper bound \(\mathcal{T}\lessapprox 10\), where we have approximated the factor \(4\pi\) by \(10\). The lower and upper bounds of \(\mathcal{T}\) are associated with two end-member regimes that were first introduced by Garagash and Germanovich [32]. When \(\mathcal{T}\) is close to zero, \(\tau_{0}\to f_{\mathrm{cons}}\sigma_{0}^{\prime}\) and thus the fault is critically stressed, or about to fail, before the injection starts. On the other hand, when \(\mathcal{T}\approx 10\), the fault is 'marginally pressurized' as the injection has provided just the minimum amount of overpressure to activate fault slip. The asymptotic behavior of \(\lambda\) for the limiting values of \(\mathcal{T}\) is particularly insightful.
For critically stressed faults (\(\mathcal{T}\ll 1\)), the amplification factor turns out to be large (\(\lambda\gg 1\)) and thus the rupture front largely outpaces the fluid pressure front (\(R(t)\gg L(t)\)), whereas for marginally pressurized faults (\(\mathcal{T}\sim 10\)), the amplification factor is small (\(\lambda\ll 1\)) and thus the rupture front lags significantly behind the overpressure front (\(R(t)\ll L(t)\)). In the critically stressed limit, since \(\lambda\gg 1\), the equivalent shear load due to fluid injection can be approximated as a point force, \[f_{\mathrm{cons}}\Delta p(r,t)\approx f_{\mathrm{cons}}\Delta P(t)\delta^{dirac}(r)/2\pi r,\text{ with }\Delta P(t)=\Delta p_{*}\int_{0}^{\infty}E_{1}\left(\frac{r^{2}}{4\alpha t}\right)2\pi r\mathrm{d}r=4\pi\alpha t\Delta p_{*}, \tag{19}\] whereas in the marginally pressurized limit, its asymptotic form comes simply from expanding the exponential integral function for small values of its argument as \(\lambda\ll 1\): \(f_{\mathrm{cons}}\Delta p(r,t)\approx-f_{\mathrm{cons}}\Delta p_{*}\left[\ln\left(r^{2}/4\alpha t\right)+\gamma\right]\). Substituting the previous asymptotic forms for the fluid-injection 'forces' into the rupture propagation condition (16) leads to the following asymptotes for the amplification factor [1]: \[\lambda\approx\begin{cases}1/\sqrt{2\mathcal{T}}&\text{for critically stressed faults, }\mathcal{T}\ll 1,\\ \frac{1}{2}\exp\left(\left[2-\gamma-\mathcal{T}\right]/2\right)&\text{for marginally pressurized faults, }\mathcal{T}\sim 10.\end{cases} \tag{20}\] Comparing the previous asymptotes with the exact solution (17) suggests that the asymptotic approximation (20) is accurate to within \(5\%\) in the critically stressed and marginally pressurized regimes, for \(\mathcal{T}\lessapprox 0.16\) and \(\mathcal{T}\gtrapprox 2\), respectively.

### The constant fracture energy model

Consider the simple case in which the fracture energy \(G_{c}\) is constant (and so is the fracture toughness \(K_{c}\)). Although this is an idealized scenario, it already accounts for one of the main ingredients of the slip-weakening friction model, that is, a finite fracture energy. At the same time, the constant fracture energy model allows us to quickly examine the different regimes of propagation that emerge from the competition between the three distinct terms that compose the front-localized energy balance (14). Furthermore, a constant fracture energy model will turn out to be an excellent approximation of some important rupture regimes in the slip-weakening model.

#### 3.2.1 Scaling and structure of the solution

Similarly to the case of Coulomb's friction, let us define an amplification factor in the form \[\lambda(t)=\frac{R(t)}{L(t)}. \tag{21}\] Unlike the self-similar solution of the previous section, here the introduction of a finite fracture energy breaks the self-similarity of the problem and makes the solution for \(\lambda\) time-dependent. Non-dimensionalization of the front-localized energy balance shows that the solution for the amplification factor \(\lambda\) can be written as \[\lambda\left(\mathcal{T}_{r},\mathcal{K}(t)\right), \tag{22}\] where \(\mathcal{T}_{r}\) and \(\mathcal{K}\) are two dimensionless parameters with the physical meanings that we explain below. Interestingly, the dependence of \(\lambda\) on time enters only through the second parameter \(\mathcal{K}\).
The first parameter \(\mathcal{T}_{r}\) is a dimensionless number in the form \[\mathcal{T}_{r}=\frac{f_{r}\sigma_{0}^{\prime}-\tau_{0}}{f_{r}\Delta p_{*}}, \tag{23}\] which turns out to be identical to the stress-injection parameter of the constant friction model (equation (18)) except that the constant friction coefficient \(f_{\rm cons}\) is now the residual friction coefficient \(f_{r}\). For this reason, we name it as the'residual' stress-injection parameter. \(\mathcal{T}_{r}\) quantifies the combined effect of the two equivalent shear loads that drive the propagation of the 'fractured' slipping patch, namely, the uniform stress \(f_{r}\sigma_{0}^{\prime}-\tau_{0}\) and the distributed load associated with fluid injection \(f_{r}\Delta p(r,t)\) whose intensity is \(f_{r}\Delta p_{*}\). The uniform stress is equal to the difference between the residual fault strength under ambient conditions \(f_{r}\sigma_{0}^{\prime}\) and the initial shear stress \(\tau_{0}\). Note that depending on the sign of \(f_{r}\sigma_{0}^{\prime}-\tau_{0}\), \(\mathcal{T}_{r}\) can be either positive or negative. Moreover, as noted by first Garagash and Germanovich [32] in their two-dimensional model, the sign of \(f_{r}\sigma_{0}^{\prime}-\tau_{0}\) is expected to strongly affect the overall stability of the fault response. According to Garagash and Germanovich, if the condition \(f_{r}\sigma_{0}^{\prime}<\tau_{0}\) is satisfied (\(\mathcal{T}_{r}\) negative), ruptures may ultimately run away dynamically and never stop within the limits of such a homogeneous and infinite fault model. Conversely, when \(f_{r}\sigma_{0}^{\prime}>\tau_{0}\) (\(\mathcal{T}_{r}\) positive), ruptures would propagate ultimately in a quasi-static, stable manner. Assuming for now that the ultimate stability condition of Garagash and Germanovich [32] holds in the circular rupture configuration, we consider only ultimately quasi-static cases in this section and, therefore, values for \(\mathcal{T}_{r}\) that are strictly positive. We will soon show that this assumption is indeed satisfied. Let us now find the limiting values of \(\mathcal{T}_{r}\). As lower bound, \(\mathcal{T}_{r}\) can be as small as possible (\(\mathcal{T}_{r}\to 0\)) when \(f_{r}\sigma_{0}^{\prime}\to\tau_{0}\). Since in this limit the fault is approaching the ultimate unstable condition of Garagash and Germanovich [32], we refer to it as the 'nearly unstable' limit. As upper bound, similarly to the case of constant friction, the maximum value of \(\mathcal{T}_{r}\) is set by the minimum possible magnitude of \(\Delta p_{*}\), which in turn relates to the minimum amount of overpressure that is required to activate fault slip. This is given by the approximate relation \(f_{p}\Delta p_{c}\approx f_{p}\sigma_{0}^{\prime}-\tau_{0}\), where \(f_{p}\) is the peak friction coefficient and \(\Delta p_{c}\) is the characteristic overpressure of the fluid source (equation (2)). By substituting this previous relation into (23), we find the sought upper bound to be \(\mathcal{T}_{r}\lessapprox 4\pi(\sigma_{0}^{\prime}-\tau_{0}/f_{r})/(\sigma_{0}^{ \prime}-\tau_{0}/f_{p})\). Since the ratio \(f_{r}/f_{p}\) is always between \(0\) and \(1\) and the maximum value of the upper bound is obtained when \(f_{r}/f_{p}\to 1\), we obtain such a maximum upper bound as \(\mathcal{T}_{r}\lessapprox 10\) (again, the factor \(4\pi\)). 
Given that in this limit the upper bound is still related to the minimum amount of overpressure that is required to activate a frictional rupture, we still denominate it as the marginally pressurized limit. Note that this upper bound has a meaning quite similar to that of \(\mathcal{T}\) in the constant friction model. Conversely, the lower bound of \(\mathcal{T}_{r}\) no longer has the interpretation of a critically stressed fault as for \(\mathcal{T}\). The second parameter of the constant fracture energy model, \(\mathcal{K}\), corresponds to a time-dependent dimensionless toughness. It can be defined using the stress scale of either the nearly unstable or the marginally pressurized limit, depending on the regime characterizing the fault response. These two choices are: \[\mathcal{K}_{nu}(t)=\frac{K_{c}}{\left(f_{r}\sigma_{0}^{\prime}-\tau_{0}\right)\sqrt{R(t)}},\;\text{and}\quad\mathcal{K}_{mp}(t)=\frac{K_{c}}{f_{r}\Delta p_{*}\sqrt{R(t)}}, \tag{24}\] for the nearly unstable (\(\mathcal{T}_{r}\ll 1\)) and marginally pressurized (\(\mathcal{T}_{r}\sim 10\)) regimes, respectively. Note that in (24), the explicit dependence of \(R\) on time has been emphasized as it provides the time dependence of \(\mathcal{K}\). The dimensionless toughness \(\mathcal{K}\) quantifies the relevance of the fracture energy in the near-front energy balance at a given time \(t\). Since for any physically admissible solution in which the injection is continuous the rupture radius must increase monotonically with time, the solution will always evolve from a large-toughness regime (\(\mathcal{K}\gg 1\)) to a small-toughness regime (\(\mathcal{K}\ll 1\)). Moreover, the effect of the fracture energy in the energy balance can be ultimately neglected as the dimensionless toughness effectively vanishes (\(\mathcal{K}\to 0\)) in the limit \(R\to\infty\) (or \(t\to\infty\)). We denominate this ultimate solution as the zero-toughness or zero-fracture-energy solution. Since the effect of the fracture energy becomes irrelevant in this limit, such an asymptotic solution is self-similar (\(\lambda\) in (22) becomes time-independent). Finally, the transition between the large- and small-toughness regimes is characterized by the following rupture length scales (obtained by setting \(\mathcal{K}_{nu}=1\) and \(\mathcal{K}_{mp}=1\), respectively), \[R_{nu}^{*}=\left(\frac{K_{c}}{f_{r}\sigma_{0}^{\prime}-\tau_{0}}\right)^{2}\;\text{and}\quad R_{mp}^{*}=\left(\frac{K_{c}}{f_{r}\Delta p_{*}}\right)^{2}. \tag{25}\] Note that the two dimensionless toughnesses in (24) are of course not independent, as they are two choices of one and the same parameter. They are indeed related through the residual stress-injection parameter \(\mathcal{T}_{r}\) as \[\mathcal{K}_{mp}=\mathcal{T}_{r}\mathcal{K}_{nu}. \tag{26}\]

#### 3.2.2 General and ultimate zero-fracture-energy solutions

Considering the scaling of the previous section plus the definition of the following non-dimensional integral: \[\Psi(\lambda)=\int_{0}^{1}\frac{E_{1}\left(\lambda^{2}\eta^{2}\right)}{\sqrt{1-\eta^{2}}}\eta\mathrm{d}\eta, \tag{27}\] the front-localized energy balance (14) can be written in dimensionless form as \[\frac{\Psi(\lambda)}{\mathcal{T}_{r}}-1=\frac{\sqrt{\pi}}{2}\mathcal{K}_{nu},\;\text{and}\quad\Psi(\lambda)-\mathcal{T}_{r}=\frac{\sqrt{\pi}}{2}\mathcal{K}_{mp}, \tag{28}\] in the nearly unstable (\(\mathcal{T}_{r}\ll 1\)) and marginally pressurized (\(\mathcal{T}_{r}\sim 10\)) regimes, respectively.
Figure 3: The constant fracture energy model. (Left) Nearly unstable regime (\(\lambda\gg 1\), \(\mathcal{T}_{r}\ll 1\)) and (right) marginally pressurized regime (\(\lambda\ll 1\), \(\mathcal{T}_{r}\sim 10\)). (a, b) Amplification factor \(\lambda\) as a function of dimensionless toughness \(\mathcal{K}(t)\) for different values of \(\mathcal{T}_{r}\). (c, d) Normalized rupture radius \(R\) as a function of the normalized square root of time or position of the overpressure front \(L(t)=\sqrt{4\alpha t}\). (e, f) Normalized rupture radius \(R\) as a function of dimensionless time \(t\). All curves tend to collapse when using the latter scaling. Legend: black solid lines are the general solution of the constant fracture energy model; red dashed lines correspond to the asymptotes for \(\lambda\) (29) in (a, b), and the asymptotes for the normalized rupture radius (32) and (33) in (c) and (d), respectively; blue dashed lines represent the constant residual friction, ultimate zero-fracture-energy solution.

Note that the non-dimensional integral \(\Psi\) is identical to the one in equation (16) and can thus be evaluated analytically to obtain the left-hand side of equation (17). Moreover, the limiting behaviors of this integral are: \(\Psi(\lambda)\approx 1/(2\lambda^{2})+O(\lambda^{-4})\) when \(\lambda\gg 1\), and \(\Psi(\lambda)\approx 2-\gamma-\ln\left(4\lambda^{2}\right)+O(\lambda^{2})\) when \(\lambda\ll 1\). Using the previous asymptotic expansions and assuming, similarly to the constant friction model, that \(\lambda\gg 1\) when \(\mathcal{T}_{r}\ll 1\), and \(\lambda\ll 1\) when \(\mathcal{T}_{r}\sim 10\), we derive from equations (28) the following closed-form asymptotic expressions for the amplification factor: \[\lambda\approx\begin{cases}1/\sqrt{\left(2+\sqrt{\pi}\mathcal{K}_{nu}\right)\mathcal{T}_{r}}&\text{for nearly unstable faults, }\mathcal{T}_{r}\ll 1,\\ \frac{1}{2}\exp\left[\left(2-\gamma-\mathcal{T}_{r}-\mathcal{K}_{mp}\sqrt{\pi}/2\right)/2\right]&\text{for marginally pressurized faults, }\mathcal{T}_{r}\sim 10,\end{cases} \tag{29}\] where the dependence of both \(\mathcal{K}\) and \(\lambda\) on time has been omitted for simplicity. The full solution of the model, given by equations (28), is shown in figures 3a and 3b together with the asymptotes (29). The direction of time in these plots goes from right to left as the dimensionless toughness \(\mathcal{K}(t)\) decreases with time. Moreover, since ultimately (\(t\to\infty\), \(R\to\infty\)) the dimensionless toughness is negligibly small (\(\mathcal{K}\to 0\)), equations (28)a and (28)b both become identical to the rupture propagation condition of the constant friction model, equation (16), as long as the constant friction coefficient \(f_{\text{cons}}\) is now understood as the residual one \(f_{r}\). The solution of the constant friction model (17) with \(f_{\text{cons}}=f_{r}\) is also displayed in figures 3a and 3b. It is now clear how the constant fracture energy solution asymptotically approaches the constant residual friction solution as \(\mathcal{K}\to 0\). This can also be seen in the asymptotes (29), which become identical to (20) when \(\mathcal{K}=0\).
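For a given \(\mathcal{T}_{r}\), equations (28) can be solved for \(\lambda\) at any instant once the current dimensionless toughness is specified. A minimal sketch of this calculation (ours; SciPy assumed, function names illustrative) is:

```python
# Sketch (ours): amplification factor lambda(T_r, K) for the constant fracture
# energy model, from the dimensionless energy balance (28) written in the
# marginally pressurized scaling, Psi(lambda) = T_r + (sqrt(pi)/2) K_mp;
# the nearly unstable form follows from K_mp = T_r * K_nu (equation (26)).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import exp1

def psi(lam):
    # non-dimensional integral Psi(lambda), equation (27), with eta = sin(theta)
    f = lambda th: exp1(lam**2 * np.sin(th)**2) * np.sin(th)
    return quad(f, 0.0, 0.5 * np.pi)[0]

def lam_constant_Gc(T_r, K_mp):
    rhs = T_r + 0.5 * np.sqrt(np.pi) * K_mp
    return brentq(lambda lam: psi(lam) - rhs, 1e-6, 50.0)

T_r = 0.1
for K_nu in (10.0, 1.0, 0.1, 0.0):          # toughness decreases with time
    lam = lam_constant_Gc(T_r, T_r * K_nu)  # K_mp = T_r * K_nu
    print(f"K_nu = {K_nu:5.1f}  ->  lambda = {lam:.4f}")
```

As \(\mathcal{K}\) decreases with time, \(\lambda\) grows towards its zero-toughness value, reproducing the trend of figures 3a and 3b.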
The previous result has an important implication: the constant friction model analyzed in [1] can be now interpreted in two distinct manners: as an scenario in which the friction coefficient does not significantly weaken (\(f_{\text{cons}}\approx f_{p}\)), or as the ultimate asymptotic solution of a model with constant fracture energy provided that \(f_{\text{cons}}=f_{r}\). In the former, the fracture energy \(G_{c}=0\) by definition. In the latter, the effect of the non-zero fracture energy in the rupture-front energy balance is to leading order negligible compared to the other two terms that drive the propagation of the rupture. In addition, because the integral \(\Psi(\lambda)\) is strictly positive and in the ultimate asymptotic regime \(\mathcal{K}\to 0\), the near-front energy balance (28) admits ultimate quasi-static solutions only if \(\mathcal{T}_{r}>0\). Negative values of \(\mathcal{T}_{r}\) which are equivalent to the condition \(f_{r}\sigma_{0}^{\prime}<\tau_{0}\) may be thus related to ultimately unstable solutions, not accounted for the quasi-static energy balance. This result supports our assumption that the ultimate stability condition of Garagash and Germanovich [32] holds in the circular rupture configuration. We now recast the solution of the constant fracture energy model in a perhaps more intuitive way, as the evolution of the rupture radius with time: \[R(t)=\lambda(\mathcal{T}_{r},\mathcal{K}(t))\cdot L(t). \tag{30}\] Recalling that \(L(t)=\sqrt{4\alpha t}\) and noting that \[\mathcal{K}_{nu}=(R/R_{nu}^{*})^{-1/2},\text{ and }\quad\mathcal{K}_{mp}=(R/R_{mp}^{*})^{-1/2}, \tag{31}\] we solve equations (28)a and (28)b for \(R/R^{*}\) as a function of the normalized squared root of time \(\sqrt{4\alpha t}/R^{*}\) and \(\mathcal{T}_{r}\), where \(R^{*}\) represents the characteristic rupture length scale of either the nearly unstable or marginally pressurized regime (equation (25)). This version of the solution is displayed in figures 3c and 3d. In these plots, the normalized square root of time can be also interpreted as the normalized position of the overpressure front \(L(t)\). Indeed, the thicker dashed line corresponds to the current position of the overpressure front. Slip fronts propagating above this line represent cases in which the rupture front outpaces the overpressure front. We observe that such a situation is a common feature of nearly unstable faults (\(\mathcal{T}_{r}\ll 1\)), being the analog regime of critically stressed faults in the constant friction model. Moreover, taking into account (30) and (31), the asymptotics (29) can be recast as the following implicit equations for the normalized rupture radius \(R/R^{*}\) as a function of time: \[\sqrt{4\alpha t}/R^{*}_{nu}=\frac{R}{R^{*}_{nu}}\left[\left(2+\frac{\sqrt{\pi}}{ \sqrt{R/R^{*}_{nu}}}\right)\mathcal{T}_{r}\right]^{1/2} \tag{32}\] for nearly unstable faults, and \[\sqrt{4\alpha t}/R^{*}_{mp}=\frac{2\left(R/R^{*}_{mp}\right)}{\exp\left[\left( 2-\gamma-\mathcal{T}_{r}-\left(\sqrt{\pi}/2\right)\left(R/R^{*}_{mp}\right)^{ -1/2}\right)/2\right]} \tag{33}\] for marginally pressurized faults. Note that the transition from the large-toughness (\(\mathcal{K}\gg 1\)) to small-toughness (\(\mathcal{K}\ll 1\)) regime in figures 3c and 3d occurs along the vertical axis when \(R/R^{*}_{nu}\sim 1\) and \(R/R^{*}_{mp}\sim 1\), respectively. 
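Since (32) and (33) give the normalized time explicitly in terms of the imposed normalized radius, the rupture histories in the two regimes can be traced directly; a short sketch (ours; NumPy assumed, with illustrative values of \(\mathcal{T}_{r}\)):

```python
# Sketch (ours): rupture-radius histories from the closed-form asymptotics
# (32) and (33). Both expressions give the normalized time sqrt(4*alpha*t)/R*
# explicitly for an imposed R/R*, so the history is traced by sweeping R.
import numpy as np

gamma = np.euler_gamma

def time_nearly_unstable(R_norm, T_r):
    # equation (32), nearly unstable faults (T_r << 1), R_norm = R / R*_nu
    return R_norm * np.sqrt((2.0 + np.sqrt(np.pi) / np.sqrt(R_norm)) * T_r)

def time_marginally_pressurized(R_norm, T_r):
    # equation (33), marginally pressurized faults (T_r ~ 10), R_norm = R / R*_mp
    arg = (2.0 - gamma - T_r - 0.5 * np.sqrt(np.pi) / np.sqrt(R_norm)) / 2.0
    return 2.0 * R_norm / np.exp(arg)

R_norm = np.logspace(-2, 3, 6)  # R / R*
print(time_nearly_unstable(R_norm, T_r=0.1))
print(time_marginally_pressurized(R_norm, T_r=8.0))
```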
The characteristic time at which this transition occurs can be approximated by using the constant residual friction solution or, what is the same, the ultimate zero-fracture-energy solution, \(\lambda_{r}=\lambda\left(\mathcal{T}_{r},\mathcal{K}=0\right)\), which yields \[t^{*}_{nu}\approx\frac{1}{\alpha\lambda_{r}^{2}}\left(\frac{K_{c}}{f_{r} \sigma_{0}^{\prime}-\tau_{0}}\right)^{4},\text{ and }\quad t^{*}_{mp}\approx\frac{1}{\alpha\lambda_{r}^{2}}\left(\frac{K_{c}}{f_{r} \Delta p_{*}}\right)^{4}. \tag{34}\] \(\lambda_{r}\) can be estimated from the asymptotes presented in equation (20) for both nearly unstable (\(\mathcal{T}_{r}\ll 1\)) and marginally pressurized (\(\mathcal{T}_{r}\sim 10\)) faults, provided that \(\mathcal{T}\) is replaced by \(\mathcal{T}_{r}\). Normalizing time by the previous characteristic times naturally tends to collapse all solutions for every value of \(\mathcal{T}_{r}\) as displayed in figures 3e and 3f, where the power law \(1/2\) reflects the diffusively self-similar property of the ultimate zero-fracture-energy solution. Finally, it seems worth mentioning that the solution for marginally pressurized faults is nonphysical at times in which the rupture is small, \(R/R^{*}_{mp}\lessapprox 0.05\) (see figures 3d and 3f). This could be related either to the occurrence of a dynamic instability or to a rupture size that is too small comparing to realistic process zone sizes. Indeed, the solution constructed here for the case of a constant fracture energy has the inherent limitations of LEFM theory. First, it does not account for the initial stage in which the process zone is under development (\(R<\ell_{*}\)) and, second, it is an approximate solution that relies on the small-scale yielding assumption (\(R\gg\ell_{*}\)). Both limitations are overcome in the next section by solving numerically the governing equations of the coupled initial boundary value problem for slip-weakening friction. ## 4 Scaling analysis, map of rupture regimes and ultimate stability condition ### Scaling analysis The scaling of the slip-weakening problem comes directly from the two-dimensional linear-weakening model of Garagash and Germanovich [32], which is also valid for the exponential-weakening version of the friction law. We summarize the scaling as follows: \[\bar{t}=\frac{t}{R_{w}^{2}/\alpha},\quad\bar{r}=\frac{r}{R},\quad\bar{\tau}= \frac{\tau}{f_{p}\sigma_{0}^{\prime}},\quad\bar{\delta}=\frac{\delta}{\delta_ {w}},\quad\Delta\bar{p}=\frac{\Delta p}{\Delta p_{*}}, \tag{35}\] where the bar symbol represents dimensionless quantities, \(\delta_{w}\) is the slip weakening scale, and \(R_{w}\) is an elasto-frictional rupture length scale, given respectively by (see also Uenishi and Rice [33]) \[\delta_{w}=\frac{f_{p}}{f_{p}-f_{r}}\delta_{c},\text{ and }\quad R_{w}=\frac{ \mu}{\left(f_{p}-f_{r}\right)\sigma_{0}^{\prime}}\delta_{c}. \tag{36}\] In the previous equation, \((f_{p}-f_{r})\sigma^{\prime}_{0}/\delta_{c}\) is the so-called slip-weakening rate [33]. Nondimensionalization of the governing equations of the model using the previous scaling shows that the normalized fault slip \(\bar{\delta}\) depends in addition to dimensionless space \(\bar{r}\) and time \(\bar{t}\), on the following three dimensionless parameters: \[\mathcal{S}=\frac{\tau_{0}}{f_{p}\sigma^{\prime}_{0}},\quad\mathcal{P}=\frac{ \Delta p_{*}}{\sigma^{\prime}_{0}},\quad\mathcal{F}=\frac{f_{r}}{f_{p}}. \tag{37}\] The first parameter \(\mathcal{S}\) is the pre-stress ratio or sometimes called, stress criticality. 
It is the quotient between the initial shear stress \(\tau_{0}\) and the initial static fault strength \(f_{p}\sigma^{\prime}_{0}\). The pre-stress ratio \(\mathcal{S}\) quantifies how close to frictional failure the fault is under ambient (pre-injection) conditions. The range of values for \(\mathcal{S}\) is naturally \[0\leq\mathcal{S}<1, \tag{38}\] being zero when the fault bears no initial shear stress whatsoever, and tending to one when the fault is critically stressed, or about to fail under ambient conditions, \(\tau_{0}\to f_{p}\sigma^{\prime}_{0}\). The second parameter \(\mathcal{P}\) is the overpressure ratio, which quantifies the intensity of the injection \(\Delta p_{*}\) relative to the initial effective normal stress \(\sigma^{\prime}_{0}\). The range of possible values for \(\mathcal{P}\) is determined as follows. Its upper bound comes from the maximum possible amount of overpressure, which in our model corresponds to a scenario in which the fault interface is about to open: \(\Delta p_{c}\approx\sigma^{\prime}_{0}\), where \(\Delta p_{c}=4\pi\Delta p_{*}\) (equation (2)). On the other hand, the lower bound of \(\mathcal{P}\) comes from the minimum amount of overpressure that is required to activate fault slip: \(f_{p}\Delta p_{c}\approx f_{p}\sigma^{\prime}_{0}-\tau_{0}\). By replacing the previous approximate relations into \(\mathcal{P}=\Delta p_{*}/\sigma^{\prime}_{0}\), we obtain the sought range of values for \(\mathcal{P}\) in an approximate sense as \[10^{-1}(1-\mathcal{S})\lessapprox\mathcal{P}\lessapprox 10^{-1}, \tag{39}\] where the factor \(\approx 10^{-1}\) comes from approximating \(1/4\pi\). Finally, the third parameter, the residual-to-peak friction ratio \(\mathcal{F}\), is such that \[0\leq\mathcal{F}\leq 1. \tag{40}\] \(\mathcal{F}\) is zero when there is a total loss of frictional resistance upon the passage of the rupture front, a situation that is unlikely to occur for stable, slow slip, as opposed to fast slip in which thermally-activated dynamic weakening mechanisms could make the fault reach quite low values of \(\mathcal{F}\) [44]. On the other hand, \(\mathcal{F}\) is equal to one when the friction coefficient does not weaken at all, which corresponds indeed to the particular case of Coulomb's friction, \(f_{\text{cons}}=f_{p}\). It will also prove useful to define the residual stress-injection parameter \(\mathcal{T}_{r}\) of the constant fracture energy model, equation (23), as a combination of the three dimensionless parameters of the slip-weakening model, \[\mathcal{T}_{r}=\frac{1-\mathcal{S}/\mathcal{F}}{\mathcal{P}}. \tag{41}\] In addition, one can also define a stress-injection parameter based on the peak value of friction \(f_{p}\) instead of the residual one. Such a parameter reads as \[\mathcal{T}_{p}=\frac{f_{p}\sigma^{\prime}_{0}-\tau_{0}}{f_{p}\Delta p_{*}}=\frac{1-\mathcal{S}}{\mathcal{P}}. \tag{42}\] We denominate \(\mathcal{T}_{p}\) as the 'peak' stress-injection parameter. The latter is indeed the maximum possible value of the residual stress-injection parameter \(\mathcal{T}_{r}\) (reached when \(\mathcal{F}=1\)), so that \[\mathcal{T}_{r}\leq\mathcal{T}_{p}. \tag{43}\] As a final comment on the scaling: when comparing the fault response for the two versions of the friction law, the results presented in the next sections assume that both laws are characterized by the same slip-weakening scale \(\delta_{c}\) (see figure 2).
Alternatively, one could compare the effect of both friction laws under a different condition such as, for instance, an equal fracture energy \(G_{c}\), or any other criterion. In the case of equal \(G_{c}\), the characteristic slip weakening scales would be related as \(\delta_{c,\mathrm{lin}}=2\delta_{c,\mathrm{exp}}\). Our results can be then easily re-scaled using the previous relation as the dimensionless solution remains unchanged, and the same could be done with any other criterion. ### Map of rupture regimes and ultimate stability condition Given the similarity of the scaling between our three-dimensional axisymmetric rupture model and the two-dimensional plane-strain model of Garagash and Germanovich [32], we find, not surprisingly, that the map of regimes of fault behavior in our model is essentially the same as in the two-dimensional problem [32]. Figure 4 summarizes the map of regimes in the parameter space composed by \(\mathcal{S}\), \(\mathcal{P}\) and \(\mathcal{F}\). Moreover, as anticipated when examining the constant fracture energy model, the ultimate stability condition of Garagash and Germanovich [32] holds in the circular rupture case. Therefore, mixed-mode circular ruptures will propagate ultimately (\(t\to\infty\), \(R\to\infty\)) in a quasi-static, stable manner, if any of the following three equivalent conditions is satisfied: \[f_{r}\sigma_{0}^{\prime}>\tau_{0}\iff\mathcal{S}<\mathcal{F}\iff\mathcal{T}_ {r}>0. \tag{44}\] Notably, the residual stress-injection parameter \(\mathcal{T}_{r}\) must be strictly positive. Else, ruptures will propagate ultimately in an unstable, dynamic manner. In the latter case, dynamic ruptures will run away and never stop within the limits of such a homogeneous and infinite fault model. Since in this work, we are mainly interested in the propagation of quasi-static slip, the regimes of major interest are the ones corresponding to ultimately stable ruptures and the quasi-static nucleation phase preceding dynamic ruptures. We examine both scenarios in what follows. ## 5 Ultimately stable ruptures Figure 5 displays the propagation of the slip front in the case of ultimately stable ruptures: \(f_{r}\sigma_{0}^{\prime}>\tau_{0}\). Without loss of generality, we fix the residual-to-peak friction ratio \(\mathcal{F}=0.7\) and examine for both the linear- and exponential-weakening friction laws the parameter space for \(\mathcal{P}\) and \(\mathcal{S}<0.7\). The case of an overpressure ratio \(\mathcal{P}=0.05\) is shown in figures 5a and 5b. For all values of \(\mathcal{S}\) in these figures, we obtain ruptures that propagate in a purely quasi-static manner without any dynamic excursion, that is, the regime R1 in figure 4. Figures 5c and 5d show, on the other hand, the case of a lower overpressure ratio \(\mathcal{P}=0.035\). For this value of \(\mathcal{P}\), we observe the occurrence of the regime R2 for the linear-weakening case and both regimes R1 and R2 for the exponential-weakening case. The regime R2 corresponds to a situation in which a dynamic rupture nucleates, arrest, and is then followed by purely quasi-static slip. ### Early-time Coulomb's friction stage and localization of the process zone It is clear from figure 5 that at early times and for both regimes (R1 and R2), the propagation of the slip front is well approximated by the Coulomb's friction model, \(f_{\mathrm{cons}}=f_{p}\). 
The rupture radius thus evolves in this stage as \[R(t)\approx\lambda_{p}L(t), \tag{45}\] where \(\lambda_{p}\) is the amplification factor given by equation (17) considering the peak stress-injection parameter \(\mathcal{T}_{p}\), and \(L(t)=\sqrt{4\alpha t}\) is the position of the overpressure front as usual. This early-time Coulomb's friction similarity solution is meant to be valid while the friction coefficient does not decrease significantly throughout the slipping region, as shown for a few cases in the examples of figures 8d and 8e, when looking at the spatial distribution of the friction coefficient for the earliest times (\(t_{1}\) and \(t_{2}\)). Note that in figure 5, we also include the evolution of the overpressure front with time (light blue dashed line). In the spatial range covered by this figure, the slip front of ultimately stable ruptures always lags the overpressure front (\(R(t)\ll L(t)\)) as the corresponding values of \(\mathcal{T}_{p}\) are well into the marginally pressurized regime (\(\mathcal{T}_{p}\sim 10\)). Now, beyond this early stage, the propagation of the slip front starts departing from the Coulomb's friction similarity solution while the slipping region experiences further weakening of friction. At this point, a dynamic instability could nucleate, arrest, and be followed by aseismic slip (examples in figures 5c and 5d), within a relatively narrow region of the parameter space (R2 in figure 4). More generally, ruptures will propagate in a purely quasi-static manner (examples in figures 5a and 5b, regime R1 in figure 4). Either way, when this transition happens, the rupture radius becomes greater than the rupture length scale \(R_{w}\), which is of the same order as the process zone size \(\ell_{*}\) for the linear weakening law (see, for an example, figure 8f).

Figure 4: Map of rupture regimes for the linear slip-weakening model (adapted from figure 11 in [32]). (R1) Unconditionally stable fault slip. (R2) Quasi-static slip up to the nucleation of a dynamic rupture, followed by arrest and then purely quasi-static slip. (R3) Quasi-static slip until the nucleation of a run-away dynamic rupture. (R4) Quasi-static slip up to the nucleation of a dynamic rupture, followed by arrest and then re-nucleation of a run-away dynamic rupture. \({}^{*}\)The condition of no slip is established in the approximate sense discussed in Appendix C.

Figure 5: Ultimately stable faults, \(\mathcal{S}<\mathcal{F}=0.7\), at distances \(R/R_{w}<20\). Normalized rupture radius versus square root of dimensionless time for (left) linear and (right) exponential weakening versions of the friction law. (a, b) \(\mathcal{P}=0.05\) and (c, d) \(\mathcal{P}=0.035\). Red dashed lines correspond to the analytical constant friction solution considering the peak friction coefficient. Gray dashed lines correspond to the solution of the front-localized energy balance, \(G=G_{c}\). Green dashed lines correspond to an improved version of the front-localized energy balance as explained in the main text, shown here for only a few cases. Light blue lines represent the position of the overpressure front \(L(t)=\sqrt{4\alpha t}\). Note that for all cases, \(R(t)\ll L(t)\).
In the exponential weakening case, figure 5 shows that the transition is smoother and occurs later than for the linear weakening case, whereas the localization of the process zone also occurs at a later time and for rupture lengths that seem to be many times or even an order of magnitude greater than the elasto-frictional length scale \(R_{w}\) (see, also, an example in figure 8f). Furthermore, starting from this point, the process zone has fully developed and thus a proper fracture energy \(G_{c}\) can be calculated. In this way, we can now examine the evolution of the rupture front through the near-front energy balance (14), to an accuracy set by the small-scale yielding approximation.

### Front-localized energy balance, large- and small-toughness regimes

Using equation (12) in combination with (1), (4) and (5), we obtain an expression for the fracture energy of the slip-weakening model as \[G_{c}\approx\kappa\left(f_{p}-f_{r}\right)\delta_{c}\left[\sigma_{0}^{\prime}-\Delta p_{*}E_{1}\left(\lambda^{2}\right)\right] \tag{46}\] with \(\lambda(t)=R(t)/L(t)\) as usual, and \(\kappa\) a coefficient equal to \(1/2\) for the linear weakening case and \(1\) for the exponential law, reflecting that the fracture energy of the latter is twice the fracture energy of the former at equal \(\delta_{c}\). Introducing the scaling of the slip weakening model (35) into equation (14), one can nondimensionalize the front-localized energy balance in the following form, \[\mathcal{FP}\Psi\left(\lambda\right)+\left(\mathcal{S}-\mathcal{F}\right)=\frac{\sqrt{\pi}}{2}\sqrt{2\kappa}\left(1-\mathcal{F}\right)\frac{\sqrt{1-\mathcal{P}E_{1}\left(\lambda^{2}\right)}}{\sqrt{R/R_{w}}}, \tag{47}\] where \(\Psi(\lambda)\) is the non-dimensional integral (27) whose evaluation is known analytically from (17). As expected from the scaling of the problem, equation (47) shows that the normalized rupture radius \(R/R_{w}\) depends, in addition to the dimensionless time \(L(t)/R_{w}\) (which is implicit in \(\lambda\)), on the three dimensionless parameters of the model: \(\mathcal{S}\), \(\mathcal{P}\) and \(\mathcal{F}\). Considering that for any physically admissible quasi-static solution the rupture radius must increase monotonically with time, we solve equation (47) by imposing \(R/R_{w}\) and then calculating \(L(t)/R_{w}\) for a given combination of dimensionless parameters. The solution of the front-localized energy balance (47) is shown in figure 5 together with the full numerical solutions. We observe that in the linear weakening case, the near-front energy balance yields a good approximation of the full numerical solution already for \(R\gtrapprox 2R_{w}\). On the other hand, in the exponential decay case, a good approximation is reached only after \(R\gtrapprox 10R_{w}\). The difference is due to the fact that the localization of the process zone is less sharp and takes longer for the exponential decay in comparison to the linear weakening case, as exemplified in figure 8f. Moreover, the approximate nature of the energy-balance solution is of course due to the finite size of the process zone as opposed to the infinitesimal size required by LEFM. Indeed, in the two-dimensional problem, a correction due to the finiteness of the process zone for the linear weakening case was considered by Garagash and Germanovich [32] based on the work on cohesive tensile crack propagation due to uniform far-field load by Dempsey _et al_. [45], providing a solution with improved accuracy.
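Setting aside such process-zone corrections, the energy-balance solution (47) itself is straightforward to reproduce numerically. A minimal sketch (ours; SciPy assumed, with \(\Psi\) evaluated by quadrature and purely illustrative parameter values) imposes \(R/R_{w}\), solves (47) for \(\lambda\), and recovers the dimensionless time, as described above:

```python
# Sketch (ours): trace the rupture front from the dimensionless front-localized
# energy balance (47). As described above, R/R_w is imposed, (47) is solved for
# lambda, and the dimensionless time follows as sqrt(4*alpha*t)/R_w = (R/R_w)/lambda.
# Parameter values below are illustrative only.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import exp1

def psi(lam):
    f = lambda th: exp1(lam**2 * np.sin(th)**2) * np.sin(th)
    return quad(f, 0.0, 0.5 * np.pi)[0]

def time_from_radius(R_norm, S, P, F, kappa=0.5):
    """kappa = 1/2 for linear weakening, 1 for exponential weakening."""
    def residual(lam):
        lhs = F * P * psi(lam) + (S - F)
        rhs = (0.5 * np.sqrt(np.pi) * np.sqrt(2.0 * kappa) * (1.0 - F)
               * np.sqrt(max(1.0 - P * exp1(lam**2), 0.0)) / np.sqrt(R_norm))
        return lhs - rhs
    lam = brentq(residual, 1e-2, 50.0)
    return R_norm / lam

S, P, F = 0.65, 0.05, 0.7                                # ultimately stable case (S < F)
print("T_p =", (1 - S) / P, "  T_r =", (1 - S / F) / P)  # equations (42) and (41)
for R_norm in (2.0, 10.0, 100.0, 1000.0):
    t_norm = time_from_radius(R_norm, S, P, F)
    print(f"R/R_w = {R_norm:7.1f}  ->  sqrt(4 alpha t)/R_w = {t_norm:9.2f}")
```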
Figure 5 shows the results of this correction for a few cases in our model. The solution gets slightly better despite our frictional shear crack is circular and as such, the pre-factors in the scaling relations of the two-dimensional problem should differ from ours. The front-localized energy balance allows us notably to examine the evolution of the rupture radius beyond the spatial range covered by figure 5, without the need of calculating the full numerical solutions. Note that the energy-balance solution is not only a good approximation over this spatial range but will also become an exact asymptotic solution in the LEFM limit \(\ell_{*}/R\to 0\). Solutions for \(10\leq R/R_{w}\leq 10^{3}\) are displayed in figure 6. In particular, figure 6a shows that the higher the pre-stress ratio is, the faster the rupture propagates. Similarly, figure 6b displays that the more intense injection is (higher overpressure ratio), the faster the rupture propagates too. Both effects are intuitively expected and consistent with the definition and effect of the stress-injection parameter in the constant friction model (section 3.1). Moreover, initially (\(R/R_{w}\leq 10\)), we observe in figure 5 that the rupture front lags the overpressure front (\(\lambda<1\)) as faults are governed at early times by Coulomb's friction with a peak stress-injection parameter \(\mathcal{T}_{p}\) well into the marginally pressurized regime (\(\mathcal{T}_{p}\sim 10\)). Nonetheless, as the rupture accelerates due to the further weakening of friction, figure 6 shows that at later times, the slip front may end up outpacing the overpressure front (\(\lambda>1\)). We examine the conditions leading to such behavior in what follows. #### 5.2.1 Nearly unstable faults, \(\lambda(t)\gg 1\) When \(\lambda(t)\gg 1\), the term \(\Delta p_{*}E_{1}\left(\lambda^{2}\right)\) in (46) can be neglected as the overpressure within the process zone is vanishingly small. Hence, the fracture energy \(G_{c}\) becomes approximately constant and simply equal to \[G_{c}\approx\kappa\left(f_{p}-f_{r}\right)\delta_{c}\sigma_{0}^{\prime}. \tag{48}\] At these length scales and in this regime, the problem becomes now identical to the constant fracture energy model that we extensively analyzed in section 3.2. All the results and insights obtained for nearly unstable faults in that model are therefore inherited here. In fact, the rupture front will always outpace the overpressure front provided that the fault responds in the nearly unstable regime as quantified by the residual stress-injection parameter \(\mathcal{T}_{r}\ll 1\), which is indeed intentionally the case of all the examples shown in figure 6. Introducing (48) into (24)a via (10), and then (24)a into (28)a, leads to the dimensionless form of the front-localized energy balance in the nearly unstable regime, \[\frac{1}{\mathcal{T}_{r}}\Psi\left(\lambda\right)-1=\frac{\sqrt{\pi}}{2}\sqrt{ 2\kappa}\frac{1}{\sqrt{R/R_{nu}^{*}}}, \tag{49}\] with \[\frac{R_{nu}^{*}}{R_{w}}=\left(\frac{f_{p}\sigma_{0}^{\prime}-f_{r}\sigma_{0}^ {\prime}}{f_{r}\sigma_{0}^{\prime}-\tau_{0}}\right)^{2}=\left(\frac{1-\mathcal{ F}}{\mathcal{F}-\mathcal{S}}\right)^{2}. \tag{50}\] Equations (49) and (50) can be also obtained by neglecting the term \(\mathcal{P}E_{1}\left(\lambda^{2}\right)\) in (47) and then dividing the latter by \(\mathcal{F}-\mathcal{S}\). 
What is interesting to highlight, is that when the overpressure across the process zone becomes approximately constant (and so the fracture energy \(G_{c}\)), the mathematical structure of the solution for the rupture front in the slip weakening model changes and it is no longer dependent on three but only one single dimensionless number, the residual stress-injection parameter \(\mathcal{T}_{r}\). Figure 6: Ultimately stable ruptures, \(\mathcal{S}<\mathcal{F}=0.7\), at distances \(10\leq R/R_{w}\leq 10^{3}\) for the linear-weakening friction law. Normalized rupture radius \(R/R_{w}\) versus square root of dimensionless time \(\sqrt{4\alpha t}/R_{w}\) for: (a) \(\mathcal{P}=0.075\) and different values of \(\mathcal{S}\) (values of \(\mathcal{T}_{r}\) indicated between brackets); (b) \(\mathcal{S}=0.697\) and different values of \(\mathcal{P}\); (c) different combinations of \(\mathcal{P}\) and \(\mathcal{S}\) for the same \(\mathcal{T}_{r}=0.1\); and (d) same as (c) but re-scaling both axes using \(R_{nu}^{\star}\). Note that all curves collapse under the new scaling in the latter plot. The new scaling is exemplified in figures 6c and 6d. The former figure shows the evolution of the rupture front as given by equation (47) for different combinations of \(\mathcal{S}\) and \(\mathcal{P}\) that are all characterized by the same value of the residual stress-injection parameter, \(\mathcal{T}_{r}=0.1\). After re-scaling the solution using the characteristic rupture length scale \(R_{nu}^{*}\) (50), figure 6d displays how all curves in figure 6c collapse under the new, constant-fracture-energy scaling. Moreover, by using the asymptotic behavior of \(\Psi\left(\lambda\right)\) for large \(\lambda\), equation (32) provides an implicit equation for the normalized rupture radius \(R/R_{nu}^{*}\) as a function of the normalized square root of time \(\sqrt{4\alpha t}/R_{nu}^{*}\) and the residual stress-injection parameter \(\mathcal{T}_{r}\), provided that \(R_{nu}^{*}\) is replaced by (50). #### 5.2.2 Marginally pressurized faults, \(\lambda(t)\ll 1\) A similar reasoning can be considered now for the case in which \(\lambda(t)\ll 1\). Here, the overpressure within the process zone can be taken as approximately constant and equal to the overpressure at the fluid source, \(\Delta p_{c}\) (2), as the rupture radius is much smaller than the pressurized zone, \(R(t)\ll L(t)\). Therefore, we can approximate the fracture energy (46) as \[G_{c}\approx\kappa\left(f_{p}-f_{r}\right)\delta_{c}\left(\sigma_{0}^{\prime}- \Delta p_{c}\right), \tag{51}\] which is constant as well. We recall that \(\Delta p_{c}\) is a rough approximation of the fluid-source overpressure as discussed in Appendix C. Again, all the results and insights from the constant fracture energy model are inherited now in this regime. Particularly, the so-called marginally pressurized regime as quantified by the residual stress-injection parameter (\(\mathcal{T}_{r}\sim 10\)) is the one related to \(\lambda(t)\ll 1\). We recall that this marginally pressurized regime is not defined exactly as the one emerging during the early-time Coulomb's friction stage. The latter is defined by the condition \(f_{p}\Delta p_{c}\approx f_{p}\sigma_{0}^{\prime}-\tau_{0}\), where as the former relates to the residual friction coefficient instead, \(f_{r}\Delta p_{c}\approx f_{r}\sigma_{0}^{\prime}-\tau_{0}\). We use the same name for these two regimes, yet there is this subtle difference between them. 
By introducing (51) into (24)b via (11), and then (24)b into (28)b, we obtain the dimensionless form of the front-localized energy balance in the marginally-pressurized regime, \[\Psi\left(\lambda\right)-\mathcal{T}_{r}=\frac{\sqrt{\pi}}{2}\sqrt{2\kappa} \frac{1}{\sqrt{R/R_{mp}^{*}}}, \tag{52}\] with \[\frac{R_{mp}^{*}}{R_{w}}=\frac{\left(f_{p}-f_{r}\right)^{2}\sigma_{0}^{\prime }\left(\sigma_{0}^{\prime}-\Delta p_{c}\right)}{\left(f_{r}\Delta p_{*} \right)^{2}}\approx\left(\frac{1-\mathcal{F}}{\mathcal{F}\mathcal{P}}\right)^ {2}\left(1-10\mathcal{P}\right). \tag{53}\] In the latter equation, we have approximated the factor \(4\pi\) by \(10\) as usual in the marginally pressurized limit. Alternatively, equations (52) and (53) can be derived by approximating the term \(\mathcal{P}E_{1}\left(\lambda^{2}\right)\approx 4\pi\mathcal{P}\) in (47) and then dividing the latter by \(\mathcal{F}\mathcal{P}\). Moreover, by using the asymptotic behavior of \(\Psi\left(\lambda\right)\) for small \(\lambda\), equation (33) provides an implicit equation for the normalized rupture radius \(R/R_{mp}^{*}\) as a function of the normalized square root of time \(\sqrt{4\alpha t}/R_{mp}^{*}\) and the residual stress-injection parameter \(\mathcal{T}_{r}\), provided that \(R_{mp}^{*}\) is replaced by (53). The meaning of the rupture length scales \(R_{nu}^{*}\) and \(R_{mp}^{*}\) are the same as in the constant fracture energy model. Essentially, when \(R\ll R^{*}\), the fracture energy plays a dominant role in the near-front energy balance. This is, the large-toughness regime. On the other hand, when \(R\gg R^{*}\), the fracture energy becomes increasingly less relevant in the rupture-front energy budget, corresponding to the small-toughness regime. Hence, likewise in the constant fracture energy model, unconditionally stable ruptures in the slip-weakening model will always transition from a large-toughness to a small-toughness regime. This transition is shown in figures 3c and 3d for both nearly unstable and marginally pressurized faults, respectively, with \(R_{nu}^{*}\) and \(R_{mp}^{*}\) as in equations (50) and (53). ### Ultimate zero-fracture-energy similarity solution By taking the ultimate limit \(R/R^{*}\rightarrow\infty\) in equations (49) and (52), it is evident that the fracture-energy term in the energy balance (the right-hand side) vanishes for both the nearly unstable (\(\mathcal{T}_{r}\ll 1\)) and marginally pressurized regimes (\(\mathcal{T}_{r}\sim 10\)). Hence, in this limit, equations (49) and (52) become simply \[\Psi(\lambda)=\mathcal{T}_{r}, \tag{54}\] which is exactly the rupture propagation condition of the constant friction model (equation (16)) but with a constant friction coefficient \(f_{\rm cons}\) equal to the residual one \(f_{r}\). The self-similar constant friction model is therefore the ultimate asymptotic solution of the slip-weakening model, provided that \(f_{\rm cons}=f_{r}\). The transition from the small-toughness regime to the constant residual friction solution, also denominated as ultimate zero-fracture-energy solution, is shown in figures 3c and 3d for both nearly unstable and marginally pressurized faults, respectively. 
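In dimensionless terms, the length scales (50) and (53) that separate the large- and small-toughness regimes follow directly from \((\mathcal{S},\mathcal{P},\mathcal{F})\); a short sketch (ours, with illustrative values):

```python
# Sketch (ours): characteristic rupture length scales (50) and (53), in units of
# the elasto-frictional scale R_w, marking the transition from the large- to the
# small-toughness regime. Parameter values are illustrative only.
def R_star_nearly_unstable(S, F):
    # equation (50), relevant when lambda >> 1 (T_r << 1)
    return ((1.0 - F) / (F - S)) ** 2

def R_star_marginally_pressurized(S, P, F):
    # equation (53), relevant when lambda << 1 (T_r ~ 10); 4*pi approximated by 10
    return ((1.0 - F) / (F * P)) ** 2 * (1.0 - 10.0 * P)

print(R_star_nearly_unstable(S=0.69, F=0.7))                # ~900 R_w
print(R_star_marginally_pressurized(S=0.1, P=0.08, F=0.7))  # ~6 R_w
```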
Note that one could alternatively define a dimensionless toughness \(\mathcal{K}\) for both regimes (\(\lambda(t)\gg 1\) and \(\lambda(t)\ll 1\)) as in section 3.2 (equation (24)) to show the same type of transition as in figures 3a-b, since the solution for the amplification factor can be written as \(\lambda\left(\mathcal{T}_{r},\mathcal{K}(t)\right)\). Such a dimensionless toughness will always decrease with time and ultimately tend to zero, \(\mathcal{K}\to 0\). Finally, using the constant residual friction solution \(\lambda_{r}\), one can estimate, as done in section 3.2 (equation (34)), the transition timescales between the large-toughness and small-toughness regimes, which results in \[t_{nu}^{*}\approx\frac{(R_{nu}^{*})^{2}}{\alpha\lambda_{r}^{2}},\;{\rm and}\quad t_{mp}^{*}\approx\frac{\left(R_{mp}^{*}\right)^{2}}{\alpha\lambda_{r}^{2}}. \tag{55}\]

## 6 The nucleation phase preceding a dynamic rupture

Figure 7 displays the case of ultimately unstable ruptures: \(f_{r}\sigma_{0}^{\prime}<\tau_{0}\). Again, without loss of generality, we fix the residual-to-peak friction ratio as \(\mathcal{F}=0.7\), and examine the parameter space for \(\mathcal{P}\) and now \(\mathcal{S}>0.7\), for both versions of the slip-weakening friction law. The case of an overpressure ratio \(\mathcal{P}=0.1\), which corresponds to an injection that is about to open the fault, is shown in figures 7a and 7b. For all values of \(\mathcal{S}\) in these figures, we observe the nucleation of a dynamic rupture that runs away and never stops within the limits of our model, that is, the regime R3 in figure 4. On the other hand, the case of a lower overpressure ratio \(\mathcal{P}=0.05\) is shown in figures 7c and 7d. For the linear weakening model, we observe the occurrence of both regimes R3 and R4 of figure 4. The latter corresponds to cases in which a dynamic rupture nucleates, propagates and arrests, with a new rupture instability nucleating afterwards on the same fault, which is ultimately unstable (run-away). Moreover, in figure 7d for the exponential weakening version of the friction law, we do not observe the regime R4, at least for the numerical solutions we include in this figure.

Figure 7: Ultimately unstable faults, \(\mathcal{S}>\mathcal{F}=0.7\). Normalized rupture radius versus square root of dimensionless time for (left) linear weakening and (right) exponential decay versions of the slip weakening friction law. (a, b) \(\mathcal{P}=0.1\) and (c, d) \(\mathcal{P}=0.05\). Black and red circles indicate the nucleation and arrest of a dynamic rupture, respectively. Red dashed lines correspond to the analytical Coulomb's friction model considering the peak friction coefficient. In (c) and (d), gray dashed lines represent the near-front energy balance solution. In (c), the green dashed line corresponds to an improved energy-balance solution as explained in the main text.

Figure 8: (a, b, c) Normalized spatial distribution of slip at the times indicated in the insets and commented in the main text. Insets: normalized rupture radius as a function of the normalized square root of time. Blue arrows in (a) and (b) represent the theoretical prediction for the nucleation radius in the critically stressed and marginally pressurized limits. (d, e, f) Spatial profile of the normalized friction coefficient at the same times indicated previously. (Left) Critically stressed limit, \(\mathcal{S}=0.995\), \(\mathcal{P}=0.1\), \(\mathcal{F}=0.7\), and associated \(\mathcal{T}_{p}=0.05\). (Center) Marginally pressurized limit, \(\mathcal{S}=0.8\), \(\mathcal{P}=0.02\), \(\mathcal{F}=0.7\), and associated \(\mathcal{T}_{p}=10\). (c, f) Ultimate stability limit, \(\mathcal{S}=0.71\), \(\mathcal{P}=0.075\), and \(\mathcal{F}=0.7\).

### Early-time Coulomb's friction stage and acceleration towards rupture instability

Similarly to the case of ultimately stable ruptures, unstable ruptures are here also well described by the Coulomb's friction model at early times (see figure 7). Therefore, the rupture radius evolves approximately as equation (45), with \(\lambda_{p}\) given by equation (17) considering the peak stress-injection parameter \(\mathcal{T}_{p}\). Moreover, figure 7 also shows that during the nucleation phase, the slip front may largely outpace the overpressure front (\(\lambda_{p}\gg 1\)) when faults are critically stressed as quantified by the peak stress-injection parameter (\(\mathcal{T}_{p}\ll 1\)), or significantly lag the overpressure front (\(\lambda_{p}\ll 1\)) when faults are marginally pressurized (\(\mathcal{T}_{p}\sim 10\)). Figures 8d and 8e display, on the other hand, the spatial distribution of the friction coefficient for critically stressed and marginally pressurized cases, respectively. We can clearly observe that at the Coulomb's friction stage, \(f/f_{p}\approx 1\) throughout most of the slipping region. Note that in this stage, we could also approximate the spatio-temporal evolution of fault slip in the critically stressed and marginally pressurized regimes, using the analytical asymptotic expressions derived by Saez _et al._ [1] (equations 25 and 26 in [1]), provided that \(f_{\text{cons}}=f_{p}\). The same can be done for the ultimate zero-fracture-energy solution of ultimately stable ruptures, with \(f_{\text{cons}}=f_{r}\). Now, beyond this early-time stage, the propagation of the slip front starts departing from the Coulomb's friction similarity solution due to the further weakening of friction. The latter can be seen in figures 8d and 8e for intermediate times (\(t_{2}\)) and times close to nucleation (\(t_{c}\)). The slip front indeed accelerates towards the nucleation of a dynamic rupture. Figure 7 shows that the rupture radius at the instability time increases with decreasing pre-stress ratio \(\mathcal{S}\) and increasing overpressure ratio \(\mathcal{P}\), for both versions of the friction law. This is consistent with the extensive analysis on earthquake nucleation provided by Garagash and Germanovich [32] for the two-dimensional, linear weakening model. Moreover, figure 7 also displays that one of the main effects of the exponential weakening version of the friction law is to smooth the transition of the rupture towards the dynamic instability with regard to the linear law. Furthermore, the exponential law retards the instability time, and generally increases the critical radius for the rupture to become unstable. Such an effect of the exponential law becomes stronger when the pre-stress ratio \(\mathcal{S}\) decreases towards its minimum value in the ultimately unstable case, that is, the ultimate stability limit \(\mathcal{S}=\mathcal{F}\). Given the importance of the nucleation radius in characterizing the maximum size that quasi-static ruptures can attain in the ultimately unstable case, we calculate theoretical bounds for it in the next sections, following the procedure of Uenishi and Rice [33] and Garagash and Germanovich [32].
This corresponds to an extension of their results from the two-dimensional (mode II or III) fault model to the three-dimensional circular (modes II+III) configuration.

### Theoretical bounds for the nucleation radius

#### 6.2.1 Critically stressed and marginally pressurized limits

As shown in Appendices A.1 and A.2, at the time of instability \(t_{c}\), the time derivative of the quasi-static elastic equilibrium throughout the slipping region takes the form of the following eigenvalue problem for both the critically stressed (\(\mathcal{T}_{p}\ll 1\)) and marginally pressurized (\(\mathcal{T}_{p}\sim 10\)) regimes: \[\frac{1}{2\pi}\int_{0}^{1}F\left(\bar{r},\bar{\xi}\right)\frac{\partial\bar{v}(\bar{\xi})}{\partial\bar{\xi}}\mathrm{d}\bar{\xi}=\beta\bar{v}(\bar{r}), \tag{56}\] where \(\bar{v}=v/v_{\text{rms}}\) is the normalized slip rate distribution (with \(v_{\text{rms}}\) given by equation (A2)), and \(\beta\) is the eigenvalue \[\beta=\frac{R}{R_{w}}\cdot\begin{cases}1&\text{for critically stressed faults, }\mathcal{T}_{p}\ll 1,\\ \nicefrac{{\tau_{0}}}{{f_{p}\sigma_{0}^{\prime}}}&\text{for marginally pressurized faults, }\mathcal{T}_{p}\sim 10.\end{cases} \tag{57}\] In the previous equations, the dependence of \(\bar{v}\) and \(R\) on the instability time \(t_{c}\) has been omitted for simplicity. Moreover, equations (56) and (57) are valid not only for the linear weakening friction law (4), but also for the exponential one (5). This is because both the critically stressed and marginally pressurized limits are characterized by small slip at the nucleation time, \(\delta(r=0,t_{c})\ll\delta_{c}\) (see, for example, figure 8). It takes just a simple Taylor expansion to show that in this range of slip, the exponential weakening version of the friction law is, to first order in \(\delta/\delta_{c}\), asymptotically equal to the linear weakening case. This also means that the residual branch of the linear weakening law does not need to be considered in such a stability analysis. The solution of (56) for the eigenvalues and eigenfunctions is calculated in Appendix A.3. This is done by discretizing the linear integral operator on the left-hand side of (56) via a collocation boundary element method using piece-wise ring 'dislocations' of constant slip rate. The most important result is the smallest eigenvalue \(\beta_{1}\), which was shown by Uenishi and Rice [33] for the two-dimensional problem to give the critical nucleation radius. We find (see Table A1) \[\beta_{1}\approx 1.003, \tag{58}\] which is, interestingly, for all practical purposes approximately equal to one. Taking hereafter \(\beta_{1}\approx 1\), the nucleation radius is recovered from equation (57) as \[R_{c}^{cs}\approx R_{w} \tag{59}\] for critically stressed faults (\(\mathcal{T}_{p}\ll 1\), vertical line \(\mathcal{S}=1\) on the right side of figure 4), and \[R_{c}^{mp}\approx\frac{f_{p}\sigma_{0}^{\prime}}{\tau_{0}}R_{w}=\frac{R_{w}}{\mathcal{S}}, \tag{60}\] for marginally pressurized faults (\(\mathcal{T}_{p}\sim 10\), inclined line \(\mathcal{P}\approx 10^{-1}(1-\mathcal{S})\) in figure 4). The theoretical estimates (59) and (60) are compared to numerical solutions that are representative of each limiting regime in figures 8a and 8b, respectively. In these figures, the blue arrows indicate the theoretical radii at the instability time \(t_{c}\).
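In dimensional terms, these bounds are easily evaluated once \(R_{w}\) is known; the following short sketch (ours, with purely illustrative parameter values) computes \(R_{w}\) from equation (36) and the two estimates (59)-(60):

```python
# Sketch (ours): dimensional nucleation-radius estimates (59)-(60) from the
# elasto-frictional length scale R_w, equation (36). All parameter values are
# purely illustrative.
mu       = 30e9        # shear modulus [Pa]
sigma0p  = 100e6       # initial effective normal stress [Pa]
tau0     = 55e6        # initial shear stress [Pa]
f_p, f_r = 0.6, 0.42   # peak and residual friction coefficients
delta_c  = 1e-3        # characteristic slip-weakening distance [m]

R_w = mu * delta_c / ((f_p - f_r) * sigma0p)  # equation (36)
S   = tau0 / (f_p * sigma0p)                  # pre-stress ratio, equation (37)

print(f"R_w       = {R_w:.2f} m")
print(f"R_c (cs)  = {R_w:.2f} m")      # equation (59), critically stressed limit
print(f"R_c (mp)  = {R_w / S:.2f} m")  # equation (60), marginally pressurized limit
```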
We highlight that the critically stressed nucleation radius (59) is a proper asymptote that is always reached in the limit \(\tau_{0}\to f_{p}\sigma_{0}^{\prime}\), up to the numerical approximation made for the eigenvalue (58). On the other hand, the marginally pressurized nucleation radius (60) can be defined only in an approximate sense due to the reasons explained in Appendix C. Although this approximation seems to be quite accurate for the linear weakening law (see figure 8b), the exponential decay version does not seem to follow this trend. This is likely due to the additional assumption of small slip that the exponential weakening law requires in order to be well approximated by a linear relation. In the example of figure 8b, slip does not seem to be small enough. Because of the approximate nature of the marginally pressurized limit, it is challenging to find the model parameters that will result in sufficiently small slip at the nucleation time for the linear approximation of the exponential weakening law to be valid. Furthermore, equations (59) and (60) suggest that the minimum possible nucleation radius is the one associated with critically stressed faults (59), whereas the greatest possible nucleation radius can be as large as infinity for marginally pressurized faults (60), in the limit of zero pre-stress \(\tau_{0}\to 0\). Yet such a limit corresponds indeed to an ultimately stable rupture, specifically, a case in which the fault is about to open (top left corner of figure 4), so that the dynamic rupture will eventually arrest and then propagate ultimately in a quasi-static manner. Finally, as shown in Appendix A, the nucleation radius in the critically stressed limit (59) is independent of the specific form of the spatio-temporal evolution of pore pressure, that is, equation (59) is also valid for other type of fluid injections than the constant volumetric rate considered in this study. On the other hand, the nucleation radius in the marginally pressurized limit (60) is, under certain conditions (see details in Appendix A.2), also independent of the injection scenario. Moreover, the critically stressed nucleation radius (59) is itself an extension of the nucleation length of Uenishi and Rice [33] (found also previously by Campillo and Ionescu [46] under different assumptions) from their two-dimensional fault model to the three-dimensional axisymmetric configuration. Since for the shear mixed-mode (II+III) rupture, the circular front shape is strictly valid only when \(\nu=0\), our results could be used in combination with perturbation techniques such as the work of Gao [47] to characterize the corresponding non-circular slipping region at the nucleation time of a shear rupture for \(\nu\neq 0\). Indeed, since the work of Gao [47] is based on linear elastic fracture mechanics (valid in the small-scale yielding limit) and the nucleation radius (59) (and (60)) is smaller than the process zone size, one should rather consider a variational approach as the one proposed recently by Lebihain _et al._[48] for cohesive cracks based on the perturbation of crack face weight functions. The approach of Gao [47] would be still useful to characterize non-circularity in the nearly stable limit of the next section. This would provide an alternative to the work of Uenishi [49] who considered an energy approach and fixed the rupture shape to an ellipse. An elliptical rupture shape may be a very good approximation for a shear rupture [1], yet not necessarily the actual equilibrium shape. 
Finally, for a tensile (mode I) rupture, the nucleation radius (59) is valid for any value of \(\nu\), as long as the load driving the rupture growth is peaked around the crack center and axisymmetric in magnitude. Further details about this generalization of our results can be found in Appendix A.

#### 6.2.2 Nearly stable limit

Figures 7c and 7d show that the nucleation radius of ultimately unstable ruptures becomes very large, \(R(t_{c})\gg R_{w}\), when approaching the ultimate stability condition \(f_{r}\sigma_{0}^{\prime}\rightarrow\tau_{0}\) (vertical line \(\mathcal{S}=\mathcal{F}\) in figure 4). Specifically, figure 7c displays a case of large re-nucleation radius (regime R4) in the linear weakening model for \(\mathcal{S}=0.71\) (\(\mathcal{F}=0.7\)), whereas figure 7d shows an example of large nucleation radius for the exponential weakening model and the same parameters as before. In the two-dimensional model, Garagash and Germanovich [32] not only found this same behavior but also provided an asymptote for the nucleation length in this limit, which we also derive here for the circular rupture model. First, let us note that the condition \(R(t_{c})\gg R_{w}\) also implies that \(R(t_{c})\gg\ell_{*}\), since the process zone size \(\ell_{*}\) for the linear weakening model is roughly of the same order as the elasto-frictional length scale \(R_{w}\), and about an order of magnitude larger in the case of the exponential weakening law. Hence, we can invoke the front-localized energy balance, equation (14). Indeed, figures 7c and 7d display such an energy-balance solution for some ruptures that are on their way to becoming unstable. On the other hand, near the ultimate stability limit, \(R(t_{c})\) is also much larger than the radius of the overpressure front \(L(t_{c})\), such that \(\lambda(t_{c})\gg 1\). Therefore, we can approximate the equivalent shear load associated with the fluid source as a point force via equation (19). The corresponding axisymmetric stress intensity factor for such a point force comes from evaluating the integral on the left-hand side of equation (14) considering (19) and \(f_{\text{cons}}=f_{r}\), which gives \[K_{p}=\frac{f_{r}\Delta P(t)}{(\pi R)^{3/2}} \tag{61}\] with \(\Delta P(t)=4\pi\alpha t\Delta p_{*}\). Substituting the previous equation into (14) leads to the following form of the front-localized energy balance, \[\underbrace{\frac{f_{r}\Delta P(t)}{(\pi R)^{3/2}}}_{K_{p}}+\underbrace{\frac{2}{\sqrt{\pi}}\left[\tau_{0}-f_{r}\sigma_{0}^{\prime}\right]\sqrt{R}}_{K_{\tau}}=K_{c}, \tag{62}\] where the fracture toughness \(K_{c}=\sqrt{2\mu G_{c}}\) is, after combining equations (36) and (48), equal to \[K_{c}=(f_{p}-f_{r})\sigma_{0}^{\prime}\sqrt{2\kappa R_{w}}. \tag{63}\] We recall that the coefficient \(\kappa\) is equal to \(1/2\) for the linear weakening friction law, and \(1\) for the exponential weakening case. Moreover, the fracture toughness \(K_{c}\) is constant due to the negligible overpressure within the process zone when \(\lambda\gg 1\). By differentiating equation (62) with respect to time and then dividing by \(\dot{R}\) on both sides, one can show upon taking the limit at the nucleation time, \(R\to R_{c}\) and \(\dot{R}\rightarrow\infty\), that \(K_{p}=K_{\tau}/3\).
Substituting this previous relation into (62) allows us to eliminate \(\Delta P(t_{c})\) and so the instability time \(t_{c}\) in the equation, leading to the sought critical nucleation radius: \[\frac{R_{c}^{\infty}}{R_{w}}\simeq\frac{9\pi\kappa}{32}\left(\frac{f_{p}\sigma_{0}^{\prime}-f_{r}\sigma_{0}^{\prime}}{\tau_{0}-f_{r}\sigma_{0}^{\prime}}\right)^{2}=\frac{9\pi\kappa}{32}\left(\frac{1-\mathcal{F}}{\mathcal{S}-\mathcal{F}}\right)^{2},\mbox{ when }f_{r}\sigma_{0}^{\prime}\rightarrow\tau_{0}. \tag{64}\] Note that the previous equation is a proper asymptote due to the small-scale yielding approximation. In addition, the relation \(K_{p}=K_{\tau}/3\) plus the previous expression for \(R_{c}^{\infty}\) can together provide an expression for the nucleation time \(t_{c}\). Indeed, expressions for the instability time \(t_{c}\) in the critically stressed and marginally pressurized limits might also be obtainable analytically, via asymptotic analysis as conducted by Garagash and Germanovich [32], yet we do not attempt to pursue this route in this paper.

## 7 Discussion

### Frustrated dynamic ruptures and unconditionally stable slip: the two propagation modes of injection-induced aseismic slip

In our model, injection-induced aseismic slip can be the result of either a frustrated dynamic rupture that did not reach the required size to become unstable, or the propagation of slip that is unconditionally stable. Whether injection-induced aseismic ruptures occur in one regime or the other depends primarily on the ultimate stability condition of Garagash and Germanovich [32], which we demonstrated here to be applicable to the circular rupture configuration as well (equation (44)).

#### 7.1.1 Unconditionally stable ruptures

When the initial shear stress \(\tau_{0}\) is lower than the in-situ residual fault strength, \(\tau_{0}<f_{r}\sigma_{0}^{\prime}\), faults tend to produce mostly unconditionally stable ruptures (regime R1 in figure 4), except for a relatively narrow range of parameters where the nucleation of a dynamic rupture occurs, followed by arrest and purely quasi-static slip (regime R2 in figure 4). We found that unconditionally stable ruptures always evolve between two similarity solutions (see figure 9). At early times (stage I), they behave as being governed by Coulomb's friction, that is, a constant friction coefficient equal to the peak value \(f_{p}\). During this initial stage, fault slip is self-similar in a diffusive manner and is governed by one single dimensionless number: the peak stress-injection parameter \(\mathcal{T}_{p}\). Afterwards, the response of the fault gets more complex in stages II and III, yet ultimately slip recovers the same type of similarity at very large times (stage IV). In this ultimate regime, the rupture behaves as if it were governed by a constant friction coefficient equal to the residual one \(f_{r}\), and depends also on one single dimensionless number: the residual stress-injection parameter \(\mathcal{T}_{r}\). An interesting characteristic in both limiting regimes is that the rupture propagates as having zero fracture energy \(G_{c}\). While at early times \(G_{c}=0\) in an absolute sense as the process zone has not developed yet, at large times the contribution of the finite fracture energy to the rupture-front energy balance is to leading order negligible compared to the other terms that drive the propagation of the rupture.
Furthermore, the two similarity solutions are equivalent to the analytical solution for a constant friction coefficient derived in [1], as long as the so-called stress-injection parameter \(\mathcal{T}\) in [1] is replaced by \(\mathcal{T}_{p}\) at early times and \(\mathcal{T}_{r}\) at large times, which are then associated with constant amplification factors \(\lambda_{p}\) and \(\lambda_{r}\), respectively, as shown in figure 9. This is a key finding of our work as it puts the results of the former constant-friction model of Saez _et al._ [1] into a more complete picture of the problem of injection-induced aseismic slip.

Figure 9: Schematic solution for unconditionally stable ruptures undergoing four distinct stages in time. Stage (I), Coulomb's friction similarity solution, \(f_{\rm cons}=f_{p}\). Stage (II), acceleration due to frictional weakening and localization of the process zone. Stage (III), rupture is governed by an energy balance of the Griffith's type, \(G=G_{c}\), transitioning from a large-toughness to a small-toughness regime. Stage (IV), ultimate constant residual friction similarity solution, \(f_{\rm cons}=f_{r}\), also denominated ultimate zero-fracture-energy solution. Note that in the case of aseismic slip as a frustrated dynamic instability, stage I is always present, while stages II and III might be experienced to different extents depending on how large the nucleation radius is compared to the process zone size.

In between the two similarity solutions, fault slip undergoes two subsequent stages. First, after departing from the Coulomb's friction solution, the rupture accelerates due to frictional weakening (stage II). The details of the friction law matter here as the rupture radius is of the same order as the process zone size. The exponential weakening law tends to slow down the propagation of slip and smooth the acceleration phase with respect to the linear weakening case when considering the same \(\delta_{c}\) in both laws. Fault slip depends, in addition to dimensionless space and time, on three non-dimensional parameters: the pre-stress ratio \(\mathcal{S}\), the overpressure ratio \(\mathcal{P}\), and the residual-to-peak friction ratio \(\mathcal{F}\). The higher the initial shear stress on the fault is (higher \(\mathcal{S}\)) or the more intense the injection is (higher \(\mathcal{P}\)), the faster the rupture propagates. Note that this dependence of the rupture speed on \(\mathcal{S}\) and \(\mathcal{P}\) is embedded in both the peak stress-injection parameter \(\mathcal{T}_{p}\) and the residual stress-injection parameter \(\mathcal{T}_{r}\). Therefore, it is a general feature present in all stages of injection-induced aseismic slip. In a subsequent stage, once the process zone has adequately localized, the evolution of the slip front is well approximated by the rupture-front energy balance (stage III). The details of how the friction coefficient weakens from its peak value towards its residual value no longer matter in relation to the position of the slip front or the rupture speed. The only two important quantities here associated with the friction law are the amount of fracture energy that is dissipated near the rupture front, and the residual friction coefficient \(f_{r}\).
Moreover, the fracture energy is approximately constant (albeit of different magnitude) in the two end-member cases of nearly unstable (\(\lambda(t)\gg 1\)) and marginally pressurized faults (\(\lambda(t)\ll 1\)), with the amplification factor \(\lambda\) depending on only two dimensionless numbers: a time-dependent dimensionless toughness \(\mathcal{K}(t)\) and the residual stress-injection parameter \(\mathcal{T}_{r}\). A constant fracture energy model such as the one introduced in section 3.2 is thus sufficient to capture the dynamics of the slip front for the two end-members. The dimensionless toughness \(\mathcal{K}(t)\) quantifies the relevance of the dissipation of fracture energy in the rupture-front energy balance, which decreases monotonically with time. The rupture speed thus increases with time as the diminishing effect of the fracture energy offers less 'opposition' for the rupture to advance. Eventually, \(\mathcal{K}(t)\to 0\) when \(t\to\infty\) and the rupture reaches asymptotically the large-time similarity solution (stage IV), where the only information about the friction law that matters is \(f_{r}\). Finally, the residual stress-injection parameter \(\mathcal{T}_{r}\) plays a crucial role in stages III and IV. When faults are near the ultimate stability limit (\(\mathcal{T}_{r}\to 0\)), thus responding in the so-called nearly unstable regime (\(\mathcal{T}_{r}\ll 1\)), the slip front always outpaces the overpressure front, \(\lambda(t)\gg 1\), even though at early times (stage I) the rupture front would likely lag the overpressure front, \(\lambda(t)\ll 1\). Conversely, when faults operate in the so-called marginally pressurized regime (\(\mathcal{T}_{r}\sim 10\)), the slip front will always move much slower than the overpressure front, \(\lambda(t)\ll 1\), over the entire lifetime of the rupture: the slip front will never outpace the overpressure front.

#### 7.1.2 Aseismic slip as a frustrated dynamic instability

When the initial shear stress \(\tau_{0}\) is greater than the in-situ residual fault strength, \(\tau_{0}>f_{r}\sigma_{0}^{\prime}\), faults will always host a dynamic event, sometimes even more than one (regime R4 in figure 4) if injection is sustained for sufficient time. The maximum size that aseismic ruptures can reach before becoming unstable is as small as the elasto-frictional length scale \(R_{w}\) for faults that are critically stressed, and as large as infinity for faults that are either marginally pressurized and about to open, or near the ultimate stability limit (so-called nearly stable faults). The spatial range over which a fault can exhibit aseismic slip as a frustrated dynamic instability is therefore extremely broad. Moreover, similarly to the case of unconditionally stable ruptures, the quasi-static nucleation phase is governed at early times by the Coulomb's friction similarity solution with \(f_{\rm cons}=f_{p}\) (stage I in figure 9). Aseismic ruptures can therefore move much faster than the diffusion of pore pressure right after the start of fluid injection when faults are critically stressed according to the peak stress-injection parameter (\(\mathcal{T}_{p}\ll 1\), \(\lambda_{p}\gg 1\)), or propagate much slower than that when faults operate in the so-called marginally pressurized regime (\(\mathcal{T}_{p}\sim 10\), \(\lambda_{p}\ll 1\)).
Afterwards, depending on how large the nucleation radius is with regard to the process zone size, aseismic ruptures may be able to either partially or fully explore stages II and III on their way to reaching their critical unstable size. In stage II, the rupture behavior is the same as described in the previous section for unconditionally stable ruptures. Moreover, if the critical nucleation radius is sufficiently large compared to the process zone size, the rupture could transit stage III, where the propagation of the slip front is well approximated by an energy balance of the Griffith's type, similarly to stage III of unconditionally stable ruptures. Finally, it is worth noting that critically stressed faults and marginally pressurized faults nucleate dynamic ruptures with very little decrease of the friction coefficient, far from reaching the residual friction value over the entire slipping region at the instability time. Conversely, nearly stable ruptures undergo dynamic nucleation in a 'crack-like' manner, that is, with a small process zone where the fracture energy is dissipated, while the remaining, much larger part of the 'fracture' (slipping area) is at the residual friction level. In between these two limiting behaviors of dynamic rupture nucleation, a continuum of instabilities is spanned in our model, a result already found by Garagash and Germanovich [32] and also present in heterogeneous, mechanically-loaded slip-weakening frictional interfaces [50].

### Laboratory experiments

Laboratory experiments of injection-induced fault slip on rock samples where a finite rupture grows along a pre-existing interface [51, 52] have recently confirmed some insights predicted by theory. For instance, the meter-scale experiments of Cebry _et al._ [52] showed that the closer to frictional failure the fault is before the injection starts, the faster aseismic slip propagates. A somewhat similar observation was made previously by Passelegue _et al._ [51] through a set of centimeter-scale experiments and using a rupture-tip energy balance argument. This general feature of injection-induced aseismic slip can be particularly seen in the closed-form expression for the rupture speed of critically stressed faults responding in the Coulomb's friction stage: \(V_{r}=\left[\left(f_{p}\Delta p_{*}/\left(f_{p}\sigma_{0}^{\prime}-\tau_{0}\right)\right)\left(\alpha/2t\right)\right]^{1/2}\) [1]. This formula displays, in addition to the previous stress-state dependence, some other general and quite intuitive features of injection-induced aseismic slip: the more intense the injection, or the higher the fault hydraulic diffusivity, the faster the rupture propagates. The latter dependencies, as well as many other aspects of injection-induced fault slip, remain to be seen in the laboratory. We discuss a few of them in the context of published experiments in what follows. Notably, our results provide a means for characterizing the conditions under which distinct regimes and stages of injection-induced aseismic slip are expected to emerge under well-controlled conditions in the laboratory, where the validation of the relevant physics incorporated in our model can potentially be realized. For example, the type of experiments conducted by Passelegue _et al._ [51] could provide important insights into the aseismic slip phase preceding dynamic ruptures.
Indeed, the nucleation radius of critically stressed faults, equation (59), which is itself the circular analog of Uenishi and Rice's nucleation length [33], has been estimated under somewhat similar laboratory conditions (confining pressures \(\sim 100\) MPa) at \(R_{\rm c}^{cs}\sim 1\) m [33]. Variations of this nucleation radius could be reasonably expected due to uncertainties mostly in the critical slip-weakening scale \(\delta_{\rm c}\). Moreover, a nucleation length of about 1 m has been recently measured by Cebry _et al._ [52] during meter-scale experiments of fluid injection with fault normal stresses of about 4 MPa. Since the experiments of Passelegue _et al._ were carried out in a similar rock, if one corrects the nucleation length of Cebry _et al._ [52] by the effective normal stresses that are representative of Passelegue _et al._'s experiments, we obtain roughly \(R_{\rm c}^{cs}\sim 10\) cm. Since the critically stressed nucleation radius (59) is the minimum possible nucleation size of injection-induced dynamic ruptures in our model, and given that Passelegue _et al._'s cylindrical rock samples have a diameter of 4 cm, we expect that their aseismic ruptures likely operated in the Coulomb's friction phase (stage I), perhaps with some excursion into the acceleration phase towards a dynamic instability (stage II), yet never reached the onset of a macroscopic dynamic rupture in the sample. Moreover, as the initial shear stress was set to be 90 percent of the 'in-situ' static fault strength, it is likely that the initial shear stress was greater than the residual strength of the fault, thus further supporting the ultimately unstable condition assumed for these experiments.

Another interesting set of fluid injection experiments are the ones reported by Cebry _et al._ [52]. Their 3-meter-long, quasi-one-dimensional fault allowed them to observe not only the nucleation of injection-induced dynamic ruptures, but also some details of the quasi-static phase preceding such instabilities. Due to the elasticity and fluid flow boundary conditions in their experimental setup, it is not possible to make direct quantitative comparisons with our three-dimensional model, nor with two-dimensional plane-strain models [32]. An unbounded rupture may certainly have propagated before fault slip reached the shortest side of the sample, yet most of the measurements were conducted starting from this moment. In spite of these differences, it is interesting to note at least two experimental observations that are qualitatively consistent with our model: i) the aseismic slip front continuously decelerates during fluid injection except for the moment right before instabilities occur, and ii) the transition from self-arrested to run-away dynamic ruptures seems to have occurred in relation to the ultimate stability condition (44). This type of experiment is well suited to analysis via dynamic numerical modeling, which would provide the possibility of direct quantitative comparisons.

Finally, using rock analog materials with reduced shear modulus such as PMMA [53, 54] could provide important insights into injection-induced aseismic slip by reducing the elasto-frictional length scale \(R_{w}\) by approximately one order of magnitude. This "widens" the observable spatial range of the problem, thus providing the chance to explore larger-scale processes and regimes that would be otherwise difficult to observe in the laboratory using rock samples.
For instance, stage III, where injection-induced aseismic slip is governed by an energy balance of the Griffith's type, or perhaps even stage IV, where ultimately stable ruptures behave as having nearly zero fracture energy, could be potentially investigated in this type of experimental setting. The front-localized energy balance of dynamic ruptures has been extensively studied on both dry [55, 56] and fluid-lubricated frictional interfaces [57, 58]. Yet the same kind of energy balance for injection-induced slow slip, which is determined by the competition of three distinct factors (equation (14)), remains to be investigated experimentally. Indeed, stages III and IV might be the relevant regimes for subsurface processes such as the reactivation in shear of fractures by fluid injection for geo-energy applications, and natural earthquake-related phenomena where coupled fluid flow and aseismic slip processes are thought to play an important role (e.g., seismic swarms, aftershock sequences and slow earthquakes).

### In-situ experiments

Fluid injection experiments in shallow natural faults have recently provided important insights into the mechanics of injection-induced fault slip [4, 59]. Although laboratory experiments under well-controlled conditions are likely better positioned than in-situ experiments to validate the physics of injection-induced fault slip, owing to the further uncertainties naturally present in the field (e.g., heterogeneities of stress and strength, and fracture/fault geometrical complexities, among others), in-situ experiments do have various benefits, such as covering a larger, decameter scale and providing more realistic field conditions, particularly those allowing the growth of fully three-dimensional unbounded ruptures initiated from a localized fluid source, as the one considered in this study. Our results thus provide the opportunity to make quantitative comparisons in these cases. Among them, the experiments of Guglielmi _et al._ [4] have been particularly impactful as they were able to measure for the first time not only micro-seismicity and injection-well fluid pressure and volume rate history (as done previously in large-scale field experiments [3, 60]), but also the history of induced fracture slip and opening at the injection point/interval. This unique dataset has been analyzed via dynamic numerical modeling by various researchers [4, 23, 61, 62]. The multiplicity of proposed models that have fitted the data suggests indeed that more spatially distributed measurements, as done more recently [59], could further help to constrain the underlying physical processes operating behind these experiments. For the purpose of illustrating the application of our model, hereafter we focus on the modeling work of Bhattacharya and Viesca [23], for the sole reason that they assumed a slip-weakening fault model, which makes the application of our results more straightforward.

Let us first estimate the three dimensionless parameters of our model: \(\mathcal{S}\), \(\mathcal{P}\), and \(\mathcal{F}\) (37). Considering the best-fit model parameters of Bhattacharya and Viesca [23]: \(\mu=11.84\) GPa, \(\delta_{c}=0.37\) mm, \(f_{p}=0.6\), \(f_{r}=0.42\), \(\sigma_{0}^{\prime}=5.08\) MPa, and \(\tau_{0}=2.41\) MPa, we can directly compute \(\mathcal{S}\approx 0.79\) and \(\mathcal{F}=0.7\).
Note that the estimated initial shear stress of 2.41 MPa is greater than the in-situ residual fault strength \(f_{r}\sigma_{0}^{\prime}\approx 2.1\) MPa (or alternatively \(\mathcal{S}>\mathcal{F}\) (44)), so that the injection-induced rupture is inferred to be ultimately unstable. Because no macroscopic dynamic rupture occurred during the experiment, the propagation mode of aseismic slip must be the one of a frustrated dynamic instability. Let us now estimate the remaining dimensionless parameter \(\mathcal{P}\). To do so, we first need to estimate the injection intensity \(\Delta p_{*}\), equation (1). We approximate the injection history via a constant-volume-rate injection characterized by the same total volume of fluid injected during the experiment. This gives roughly \(Q\sim 40\) l/min. The fault intrinsic permeability \(k\) is widely recognized to have increased during this test [4, 23, 61]. It was estimated by Bhattacharya and Viesca [23] between \(k_{\rm min}=0.8\times 10^{-12}\) m\({}^{2}\) and \(k_{\rm max}=1.3\times 10^{-12}\) m\({}^{2}\). Consider, for instance, the average \(\tilde{k}=1.05\times 10^{-12}\) m\({}^{2}\). By assuming a fluid dynamic viscosity of \(\eta\sim 10^{-3}\) Pa-s and fault-zone width \(w=0.2\) m [23], the characteristic wellbore overpressure, equation (2), is \(\Delta p_{c}\approx 3.17\) MPa, which is very close to the actual, nearly constant wellbore overpressure measured at the latest part of the injection, \(\sim 3\) MPa (see figure 2a in [23]). Following this approximation for the fluid injection, we obtain an injection intensity \(\Delta p_{*}\approx 0.25\) MPa, which in turn yields \(\mathcal{P}\approx 0.05\). By considering either the minimum or maximum fault permeability, we would obtain \(\mathcal{P}\approx 0.065\) and \(\mathcal{P}\approx 0.04\), respectively; the higher the permeability, the lower the injection intensity.

To understand under what regime aseismic slip may have developed during the initial Coulomb's friction stage (either the critically stressed or the marginally pressurized regime), let us calculate the peak stress-injection parameter \(\mathcal{T}_{p}\). Given the known values for \(\mathcal{S}\) and \(\mathcal{P}\), we obtain via (42) \(\mathcal{T}_{p}\approx 4.22\), which is well into the marginally pressurized regime, with an associated amplification factor \(\lambda_{p}\approx 0.12\) (equation (20)). By considering \(k_{\rm min}\) and \(k_{\rm max}\) instead, we would obtain just a modest change in \(\mathcal{T}_{p}\), approximately equal to 3.20 and 5.22, respectively. Furthermore, we can estimate the critical nucleation radius in the marginally pressurized regime via equation (60). For that, we must first calculate the elasto-frictional lengthscale \(R_{w}\), equation (36). Given the best-fit model parameters, \(R_{w}\approx 4.79\) m, and the nucleation radius is then \(R_{c}^{mp}\approx 6.06\) m. Considering the size of the aseismic rupture estimated by Bhattacharya and Viesca [23] at the final time analyzed in their work, \(t_{f}=1378\) s (see inset of figure 3a in [23]), one could expect that the aseismic rupture was quite close to becoming unstable. Note that in [23], their original circular rupture solutions were modified to appear elliptical considering the perturbative approach for circular shear cracks of Gao [47], who gives an aspect ratio \(a/b=1/(1-\nu)\) (to first order in \(\nu\)), where \(a\) and \(b\) are the semi-major and semi-minor axes of the elliptical rupture front.
The calculations of Gao [47] involve a planar circular crack whose shape is perturbed under uniform shear load and constant energy release rate along the front. Perhaps a better correction could be made by considering the asymptotic behavior in the marginally pressurized regime obtained in [1], \(a/b=(3-\nu)/(3-2\nu)\), at least in the Coulomb's friction stage where the previous equation is valid. This would lead to ruptures that are less elongated than the ones considered by Bhattacharya and Viesca [23]. For example, considering a Poisson's ratio \(\nu=0.25\), the marginally-pressurized aspect ratio is \(a/b=1.1\), whereas Gao's aspect ratio is \(a/b\approx 1.33\). The latter was indeed found to be the asymptotic behavior in the critically stressed regime (\({\cal T}_{p}\ll 1\)) of the Coulomb's friction stage [1].

Let us assume, as a first estimate, that the rupture propagates with Coulomb's friction until the final time \(t_{f}=1378\) s. In this scenario, the rupture radius would be at this time simply \(R_{f}=\lambda_{p}\sqrt{4\alpha t_{f}}\), and the corresponding accumulated slip at the injection point, \(\delta_{f}=(8/\pi)(f_{p}\Delta p_{*}/\mu)R_{f}\) (equation 27 in [1]). To perform the previous calculations, we need to estimate the fault hydraulic diffusivity \(\alpha=k/\eta S\). Considering the storage coefficient \(S\) estimated at \(2.2\times 10^{-8}\) Pa\({}^{-1}\) [23], we obtain a hydraulic diffusivity \(\alpha\approx 0.048\) m\({}^{2}\)/s. The final rupture radius and accrued slip are then \(R_{f}\approx 2.01\) m and \(\delta_{f}\approx 0.07\) mm, respectively. Variations in fault permeability considering \(k_{\rm min}\) and \(k_{\rm max}\) would result in \(R_{f}\approx 2.91\) m and \(1.35\) m, and \(\delta_{f}\approx 0.12\) mm and \(0.04\) mm, respectively. In any case, the previous quantities do not account for the total slip measured at the injection point, which is an order of magnitude larger (\(\sim 1\) mm), nor for the estimated rupture radius of \(\sim 5\) m. Moreover, there is a clear acceleration of slip in the final part of the injection (see figure 3a in [23]) which in our model could only come from frictional weakening (stage II). We therefore calculate the evolution of the rupture front and slip at the injection point numerically. Our numerical solution shows that the nucleation of a dynamic rupture occurs at \(R_{c}\approx 6.08\) m, which is in excellent agreement with the theoretical nucleation radius for marginally pressurized faults calculated previously. The calculated accrued slip at the center of the rupture at the instability time is \(0.35\) mm, which is about a third of the actual measurement. On the other hand, the nucleation time is \(t_{c}=5885\) s, which is several times longer than \(t_{f}\). Indeed, at the time \(t_{f}\) the rupture radius is just \(2.17\) m in our numerical solution, not much larger than the Coulomb's friction approximation (\(2.01\) m). The latter indicates that our model is indeed operating in stage I at \(t_{f}\). The differences between our calculations and the ones of Bhattacharya and Viesca [23] are due to time-history variations in permeability not accounted for in our model, and the approximation of the fluid injection via an equivalent constant volume rate. Nevertheless, the theoretical nucleation radius is expected to hold as this quantity is relatively independent of the injection scenario.
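As a compact companion to the estimates above, the following short script strings the quoted numbers together. It is only a sketch: the expressions used for \(\Delta p_{c}\), \(\Delta p_{*}\), \(\mathcal{T}_{p}\) and \(R_{c}^{mp}\) are our reading of equations (1), (2), (42) and (60) reconstructed from the values quoted in the text, and the amplification factor \(\lambda_{p}\approx 0.12\) is taken from the text rather than recomputed, so the output reproduces the quoted estimates only approximately.

```python
import numpy as np

# Best-fit parameters of Bhattacharya and Viesca [23], as quoted in the text
mu, delta_c = 11.84e9, 0.37e-3                        # shear modulus [Pa], slip-weakening scale [m]
f_p, f_r = 0.6, 0.42                                  # peak and residual friction coefficients
sigma0, tau0 = 5.08e6, 2.41e6                         # effective normal stress, shear stress [Pa]
k, w, eta, S_storage = 1.05e-12, 0.2, 1.0e-3, 2.2e-8  # permeability [m^2], width [m], viscosity [Pa s], storage [1/Pa]
Q = 40.0e-3 / 60.0                                    # ~40 l/min injection rate in [m^3/s]

# Dimensionless parameters (37) and injection intensity (assumed line-source relations)
S = tau0 / (f_p * sigma0)              # pre-stress ratio            -> ~0.79
F = f_r / f_p                          # residual-to-peak friction   -> 0.70
dp_c = Q * eta / (k * w)               # characteristic overpressure -> ~3.2 MPa
dp_star = dp_c / (4.0 * np.pi)         # injection intensity         -> ~0.25 MPa
P = dp_star / sigma0                   # overpressure ratio          -> ~0.05
T_p = (1.0 - S) / P                    # peak stress-injection parameter -> ~4.2

# Elasto-frictional length scale (36) and marginally pressurized nucleation radius (60)
R_w = mu * delta_c / ((f_p - f_r) * sigma0)   # -> ~4.8 m
R_c_mp = R_w * f_p * sigma0 / tau0            # -> ~6.1 m

# Coulomb's friction (stage I) estimates at t_f = 1378 s, with lambda_p ~ 0.12 from the text
alpha = k / (eta * S_storage)                         # hydraulic diffusivity -> ~0.05 m^2/s
lam_p, t_f = 0.12, 1378.0
R_f = lam_p * np.sqrt(4.0 * alpha * t_f)              # rupture radius -> ~2 m
delta_f = (8.0 / np.pi) * (f_p * dp_star / mu) * R_f  # slip at injection point -> ~0.07 mm
print(S, F, P, T_p, R_w, R_c_mp, alpha, R_f, delta_f)
```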
Finally, we note that a rupture propagating in the marginally pressurized regime is expected to be confined within the pressurized area even at the instability time. In this regard, the main difference with Bhattacharya and Viesca [23], who suggested that the slip front outpaced the migration of fluids, is merely a matter of definitions. In our case, the overpressure front \(L(t)=\sqrt{4\alpha t}\) represents the radial distance from the injection point at which the overpressure is approximately \(2\) percent of the fluid-source overpressure, the latter being approximately \(3\) MPa at the final time. In [23], various overpressure isobars are drawn. The one with the lowest overpressure is at \(0.5\) MPa, which is around \(17\) percent of the fluid-source overpressure.

### A note on rate-and-state fault models: similarities and differences

Laboratory-derived friction laws [63, 64] are widely used in the earthquake modeling community to reproduce the entire spectrum of slip velocities on natural faults [65]. These empirical friction laws capture the dependence of friction on slip rate and the history of sliding (via a state variable) as observed during velocity-step laboratory experiments on bare rock surfaces and simulated fault gouge [66]. In its simplest form, the rate-and-state friction coefficient is expressed as \[f(v,\theta)=f_{0}+a{\rm ln}\left(\frac{v}{v_{0}}\right)+b{\rm ln}\left(\frac{v_{0}\theta}{d_{c}}\right), \tag{65}\] where \(f_{0}\) is the friction coefficient at a reference slip rate \(v_{0}\) and state variable \(\theta_{0}=d_{c}/v_{0}\) with units of time, \(d_{c}\) is a characteristic slip 'distance' for the evolution of \(\theta\), which is usually thought to be an order of magnitude smaller than \(\delta_{c}\) of the slip-weakening model [33, 67], and \(a\) and \(b\) are the rate-and-state friction parameters, both positive and of order \(10^{-2}\). An additional dynamical equation describing the evolution of the state variable \(\theta\) is required. For the purpose of this discussion, we consider hereafter a widely used state-evolution equation known as the aging law: \(\dot{\theta}=1-v\theta/d_{c}\).

The similarities between frictional ruptures obeying slip-weakening and rate-and-state friction have been long recognized (see [32, 33, 67, 68] for example). Furthermore, in the context of injection-induced fault slip, some similarities were already recognized by Garagash and Germanovich [32], particularly in relation to the nucleation of dynamic slip. Specifically, they noted that the nucleation length of critically stressed faults for linear slip-weakening friction or, what is the same, the one of Uenishi and Rice [33], is identical to the nucleation length of rate-and-state faults for \(a/b\ll 1\) [69]. On the other hand, the large nucleation length near the ultimate stability limit, which is equal (except for a pre-factor of order one) to the one of Andrews [70], is identical to the nucleation length of rate-and-state faults when approaching the velocity-neutral limit \(a/b\to 1\) [69]. The equivalence between the previous nucleation lengths is obtained by recasting the slip-weakening friction law in terms of the rate-and-state parameters, that is, replacing the peak-to-residual strength drop \((f_{p}-f_{r})\sigma^{\prime}_{0}\) by \(b\sigma^{\prime}_{0}\), and the stress drop \(\tau_{0}-f_{r}\sigma^{\prime}_{0}\) by \((b-a)\sigma^{\prime}_{0}\) [68, 69]. In addition to the previous similarities, we discuss some additional ones now.
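As a purely illustrative aside (with arbitrary parameter values, not tied to any dataset discussed in this work), the short script below evaluates the friction law (65) after integrating the aging law through a tenfold velocity step, crudely mimicking the passage of a rupture front invoked in the discussion that follows.

```python
import numpy as np

def rsf_friction(v, theta, f0=0.6, a=0.01, b=0.02, v0=1.0e-6, d_c=1.0e-5):
    """Rate-and-state friction coefficient, equation (65)."""
    return f0 + a * np.log(v / v0) + b * np.log(v0 * theta / d_c)

# Tenfold velocity step from the reference rate v0, starting at steady state theta_0 = d_c/v0,
# with forward-Euler integration of the aging law d(theta)/dt = 1 - v*theta/d_c.
v0, d_c = 1.0e-6, 1.0e-5
v, theta, dt = 10.0 * v0, d_c / v0, 1.0e-2
for _ in range(20000):                      # 200 s, much longer than the relaxation time d_c/v
    theta += dt * (1.0 - v * theta / d_c)

# theta -> d_c/v, so friction settles at the new steady state f0 + (a - b)*ln(v/v0),
# i.e. velocity weakening whenever a/b < 1 (about 0.577 for these illustrative values).
print(rsf_friction(v, theta))
```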
For instance, the ultimate stability condition (44) has been observed, in rate-and-state models, to determine whether velocity-weakening faults (\(a/b<1\)) produce either self-arrested or run-away ruptures [71]. Also, the same stability condition has been observed to determine whether velocity-strengthening faults (\(a/b>1\)) host either purely quasi-static slip or a dynamic instability [26]. Both results are essentially the same as predicted by slip-weakening fault models. Indeed, the ultimate stability condition is to some extent expected to emerge in rate-and-state models (both velocity weakening and velocity strengthening) in the vicinity of the velocity-neutral limit (\(a/b\to 1\)), as the condition (44) that the residual fault strength drops to nearly the same level as the shear stress present further ahead of the rupture is reminiscent of the case \(a\approx b\) when analyzing the steady-state (\(\dot{\theta}=0\)) response of equation (65) to an incremental velocity step crudely approximating the passage of a rupture front.

Another similarity between slip-weakening and rate-and-state fault models relates to our constant-residual-friction similarity solution, or ultimate zero-fracture-energy regime. As already noted by Saez _et al._ [1] for the particular case of injection at constant volumetric rate in a one-dimensional fault (see section 5.2 in [1]), the ultimate regime of rate-and-state faults [26] coincides with the solution for a constant friction coefficient [1]. It is now clear, albeit in a three-dimensional model, that this friction coefficient corresponds to the residual value in a slip-weakening model and that this regime is characterized by negligible fracture energy. The latter is additionally consistent with the work of Garagash [27], who found that slip transients driven by a point-force-like injection approach a zero-toughness condition as an ultimate regime in a one-dimensional fault. Note that the limit of a point-force-like injection is reached in our three-dimensional model in the nearly unstable limit, but not in the marginally pressurized one. Moreover, the features that give rise to this ultimate asymptotic behavior seem rather general and are likely to hold under other types of fluid injection. As already shown in [1] (section 5.1), the spatio-temporal patterns of injection-induced aseismic slip are strongly influenced by the type of fluid injection (or injection rate history). Quantifying this effect is important and we will address it in a future study.

We would also like to highlight some differences between slip-weakening and rate-and-state fault models. Indeed, one of the main physical ingredients that rate-and-state friction would incorporate in our model is frictional healing. The recovery of the friction coefficient with the logarithm of time is a well-established phenomenon [63, 64, 66] that is essential in physics-based models that attempt to reproduce earthquake cycles [72, 73]. Frictional healing would provide, for instance, the possibility of nucleating multiple dynamic events on the same fault segment. Yet we highlight that this is not a unique characteristic of rate-and-state friction, in the sense that two events can also nucleate on the same slip-weakening fault segment (regime R4 in figure 4). Another case in which the rate-and-state framework could be particularly useful is to model the reactivation of faults that are thought to be steadily creeping.
In fact, from a Coulomb's friction perspective, rate-and-state faults are always at failure: the shear stress is at any time and over the entire fault extent equal to the fault strength. The initial stress state is a result of the history of sliding. At an initial time \(t=0\), it will be defined by the initial distribution of slip rate \(v_{i}\) and initial state variable \(\theta_{i}\). To illustrate this latter point and get some further insights into a rate-and-state fault model, consider our same hydro-mechanical model but with a rate-and-state friction coefficient. Due to the 'always-failing' condition, \(R(t)\rightarrow\infty\) in equation (6) and the inequality (3) becomes an equality. One possible way of considering the initial stress state is to assume that the initial slip velocity \(v_{i}\) is uniform and equal to the creep rate, which one could further take as the reference slip rate \(v_{0}\). The initial stress state is then encapsulated in the initial state variable \(\theta_{i}\). Assuming the latter to be uniform as well, one can readily show by dimensional analysis that the slip rate (the primary unknown in this model together with the state variable) depends, in addition to dimensionless space \(rb\sigma_{0}^{\prime}/d_{c}\mu\) and time \(tv_{0}/d_{c}\), on the following five non-dimensional parameters: \(a/b\), \(\Delta p_{*}/\sigma_{0}^{\prime}\), \(\alpha/\alpha_{c}\), \(f_{0}/b\) and \(\theta_{i}/\theta_{0}\), where \(\alpha_{c}=\mu^{2}d_{c}v_{0}/b^{2}\sigma_{0}^{\prime 2}\) is a characteristic diffusivity. We note that this rate-and-state version of our model has an increased complexity, with two more dimensionless parameters than the slip-weakening model. \(\Delta p_{*}/\sigma_{0}^{\prime}\) is indeed our same overpressure ratio \(\mathcal{P}\). \(a/b\) quantifies the degree of weakening (\(a/b<1\)) or strengthening (\(a/b>1\)) and, as discussed previously, it would relate to the ultimate stability behavior of the fault. \(f_{0}/b\) quantifies the constant part of the friction coefficient with respect to \(b\), which in turn relates to the strength drop (i.e., the decay from the peak to the residual friction). Finally, \(\theta_{i}/\theta_{0}\) is where the initial shear stress or pre-stress ratio \(\mathcal{S}\) of the slip-weakening model would equivalently emerge. In fact, by multiplying equation (65) by \(\sigma_{0}^{\prime}\) and then expressing it at the initial conditions \(v_{i}\) and \(\theta_{i}\), the resulting dimensionless form of such an equation reads as \(\ln\left(\theta_{i}/\theta_{0}\right)=\left(f_{0}/b\right)\left(\tau_{0}/f_{0}\sigma_{0}^{\prime}-1\right)\). The dimensionless parameter \(\theta_{i}/\theta_{0}\) can then alternatively be chosen as \(\tau_{0}/f_{0}\sigma_{0}^{\prime}\), which is similar to the pre-stress ratio of the slip-weakening model.

## 8 Concluding remarks

We have investigated the propagation of fluid-driven slow slip and earthquake nucleation on a slip-weakening circular fault subjected to fluid injection at a constant volume rate. Despite some simplifying assumptions in our model, our investigation has revealed a very broad range of aseismic slip behaviors, from frustrated dynamic instabilities to unconditionally stable slip, and from ruptures that move much faster than the diffusion of pore pressure to ruptures that move much slower than that. The circular fault geometry is likely the simplest one enabling quantitative comparisons with field observations thanks to its three-dimensional nature.
It is thus also useful for preliminary engineering design of hydraulic stimulations in geo-energy applications. In addition to the effect of a non-zero Poisson's ratio that will elongate the shape of the rupture along the direction of principal shear, changes in lithologies as commonly encountered in the upper Earth's crust will alter the dynamics of an otherwise unbounded rupture as the one we have examined here. In particular, the effect of layering might promote containment of the reactivated fault surface within certain lithologies, similar to what is observed for hydraulic fractures [74]. This effect may be important in some cases and would require further quantification.

### CRediT authorship contribution statement

**Alexis Saez:** Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Writing - Original Draft, Writing - review & editing, Visualization, Funding acquisition. **Brice Lecampion:** Conceptualization, Methodology, Software, Writing - review & editing, Funding acquisition.

### Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

### Funding

The results were obtained within the EMOD project (Engineering model for hydraulic stimulation). The EMOD project benefits from a grant (research contract no. SI/502081-01) and an exploration subsidy (contract no. MF-021-GEO-ERK) of the Swiss federal office of energy for the EGS geothermal project in Haute-Sorne, Canton of Jura, which is gratefully acknowledged. Alexis Saez was partially funded by the Federal Commission for Scholarships for Foreign Students via the Swiss Government Excellence Scholarship.

### Acknowledgements

Alexis Saez would like to thank Francois Passelegue for discussions about his own experiments.

## Appendix A Eigenvalue problem at the instability time for unlimited linear weakening of friction

### Generalized eigenvalue problem

Following Uenishi and Rice [33] and Garagash and Germanovich [32], we extend their eigenvalue-based stability analysis, valid for unlimited linear weakening of friction under either in-plane shear (II) or anti-plane shear (III) mode of sliding, to the case of a circular rupture propagating under mixed-mode (II+III) conditions. Equilibrium dictates that within the slipping region \(r\leq R(t)\), the fault strength \(\tau_{s}\) (3) must be locally equal to the fault shear stress \(\tau\) (6). Equating the two previous equations and then differentiating with respect to time leads to \[v\left(r,t\right)\frac{\mathrm{d}f}{\mathrm{d}\delta}\sigma^{\prime}(r,t)+f(\delta)\frac{\partial\sigma^{\prime}(r,t)}{\partial t}=\frac{\partial\tau_{0}(r,t)}{\partial t}-\frac{\mu}{2\pi}\int_{0}^{R(t)}F\left(r,\xi\right)\frac{\partial v(\xi,t)}{\partial\xi}\mathrm{d}\xi,\] (A1) where \(v=\partial\delta/\partial t\) is the fault slip rate. When differentiating the integral term previously, we have considered Leibniz's integral rule and applied the condition \(\partial\delta\left(R(t),t\right)/\partial r=0\), which guarantees non-singular shear stresses along the rupture front.
In our problem, the effective normal stress \(\sigma^{\prime}(r,t)=\sigma^{\prime}_{0}-\Delta p(r,t)\) decreases from the initial uniform value \(\sigma^{\prime}_{0}\) due to overpressure \(\Delta p(r,t)\) associated with fluid injection, whereas the shear stress that would be present on the fault if no slip occurs is a uniform and constant value \(\tau_{0}\). Nevertheless, for the sake of generality, we keep utilizing the generic terms \(\sigma^{\prime}(r,t)\) and \(\tau_{0}(r,t)\). Indeed, \(\sigma^{\prime}(r,t)\) generally contains not only the initial effective normal stress and changes in pore pressure, but also possible changes in total normal stress from the far field, as \(\sigma^{\prime}=\sigma-p\). Similarly, \(\tau_{0}(r,t)\) could be a summation of both the initial shear stress and far-field shear loading. Far-field loads may be due to, for instance, tectonic forces and seasonal variations of stress, among many others. Note that \(\sigma^{\prime}(r,t)\) and \(\tau_{0}(r,t)\) are both axisymmetric in magnitude and at least one of them must be locally peaked around the origin in order to initiate slip at \(r=0\) at a certain time \(t=t_{0}\). Equation (A1) is then valid at any time \(t>t_{0}\) and, as mentioned in the main text, it assumes a Poisson's ratio \(\nu=0\).

Let us scale equation (A1) by introducing the following non-dimensional quantities: \(\bar{r}=r/R\), \(\bar{\xi}=\xi/R\), and \(\bar{v}=v/v_{\rm rms}\), where \[v_{\rm rms}(t)=\sqrt{\frac{1}{R(t)}\int_{0}^{R(t)}v^{2}(r,t){\rm d}r}\] (A2) is the root mean square of the slip rate distribution, such that \(\int_{0}^{1}\bar{v}^{2}(\bar{r}){\rm d}\bar{r}=1\). Note that for the linear-weakening friction law (4), \({\rm d}f/{\rm d}\delta=-(f_{p}-f_{r})/\delta_{c}\). The latter relies on the strong assumption that the residual friction coefficient \(f_{r}\) has not been reached yet at any point within the rupture. Otherwise, wherever \(f=f_{r}\), \({\rm d}f/{\rm d}\delta=0\). Moreover, for the exponential-weakening friction law (5), the same expression for \({\rm d}f/{\rm d}\delta\) is approximately valid in the range of small slip \(\delta\ll\delta_{c}\), to first order in \(\delta/\delta_{c}\). Considering the previous quantities plus the relation (36) \(\mu\delta_{c}=(f_{p}-f_{r})\sigma^{\prime}_{0}R_{w}\), we nondimensionalize equation (A1) to obtain \[-\frac{R}{R_{w}}\bar{v}(\bar{r})\frac{\sigma^{\prime}(\bar{r}R)}{\sigma^{\prime}_{0}}+\frac{R}{\mu v_{\rm rms}}f(\delta)\frac{\partial\sigma^{\prime}(\bar{r}R)}{\partial t}=\frac{R}{\mu v_{\rm rms}}\frac{\partial\tau_{0}(\bar{r}R)}{\partial t}-\frac{1}{2\pi}\int_{0}^{1}F\left(\bar{r},\bar{\xi}\right)\frac{\partial\bar{v}(\bar{\xi})}{\partial\bar{\xi}}{\rm d}\bar{\xi}.\] (A3) In the previous equation, we dropped the explicit dependence on time \(t\) of the various variables for simplicity in the notation. Following Uenishi and Rice [33], at the instability time \(t_{c}\), the slip rate diverges all over the fault plane, such that the root mean square of the slip rate distribution \(v_{\rm rms}(t_{c})\to\infty\). The only non-vanishing terms of (A3) lead to the following generalized eigenvalue problem: \[\frac{R}{R_{w}}\bar{v}(\bar{r})\frac{\sigma^{\prime}(\bar{r}R)}{\sigma^{\prime}_{0}}=\frac{1}{2\pi}\int_{0}^{1}F\left(\bar{r},\bar{\xi}\right)\frac{\partial\bar{v}(\bar{\xi})}{\partial\bar{\xi}}{\rm d}\bar{\xi},\] (A4) which corresponds indeed to the mixed-mode, circular rupture version of equation (14) in [32].
Given a normalized distribution of effective normal stress at the instability time, \(\sigma^{\prime}(r,t_{c})/\sigma^{\prime}_{0}\), equation (A4) can be solved to obtain the corresponding generalized eigenvalues and eigenfunctions. What is most important is to calculate the smallest eigenvalue, which is the one related to the instabilities we observe in the full numerical solutions. In our problem, \(\sigma^{\prime}(r,t_{c})\) is set by the distribution of overpressure at the time of instability, which does not allow us to obtain a purely analytical insight as the instability time is generally unknown and, more importantly, information about one of the problem parameters, the pre-stress ratio \(\mathcal{S}\), is lost when deriving the eigen problem (A4). The only scenario in which equation (A4) is independent of \(t_{c}\) is when \(\sigma^{\prime}(r,t_{c})\) is uniform, which in turn leads to a regular eigenvalue problem. Furthermore, in the particular case of \(\sigma^{\prime}(r,t)=\sigma^{\prime}_{0}\), we obtain the circular rupture version of the eigenvalue problem of Uenishi and Rice (equation (12) in [33]), which will give the corresponding universal nucleation radius of their problem.

### Eigenvalue problem in the critically stressed and marginally pressurized limits

Let us come back to our particular problem where \(\sigma^{\prime}(r,t)=\sigma^{\prime}_{0}-\Delta p(r,t)\) and assume a rather general but self-similar injection scenario such that the overpressure can be written in the similarity form: \(\Delta p(r,t)=\Delta p_{w}(t)\Pi(\xi)\), where \(\Delta p_{w}(t)\) is the overpressure at the fluid source, and \(\Pi(\xi)\) is the spatial distribution of overpressure with the properties: \(\Pi(0)=1\) and \(\Pi(\infty)\to 0\). Note that such a self-similar injection scenario is possible only if one assumes a line source of fluids such that no length scale associated with the fluid source is introduced into the problem. For a discussion about the line-source approximation, see Appendix C. Introducing the previous relations for the effective normal stress into (A4), the generalized eigenvalue problem becomes \[\frac{R}{R_{w}}\bar{v}(\bar{r})\left(1-\frac{\Delta p_{w}}{\sigma_{0}^{\prime}}\Pi\left(\lambda\bar{r}\right)\right)=\frac{1}{2\pi}\int_{0}^{1}F\left(\bar{r},\bar{\xi}\right)\frac{\partial\bar{v}(\bar{\xi})}{\partial\bar{\xi}}\mathrm{d}\bar{\xi},\] (A5) where \(\lambda(t_{c})=R(t_{c})/\sqrt{4\alpha t_{c}}\) is the so-called amplification factor at the instability time. We recall that the dependence of the various variables in the previous equation on \(t_{c}\) is omitted for simplicity.

Consider now the critically stressed limit: \(\tau_{0}\to f_{p}\sigma_{0}^{\prime}\), where the rupture front largely outpaces the overpressure front at the time of instability, such that \(\lambda(t_{c})\gg 1\). In view of the properties of \(\Pi(\xi)\), the term \((\Delta p_{w}/\sigma_{0}^{\prime})\Pi(\lambda\bar{r})\ll 1\) so that, if neglected, equation (A5) further simplifies to \[\frac{R}{R_{w}}\bar{v}(\bar{r})=\frac{1}{2\pi}\int_{0}^{1}F\left(\bar{r},\bar{\xi}\right)\frac{\partial\bar{v}(\bar{\xi})}{\partial\bar{\xi}}\mathrm{d}\bar{\xi}.\] (A6) The previous equation is a regular eigenvalue problem. It corresponds indeed to the circular rupture version of the eigenvalue problem of Uenishi and Rice [33].
The derivation of equation (A6) can alternatively be done by following the reasoning of Garagash and Germanovich [32] that, in the critically stressed limit, the effective normal stress over the slipping region is largely unchanged so that \(\sigma^{\prime}(r,t_{c})\approx\sigma_{0}^{\prime}\), except for a very small region of approximate size \(\sqrt{4\alpha t_{c}}\) near the rupture center that at spatial scales in the order of the rupture size can be neglected. Replacing \(\sigma^{\prime}(r,t_{c})\approx\sigma_{0}^{\prime}\) into (A4) leads equivalently to (A6).

Let us now examine the marginally pressurized limit: \(f_{p}\Delta p_{w}\approx f_{p}\sigma_{0}^{\prime}-\tau_{0}\), where the rupture front significantly lags the overpressure front at the instability time, so that \(\lambda(t_{c})\ll 1\). We refer to Appendix C for a discussion about the marginally pressurized limit and its relation to the line-source approximation. Particularly, we note that the property \(\Pi(0)=1\) cannot be rigorously defined but, still, it can be established in an order-of-magnitude sense. Furthermore, for injection at constant volumetric rate, the prefactor is quite close to one for all practical purposes (see figure C1b). It is therefore convenient for practical applications to define the marginally pressurized limit in an approximate sense. Hence, we approximate the fluid overpressure within the rupture as \(\Delta p_{w}\Pi(\lambda\bar{r})\approx\sigma_{0}^{\prime}-\tau_{0}/f_{p}\). After substituting the previous relation into (A5), we obtain the following regular eigenvalue problem for marginally pressurized cases, \[\frac{R}{R_{w}}\frac{\tau_{0}}{f_{p}\sigma_{0}^{\prime}}\bar{v}(\bar{r})=\frac{1}{2\pi}\int_{0}^{1}F\left(\bar{r},\bar{\xi}\right)\frac{\partial\bar{v}(\bar{\xi})}{\partial\bar{\xi}}\mathrm{d}\bar{\xi}.\] (A7) It is important to mention that the critically stressed and marginally pressurized limits are both characterized by small slip at the instability time: \(\delta(r=0,t_{c})\ll\delta_{c}\). This is observed in our numerical solutions and was also established by Garagash and Germanovich [32] in the two-dimensional problem. The latter is very important since it implies that the approximation of the exponential-weakening friction law by a linear relation is valid, as well as the assumption of unlimited linear weakening of friction (never reaching the residual strength of the fault). As a final comment, we have established the eigenvalue problems in both limits for a general (self-similar) injection scenario, not restricted to the constant-volumetric-rate case that we solve in the main text. However, in the marginally pressurized limit, we have implicitly assumed that the overpressure at the fluid source at the instability time is approximately equal to the overpressure at the time of activation of slip. Such an approximation is reasonable in the case of constant volumetric rate, as the increase of overpressure at the fluid source is slowly logarithmic (see figure C1b) and assumed to be approximately constant, equal to \(\Delta p_{c}\), for practical applications. This approximation has to be carefully considered when dealing with other injection scenarios (see, for instance, [32, 75, 76]).
### Numerical solution of the regular eigenvalue problem

In the critically stressed and marginally pressurized limits, the eigen equations (A6) and (A7) can be recast as: \[\frac{1}{2\pi}\int_{0}^{1}F\left(\bar{r},\bar{\xi}\right)\frac{\partial\bar{v}(\bar{\xi})}{\partial\bar{\xi}}\mathrm{d}\bar{\xi}=\beta\bar{v}(\bar{r}),\] (A8) with the eigenvalue \[\beta=\frac{R}{R_{w}}\cdot\begin{cases}1&\text{for critically stressed faults }\mathcal{T}_{p}\ll 1\\ \nicefrac{{\tau_{0}}}{{f_{p}\sigma_{0}^{\prime}}}&\text{for marginally pressurized faults }\mathcal{T}_{p}\sim 10.\end{cases}\] (A9) We calculate the eigenvalues \(\beta_{k}\) and eigenfunctions \(\bar{v}_{k}\) of (A8), with \(k=1,2,...,\infty\), by discretizing the linear integral operator on the left-hand side via a collocation boundary element method employing ring 'dislocations' with piece-wise constant slip rate. The details of such an implementation can be found in the supplementary material of [31]. For the numerical calculations, it is convenient to express the discretized form of the eigen equation (A8) in matrix-vector form as \[\mathbf{E}\bar{\boldsymbol{v}}_{k}=\beta_{k}\bar{\boldsymbol{v}}_{k},\] (A10) where \(\bar{\boldsymbol{v}}_{k}\in\mathbb{R}^{N}\) are the discretized eigenfunctions with \(k=1,2,...,N\), where \(N\) is the number of ring-dislocation elements, and \(\mathbf{E}\in\mathbb{R}^{N\times N}\) is a non-dimensional matrix that is equivalent to the collocation boundary element matrix of a circular shear crack of unit radius and unit shear modulus (see [31]). By collocation boundary element matrix, we mean that the product between \(\mathbf{E}\) and a given vector representing a discretized slip distribution \(\boldsymbol{\delta}\) would give as a result the corresponding discretized shear stress distribution \(\boldsymbol{\tau}\) that is in quasi-static equilibrium with \(\boldsymbol{\delta}\), in an infinite and otherwise unstressed solid.

We solve the discretized eigen equation (A10) with the standard Wolfram Mathematica functions _Eigenvalues_ and _Eigenvectors_, which can be instructed to search only for the smallest eigenvalues and their corresponding eigenvectors. We do not intend here to conduct an extensive analysis of the eigenvalues and eigenfunctions, as our sole goal in this work is to determine the smallest eigenvalue, which we expect to give the nucleation radii for the critically stressed and marginally pressurized regimes. Nevertheless, we do report the first five (smallest) eigenvalues and their eigenvectors in table A1 and figure A1, respectively. The eigenfunctions are normalized such that \(\int_{0}^{1}\bar{\boldsymbol{v}}_{k}^{2}(\bar{r})\mathrm{d}\bar{r}=1\), meaning that the normalized eigenvectors from (A10) must be divided by \(\sqrt{1/N}\). It is interesting to note that the smallest eigenvalue \(\beta_{1}\) is for all practical purposes equal to \(1\). Also, we note that the eigenfunctions are not orthogonal as the matrix \(\mathbf{E}\) is non-symmetric.

### Universal nucleation radius of Uenishi and Rice for tensile and shear circular rupture instabilities

The eigen equation (A6) for the critically stressed limit corresponds to the penny-shaped version of the eigen equation of Uenishi and Rice [33]. The nucleation radius (59) in the main text is therefore the nucleation radius of a dynamic instability in the conditions analyzed by Uenishi and Rice [33].
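A minimal numerical sketch of this procedure is given below, using NumPy instead of Mathematica. It assumes that the influence matrix \(\mathbf{E}\) has already been assembled from the ring-dislocation collocation scheme of [31] (not reproduced here); the diagonal matrix in the demonstration call is only a placeholder standing in for \(\mathbf{E}\).

```python
import numpy as np

def smallest_modes(E, n_modes=5):
    """Solve the discretized eigen problem (A10) and return the n_modes smallest
    eigenvalues, with eigenvectors normalized so that (1/N) * sum(v_k**2) = 1."""
    N = E.shape[0]
    vals, vecs = np.linalg.eig(E)                # E is real but non-symmetric in general
    order = np.argsort(vals.real)[:n_modes]      # smallest eigenvalues first
    beta = vals.real[order]
    v = vecs[:, order].real
    v *= np.sqrt(N) / np.linalg.norm(v, axis=0)  # i.e. divide by sqrt(1/N)
    return beta, v

# Placeholder call: a simple diagonal matrix stands in for the actual
# ring-dislocation collocation matrix E of [31].
E_demo = np.diag(np.linspace(1.0, 10.0, 200))
beta, v = smallest_modes(E_demo)
print(beta)
```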
In our circular configuration, the tectonic shear loading that drives the quasi-static phase of the rupture must be considered to be unidirectional, locally peaked around \(r=0\), and axisymmetric in magnitude. Moreover, this result is not only valid for a mixed-mode (II+III) shear rupture (with \(\nu=0\)), but also for a cohesive tensile (mode I) crack. In this latter case, the nucleation radius is valid for any value of \(\nu\) as long as the shear modulus \(\mu\) is replaced by \(E^{\prime}/2\), where \(E^{\prime}=E/(1-\nu^{2})\) is the plane-strain Young's modulus. Note that, similarly to the shear rupture case, here the far-field tensile load driving the quasi-static growth of the mode-I rupture must also be locally peaked at \(r=0\) (in order to initiate fracture yielding at the origin) and axisymmetric in magnitude.

\begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{\(k\)} & \multicolumn{3}{c}{Number of boundary elements \(N\)} \\ \cline{2-4} & 100 & 1000 & 10000 \\ \hline \hline 1 & 0.998912 & 1.002648 & 1.003018 \\ \hline 2 & 2.551356 & 2.561554 & 2.562539 \\ \hline 3 & 4.111055 & 4.128215 & 4.129803 \\ \hline 4 & 5.671338 & 5.696629 & 5.698843 \\ \hline 5 & 7.230949 & 7.265725 & 7.268573 \\ \hline \hline \end{tabular} \end{table} Table A1: Eigenvalues \(\beta_{k}\) as a function of the number of boundary elements \(N\).

## Appendix B Numerical solver for slip-dependent friction

We calculate the numerical solutions of the slip-weakening model via a fully-implicit boundary-element-based solver with an elasto-plastic-like interfacial constitutive law [1, 31]. The details of the three-dimensional version of the solver were presented by Saez _et al._ [1] for the particular case of Coulomb's friction, whereas the necessary modifications to solve in a more efficient manner for the special case of axisymmetric, circular shear ruptures were presented more recently by Saez and Lecampion [31]. Here, we extend our axisymmetric solver to a case in which the friction coefficient is an arbitrary function of slip. Such an extension is indeed relatively straightforward and requires just changes in the integration of the constitutive interfacial law at the collocation point level, plus deriving the proper consistent tangent operator. We present these two changes in sections B.1 and B.2 respectively, plus some calculation and implementation details in section B.3.

### Integration of the constitutive interfacial law

The integration of the constitutive interfacial law in three dimensions was described in section 2.2.3 of Saez _et al._ [1] for the case of Coulomb's friction. With reference to this latter section and following the notation in [1], the extension to slip-dependent friction requires no further modification than just expressing the former constant friction coefficient \(f\) as a function of the magnitude of the shear vector of plastic displacement discontinuity at each collocation point, \(f(\|\mathbf{d}_{s}^{p}\|)\), with \(\mathbf{d}_{s}^{p}=(d_{1}^{p},d_{2}^{p})^{\top}\), and \(d_{1}^{p}\) and \(d_{2}^{p}\) the two shear components of plastic displacement discontinuity at a certain collocation point. The latter are expressed in the local reference system of the triangular boundary elements. Furthermore, in the axisymmetric configuration of interest in this work, the extension is even simpler since the shear part of the displacement discontinuity vector is a scalar (the direction of slip is fixed and known).
Hence, the friction coefficient is simply expressed as \(f(d_{s}^{p})\), where the subscript 's' denotes the shear component and 'p' the plastic part of the displacement discontinuity. With the previous considerations in mind, one can show that when frictional sliding occurs (\(\Delta\gamma>0\)), the system of equations (10)-(14) of [1] leads to the following implicit equation for the plastic multiplier \(\Delta\gamma\), \[\Delta\gamma=\frac{|t_{s}^{\rm trial}|-f\left(d_{s}^{p,n}-\Delta\gamma\cdot{\rm sgn}\left(t_{s}^{\rm trial}\right)\right)t_{n}^{\rm trial}}{k_{s}},\] (B1) where \(d_{s}^{p,n}\) is the 'plastic' (frictional) slip at the previous time step \(n\), \(t_{s}^{\rm trial}\) and \(t_{n}^{\rm trial}\) are the shear and normal components of the elastic-trial traction vector \(\mathbf{t}^{\rm trial}=-\mathbf{\mathsf{D}}\cdot\mathbf{d}^{n+1}\) of the elastic predictor-plastic corrector algorithm adopted in [1], with \(\mathbf{d}^{n+1}\) the total (elastic + plastic) displacement discontinuity vector at the current time step \(n+1\) (coming from the global Newton-Raphson scheme that solves the quasi-static elastic equilibrium), and \(k_{s}\) the shear component of the diagonal elastic stiffness matrix \(\mathbf{\mathsf{D}}\). We recall our adopted geomechanics convention of positive stresses in compression. At a given iteration of the global Newton-Raphson scheme, equation (B1) is solved at every collocation point via a Newton-Raphson procedure, using \(\Delta\gamma=0\) as the initial guess.

### The consistent tangent operator

The global Newton-Raphson iterations of the fully-implicit time integration scheme [1, 31] require the calculation of the so-called consistent tangent operator \(\mathbf{C}_{TO}\), which depends on the specific constitutive interfacial law under consideration. For the case of Coulomb's friction, the consistent tangent operator has been derived analytically for both the fully 3D (see Appendix A in [1]) and axisymmetric (see Supplemental Material 2 in [31]) cases. Here, we derive the proper axisymmetric operator for the case in which the friction coefficient is an arbitrary function of slip. Using again the same notation as in [1], we define the tangent operator as \(\mathbf{C}_{TO}=-\partial\Delta\boldsymbol{t}^{\prime}/\partial\Delta\boldsymbol{d}\), where \(\Delta\boldsymbol{t}^{\prime}\) is the increment of the effective traction vector and \(\Delta\boldsymbol{d}\) is the increment of the displacement discontinuity vector. Note that the consistent tangent operator \(\mathbf{C}_{TO}\) is a block diagonal matrix of size \(2N\times 2N\), where \(N\) is the number of collocation points, and is composed of square blocks \(\mathbf{C}_{TO}^{i}\) of size \(2\times 2\), where \(i=1,...,N\) is the collocation point index.
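The block-diagonal structure is straightforward to exploit in an implementation; the short sketch below merely illustrates how such an operator could be assembled from per-collocation-point \(2\times 2\) blocks (here filled with placeholder elastic values), and is not part of the solver of [1, 31].

```python
import numpy as np
from scipy.sparse import block_diag

def assemble_tangent_operator(blocks):
    """Stack N blocks of shape (2, 2) into a sparse (2N, 2N) block-diagonal C_TO."""
    return block_diag(blocks, format="csr")

# Placeholder: three purely elastic collocation points, D = diag(k_s, k_n).
k_s, k_n = 1.0, 1.5
C_TO = assemble_tangent_operator([np.diag([k_s, k_n]) for _ in range(3)])
print(C_TO.shape)  # (6, 6)
```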
Combining equations (10), (11) and (13) from [1], one can obtain \(\Delta\boldsymbol{t}^{\prime}\) as a function of \(\Delta\boldsymbol{d}\): \[\Delta\boldsymbol{t}^{\prime}=-\mathbf{D}\cdot\left[\Delta\boldsymbol{d}+\Delta\gamma\left(\Delta\boldsymbol{d}\right)\left\{\mathrm{sgn}\left(t_{s}^{\mathrm{trial}}\right),0\right\}^{\top}\right].\] (B2) Differentiation of the latter expression with respect to \(\Delta\boldsymbol{d}\) leads to the following expression for the square blocks that compose the tangent operator: \[\mathbf{C}_{TO}^{i}=\mathbf{D}-\mathbf{C}_{TO}^{p}\text{, with }\quad\mathbf{C}_{TO}^{p}=-\begin{pmatrix}k_{s}\mathrm{sgn}\left(t_{s}^{\mathrm{trial}}\right)\\ 0\end{pmatrix}\otimes\begin{pmatrix}\partial\Delta\gamma/\partial\Delta d_{s}\\ \partial\Delta\gamma/\partial\Delta d_{n}\end{pmatrix},\] (B3) where \(\mathbf{C}_{TO}^{p}\) is the plastic part of the tangent operator and \(\otimes\) is the tensor product. Note that if \(\Delta\gamma=0\), that is, if the collocation point state is elastic or, in other words, no frictional slip occurs, then \(\mathbf{C}_{TO}^{p}\) is a null matrix and \(\mathbf{C}_{TO}^{i}=\mathbf{D}\). At this point, we just need the partial derivatives of the plastic multiplier \(\Delta\gamma\) with respect to \(\Delta\boldsymbol{d}\) to obtain the consistent tangent operator. To do so, we consider the incremental form of the consistency condition (see Appendix A in [1]), which in the case of slip-dependent friction \(f(d_{s}^{p})\) reads \[\frac{\partial\mathcal{F}}{\partial\boldsymbol{t}^{\prime}}\cdot\Delta\boldsymbol{t}^{\prime}+\frac{\partial\mathcal{F}}{\partial d_{s}^{p}}\cdot\Delta d_{s}^{p}=0.\] (B4) Using the previous equation in combination with equations (10), (11) and (13) in [1], one obtains \(\Delta\gamma\) as a function of \(\Delta\boldsymbol{d}\), and finally the sought partial derivatives \[\frac{\partial\Delta\gamma}{\partial\Delta d_{s}}=\frac{k_{s}\mathrm{sgn}\left(t_{s}^{\mathrm{trial}}\right)}{A},\text{ and }\quad\frac{\partial\Delta\gamma}{\partial\Delta d_{n}}=-\frac{f\left(d_{s}^{p}\right)k_{n}}{A},\] (B5) where \(A=f^{\prime}\left(d_{s}^{p}\right)t_{n}^{\mathrm{trial}}\mathrm{sgn}\left(t_{s}^{\mathrm{trial}}\right)-k_{s}\), with \(f^{\prime}\) the first derivative of the arbitrary function describing the dependence of friction on slip. Note that if the friction coefficient \(f\) is constant, we effectively recover the axisymmetric consistent tangent operator for Coulomb's friction presented in Supplemental Material 2 of [31].
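To make the collocation-point-level operations concrete, the following sketch implements the return mapping of equation (B1) together with the tangent block of equations (B3)-(B5). The linear slip-weakening law, the parameter values, the tolerance and all function names are illustrative choices of ours, not those of the actual solver of [1, 31].

```python
import numpy as np

def slip_weakening(delta, f_p=0.6, f_r=0.45, delta_c=1e-3):
    """Illustrative linear slip-weakening law: returns f(delta) and df/ddelta."""
    if abs(delta) < delta_c:
        return (f_p - (f_p - f_r) * abs(delta) / delta_c,
                -(f_p - f_r) / delta_c * np.sign(delta))
    return f_r, 0.0

def local_update(t_s_trial, t_n_trial, d_p_prev, k_s, k_n,
                 law=slip_weakening, tol=1e-12, max_iter=50):
    """Return mapping at one collocation point: solve (B1) for the plastic
    multiplier by Newton-Raphson, then build the 2x2 tangent block (B3)-(B5)."""
    s = np.sign(t_s_trial)
    f_prev, _ = law(d_p_prev)
    # Elastic trial state: no sliding if |t_s| <= f * t_n (positive compression).
    if abs(t_s_trial) - f_prev * t_n_trial <= 0.0:
        return 0.0, d_p_prev, np.diag([k_s, k_n])

    dgamma, d_p = 0.0, d_p_prev            # initial guess, as in the text
    for _ in range(max_iter):
        d_p = d_p_prev - dgamma * s        # updated frictional slip
        f, df = law(d_p)
        r = dgamma - (abs(t_s_trial) - f * t_n_trial) / k_s   # residual of (B1)
        if abs(r) < tol:
            break
        dgamma -= r / (1.0 - s * df * t_n_trial / k_s)        # Newton step

    # Consistent tangent block, equations (B3) and (B5).
    f, df = law(d_p)
    A = df * t_n_trial * s - k_s
    grad = np.array([k_s * s / A, -f * k_n / A])   # d(dgamma)/d(Delta d_s, Delta d_n)
    C_p = -np.outer(np.array([k_s * s, 0.0]), grad)
    return dgamma, d_p, np.diag([k_s, k_n]) - C_p
```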
### Some calculation and implementation details

We use the adaptive time-stepping scheme based on the rupture speed described in Supplemental Material 2 of [31]. The parameter \(\beta\) that controls the number of elements that the front advances during one time step is fixed at 2.5 for most simulations, which results in a front advancement of 2 to 3 elements per time step. To properly resolve the cohesive zone, we consider no less than 100 elements covering the elasto-frictional length scale \(R_{w}\). Verification tests for the numerical solver in the case of a constant friction coefficient were performed in [31]. Here, the solver is further verified for the slip-dependent friction case through the systematic match between the numerical solutions and the analytical asymptotic and approximate solutions derived in the main text for the different stages and regimes of the problem. With regard to numerical convergence, we consider that our Newton-Raphson scheme employed to solve every backward Euler time step converges when the relative increment of the \(\mathrm{L}^{2}\) norm of the displacement discontinuity vector (our primary unknown) between two consecutive iterations falls below \(10^{-4}\). On the other hand, the tangent mechanical system at each Newton-Raphson iteration is solved using a biconjugate gradient stabilized iterative solver (BiCGSTAB) with a tolerance set to \(10^{-4}\).

## Appendix C A note on the line-source approximation of the fluid injection and the marginally pressurized limit

In our model, we idealize the fluid injection as a line source. Such an approximation is, of course, valid for times \(t\gg r_{s}^{2}/\alpha\), where \(r_{s}\) is the characteristic size of the actual fluid source. This is shown graphically in figure C1a, where the line-source approximation is compared to the solution for a finite circular source of radius \(r_{s}\). The latter is calculated from the known solution in the Laplace domain (section 13.5, eq. 16, [35]), which we then invert numerically using Stehfest's method [77]. Figure C1a shows clearly how at large times the line-source and finite-source solutions become asymptotically equal at distances \(r\geq r_{s}\). In particular, the overpressure at the fluid source can be approximated at large times by simply evaluating \(\Delta p(r,t)\) (equation (1)) at \(r=r_{s}\). By doing so, the argument of the exponential integral function is very small, \(r_{s}^{2}/4\alpha t\ll 1\), and the overpressure at the fluid source can be asymptotically approximated as \[\Delta p(r=r_{s},t)\approx\frac{\Delta p_{c}}{4\pi}\left(-\gamma-\ln\left(\frac{r_{s}^{2}}{4\alpha t}\right)\right),\] (C1) where \(\gamma=0.577216...\) is the Euler-Mascheroni constant. Equation (C1) indicates that the overpressure at the fluid source increases logarithmically with time. This is further displayed in figure C1b, where the temporal evolution of \(\Delta p(r_{s},t)\) is plotted for both the line-source and finite-source solutions. From this figure, we observe that the line-source approximation is already quite accurate for times \(\alpha t/r_{s}^{2}\gtrapprox 10\). Furthermore, figure C1b shows that the characteristic overpressure \(\Delta p_{c}\) (equation (2)) is of the same order of magnitude as the overpressure at the fluid source for a wide range of practically relevant times. Consider, for instance, the case of geo-energy applications, where fluid injections are conducted through a wellbore of radius \(r_{s}\sim 10\) cm. Assuming plausible values of hydraulic diffusivity in the range \(10^{-5}\) to \(1\) m\({}^{2}\)/s, the characteristic time \(r_{s}^{2}/\alpha\) takes values from 1000 down to 0.01 seconds, which is much smaller than typical fluid injection durations in geo-energy applications. The large-time limit is therefore commonly satisfied. Note that we had already introduced \(\Delta p_{c}\) in a previous work [31] with the purpose of defining the marginally pressurized limit in a form that is more convenient for practical applications than in [1]. We recall that the marginally pressurized limit is defined by the condition that the overpressure at the fluid source \(\Delta p_{w}\) is just sufficient to activate fault slip, \(f_{p}\Delta p_{w}\approx f_{p}\sigma_{0}^{\prime}-\tau_{0}\). As we have seen, we can approximate \(\Delta p_{w}\) quite well through a line source, yet its magnitude is not constant but rather increases with time.
This increase is nevertheless logarithmically slow, so one could consider approximating \(\Delta p_{w}\) as constant and equal to the characteristic overpressure \(\Delta p_{c}\). Indeed, the pre-factor in the order-of-magnitude relation \(\Delta p_{w}\sim\Delta p_{c}\) is quite close to unity over a wide range of times (see figure C1b). We therefore enforce \(\Delta p_{w}\approx\Delta p_{c}\) with the aim of defining the marginally pressurized limit in a more practically convenient form: \(f_{p}\Delta p_{c}\approx f_{p}\sigma_{0}^{\prime}-\tau_{0}\). In this way, we essentially avoid introducing the length scale of the fluid source \(r_{s}\) into the problem, which, we think, would unnecessarily complicate the model and its practical applications. This subtle assumption is used throughout the main text. Moreover, because the so-called intensity of the injection is \(\Delta p_{*}=\Delta p_{c}/4\pi\), the factor \(4\pi\) is usually approximated by 10.
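The large-time behaviour discussed above can be checked with a few lines of code. The sketch below assumes the line-source solution \(\Delta p(r,t)=\frac{\Delta p_{c}}{4\pi}E_{1}\!\left(\frac{r^{2}}{4\alpha t}\right)\) (the solution referred to as equation (1) in the main text) and compares it at \(r=r_{s}\) with the logarithmic asymptote (C1); the parameter values are loosely inspired by the wellbore example above.

```python
import numpy as np
from scipy.special import exp1   # exponential integral E1

EULER_GAMMA = 0.577216

def dp_line_source(r, t, dp_c, alpha):
    """Line-source overpressure (dp_c / 4 pi) * E1(r^2 / (4 alpha t))."""
    return dp_c / (4.0 * np.pi) * exp1(r**2 / (4.0 * alpha * t))

def dp_asymptote(r_s, t, dp_c, alpha):
    """Large-time logarithmic asymptote (C1) evaluated at the source radius r_s."""
    return dp_c / (4.0 * np.pi) * (-EULER_GAMMA - np.log(r_s**2 / (4.0 * alpha * t)))

r_s, alpha, dp_c = 0.1, 1e-2, 1.0          # m, m^2/s, normalized overpressure
for t in (1e1, 1e3, 1e5):                  # seconds
    x = alpha * t / r_s**2
    print(f"alpha*t/r_s^2 = {x:8.1e}  E1: {dp_line_source(r_s, t, dp_c, alpha):.4f}"
          f"  asymptote: {dp_asymptote(r_s, t, dp_c, alpha):.4f}")
```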
2310.00104
Galaxy Distribution Systems as Fractals
This work tests if the large-scale galaxy distribution can be characterized as a fractal system. The $\Lambda$CDM cosmology with $H_0=(70\pm 5)$ km/s/Mpc is adopted to study the UltraVISTA DR1, COSMOS2015 and SPLASH surveys, alongside the number density equations of these galaxy distribution systems as fractals with dimension D. The relativistic distance definitions $d_L$, $d_Z$ and $d_G$ are used to estimate the galaxy number densities in the redshift interval $0.1 \leq z \leq 4$ at volume limited subsamples. Applying the appropriate relations for the description of galaxy fractal structures with single dimension $D$ in the relativistic settings to these surveys datasets it is possible to state that for $z<1$ the UltraVISTA DR1 galaxies presented an average of $D=(1.58\pm 0.20)$, the COSMOS2015 galaxies produced $D=(1.39\pm 0.19)$ and the SPLASH galaxies generated $D=(1.00\pm 0.12)$. For $1 \leq z \leq 4$ the dimensions respectively decreased to $D=(0.59\pm 0.28)$, $D=0.54^{+0.27}_{-0.26}$ and $D=0.83^{+0.36}_{-0.37}$. These results are robust under the Hubble constant uncertainty assumed here. Analysis of blue and red galaxies subsamples in the COSMOS2015 and SPLASH surveys show that the fractal dimensions of blue galaxies present essentially no alteration from the values above, although the ones for the red galaxies changed mostly to smaller values, meaning that D may be assumed as a more intrinsic property of the distribution of objects in the Universe, thus allowing for the fractal dimension to be used as a tool to study different populations of galaxies. All results confirm the decades old theoretical prediction of a decrease in the fractal dimension for $z>1$ suggesting that either there are yet unclear observational biases causing such decrease in the fractal dimension, or the galaxy clustering was possibly more sparse and the universe void dominated in a not too distant past.
Sharon Teles
2023-09-29T19:28:59Z
http://arxiv.org/abs/2310.00104v1
# Galaxy Distribution Systems as Fractals
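As a purely illustrative companion to the abstract above, the snippet below shows how a single fractal dimension \(D\) is commonly estimated from cumulative galaxy counts, assuming the usual relation \(N(<d)\propto d^{D}\) between counts and relativistic distance; the synthetic data and all names are our own and are not taken from the paper.

```python
import numpy as np

def fractal_dimension(distances, counts):
    """Fit D in N(<d) proportional to d**D by least squares in log-log space."""
    slope, _ = np.polyfit(np.log(distances), np.log(counts), 1)
    return slope

# Synthetic example: counts generated with D = 1.4 plus a little lognormal noise.
rng = np.random.default_rng(1)
d = np.logspace(1.0, 3.0, 20)                   # distances in arbitrary units
N = 5.0 * d**1.4 * rng.lognormal(0.0, 0.05, d.size)
print(f"estimated fractal dimension D = {fractal_dimension(d, N):.2f}")
```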
2305.20076
Decision-Oriented Dialogue for Human-AI Collaboration
We describe a class of tasks called decision-oriented dialogues, in which AI assistants such as large language models (LMs) must collaborate with one or more humans via natural language to help them make complex decisions. We formalize three domains in which users face everyday decisions: (1) choosing an assignment of reviewers to conference papers, (2) planning a multi-step itinerary in a city, and (3) negotiating travel plans for a group of friends. In each of these settings, AI assistants and users have disparate abilities that they must combine to arrive at the best decision: assistants can access and process large amounts of information, while users have preferences and constraints external to the system. For each task, we build a dialogue environment where agents receive a reward based on the quality of the final decision they reach. We evaluate LMs in self-play and in collaboration with humans and find that they fall short compared to human assistants, achieving much lower rewards despite engaging in longer dialogues. We highlight a number of challenges models face in decision-oriented dialogues, ranging from goal-directed behavior to reasoning and optimization, and release our environments as a testbed for future work.
Jessy Lin, Nicholas Tomlin, Jacob Andreas, Jason Eisner
2023-05-31T17:50:02Z
http://arxiv.org/abs/2305.20076v3
# Decision-Oriented Dialogue for Human-AI Collaboration ###### Abstract We describe a class of tasks called _decision-oriented dialogues_, in which AI assistants must collaborate with one or more humans via natural language to help them make complex decisions. We formalize three domains in which users face everyday decisions: (1) choosing an assignment of reviewers to conference papers, (2) planning a multi-step itinerary in a city, and (3) negotiating travel plans for a group of friends. In each of these settings, AI assistants and users have disparate abilities that they must combine to arrive at the best decision: assistants can access and process large amounts of information, while users have preferences and constraints external to the system. For each task, we build a dialogue environment where agents receive a reward based on the quality of the final decision they reach. Using these environments, we collect human-human dialogues with humans playing the role of assistant. To compare how current AI assistants communicate in these settings, we present baselines using large language models in self-play. Finally, we highlight a number of challenges models face in decision-oriented dialogues, ranging from efficient communication to reasoning and optimization, and release our environments as a testbed for future modeling work.1 Footnote 1: Code and data are available at [https://github.com/jlin816/dialop](https://github.com/jlin816/dialop). ## 1 Introduction Imagine that you are trying to book conference travel with the help of a digital assistant. Your choice of airline is flexible, but you'd rather avoid layovers, want to arrive a day or two before the conference begins, and would like to be able to check in to your hotel as soon as you arrive. Additionally, you're in charge of booking travel for a few of your colleagues, each of whom has their own preferences and budgets, some of whom will be flying in from different cities, but all of whom would like to arrive at roughly the same time and stay in a nearby area. Suddenly, you must manage and communicate about a combinatorial explosion of possible solutions. Similar optimization problems occur in many everyday situations. Consider consulting a friend about what computer they'd recommend with the best tradeoff of features for your use cases. Or trying to allocate funding from multiple grants to determine which students should work on which projects, while juggling what the individual priorities of each student might be. Or making strategic decisions with your colleagues about which projects your company will take on, in the context of market conditions, and who to hire to manage those projects. All these situations share an underlying decision problem in the face of uncertainty, where communicating and collaborating with others is often critical to arrive at the best solution. Difficult decision problems like these are precisely where AI assistants could shine. Automated systems can handle large amounts of information and complex computations much better than humans. For example, in cases like travel booking, they can quickly search over a large number of possible itineraries and compute total costs in a way that the average user cannot. They may also be able to efficiently reason under uncertainty about the expected value of decision-relevant information, helping them determine what information may be important to share with or request from the user. On the other hand, these decisions cannot be _fully_ automated either. 
AI assistants _complement_ the user's information and capabilities: people know their preferences and may have other knowledge external to the system, including knowledge about fuzzy real-world constraints that are difficult to formalize in a computer-readable format. To solve these problems, systems need to communicate with users, ideally with a flexible interface such as natural language. In this paper, we develop a challenging suite of decision problems, benchmark the abilities of current language models on these tasks, and release environments to encourage future work in this area. We begin by formalizing a class of tasks, _decision-oriented dialogues_, in which multiple agents must communicate in order to arrive at a joint decision. They are jointly rewarded according to the quality of the decision. Each agent starts out with different information: for example, the user knows their own travel preferences, while the AI assistant has a database of flight and hotel prices. Sharing their information allows them to better assess different travel plans. Critically, however, the large amount of information and (in some tasks) the combinatorial solution space make it unnatural and inefficient for assistants to communicate _all_ of their knowledge to users, or vice versa. Instead, agents must determine what their partners already know and what information is likely to be decision-relevant, asking clarification questions and making inferences as needed. Within this class of tasks, we present DialOp, a suite of environments with three everyday domains where humans and agents must collaborate in order to make complicated decisions. (1) In Optimization, two agents take on the role of conference area chairs, assigning reviewers to conference papers when each agent has only partial information about reviewer-paper similarity. (2) In Planning, an assistant with knowledge of a city must assist a human with building an itinerary based on their preferences. (3) In Mediation, multiple users must collaborate with an assistant in order to resolve group scheduling challenges. For each task, we specify an objective measure of utility based on the quality of the final decision. We first collect human-human dialogues on these tasks in order to establish a reference point for how humans naturally collaborate with each other. We then develop extensible environments for evaluating language models on each task, with support for tool use and chain-of-thought prompting. We use these environments to benchmark the relative performance of GPT-3 Brown et al. (2020), both in self-play and in a novel evaluation procedure known as _prompted self-play_, in which AI agents complete partial human dialogues. We then identify several common failure modes of GPT-3 and provide analyses of self-play dialogues. We release all dialogues, environments, and interfaces for human data collection in order to encourage future work that addresses these challenges.
Figure 1: Overview of the three collaborative dialogue tasks that we consider. In Optimization, two agents with symmetric access to information play the role of area co-chairs assigning reviewers to conference papers. In Planning, an assistant must collaborate with a user in order to help them plan an itinerary. In Mediation, an assistant must chat with multiple separate users in order to help them resolve a group scheduling problem.

## 2 Task Formulation

We formalize a _decision-oriented dialogue_ (DoD) as a multi-agent problem consisting of a set of agents, an underlying world state \(W\), each agent's partial and possibly noisy observation \(O_{i}\), a set of legal messages \(m\in\mathcal{M}\) (analogous to actions in an MDP), a reward function over decisions \(R\) with parameters \(\theta\), and a communication cost function \(C\). The goal of a decision-oriented dialogue is to find a decision that maximizes \(R\) while minimizing the communication cost function \(C\). \(W\) remains fixed throughout the dialogue. Our problem can be thought of as a decentralized partially observable Markov decision process (Dec-POMDP; Bernstein et al. (2000)) where the actions are "cheap talk" and formal decision messages. An agent \(i\)'s policy \(\pi_{i}\) maps its known information \(O_{i}\) and the dialogue history \(\{m_{1},\ldots m_{t-1}\}\) to a new message \(m_{t}\): \(\pi_{i}(m_{t}\mid O_{i},\{m_{1},\ldots m_{t-1}\})\). Agents take turns sending messages by sampling from their policy. Messages may specify a recipient if the number of agents > \(2\), and are expressed in natural language except for three special formal messages: a proposed decision, a formal acceptance of a decision, and a formal rejection. If an agent sends a proposed decision message and all other agents respond with a formal acceptance, the dialogue ends. When formal proposal decisions are sent, agents may additionally receive noisy observations of the reward of that decision (functions of the reward \(f(R_{\theta}(\cdot))\)). They can use these observations to make inferences about \(W\) and \(R\), and to decide how to respond. Otherwise, the only observations they receive throughout the dialogue are the messages from the other agents.2 Footnote 2: In general, the formalism does accommodate settings where an agent can pay to acquire new observations during the dialogue. Simply create other agents that have access to those observations (e.g., sensors), and assign a high cost to communicating with those agents. To illustrate the information in a DoD, consider the task of planning a travel itinerary that satisfies a user's preferences (Planning, as shown in Figure 1, middle). We represent the underlying world state as a weighted graph \(W\) = \((V,E,w)\) whose vertices are potential destinations. A decision is a path \(W^{t}\) in \(W\), representing the itinerary. Higher-weighted paths are better and the agents must communicate to improve their knowledge of the edge weights. In general, we represent the world state \(W\) as a weighted graph and the possible decisions as subgraphs \(W^{t}\) that satisfy task-specific constraints. Edges and vertices in \(W\) have weights \(w(e_{ij}),w(v_{i})\) that represent rewards (which may be negative) for including them in \(W^{t}\). The optimal decision for this world state is a subgraph \(W^{t}\)\(\subseteq\)\(W\) that maximizes the reward \[R_{\theta}(W^{t})=\sum_{v\in W^{t}}w(v)+\sum_{e\in W^{t}}w(e) \tag{1}\] In principle, the reward function could be any function of \(W^{t}\), but we focus on the linear objective (1). For most practical tasks, the constrained optimization problem could then be expressed as an integer linear programming problem and solved using standard algorithms. We assume edge and vertex weights are determined by their features, represented by feature vectors \(\phi(\cdot)\in\mathbb{R}^{k}\), so that: \[\begin{split} w(v_{i})&=\theta^{T}\phi(v_{i})\\ w(e_{ij})&=\theta^{T}\phi(e_{ij})\end{split} \tag{2}\] where \(\theta\) is a preference vector.
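As a concrete illustration of equations (1)-(2), the sketch below scores a candidate decision subgraph by summing \(\theta^{T}\phi(\cdot)\) over its chosen vertices and edges; the toy feature vectors and names are our own illustration, not data from the DialOp environments.

```python
import numpy as np

def decision_reward(theta, vertex_features, edge_features, chosen_vertices, chosen_edges):
    """Linear reward of eqs. (1)-(2): sum of theta . phi over the chosen subgraph."""
    r = sum(theta @ vertex_features[v] for v in chosen_vertices)
    r += sum(theta @ edge_features[e] for e in chosen_edges)
    return r

# Toy itinerary with k = 2 feature dimensions (e.g., "enjoyment", "travel time").
theta = np.array([1.0, -0.5])                                 # user preference vector
vertex_features = {"museum": np.array([0.8, 0.0]),
                   "park":   np.array([0.6, 0.0])}
edge_features = {("museum", "park"): np.array([0.0, 0.4])}    # 0.4 hours of travel
print(decision_reward(theta, vertex_features, edge_features,
                      ["museum", "park"], [("museum", "park")]))   # 1.2
```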
The form of \(R\) is common knowledge, but the world state \(W\)--in particular the feature vectors and the preferences \(\theta\)--is only partially observed by each player. Therefore, crucially, players must exchange messages in order to reduce their respective uncertainties about the optimization problem. However, there is a cost to communicating (e.g., time or effort), which agents must trade off with their desire to achieve a good decision. Thus, the overall objective function for a DoD is: \[\max_{W^{t}\subseteq W,\mathbf{m}} R_{\theta}(W^{t})-\sum_{t}C(m_{t})\] (3) subject to _task-specific constraints on \(W^{t}\)_ In the following sections, we introduce three everyday domains with collaborative decision-making and show how they can be formalized as DoD tasks in our benchmark. ### Optimization Our first task is an idealized bipartite matching problem, motivated by the scenario of conference organizers assigning reviewers to submitted papers (Figure 1, left). Although reviewer matching is sometimes completely automated via approaches like the Toronto Paper Matching System (TPMS; Charlin and Zemel, 2013), organizers often have incomplete and partially-overlapping knowledge about which reviewers fit which papers. Further, fit cannot necessarily be described on an absolute scale, so when working together on an assignment, organizers must discuss relative edge weights ("Alice would be a better choice than Bob for paper 8"). TPMS could in principle be replaced by an AI agent that joins this dialogue as an additional participant. We consider a simplified version of this problem in which two agents must select a one-to-one correspondence between reviewers and papers. We represent \(W\) as a bipartite graph and restrict valid proposals \(W^{\prime}\subseteq W\) to be bipartite matchings. Edge weights represent reviewer-paper affinities, and each agent observes some subset of these weights. A fuller version of this setting would derive the edge weights from features of the papers and the reviewers (footnote 4 below). This would make communication more interesting, but the underlying optimization problem would remain one of maximum weighted bipartite matching. ### Planning Next, we consider the scenario in which a user is planning an itinerary in a city with the assistance of a travel agent (Figure 1, middle). While existing systems can assist with parts of travel such as recommendation or booking, they often expect users to provide close-to-full specifications of their requests, rather than working toward a solution together with an assistant (although cf. SS8 for a discussion of mixed-initiative dialogue). Ideally, systems would be able to assist us in the comprehensive way a human travel agent would: starting with an under-specified set of "things we'd like to do," comprehensively exploring multi-day itineraries based on the user's preferences and domain knowledge, and iteratively refining the plan with the user based on feedback. We formalize a small version of this problem as a DoD task where the assistant must plan an itinerary of several sites for a user. The user has preferences about which sites to visit, a budget, and a preference for reducing travel time. Meanwhile, the assistant has access to a database of sites, along with information about their cost, location, and amenities (e.g., outdoor eating). We construct \(W\) as a fully-connected graph over the locations, where edge weights represent travel times (and the preference over edge weights is negative). 
Unlike reviewer matching, this task exhibits asymmetry of information: the assistant has information about vertex features and edge weights, while the user only has information about their own preference vector \(\theta\). Due to the budget constraint, the prescribed itinerary length, and the preference to minimize travel, this domain involves aspects of the knapsack problem, subset-selection problems, and the traveling salesman problem. ### Mediation Finally, we introduce a coordination scenario where the assistant serves as the role of mediator between multiple users (Figure 1, right). The users are attempting to book flights from their respective cities to all arrive at some shared destination at around the same time, e.g., to meet up for an event or vacation. It is often difficult to negotiate individual constraints and consider all the configurations efficiently. AI assistants may be more suited to guide the group toward a good joint solution, by helping users find options that will work well with the choices of other users as well as their own needs. We assume that the \(n\) users only coordinate through the single assistant.3 In the task, each user wants to choose a flight that is inexpensive and avoids conflicts with the user's calendar commitments, but that arrives close to the arrival times of other players. The assistant has access to each user's flight options and work calendar, but doesn't observe the user's personal calendar, nor the user's preferences about which meetings are important. In the underlying optimization problem, the world state \(W\) can be modeled as an complete \(n\)-partite graph, where the vertices associated with each user are their flight options. Any two flights for different users are connected by an edge, whose weight indicates how compatible the flights are (i.e., whether they arrive at similar times). Vertex weights are derived from the users' calendars, with important meetings creating a preference against flights (vertices) that conflict with them. The goal is to select a flight for each user so that the induced subgraph \(W^{\prime}\) (with \(n\) vertices and \(\binom{n}{2}\) edges) has high total weight. Footnote 3: Users in such a setting could learn about one another through talking to the assistant. Thus, such systems in practice should also manage privacy issues, which we ignore here. ## 3 The Dial0p Environments To instantiate each of these tasks, we release Dial0p, an open-source suite of decision-oriented dialogue environments. Dial0p environments can be used to evaluate models in self-play as in SS6.1, as an underlying API to build human user interfaces for data collection as in SS4, or to evaluate models in collaboration with humans. While other collaborative or task-oriented dialogue tasks are typically evaluated on coarse metrics such as success rate (did the system accomplish the user's goal?) (Li et al., 2016), the reward in a decision-oriented dialogue provides a _graded_ measure of communication success: how close to optimal is the final decision? This in turn provides signal on whether models are capable of asking the right questions, sharing the right information, and coordinating efficiently with the user so they can agree on the best course of action--in addition to simply understanding the user's utterances. 
In contrast to other dialogue tasks where evaluation is based on supervised datasets, our environments are also _procedurally generated_: the parameters of the underlying decision problem can be randomized to instantiate new dialogue contexts. Agents interact with the environment with an OpenAI Gym-like interface (Brockman et al., 2016). Agents send messages to the environment and receive messages from other players and any additional observations back. Before each message, agents must output a message type ([message], [propose], [accept], or [reject]), which the environment parses to determine how to interpret the message. Messages are forwarded to other agents. Proposals are parsed and scored; on the next turn the only valid actions for the other agents are [accept] and [reject]. Formal rejections clear the current proposal, and formal acceptances terminate the dialogue. Below, we describe how the environments implement each of the decision domains we introduce. OptimizationIn this task, agents must find the best assignment of \(k\) reviewers to \(k\) papers. For each game, we sample a random table of reviewer-paper affinity scores (edge weights). Each cell is shown to each player with probability \(p_{\text{observed}}\), so that a given cell may be shown to just one player, to both, or to neither. The initial observations \(o_{0}\) for each player are their observed table values.4 In our data collection and experiments we use \(k=8\), \(p_{\text{observed}}=0.4\). To discourage reviewers from communicating affinity scores in the form of numbers Figure 2: Data collection and evaluation frameworks. In order to collect human-human dialogues, we built web interfaces which allow humans to play either the user or assistant role for each task. When evaluating language models in self-play, we linearize information from the interface into a text prompt and provide additional tools which allow language models to access information which cannot fit within their context windows. which would not be natural in the real-world version of this scenario--we scale all scores shown to each player by a random positive constant, so that they are not comparable across agents but can still be discussed in relative terms such as "X is much better than Y." Agents take turns sending messages. Either agent is allowed to propose a matching at any point. If the other agent accepts on the next turn, the game ends; otherwise, the proposal is taken off the table and agents continue. The final reward is the sum of edge weights in this matching, normalized by the value of the best matching with the agents' pooled knowledge, computed as an expectation with a uniform prior over values so that rewards are in \([0,1]\). PlanningIn this task, an assistant and a user must book an itinerary of \(k\) sites that best satisfies the user's preferences. For each game, we procedurally generate sites (e.g., restaurants, parks, museums) with randomized features such as cuisine type or expected price range. We also procedurally generate a set of \(s\) preferences for the user and random preference weights \(\theta\) representing how much the user cares about each preference. To simulate the fact that people cannot quantify their actual preferences on an absolute scale, the user only observes natural language descriptions of their preferences, without the numerical preference weights. Only the assistant observes the inventory of sites and their features, while only the user observes their preferences. 
In our data collection and experiments we use \(k=3,s=10\). The assistant and the user take turns sending natural language messages. The assistant can propose a complete or partial itinerary at any point. This proposal's reward (while unknown to the assistant) is automatically computed for the user's convenience, including a breakdown that shows the contributions to the reward from each site, travel times, and budget constraints. With this information, the user can make judgments about aspects of the itinerary (e.g., that it is worth spending extra travel time to visit a particularly desirable site) and determine whether to accept the proposal. The game ends when the user accepts a full itinerary of \(k\) sites. The final reward is the score of the itinerary, range-normalized by the scores of the best and worst possible \(k\)-site itineraries. MediationIn this task, two users and one assistant must book the best flight for each user that satisfies their individual preferences, while being close to each other. On each game, the environment generates a random set of personal calendar events, work calendar events, and importance weights for each event indicating how important it is. The environment also generates a list of flights for each user, each with randomized features for price, arrival time, and departure time. The user observes their own personal and work calendar and flight set, while the assistant observes the work calendars and flight sets of _both_ users (but not their personal calendars). Additionally, the assistant does not observe the importance of each meeting, so it must communicate with the user to determine which events can be missed for the flight. When the assistant proposes a flight to a user, the user observes the score breakdown in terms of missed meetings, price, and closeness to the other user's flight (when known). The game ends when all users accept the assistant's proposals. The final reward is the sum of their scores, range-normalized by the scores of the best and worst pairs of flights. ## 4 The DialOp Dataset In order to study the communication strategies used by humans and establish baseline performance numbers for each task, we collected a set of human-human dialogues. For each task, we built a multi-player online interface and collected high-quality human-human dialogues using a mixture of Amazon Mechanical Turk and in-house Microsoft data annotators, resulting in a total of 409 dialogues, consisting of 5253 messages and over 58K words across domains. Human players take a median time of 8min 19sec across tasks. Humans achieve an average of roughly 90% of the maximum possible score on both the optimization and planning domains, and close to 100% performance in the mediation domain. We report additional dataset statistics in Table 2 in the appendix. In each task, each annotator played the role of an assistant or user. For ease of play, annotators were not required to take turns, but used a chat interface where they could send a message at any time. Consecutive messages from the same annotator were concatenated into a "turn." Although real-world users know their own preferences, our annotators are emulating users that we have generated programmatically, so we must tell them what their preferences are. This setup gives us full knowledge of user preferences so that we can objectively evaluate the quality of the decision. 
We simulate the fact that internal preferences may be comparative or fuzzy by scaling numerical values (in Optimization) or not showing numerical values until a proposal is presented. This design encourages realistic behavior in the dialogues: it is easier to make comparisons between travel itineraries and point to specific aspects you like and dislike, rather than to fully specify an itinerary you would like. As depicted in Figure 2 for Planning, humans had access to the same information as models receive in the task, but presented in a graphical user interface (UI) rather than purely in text:

**Optimization.** Both annotators see a spreadsheet with their scaled known table values. They can click on cells in the spreadsheet to make a proposal.

**Planning.** The human assistant sees a map of all the locations, allowing them to visually estimate distances. They can fill in events into a proposed itinerary, which auto-calculates the exact distances. They can click on a site to see its features or filter sites on the map with checkboxes and sliders. The user initially only sees a plain-text list of their travel preferences (e.g., "like seafood, Japanese") without the preference weight values. When the assistant sends a proposed (partial or full) itinerary, the user sees the features of the proposed sites and a scorecard breaking down the total score by event, travel distance, and budget.

**Mediation.** Users see a three-day calendar with events and a list of flights with times and prices. Events are labeled with a numerical value for their importance. The human assistants see the calendars and flight lists for both users. When the assistant makes a proposal to one or both users, they see the proposed flight overlaid on their calendar and a scorecard breaking down the total score with the penalty for missing calendar events, arriving at a different time from the other user, and flight price.

For more details on the data collection setup and interface screenshots, refer to the appendix. We also release the code to run the UIs for the tasks.

Figure 3: An annotated example of a human-human dialogue and a model-model self-play dialogue with GPT-3 in Planning. While humans generally exhibit diverse and flexible strategies and reach good solutions, self-play dialogues tend to be repetitive, and the assistant makes mediocre proposals and often hallucinates. We discuss more analysis in §7.

## 5 Baseline Models

We believe that AI agents for decision-oriented dialogue will benefit from incorporating explicit reasoning over possible world states and possible decisions. However, as a baseline approach, this paper evaluates few-shot prompted LLMs as the AI agents. These have the benefit that they can attempt a wide variety of dialogue interactions without the need for domain-specific training or modeling. In particular, we focus our evaluations on the instruction-tuned GPT-3 model known as text-davinci-003 (Brown et al., 2020; Ouyang et al., 2022). For Optimization, we prompt with two human-human dialogue examples from the dataset; for the others we prompt with one, due to context length limitations. If models fail to generate a valid message (e.g., the user simulator model attempting to send proposals), we append the generated message to the prompt, along with any error message from the game, and continue generating, allowing the model to revise its previous generation. Below, we describe how models are prompted with the information for each task. Refer to Appendix E for the full prompts.
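Before turning to the per-task prompts, a minimal sketch of the error-feedback retry loop just described; `llm_complete` and `env.try_step` are hypothetical stand-ins for the completion call and environment interface.

```python
# Sketch of the retry mechanism: append the invalid message and the game's error
# message to the prompt, then let the model revise its generation (hypothetical API).
def generate_valid_message(llm_complete, env, prompt, speaker, max_retries=3):
    for _ in range(max_retries):
        message = llm_complete(prompt)
        observation, reward, done, error = env.try_step(speaker, message)
        if error is None:
            return message, observation, reward, done
        prompt = prompt + message + "\nError: " + error + "\n"
    raise RuntimeError("Model failed to produce a valid message.")
```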
**Optimization.** Both players see a partial table of weights matching reviewers and papers for this task. We prompt the model with the linearized table, formatted as a CSV.

**Planning.** For the user simulator model, we prompt with the natural language list of travel preferences as the context. The agent has access to a database of sites with features. We take a modular tool use approach, where the agent model accesses information in the database by writing search queries rather than conditioning directly on the database itself. The search queries are executed by a _query executor_ model that conditions on the database and generates the result for the new query. We hand-write several example queries in a simple domain-specific language where the agent can return specific fields (e.g., name, category, price) of a site, filter over fields, sort_by field values (including distance_to another destination), and search by text_query in freeform natural language. While the DSL examples guide the set of searches the agent can perform, the query executor can generalize to new searches beyond the demonstrations. We augment the 1-shot example in the agent's prompt with examples of DSL queries and their results throughout the dialogue, and provide the query executor with query and result examples. Delegating searches over the database to the query executor alleviates context length restrictions and allows the agent model to filter for relevant information from the database with an abstracted query layer. Future approaches may consider using larger context length models and directly conditioning on the database. This task requires particularly complex reasoning to search based on the dialogue (on the agent side) and decide whether to accept an itinerary based on the scores (on the user side). We also augment the dialogues in the user and agent prompt with [think] steps such as "I am losing the most points from the travel time between events. I should reject the proposal..." based on ReAct (Yao et al., 2022) to provide the model with reasoning examples.

**Mediation.** Each user can see their set of flights, private calendar, and shared work calendar, while the agent can see flights and shared calendars (without event importance values) for both players. We prompt models with the list of all flights and calendar events. The environment allows the agent to talk to either player; generally, deciding which user to talk to is itself a strategic decision. We adopt a simple turn-taking strategy where we iterate round-robin through all players; on the agent's turn, they are prompted with "You to" and choose which user to send the message to by generating either 0 or 1 (e.g., "You to 0").

## 6 Evaluation

In this section, we compare the performance of humans and AI agents on our tasks. While we are ultimately interested in how well AI agents can perform in collaboration with human partners, we introduce two automatic evaluation setups which serve as proxies for human evaluation. Our experiments aim to understand: **(1)** how well do current models perform in decision-oriented dialogues (as evaluated in self-play; §6.1) and **(2)** how well can models comprehend human dialogues, as a proxy for eventual collaboration with real people (as evaluated in prompted self-play; §6.2)?

### Self-Play

First, we evaluate how well models can collaborate with each other in self-play. We prompt each model with the private knowledge for a player.
On each step of the environment, we generate from the model whose turn it is (assistant or user simulator(s)) and append the output message to both models' contexts. We repeatedly generate in this way until a proposal is made and accepted. In Figure 4, we show human-human and model-model scores against the number of words in the dialogue. For a fair comparison, we prompt models with the same randomly generated instances as the human-human dialogues in the evaluation dataset, although future agents can also be evaluated on new random instances generated from the environment. In gray, we show the performance of a naive rule-based baseline that selects a random proposal from the set of all possible proposals. Compared to humans, models tend to have longer dialogues _and_ achieve less optimal solutions. Models significantly outperform the baseline on both the itinerary planning and mediation tasks but do slightly worse than random chance on the reviewer matching task, signaling that they struggle with its underlying optimization problem. These results suggest that models have yet to close the gap to human performance in communicating efficiently to collaborate on good solutions.

### Prompted Self-Play

Even agents that perform well in self-play may not perform well in collaboration with humans (Carroll et al., 2019). This disparity exists because humans often use different and more diverse strategies than artificial agents, particularly if agent strategies arise from explicit optimization of an objective. To bridge this gap, we propose a new mode of automatic evaluation known as _prompted self-play_ (PSP), in which dialogues are initialized with the prefix of a human-human dialogue and then continued by the model. Given a human-human dialogue from our dataset, we test how models perform if they are provided with 50% of the dialogue, 75% of the dialogue, and everything except the final proposal, and then complete the rest of the dialogue via self-play. PSP tests additional capabilities beyond self-play: in PSP, the dialogue history contains information that the human-human pair has talked about already, making it easier to find good solutions _if_ models are able to understand and reason over the information to make a proposal. Additionally, models must perform some degree of belief modeling about what the human being simulated already knows in order to communicate efficiently; for example, models ought to avoid asking about information already implied by previous utterances. Finally, prompting in this way encourages models to complete dialogues "in the style" of the human-human pair in the prefix. As a result, PSP both tests whether models can flexibly continue dialogues demonstrating different strategies (e.g., with one agent taking most of the initiative), and whether assistants can collaborate with a diverse range of humans, similar to population play and fictitious self-play evaluation (Jaderberg et al., 2019; Strouse et al., 2021).

Figure 4: Self-play scores and dialogue lengths in words, compared to human-human dialogues. Models achieve lower scores on average, and also tend to have longer dialogues. Marginal distributions for the # words and score are shown as histograms, and the average score of a randomly selected proposal is shown for each task as a dashed gray line. Mean and SEM numbers can be found in Table 1. We bias models to output dialogues that are approximately the same length as the corresponding human-human dialogue (cf. Appendix E).
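A minimal sketch of the self-play loop described above, which also covers prompted self-play (PSP) by seeding the transcript with a human-human prefix; all names are hypothetical and the real evaluation code may structure prompts differently.

```python
# Sketch of self-play: alternate between the assistant and user simulator(s),
# appending every message to a shared transcript until a proposal is accepted.
# `llm_complete`, `env`, and the prompt layout are hypothetical placeholders.
def self_play(env, llm_complete, private_prompts, human_prefix=""):
    transcript = human_prefix                    # non-empty for prompted self-play (PSP)
    done, reward = False, 0.0
    while not done:
        speaker = env.current_speaker()          # whose turn it is (round-robin in Mediation)
        prompt = private_prompts[speaker] + transcript + f"\n{speaker}:"
        message = llm_complete(prompt)
        transcript += f"\n{speaker}: {message}"  # every model conditions on the full transcript
        observation, reward, done = env.step(speaker, message)
        if observation:
            transcript += f"\n[environment]: {observation}"
    return transcript, reward
```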
Figure 5 shows average PSP performance for each task. In Planning, models perform better with additional human data in the prompt, suggesting that they are at least partially capable of integrating information from the human-human prefix. However, there is a substantial gap between the _proposal_ condition and human-human dialogue scores, indicating that models struggle to perform the final optimization step of choosing the best solution given the entire dialogue history. Meanwhile, in Optimization, models fail across all PSP conditions; this occurs because the final step of the reviewer matching game involves integrating the discussed values to compute a bipartite matching, which is difficult for models. Finally, in Mediation, models score well above a random baseline in all PSP conditions but do not perform better with additional human-human dialogue context, suggesting that they can meaningfully communicate about the task but do not make the optimal final proposal. In the future, tool use could potentially greatly improve performance on this task, particularly with tools that can specifically handle the optimization part of the problem.

## 7 Analysis

In order to quantify the strategies used in human-human dialogues, we used GPT-3 to annotate dialogues at the level of individual messages. Based on manual inspection of a small set of games, we devised a list of message types: (1) _share_, in which agents provide information about their preferences; (2) _query_, in which agents ask each other for information; (3) _affirm_, in which agents agree with each other and/or ground incoming messages; (4) _explain_, in which agents provide justification for a previous message or action; (5) _meta_, in which agents engage in discussion about high-level strategies or meta-game details; (6) _revise_, in which agents correct earlier statements; or (7) _miscellany_, which includes other messages such as greetings. Each message may have multiple message types. We prompted GPT-3 to generate message annotations for each of the 5253 messages using two hand-annotated example dialogues. We provide additional details and data statistics in the appendix.

Most dialogues are focused on exchanging information: of the message types, we find that human agents most commonly _share_ or _query_ for information. In the Optimization game, agents send twice as many _share_ messages as any other type of message, often sending information about individual cells in their observed tables. One strategy used by humans involves both players sharing all observed information and then making a decision at the end of the game. This strategy is most tractable in the Optimization game, where players have a relatively small observation space. However, this strategy leads to exceptionally long dialogues, even in Optimization, and is not the most common approach. Meanwhile, in the Planning and Mediation games, which have asymmetric information and roles, agents are more likely to _query_ for information or engage in _meta_-game discussion in order to learn what information the other agent can see. Agents must still _share_ information, but assistants for both of these tasks have access to an exceptionally large amount of information which cannot be fully shared with the users.

Figure 5: Prompted self-play results for all three tasks, compared to human results. For each setting, we initialize dialogues with 50% and 75% of a corresponding human game and let GPT-3 complete the dialogue. In the _proposal_ setting, we prompt the model with an entire human dialogue except for the final proposal and force the model to end the game immediately. The average score of a randomly selected proposal is shown for each task as a dashed gray line. (*) For reference, we show the mean score of models in self-play, although we note that they are not prompted to end the dialogue at a particular length like the other PSP conditions.
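The message-type labels discussed above were produced by few-shot prompting; the sketch below illustrates such an annotation pass. The prompt wording is illustrative and not the exact prompt used, and `llm_complete` is a hypothetical completion function.

```python
# Sketch of few-shot message-type annotation with an LLM (hypothetical prompt and API).
MESSAGE_TYPES = ["share", "query", "affirm", "explain", "meta", "revise", "miscellany"]

def annotate_dialogue(llm_complete, example_annotated_dialogues, dialogue_messages):
    header = ("Label each message with one or more of: "
              + ", ".join(MESSAGE_TYPES) + ".\n\n"
              + "\n\n".join(example_annotated_dialogues))   # two hand-annotated dialogues
    labels = []
    for message in dialogue_messages:
        prompt = f"{header}\n\nMessage: {message}\nLabels:"
        raw = llm_complete(prompt)
        labels.append([t for t in MESSAGE_TYPES if t in raw.lower()])
    return labels
```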
We also provide a breakdown of message types over the time-course of dialogues in Figure 6. As expected, many interactions begin with greetings, which is evidenced by a spike in the _miscellany_ category at the beginning of all three plots. In the Planning and Mediation tasks, agents are more likely to _query_ at the beginnings of games and then respond with _share_ messages shortly afterward. Finally, _affirm_ messages, although rare, are most likely to appear at the end of dialogues, once common ground has been established.

Qualitatively, we show a human-human dialogue side-by-side with a self-play dialogue in Figure 3. We generally observe across the human dialogues that human-human pairs exhibit diverse strategies in (1) **user-agent initiative**: in some dialogues, users are proactive in sharing relevant information, while in others agents make directed queries to narrow down the set of proposals; and (2) **coordination strategies**: working incrementally from partial proposals, backtracking, and more. In self-play dialogues, current LLMs are capable of carrying on natural dialogues that partly address the user's preferences and find good solutions. However, they generally tend to be formulaic and repetitive, and hallucinations are a problem, as with other tasks involving language models. Critically, models ask general questions such as "Do you have any other preferences?" and sometimes slightly more specific ones such as "Do you have a price point?", but the questions are not _goal-directed_ in eliciting decision-critical information. In contrast, human assistants ask questions that help them decide between proposals or narrow down the search space. Finally, models fail to do the optimization step of the proposal (as supported by our PSP results): proposals are often only slightly better than random, and do not improve drastically over the course of the dialogue. This suggests that our task targets many of the critical capabilities missing from current models, such as reasoning, asking clarification questions, and grounding to external sources, as well as persistent failure modes such as hallucination.

## 8 Related Work

**Task-Oriented Dialogue.** Our work may be viewed as an extension of task-oriented dialogue, where a system must assist a user with accomplishing a goal, such as hotel booking or calendar scheduling [1, 1, 16]. Most task-oriented dialogue settings involve helping a user who is seeking out a specific piece of information ("what is a vegetarian Italian restaurant nearby?") or wants to take an action ("change my flight to Tuesday"). Systems are typically evaluated with coarse metrics such as success rate (e.g., at returning the right hotel information requested by a user) or word overlap with human-human dialogues. In contrast, our tasks are grounded in underlying optimization problems, where the quality of the final solution provides a richer measure of communicative success. All agents must engage in information-seeking and understand intents in the course of a dialogue decision problem, but furthermore have to _take initiative_ to share and query information to collaborate on a good solution.
In this sense, our work is more similar to early work on task-oriented dialogue in mixed-initiative settings [15, 16] such as TRAINS [17] and TRIPS [17], in which users had to collaborate with a computer agent in order to solve planning problems such as train routing. Our task includes many similar design elements but is aimed at building general dialogue systems without the significant domain-specific engineering that went into projects like TRAINS and TRIPS.

Figure 6: Kernel density estimates of message types in human-human dialogues plotted against their position within a dialogue. Message types were automatically annotated using few-shot prompting with GPT-3.

**Grounded Dialogue.** Another class of dialogue tasks comprises grounded dialogue settings such as Cards (Potts, 2012; Vogel et al., 2013), CerealBar (Suhr et al., 2019), MutualFriends (He et al., 2017), and OneCommon (Udagawa and Aizawa, 2019), where agents communicate in a game-like setting to achieve a goal. These tasks are often situated in a multimodal environment with visual elements or external knowledge. Our task also has many of these elements, but we focus on domains with everyday optimization problems where successful communication could be useful to people. Our work also shares elements in common with negotiation dialogue tasks such as Deal or No Deal (Lewis et al., 2017) and Craigslist Bargaining (He et al., 2018), but we focus on cooperative scenarios in which all agents share the same objective.

**Large Language Models.** Our goal of building task-general dialogue agents motivates the use of large language models (LLMs) such as GPT-3 (Brown et al., 2020; Ouyang et al., 2022), PaLM (Chowdhery et al., 2022), or LLaMA (Touvron et al., 2023). Recent work has focused on using language models as dialogue agents, including OpenAI's ChatGPT, Microsoft's Sydney, Anthropic's Claude, and Google's LaMDA (Thoppilan et al., 2022) and Bard. Current-era language models are known to struggle with aspects of our tasks, such as mathematical reasoning (Hendrycks et al., 2021), explicit state tracking (Li et al., 2021), pragmatics (Fried et al., 2022), and theory of mind (Sap et al., 2022). However, recent work in scratchpad prompting (Nye et al., 2021), chain-of-thought reasoning (Wei et al., 2022), and external tool use (Schick et al., 2023) has sought to address these problems. We build baseline models with similar approaches in our setting. While LLMs can perform reasonably well in some of our settings, we show that they cannot consistently handle dialogues with complex decision problems as well as humans.

**Human-AI Collaboration.** Our task may also be viewed as a cooperative multi-agent setting (Dafoe et al., 2020). Research in human-AI collaboration and multi-agent reinforcement learning has also formalized tasks that require collaborating strategically with other agents on a shared goal, through tasks such as Overcooked (Carroll et al., 2019), Hanabi (Bard et al., 2020), and Diplomacy (Bakhtin et al., 2022). Our evaluation methodology is adapted from these tasks, where methods like population play and fictitious self-play are often used as proxies for human evaluation in addition to self-play (Heinrich et al., 2015; Strouse et al., 2021). In human-AI collaboration, cooperative tasks have been formulated in game-theoretic terms where agents use signals from the user such as demonstrations, feedback, or language (Jeon et al., 2020; Lin et al., 2022) to explicitly optimize for assistive behavior (Hadfield-Menell et al., 2016; Sadigh et al., 2016).
In our work, we are similarly interested in formalizing settings where agents should explicitly optimize for human assistance in the course of dialogue. ## 9 Conclusion In this paper, we presented data, environments, and model baselines for a class of tasks we call _decision-oriented dialogues_. Across all task settings, current-era language models did not perform as well as humans, suggesting failures in their ability to communicate efficiently and reason in structured real-world optimization problems. Future modeling work in this domain may seek to integrate tools and inference techniques which would allow language models to compute optimal decisions for these types of problems while maintaining their flexible communication and collaboration skills.
2309.06402
Expressive dynamics models with nonlinear injective readouts enable reliable recovery of latent features from neural activity
The advent of large-scale neural recordings has enabled new methods to discover the computational mechanisms of neural circuits by understanding the rules that govern how their state evolves over time. While these \textit{neural dynamics} cannot be directly measured, they can typically be approximated by low-dimensional models in a latent space. How these models represent the mapping from latent space to neural space can affect the interpretability of the latent representation. We show that typical choices for this mapping (e.g., linear or MLP) often lack the property of injectivity, meaning that changes in latent state are not obligated to affect activity in the neural space. During training, non-injective readouts incentivize the invention of dynamics that misrepresent the underlying system and the computation it performs. Combining our injective Flow readout with prior work on interpretable latent dynamics models, we created the Ordinary Differential equations autoencoder with Injective Nonlinear readout (ODIN), which captures latent dynamical systems that are nonlinearly embedded into observed neural activity via an approximately injective nonlinear mapping. We show that ODIN can recover nonlinearly embedded systems from simulated neural activity, even when the nature of the system and embedding are unknown. Additionally, ODIN enables the unsupervised recovery of underlying dynamical features (e.g., fixed points) and embedding geometry. When applied to biological neural recordings, ODIN can reconstruct neural activity with comparable accuracy to previous state-of-the-art methods while using substantially fewer latent dimensions. Overall, ODIN's accuracy in recovering ground-truth latent features and ability to accurately reconstruct neural activity with low dimensionality make it a promising method for distilling interpretable dynamics that can help explain neural computation.
Christopher Versteeg, Andrew R. Sedler, Jonathan D. McCart, Chethan Pandarinath
2023-09-12T17:03:50Z
http://arxiv.org/abs/2309.06402v1
# Expressive dynamics models with nonlinear injective ###### Abstract The advent of large-scale neural recordings has enabled new approaches that aim to discover the computational mechanisms of neural circuits by understanding the rules that govern how their state evolves over time. While these _neural dynamics_ cannot be directly measured, they can typically be approximated by low-dimensional models in a latent space. How these models represent the mapping from latent space to neural space can affect the interpretability of the latent representation. We show that typical choices for this mapping (e.g., linear or MLP) often lack the property of injectivity, meaning that changes in latent state are not obligated to affect activity in the neural space. During training, non-injective readouts incentivize the invention of dynamics that misrepresent the underlying system and the computation it performs. Combining our injective Flow readout with prior work on interpretable latent dynamics models, we created the Ordinary Differential equations autoencoder with Injective Nonlinear readout (ODIN), which learns to capture latent dynamical systems that are nonlinearly embedded into observed neural activity via an approximately injective nonlinear mapping. We show that ODIN can recover nonlinearly embedded systems from simulated neural activity, even when the nature of the system and embedding are unknown. Additionally, we show that ODIN enables the unsupervised recovery of underlying dynamical features (e.g., fixed points) and embedding geometry. When applied to biological neural recordings, ODIN can reconstruct neural activity with comparable accuracy to previous state-of-the-art methods while using substantially fewer latent dimensions. Overall, ODIN's accuracy in recovering ground-truth latent features and ability to accurately reconstruct neural activity with low dimensionality make it a promising method for distilling interpretable dynamics that can help explain neural computation. ## 1 Introduction Recent evidence has shown that when artificial recurrent neural networks are trained to perform tasks, the rules that govern how the internal activity evolves over time (i.e., the network dynamics) can provide insight into how the network performs the underlying computation [1; 2; 3; 4]. Given the conceptual similarities between artificial neural networks and biological neural circuits, it may be possible to apply these same dynamical analyses to brain activity to gain insight into how neural circuits perform complex sensory, cognitive, and motor processes [5, 6, 7]. However, unlike in artificial networks, we cannot easily interrogate the dynamics of biological neural circuits and must first estimate them from observed neural activity. Fortunately, advances in recording technology have dramatically increased the number of neurons that can be simultaneously recorded, providing ample data for novel population-level analyses of neural activity [8, 9, 10]. In these datasets, the activity of hundreds or thousands of neurons can often be captured by relatively low-dimensional subspaces [11], orders-of-magnitude smaller than the total number of neurons. Neural activity in these latent spaces seems to evolve according to consistent sets of rules (i.e., latent dynamics) [12, 6]. 
Assuming no external inputs, these rules can be expressed mathematically as: \[\mathbf{z}_{t+1} =\mathbf{z}_{t}+f(\mathbf{z}_{t}) \tag{1}\] \[\mathbf{y}_{t} =\exp g(\mathbf{z}_{t})\] (2) \[\mathbf{x}_{t} \sim\text{Poisson}(\mathbf{y}_{t}) \tag{3}\] where \(\mathbf{z}_{t}\in\mathbb{R}^{D}\) represents the latent state at time \(t\), \(f(\cdot):\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) is the vector field governing the dynamical system, \(\mathbf{y}_{t}\in\mathbb{R}^{N}\) denotes the firing rates of the \(N\) neurons, \(g(\cdot):\mathbb{R}^{D}\rightarrow\mathbb{R}^{N}\) maps latent activity into log-firing rates, and \(\mathbf{x}_{t}\in\mathbb{R}^{N}\) denotes the observed spike counts at time \(t\), assuming the spiking activity follows a Poisson distribution with time-varying rates given at each moment \(t\) by \(\mathbf{y}_{t}\). Unfortunately, any latent system can be equivalently described by many combinations of dynamics \(f\) and embeddings \(g\), which makes the search for a unique latent system futile. However, versions of a latent system's dynamics \(f\) and embedding \(g\) that are less complex and use fewer latent dimensions can be easier to interpret than alternative representations that are more complex and/or higher-dimensional. Models of latent dynamics that can discover simple and low-dimensional representations will make it easier to link latent dynamics to neural computation. A popular approach to estimate neural dynamics [13, 14, 15] is to use neural population dynamics models (NPDMs), which model neural activity as a latent dynamical system embedded into neural activity. We refer to the components of an NPDM that learn the dynamics and embedding as the generator \(\hat{f}\) and the readout \(\hat{g}\), respectively. When modeling neural activity, the generator and readout are jointly trained to infer firing rates \(\hat{\mathbf{y}}\) that maximize the likelihood of the observed neural activity \(\mathbf{x}\). Using NPDMs to estimate underlying dynamics and embedding implicitly assumes that good reconstruction performance (i.e., \(\hat{\mathbf{x}}\approx\mathbf{x}\)) implies interpretable estimates of the underlying system (i.e., \(\hat{\mathbf{z}}\approx\mathbf{z}\), \(\hat{f}\approx f\), \(\hat{g}\approx g\)). However, recent work has shown that when the state dimensionality of the generator \(\hat{D}\) is larger than a system's latent dimensionality \(D\), high reconstruction performance may actually correspond to estimates of the latent system that are overly complex or misleading and therefore harder to interpret [15]. At present, reconstruction performance is seemingly an unreliable indicator for the interpretability of the learned dynamics. This vulnerability to learning overly complex latent features might emerge from the fact that, without constraints on the readout \(\hat{g}\), changes in the latent state are not obligated to have an effect on predicted neural activity. Thus, NPDMs can be rewarded for inventing latent activity that boosts reconstruction performance, even if that latent activity has no direct correspondence to neural activity. A potential solution is to make \(\hat{g}\) injective, which obligates all latent activity to affect neural reconstruction. This would penalize any latent activity that is not reflected in the observed neural activity, thereby putting pressure on the generator \(\hat{f}\) and readout \(\hat{g}\) to learn a more interpretable (i.e., simpler and lower dimensional) representation of the underlying system. 
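As a concrete reference for Eqs. (1)-(3), the following sketch simulates such a latent system with a toy dynamics function and embedding; the specific \(f\), \(g\), and dimensionalities used here are arbitrary placeholders rather than the systems studied later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z):
    # toy latent dynamics (placeholder for the true vector field f)
    A = np.array([[0.0, -0.1, 0.0], [0.1, 0.0, 0.0], [0.0, 0.0, -0.05]])
    return A @ z

def g(z, W):
    # toy nonlinear embedding from D latent dimensions to N log-firing rates
    return np.tanh(W @ z)

D, N, T = 3, 12, 200
W = rng.uniform(-0.5, 0.5, size=(N, D))
z = np.zeros((T, D))
z[0] = rng.normal(size=D)
for t in range(T - 1):
    z[t + 1] = z[t] + f(z[t])        # Eq. (1): z_{t+1} = z_t + f(z_t)
y = np.exp(g(z.T, W).T)              # Eq. (2): y_t = exp g(z_t)
x = rng.poisson(y)                   # Eq. (3): x_t ~ Poisson(y_t)
```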
In addition, most previously used readouts \(\hat{g}\) were not expressive enough to model diverse mappings from latent space to neural space, assuming the embedding \(g\) to be a relatively simple (often linear) transformation (though there are exceptions [16, 17, 18]). Capturing nonlinear embeddings is important because neural activity often lives on a lower-dimensional manifold that is nonlinearly embedded into the higher-dimensional neural space [7]. Therefore, assumptions of linearity are likely to prevent NPDMs from capturing dynamics in their simplest and lowest-dimensional form, making them less interpretable than the latent features learned by NPDMs that can approximate these nonlinearities. To address these challenges, we propose a novel architecture called the Ordinary Differential equation autoencoder with Injective Nonlinear readout (ODIN), which implements \(\hat{f}\) using a Neural ODE (NODE [19]) and \(\hat{g}\) using a network inspired by invertible ResNets [20; 21; 22; 19; 23]. ODIN approximates an injective nonlinear mapping between latent states and neural activity, obligating all latent state variance to appear in the predicted neural activity and penalizing the model for using excessively complex or high-dimensional dynamics to model the underlying system. On synthetic data, ODIN learns representations of the latent system that are more interpretable, with simpler and lower-dimensional latent activity and dynamical features (e.g., fixed points) than alternative readouts. ODIN's interpretability is also more robust to overestimates of latent dimensionality and can recover the nonlinear embedding of synthetic data that evolves on a simulated manifold. When applied to neural activity from a monkey performing a reaching task with obstacles, ODIN reconstructs neural activity comparably to state-of-the-art recurrent neural network (RNN)-based models while requiring far fewer latent state dimensions. In summary, ODIN estimates interpretable latent features from synthetic data and has high reconstruction performance on biological neural recordings, making it a promising tool for understanding how the brain performs computation. ## 2 Related Work Many previous models have attempted to understand neural activity through the lens of neural dynamics. Early efforts limited model complexity by constraining both \(\hat{f}\) and \(\hat{g}\) to be linear [24; 25; 26]. While these models were relatively straightforward to analyze, they often failed to adequately explain neural activity patterns [27]. Other approaches increased the expressiveness of the modeled dynamics \(\hat{f}\). RNNs can learn to approximate complex nonlinear dynamics, and have been shown to substantially outperform linear dynamics models in reconstructing neural activity [27]. Unfortunately, RNNs implicitly couple the capacity of the model to the latent state dimensionality, meaning their ability to model complex dynamics relies on having a high-dimensional latent state. In contrast, NODEs can model arbitrarily complex dynamics of embedded dynamical systems at the dimensionality of the system [19; 15]. On synthetic data, NODEs have been shown to recover dynamics more accurately than RNN-based methods [28; 15]. In contrast to our approach, previous NODE-based models used a linear readout \(\hat{g}\) that lacks injectivity. 
This can make the accuracy of estimated latent activity vulnerable to overestimates of the latent dimensionality (i.e., when \(\hat{D}>D\)) and/or fail to capture potential nonlinearities in the embedding \(g\). Early efforts to allow greater flexibility in \(\hat{g}\) preserved linearity in \(\hat{f}\), using feed-forward neural networks to nonlinearly embed linear dynamical systems in high-dimensional neural firing rates [16]. More recently, models have used Gaussian processes to approximate nonlinear mappings from latent state to neural firing with tuning curves [17]. Other models have combined nonlinear dynamics models and nonlinear embeddings for applications in behavioral tracking [29] and neural reconstruction [18]. Additional approaches extend these methods to incorporate alternative noise models that may better reflect the underlying firing properties of neurons [16; 30]. While nonlinear, the readouts of these models lacked injectivity in their mapping from latent activity to neural activity. Many alternative models seek to capture interpretable latent features of a system from observations. One popular approach uses a sparsity penalty on a high-dimensional basis set to derive a sparse symbolic estimate of the governing equations for the system [31]. However, it is unclear whether such sparse symbolic representation is necessarily a benefit when modeling dynamics in the brain. Another recent model uses contrastive loss and auxiliary behavioral variables to learn low-dimensional representations of latent activity [32]. This approach does not have an explicit dynamics model, however, so is not amenable to the dynamical analyses performed in this manuscript. Normalizing flows - a type of invertible neural network - have recently become a staple for generative modeling and density estimation [20; 23]. Some latent variable models have used invertible networks to approximate the mapping from the latent space to neural activity [33] or for generative models of visual cortex activity [34]. To allow this mapping to change dimensionality between the latent space and neural activity, some of these models used a zero-padding procedure similar to the padding used in this manuscript (see Section 3.3.1), which makes the transformation injective rather than invertible [33; 23]. However, these previous approaches did not have explicit dynamics models, making our study, to our knowledge, the first to test whether injective readouts can improve the interpretability of neural population dynamics models. ## 3 Methods ### Synthetic Neural Data To determine whether different models can distill an interpretable latent system from observed population activity, we first used reference datasets that were generated using simple ground-truth dynamics \(f\) and embedding \(g\). Our synthetic test cases emulate the empirical properties of neural systems, specifically low-dimensional latent dynamics observed through noisy spiking activity [13; 35; 36; 37]. We sampled latent trajectories from the Arneodo system (\(f\), \(D=3\)) and nonlinearly embedded these trajectories into neural activity via an embedding \(g\). We consider models that can recover the dynamics \(f\) and embedding \(g\) used to generate these data as providing an interpretable description of the latent system and its relation to the neural activity. Additional detail on data generation, models, and metrics can be found in the Supplementary Material. 
Unless otherwise noted, we generated activations for \(N\) neurons (\(N=12\)) by projecting the simulated latent trajectories \(\mathbf{Z}\) through a \(3\times N\) matrix whose columns were random encoding vectors with elements sampled from a uniform distribution \(U[-0.5,0.5]\) (Fig. 1A, left). We standardized these activations to have zero mean and unit variance and applied a different scaled sigmoid function to each neuron, yielding a matrix of non-negative time-varying firing rates \(\mathbf{Y}\). The scaling of each sigmoid function was evenly spaced on a logarithmic scale between \(10^{0.2}\) and \(10\). This process created a diverse set of activation functions ranging from quasi-linear to nearly step-function-like behavior (Fig. 1A, Activation Functions). For one experiment, we used the standard linear-exponential activation function, as described in previous work [15], instead of the scaled sigmoid. We simulated spiking activity \(\mathbf{X}\) by sampling from inhomogeneous Poisson processes with time-varying rate parameters equal to the firing rates \(\mathbf{Y}\) of the simulated neurons (Fig. 1A, right). We randomly split 70-point segments of these trials into training and validation datasets (training and validation proportions were 0.8 and 0.2, respectively). ### Biological Neural Data We evaluated how well our model could reconstruct biological neural activity on a well-characterized dataset [38] included in the Neural Latents Benchmark (NLB) [27]. This dataset is composed of single-unit recordings from primary and pre-motor cortices of a monkey performing a visually-guided reaching task with obstacles, referred to as the Maze task. Trials were trimmed to the window [-250, 350] ms relative to movement onset, and spiking activity was binned at 20 ms. To compare the reconstruction performance of our model directly against the benchmark, we split the neural activity into held-in and held-out neurons, comprising 137 and 35 neurons, respectively, using the same sets of neurons as were used to assess models for the NLB leaderboard. Figure 1: A) Synthetic neural data generation (left to right). Trajectories from the Arneodo system are projected onto random encoding vectors to compute activations at each timepoint. A scaled sigmoid nonlinearity is applied to convert the activations into firing rates. B) Zero-padded latent dynamics (green) are reversibly warped into higher-dimensional neural activity space (blue). C) The Flow readout maps from latent space to neural space by applying a sequence of \(K\) small updates (parameterized by an MLP, bottom). The reverse pass of the Flow maps from neural space to latent space and is implemented by serial subtraction of updates from the same MLP. ### Model Architecture We used three sequential autoencoder (SAE) variants in this study, with the main difference being the choice of readout module, \(\hat{g}(\cdot)\). In brief, a sequence of binned spike counts \(\mathbf{x}_{1:T}\) was passed through a bidirectional GRU encoder, whose final hidden states were converted to an initial condition \(\hat{\mathbf{z}}_{0}\) via a mapping \(\phi(\cdot)\). A modified NODE generator unrolled the initial condition into time-varying latent states \(\hat{\mathbf{z}}_{1:T}\). These were subsequently mapped to inferred rates via the readout \(\hat{g}(\cdot)\in\{\text{Linear},\text{MLP},\text{Flow}\}\). 
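A sketch of this data-generating pipeline, assuming the Arneodo latent trajectories are already available as an array. The interpretation of the sigmoid scaling as an input steepness, and the peak firing rate, are assumptions made for illustration rather than the exact settings used.

```python
import numpy as np

def simulate_spikes(Z, n_neurons=12, peak_rate=5.0, seed=0):
    """Z: (T, 3) latent trajectories sampled from the Arneodo system."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-0.5, 0.5, size=(Z.shape[1], n_neurons))  # random encoding vectors
    acts = Z @ W                                               # (T, n_neurons) activations
    acts = (acts - acts.mean(0)) / acts.std(0)                 # zero mean, unit variance
    scales = np.logspace(0.2, 1.0, n_neurons)                  # steepness from 10**0.2 to 10
    rates = peak_rate / (1.0 + np.exp(-scales * acts))         # scaled sigmoid per neuron
    spikes = rng.poisson(rates)                                # inhomogeneous Poisson samples
    return rates, spikes
```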
All models were trained for a fixed number of epochs to infer firing rates \(\hat{\mathbf{y}}_{1:T}\) that minimize the negative Poisson log-likelihood of the observed spikes \(\mathbf{x}_{1:T}\). \[\mathbf{h}_{T}=\big{[}\mathbf{h}_{fwd}\big{|}\mathbf{h}_{bwd} \big{]}=\text{BiGRU}(\mathbf{x}_{1:T}) \tag{4}\] \[\hat{\mathbf{z}}_{0}=\phi(\mathbf{h}_{T})\] (5) \[\hat{\mathbf{z}}_{t+1}=\hat{\mathbf{z}}_{t}+\alpha\cdot\text{MLP} (\hat{\mathbf{z}}_{t})\] (6) \[\hat{\mathbf{y}}_{t}=\exp\hat{g}(\hat{\mathbf{z}}_{t}) \tag{7}\] For models with Linear and MLP readouts, \(\phi(\cdot)\) was a linear map to \(\mathbb{R}^{\hat{D}}\). For models with Flow readouts, \(\phi(\cdot)\) was a linear map to \(\mathbb{R}^{N}\) followed by the reverse pass of the Flow (see Section 3.3.1). We unrolled the NODE using Euler's method with a fixed step size equal to the bin width and trained using standard backpropagation for efficiency. A scaling factor (\(\alpha=0.1\)) was applied to the output of the NODE's MLP to stabilize the dynamics during early training. Readouts were implemented as either a single linear layer (Linear), an MLP with two 150-unit ReLU hidden layers (MLP), or a Flow readout (Flow) which contains an MLP with two 150-unit ReLU hidden layers. We refer to these three models as Linear-NODE, MLP-NODE, and ODIN, respectively. #### 3.3.1 Flow Readout The Flow readout resembles a simplified invertible ResNet [23]. Flow learns a vector field that can reversibly transform data between latent and neural representations (Figure 1B). The Flow readout has three steps: first, we increase the dimensionality of the latent activity \(\mathbf{z}_{t}\) to match that of the neural activity by padding the latent state with zeros. This corresponds to an initial estimate of the log-firing rates, \(\log\hat{\mathbf{y}}_{t,0}\). Note that zero-padding makes our mapping injective rather than fully invertible (see [23, 33]). The Flow network then uses an MLP to iteratively refine \(\log\hat{\mathbf{y}}_{t,k}\) over \(K\) steps (\(K=20\)) after which we apply an exponential to produce the final firing rate predictions, \(\hat{\mathbf{y}}_{t}\). A scaling factor (\(\beta=0.1\)) was applied to the output of the Flow's MLP, which prevents the embedding from becoming unstable during the early training period. \[\log\hat{\mathbf{y}}_{t,0}=[\hat{\mathbf{z}}_{t}|\mathbf{0}]^{T} \tag{8}\] \[\log\hat{\mathbf{y}}_{t,k+1}=\log\hat{\mathbf{y}}_{t,k}+\beta \cdot\text{MLP}(\log\hat{\mathbf{y}}_{t,k})\] (9) \[\hat{g}\left(\hat{\mathbf{z}}_{t}\right)=\log\hat{\mathbf{y}}_{t, K}=\log\hat{\mathbf{y}}_{t} \tag{10}\] We also use a reverse pass of the Flow to transform the output of the encoders to initial conditions in the latent space via \(\phi(\cdot)\), approximating the inverse function \(\hat{g}^{-1}\). Our method subtracts the output of the MLP from the state rather than adding it as in the forward mode (Fig 1C), a simplified version of the fixed-point iteration procedure described in [23]. We then trim the excess dimensions to recover \(\hat{z}\in\mathbb{R}^{\hat{D}}\) (in effect, removing the zero-padding dimensions). 
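The sketch below condenses the generator and readout described above into PyTorch-style code: an Euler-unrolled NODE step (Eq. (6)), the Flow readout's forward pass (Eqs. (8)-(10)), and the reverse pass that Eqs. (11)-(12) below formalize. The encoder, output exponential, and training loop are omitted, and this is a simplified illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class FlowReadout(nn.Module):
    """Sketch of the Flow readout: zero-pad, then K small residual updates."""
    def __init__(self, d_latent, n_neurons, hidden=150, n_steps=20, beta=0.1):
        super().__init__()
        self.d_latent, self.n_neurons = d_latent, n_neurons
        self.n_steps, self.beta = n_steps, beta
        self.mlp = nn.Sequential(nn.Linear(n_neurons, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_neurons))

    def forward(self, z):                                # latent states -> log-firing rates
        pad = z.new_zeros(*z.shape[:-1], self.n_neurons - self.d_latent)
        log_y = torch.cat([z, pad], dim=-1)              # Eq. (8): zero-pad to neuron dimension
        for _ in range(self.n_steps):
            log_y = log_y + self.beta * self.mlp(log_y)  # Eq. (9): small residual updates
        return log_y                                     # firing rates are exp(log_y), Eq. (10)

    def reverse(self, log_y):                            # log-rates -> latent states (approx. inverse)
        for _ in range(self.n_steps):
            log_y = log_y - self.beta * self.mlp(log_y)  # Eq. (11): serial subtraction
        return log_y[..., :self.d_latent]                # Eq. (12): trim zero-padding dimensions

def node_unroll(z0, dyn_mlp, T, alpha=0.1):
    """Euler-unrolled NODE generator, Eq. (6): z_{t+1} = z_t + alpha * MLP(z_t)."""
    states, z = [z0], z0
    for _ in range(T - 1):
        z = z + alpha * dyn_mlp(z)
        states.append(z)
    return torch.stack(states, dim=1)                    # (batch, T, d_latent)
```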
\[\log\hat{\mathbf{y}}_{t,k-1}=\log\hat{\mathbf{y}}_{t,k}-\beta \cdot\text{MLP}(\log\hat{\mathbf{y}}_{t,k}) \tag{11}\] \[\hat{g}^{-1}\left(\log\hat{\mathbf{y}}_{t}\right)=\left[\log\hat{ y}_{t,0,1},\ldots,\log\hat{y}_{t,0,\hat{D}}\right]^{T}=\hat{\mathbf{z}}_{t} \tag{12}\] The Flow mapping is only guaranteed to be injective if changes in the output of the MLP are sufficiently small relative to changes in the input (i.e., Lipschitz constant for the MLP that is strictly less than 1) [23]. The model can be made fully injective by either restricting the weights of the MLP (e.g., spectral norm [39]), or using a variable step-size ODE solver that can prevent crossing trajectories (e.g., continuous normalizing flows [19]). In practice, we found that using a moderate number of steps allows Flow to preserve approximate injectivity of the readout at all tested dimensionalities (Supp. Fig. S2). ### Metrics and characterization of dynamics We assessed model performance in five domains: 1) reconstruction performance, 2) latent accuracy, 3) dynamical accuracy, 4) embedding accuracy, and 5) readout injectivity. All metrics were evaluated on validation data. Critically, on biological data without a ground-truth system, only the reconstruction performance and readout injectivity can be assessed, since all the other metrics rely on full observability of the underlying system. Therefore, we need models for which good performance on the observable metrics (reconstruction, injectivity) implies good performance on the unobservable metrics (latent, dynamical, and embedding accuracy). Reconstruction performance for the synthetic data was assessed using two key metrics. The first, spike negative log-likelihood (Spike NLL), was defined as the Poisson NLL employed during model training. The second, Rate \(R^{2}\), was the coefficient of determination between the inferred and true firing rates, averaged across neurons. We used Spike NLL to assess how well the inferred rates explain the spiking activity, while Rate \(R^{2}\) reflects the model's ability to find the true firing rates. These metrics quantify how well the model captures the embedded system's dynamics (i.e., that \(\hat{f},\hat{g}\) captures the system described by \(f,g\)), but give no indication of the interpretability of the learned latent representation (i.e., that the learned \(\hat{f},\hat{g}\) are simple and low-dimensional). For the biological neural data, we measured model performance using two metrics from the Neural Latents Benchmark (NLB) [27], co-smoothing bits-per-spike (co-bps) and velocity decoding performance on predicted firing rates (Vel \(R^{2}\)). co-bps is a measure of reconstruction performance that quantifies how well the model predicts the spiking of the held-out neurons, while Vel \(R^{2}\) quantifies how well the denoised rates can predict the monkey's hand velocity during the reach. We have no way to directly assess embedding, latent, or dynamical accuracy because they are unobserved in most biological datasets. To determine whether a model's inferred latent activity contains features that are not in the simulated latent activity, we used a previously published metric called the State \(R^{2}\)[15]. State \(R^{2}\) is defined as the coefficient of determination (\(R^{2}\)) of a linear regression from simulated latent trajectories \(\mathbf{z}\) to the inferred latent trajectories \(\hat{\mathbf{z}}\). 
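A sketch of this metric, assuming the simulated and inferred latents are available as time-by-dimension arrays; here the \(R^{2}\) is pooled across inferred dimensions, and the exact aggregation used in the original metric may differ.

```python
import numpy as np

def state_r2(Z_true, Z_hat):
    """R^2 of a linear (affine) regression from simulated latents Z_true (T, D)
    to inferred latents Z_hat (T, D_hat)."""
    X = np.hstack([Z_true, np.ones((Z_true.shape[0], 1))])  # include a bias column
    coef, *_ = np.linalg.lstsq(X, Z_hat, rcond=None)
    pred = X @ coef
    ss_res = ((Z_hat - pred) ** 2).sum()
    ss_tot = ((Z_hat - Z_hat.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```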
State \(R^{2}\) will be low if the inferred latent trajectories contain features that cannot be explained by an affine transformation of the true latent trajectories. Importantly, State \(R^{2}\) alone cannot ensure latent accuracy. This is because a model can achieve high State \(R^{2}\) trivially if the inferred latent activity \(\hat{\mathbf{z}}\) is a low-dimensional projection of the simulated activity \(\mathbf{z}\). Therefore, only models that have _both_ good reconstruction performance (Spike NLL, Rate \(R^{2}\)) and State \(R^{2}\) can be said to accurately reflect the simulated latent dynamics without extra features that make the model harder to interpret (i.e., \(\hat{\mathbf{z}}\approx\mathbf{z}\)). As a direct comparison of the estimated dynamics \(\hat{f}\) to the simulated dynamics \(f\), we extracted the fixed-point (FP) structure from our trained models and compared it to the FP structure of the underlying system. We used previously published FP-finding techniques [40] to identify regions of the generator's dynamics where the magnitude of the vector field was close to zero, calling this set of locations the putative FPs. We linearized the dynamics around the FPs and computed the eigenvalues of the Jacobian of \(\hat{f}\) to characterize each FP. Capturing FP location and character gives an indication of how closely the estimated dynamics resemble the simulated dynamics (i.e., \(\hat{f}\approx f\)). To determine how well our embedding \(\hat{g}\) captures the simulated embedding \(g\), we projected the encoding vectors used to generate the synthetic neural activity from the ground-truth system into our model's latent space using the same affine transformation from ground-truth latent activity to inferred latent activity that was used to compute State \(R^{2}\). We projected the inferred latent activity onto each neuron's affine-transformed encoding vector to find the predicted activation of each synthetic neuron. We then related the predicted firing rates of each neuron to its corresponding activations to derive an estimate of each neuron's activation function. Because the inferred latent activity is arbitrarily scaled/translated relative to the true latent activity, we fit an affine transformation from the predicted activation function to the ground-truth activation function. The coefficient of determination \(R^{2}\) of this fit quantifies how well our models were able to recover the synthetic warping applied to each neuron (i.e., \(\hat{g}\approx g\)). We compared the injectivity of the Flow readout to Linear and MLP readouts using effective rank [41] and cycle-consistency, respectively. Effective rank quantifies the number of significant singular values in a Linear readout, while cycle-consistency quantifies how well the inferred latent activity \(\hat{\mathbf{z}}\) can be recovered from the predicted log-firing rates \(\log\hat{\mathbf{y}}\). ## 4 Results ### Finding interpretable latent activity across state dimensionalities with ODIN As the latent dimensionality \(D\) is unknown for biological datasets, we wanted to test how robust each model was to choices of state dimensionality \(\hat{D}\). We trained Linear/MLP -NODE, and ODIN (Fig 2A) to reconstruct synthetic neural activity from the Arneodo system [42] and compared reconstruction performance (i.e. Spike NLL and Rate \(R^{2}\)) and latent recovery (i.e. State \(R^{2}\)) as functions of the dimensionality \(\hat{D}\) of the state space. 
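As a concrete sketch of the fixed-point analysis described in the metrics section above: candidate points are found by minimizing \(\|\hat{f}(\mathbf{z})\|^{2}\) from many initializations, and each candidate is characterized by the eigenvalues of the Jacobian of \(\hat{f}\). The optimizer settings, tolerance, and batching assumptions below are illustrative choices, not the settings of the cited FP-finding method.

```python
import torch

def find_fixed_points(f_hat, z_inits, n_iters=2000, lr=1e-2, tol=1e-3):
    """Gradient-descend ||f_hat(z)||^2 from many initial states (z_inits: (n_inits, D));
    f_hat is assumed to accept batched inputs. Returns candidate fixed points."""
    z = z_inits.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = (f_hat(z) ** 2).sum(dim=-1).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        speeds = (f_hat(z) ** 2).sum(dim=-1).sqrt()
    return z.detach()[speeds < tol]                     # keep near-zero-velocity points

def fp_eigenvalues(f_hat, z_fp):
    """Eigenvalues of the Jacobian of f_hat at a single fixed point z_fp: (D,)."""
    J = torch.autograd.functional.jacobian(f_hat, z_fp)
    return torch.linalg.eigvals(J)
```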
We trained 5 different random seeds for each of the 3 model types and 5 state dimensionalities (75 total models, model hyperparameters in Supp. Table 1, representative hyperparameter sweeps in Supp. Fig. S1). First, we observed that latent activity inferred by Linear-NODE did not closely resemble the simulated latent activity, with all tested dimensionalities performing worse than either ODIN or the MLP-NODE at \(\hat{D}\) = 3 (Fig 2B,C, mean State \(R^{2}\) = 0.70 for Linear-NODE vs. 0.89, 0.93 for MLP-NODE, ODIN respectively). We also found that Linear-NODE required many more dimensions to reach the peak reconstruction performance (Fig 2C, Rate \(R^{2}\)). These results demonstrate that models that are unable to account for nonlinear embeddings are vulnerable to learning more complex and higher dimensional dynamics than those learned by models with nonlinear readouts. Next, we compared ODIN to MLP-NODE and found that at the correct dimensionality (\(\hat{D}=3\)), these models had similar performance for both reconstruction and latent recovery. However, as the dimensionality increased beyond the true dimensionality (\(\hat{D}>3\)), the latent recovery of the MLP-NODE degraded rapidly while ODIN's latent recovery remained high (Fig 2C, as \(\hat{D}>3\)). As the true latent dimensionality \(D\) is usually unknown, NPDMs with non-injective readouts (like MLPs) may be predisposed to learning misleading latent activity that can make it more difficult to interpret biological datasets. ### Common readouts learn non-injective mappings from latent activity to firing rates We then sought to assess the injectivity of different readouts. First, we used effective rank [41] to quantify the injectivity of our Linear readouts. We trained 5 Linear-NODE models at a range of state dimensionalities (\(\hat{D}=3,5,8,10\)) to reconstruct simulated neural activity from Arneodo that was _linearly_ embedded into 12D neural space. We found that while reconstruction performance was optimal when \(\hat{D}>3\) (Supp. Fig. S3), the effective rank of these best-reconstructing models never exceeded 4 (mean erank = 3.74 at \(\hat{D}=10\)). This means that for the largest Linear-NODE models, around 6 of 10 latent dimensions had no effect on reconstructed log-rates. The fact that linear readouts learn mappings with low effective rank, coupled with improved reconstruction performance when \(\hat{D}>3\) suggests that the Linear readouts utilize non-injectivity to improve reconstruction at the expense of latent accuracy. Figure 2: ODIN recovers latent activity more accurately than alternative models and is robust to overestimates of latent dimensionality. A) Diagram of models tested, including Linear-NODE (green), MLP-NODE (orange), ODIN (red). B) Inferred latent activity of representative model at each state dimensionality \(\hat{D}\). True latent activity (affine-transformed to overlay inferred latent activity) shown in light blue. C) All: Model metrics as a function of \(\hat{D}\). Shaded areas represent one standard deviation around the mean. Dashed vertical line indicates \(\hat{D}=3\) Top: Spike NLL, Middle: Rate \(R^{2}\), Bottom: State \(R^{2}\). Next, we used a cycle consistency metric to show that MLP readouts also have a tendency to become non-injective. Cycle consistency quantifies how well inputs to a function can be recovered from the function's outputs. 
We trained a separate MLP to predict inferred latents \(\hat{\mathbf{z}}\) from predicted log-firing rates \(\log\hat{\mathbf{y}}\) for the 10D MLP-NODE and ODIN models shown in Figure 2. We found that the cycle consistency of the ODIN model was consistently higher than that of the MLP-NODE (Fig. 3B, Noise Level = 0). It is possible that models may learn to compress latent activity to arbitrarily small firing rate changes while still remaining technically injective. This failure mode could potentially be invisible to the standard cycle-consistency metric. To address this concern, we added Gaussian noise to the log-firing rates \(\log\hat{\mathbf{y}}\) and tried to recover the inferred latent activity from these noise-corrupted log-rates. Consistent with ODIN's bias towards injectivity, we found that ODIN's cycle consistency was more robust to the addition of noise than that of the MLP-NODE (Fig. 3B, Noise Level > 0).

Figure 3: Linear- and MLP-NODEs tend towards non-injectivity. A) Effective rank of the Linear readout as a function of state dimensionality \(\hat{D}\). Each point represents one randomly instantiated model. B) Cycle-consistency \(R^{2}\) for ODIN and MLP-NODE as a function of noise corruption.

To demonstrate that injectivity was the critical feature that allowed ODIN to outperform other models, we tested an alternative injective readout, an Invertible Neural Network (INN). The INN implementation differs significantly from Flow, but they share the property of injectivity. We found that INN-NODE qualitatively reproduced ODIN's performance in Figure 2C (Supp. Fig. S4), suggesting that injectivity is the critical feature for recovering interpretable latent activity. We describe the advantages of ODIN over INN-NODE in the Supplementary Material.

### Recovering fixed point structure with ODIN

A common method to examine how well dynamics models capture the underlying dynamics from synthetic data is to compare the character and structure of the inferred fixed points (FPs) to the FPs of the ground-truth system [15]. At a high level, FPs enable a concise description of the dynamics in a small region of state-space around the FP, and can collectively provide a qualitative picture of the overall dynamical landscape. To obtain a set of candidate FPs, we searched the latent space for points at which the magnitude of the vector field \(\|\hat{f}\|\) is minimized (as in [1, 40]). We computed the eigenvalues of the Jacobian of \(\hat{f}\) at each FP location. The real and imaginary components of these eigenvalues identify each FP as attractive, repulsive, etc.

Figure 4: ODIN recovers fixed point properties accurately at the correct dimensionality. A,B) Representative latent activity and fixed points from the true (blue, \(\circ\)), ODIN (red, \(\times\)), and Linear-NODE (green, \(+\)) systems. Each fixed point is labeled with reference to C. C) Plots of the real vs. imaginary part of the eigenvalues of the Jacobian evaluated at each fixed point. The unit circle in the complex plane (black curve) shows the boundary between attractive and repulsive behavior (the attractive and repulsive sides of the boundary are indicated by the inset).

We found that 3D ODIN models and 3D Linear-NODEs were both able to recover three fixed points that generally matched the locations of the three fixed points of the Arneodo system (Fig. 4A). However, while ODIN was also able to capture the eigenspectra of all three FPs (Fig. 4B, red \(\times\)), the Linear-NODE failed to capture the rotational dynamics of the central FP (Fig. 4B, middle column, green \(+\)).
Both models were able to approximately recover the eigenspectra of outermost FPs of the system (Fig. 4B, left, right columns). We found that the MLP-NODE was also able to find FPs with similar accuracy to ODIN at 3D. These results show that the inability to model the nonlinear embedding can lead to impoverished estimates of the underlying dynamics \(\hat{f}\). ### Recovering simulated activation functions with ODIN While obtaining interpretable dynamics is our primary goal, models that allow unsupervised recovery of the embedding geometry may provide additional insight about the computations performed by the neural system [43, 7]. For this section, we considered a representative model from each readout class with the correct number of latent dimensions (\(D=3\)). We performed an affine transformation from the ground truth encoding vectors into the modeled latent space and computed the projection of the modeled latent activity onto the affine-transformed encoding vectors (Fig 5A). From this projection, we derived an estimate of the activation function for each neuron, and compared this estimate to the ground-truth activation function. We found, as expected, that Linear-NODE was unable to approximate the sigmoidal activation function of individual neurons (Fig 5B, green). On the other hand, both ODIN and MLP-NODE were able to capture activation functions ranging from nearly linear to step function-like in nature (Fig 5B, red, orange). Across all simulated neurons for models with \(D=3\), we found that ODIN more accurately estimated the activation function of individual neurons compared to both Linear- and MLP-NODEs (Fig 5C), suggesting that ODIN's injectivity allows more accurate estimation of nonlinear embeddings (two-sided paired t-test, p-val for ODIN vs. Linear-, MLP-NODE < 1e-10). ### Modeling motor cortical activity with ODIN To validate ODIN's ability to fit neural activity from a biological neural circuit, we applied ODIN to the Maze dataset from the Neural Latents Benchmark, composed of recordings from the motor and pre-motor cortices of a monkey performing a reaching task (Fig. 6A). After performing hyperparameter sweeps across regularization parameters and network size (Supp. Table 2), we trained a set of ODIN and Linear-NODE models to reconstruct the neural activity with a range of state dimensionalities \(\hat{D}\). We visualized the top 3 PCs of the condition-averaged latent trajectories and predicted single-neuron firing rates for example models from each readout type. We found no visually obvious differences in the inferred latent trajectories (Fig. 6B), but when we computed condition-averaged peri-stimulus time histograms (PSTHs) of single neuron firing rates, we found that ODIN typically produced firing rate estimates that more closely resembled the empirical PSTHs than those from the Linear-NODE (Fig. 6C). Figure 5: ODIN can recover nonlinear activation functions of neurons. A) True encoding vectors (numbered lines over true latent activity (blue)) were affine-transformed into a representative model’s latent space. B) Inferred activation function for two example neurons (columns), color coded by readout type (Linear-NODE = green, MLP-NODE = orange, ODIN = red, True = black). Plots show the predicted firing rate vs. the activation of the selected neuron. C) Comparison of the \(R^{2}\) values of the fits across all neurons for models with \(\hat{D}=3\). 
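A rough sketch of the activation-function estimate described above: fit an affine map from the simulator's latents to the model's latents, carry the ground-truth encoding vectors through its linear part, project the inferred latents onto each mapped vector, and read off the model's predicted rate as a function of that projection. Mapping the encoding directions through the fitted linear coefficients is a simplification made for illustration, and all shapes are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def activation_curves(z_true, z_hat, rates_hat, E):
    """z_true: (T, 3) simulator latents, z_hat: (T, D) inferred latents,
    rates_hat: (T, N) predicted rates, E: (N, 3) true encoding vectors."""
    reg = LinearRegression().fit(z_true, z_hat)     # z_hat ~ z_true @ W + b
    E_model = E @ reg.coef_.T                       # encoding vectors in model space
    curves = []
    for i in range(E.shape[0]):
        act = z_hat @ E_model[i]                    # per-sample activation of neuron i
        order = np.argsort(act)
        curves.append((act[order], rates_hat[order, i]))
    return curves                                   # (activation, predicted rate) pairs
```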
Without access to a ground truth dynamics \(f\) and embedding \(g\) that generated these biological data, the dimensionality required to reconstruct the neural activity was our primary measure of interpretability. We computed co-bps -a measure of reconstruction performance on held-out neurons- for each model and found that 10D ODIN models substantially outperformed Linear-NODE models, even when the Linear-NODE had more than twice as many dimensions (10D ODIN: 0.333, vs 25D Linear: 0.287). This suggests that ODIN's injective non-linear readout is effective at reducing the state dimensionality required to capture the data relative to a simple linear readout. We also compared ODIN to alternative models including AutoLFADS, GPFA, and MLP-NODE [27] at the same state dimensionalities. Trained AutoLFADS and GPFA models had lower co-bps at all tested state dimensionalities. In particular, co-bps was substantially higher for 10D ODIN compared to the 10D AutoLFADS or GPFA models (0.333 vs. 0.237, 0.204, respectively). As expected, MLP-NODE (not shown) performed similarly to ODIN; however, without a known state dimensionality, the MLP readout may incentivize the MLP-NODE to invent latent activity that is not reflected in the dataset. Of note, increasing AutoLFADS to a very high state dimensionality (\(\hat{D}=100\)) allowed it to outperform ODIN in co-bps. However, as we have shown in Figures 2 and 3, improved reconstruction performance often comes at the expense of accuracy in latent recovery. Together, these results suggest that ODIN is effective at reducing the state dimensionality needed for good neural reconstruction, which may provide more interpretable latent representations than alternative models. ## 5 Discussion Dynamics models have had great success in reproducing neural activity patterns and relating brain activity to behavior [44; 27; 45]. However, it has been difficult to use these models to investigate neural computation directly. If neural population models could be trusted to find interpretable representations of latent dynamics, then recent techniques that can uncover computation in artificial networks could help to explain computations in the brain [1; 40; 46]. In this work, we created a new model called ODIN that can overcome major barriers to learning interpretable latent dynamical systems. By combining Neural ODE generators and approximately injective nonlinear readouts, ODIN offers significant advantages over the current state-of-the-art, including lower latent dimensionality, simpler Figure 6: ODIN can reconstruct cortical activity with low-dimensional dynamics A) Top: Schematic of task [38] Bottom: example hand trajectories and condition-averaged firing rates aligned to move onset. B) Example condition-averaged latent activity from ODIN and Linear-NODE models applied to neural activity recorded during the Maze task. C) Example single-neuron peri-stimulus time histograms for ODIN and Linear-NODE models across conditions. D) Effects of latent state dimensionality \(\hat{D}\) on reconstruction (top, co-bps) and decoding (bottom, Vel \(R^{2}\)) performance. Plot shows mean (point) and standard deviation (shading) of 5 randomly initialized ODIN and Linear-NODE models at each \(\hat{D}\). GPFA and AutoLFADS were a single run, or the best performing model from an adaptive hyperparameter search, respectively. Horizontal lines represent peak performance by AutoLFADS with \(\hat{D}=100\). 
latent activity that is robust to the choice of latent dimensionality, and the ability to model arbitrary nonlinear activation functions. Circuits in the brain are densely interconnected, and so a primary limitation of this work is that ODIN is not yet able to account for inputs to the system that may be coming from areas that are not directly modeled. Thus ODIN is currently only able to model the dynamics of a given population of neurons as an autonomous system. Inferring inputs is difficult due to ambiguity in the role and timecourse of inputs compared to internal dynamics for driving the state of the system. While some RNN-based models have methods for input inference [44], more work is needed to develop solutions for NODE-based models. Injective readouts are an important step towards addressing the fundamental difficulties of input inference, as models without injective readouts can be incentivized to imagine latent features that are actually the result of inputs. Interpretable dynamics derived from neural population recordings could answer critical scientific questions about the brain and help improve brain-machine interface technology. A potential negative consequence is that human neural interfaces combined with an understanding of neural computation might make it possible and profitable to develop strategies that are effective at influencing behavior. Future researchers should focus on applications of this research that are scientific and medical rather than commercial or political. ## 6 Acknowledgements The authors would like to acknowledge Timothy D. Kim and Carlos Brody for helpful discussions that further developed the ideas in this manuscript. This work was supported by NSF NCS 1835364, NIH-NINDS/OD DP2NS127291, NIH BRAIN/NIDA RF1 DA055667, and the Alfred P. Sloan Foundation (CP), NIH BRAIN/NINDS F32 RFA-MH-23-110 (CV), the Simons Foundation as part of the Simons-Emory International Consortium on Motor Control (CP, CV), and NSF Graduate Research Fellowship DGE-2039655 (ARS).
2302.14374
Testing the performance of Multi-class IDS public dataset using Supervised Machine Learning Algorithms
Machine learning, statistical-based, and knowledge-based methods are often used to implement an Anomaly-based Intrusion Detection System which is software that helps in detecting malicious and undesired activities in the network primarily through the Internet. Machine learning comprises Supervised, Semi-Supervised, and Unsupervised Learning algorithms. Supervised machine learning uses a trained label dataset. This paper uses four supervised learning algorithms Random Forest, XGBoost, K-Nearest Neighbours, and Artificial Neural Network to test the performance of the public dataset. Based on the prediction accuracy rate, the results show that Random Forest performs better on multi-class Intrusion Detection System, followed by XGBoost, K-Nearest Neighbours respective, provided prediction accuracy is taken into perspective. Otherwise, K-Nearest Neighbours was the best performer considering the time of training as the metric. It concludes that Random Forest is the best-supervised machine learning for Intrusion Detection System
Vusumuzi Malele, Topside E Mathonsi
2023-02-28T07:56:46Z
http://arxiv.org/abs/2302.14374v1
Testing the performance of Multi-class IDS public dataset using Supervised Machine Learning Algorithms ###### Abstract Machine learning, statistical-based, and knowledge-based methods are often used to implement an Anomaly-based Intrusion Detection System which is software that helps in detecting malicious and undesired activities in the network primarily through the Internet. Machine learning comprises Supervised, Semi-Supervised, and Unsupervised Learning algorithms. Supervised machine learning uses a trained label dataset. This paper uses four supervised learning algorithms Random Forest, XGBoost, K-Nearest Neighbours, and Artificial Neural Network to test the performance of the public dataset. Based on the prediction accuracy rate, the results show that Random Forest performs better on multi-class Intrusion Detection System, followed by XGBoost, K-Nearest Neighbours respective, provided prediction accuracy is taken into perspective. Otherwise, K-Nearest Neighbours was the best performer considering the time of training as the metric. It concludes that Random Forest is the best-supervised machine learning for Intrusion Detection System. Machine learning Supervised learning algorithm intrusion detection system Random Forest XGBoost K-Nearest Neighbours Artificial Neural Network ## 1 Introduction The trend in the Information and Communication Technology (ICT) sector leads to data and information being stored and accessed from anywhere. For example, a mobile service technician can generate data, save it, and access it using Internet-of-Things (IoT) and cloud computing platforms. He/she could later use that data or use that of the other colleagues. The IoT is the network of physical objects (i.e. sensors, actuators, controllers, computers, etc) accessed through the Internet. While cloud computing is platform that allows for the storing and accessing of data over the Internet, instead of using the local computer hard drive. The IoT and cloud computing trends expose organisations to network and data security vulnerabilities and risks. Since, data, information and knowledge form major parts of the organisations' assets, it needs to be protected. Any leakages or unauthorized sharing and access of critical data should be picked up immediately to avoid putting the organisation into serious business challenges.Vulnerabilities and risks in IoT and cloud computing affect the organisations' confidentiality, availability and integrity of offering services. Avoiding data leakage in these days of IoT and cloud computing is important. In this case, Intrusion detection systems (IDS) could assist. The IDS is a software that helps in detecting malicious and undesired activities in the network primarily through the Internet. In this regard, IDS is a necessary solution for all organizations. The IDS detects various types of attacks and it is categorized into two groups: (i) Signature-based Intrusion Detection System (SIDS), and (ii) Anomaly-based Intrusion Detection System (AIDS) [1]. AIDS overcomes the shortcomings of SIDS and it could be implemented using any of the following three methods [1]: (a) machine learning, (b) statistical-based and knowledge-based methods. Machine learning is a branch of artificial intelligence (AI). It is the process of extracting decision-making information from a large set of data. To recognise or predict behaviour and/or determine data patterns, machine learning uses the set of rules, methods, and/or functions [2]. 
Machine learning comprises three broad classes of learning algorithms: Supervised Learning, Semi-Supervised Learning, and Unsupervised Learning. Supervised machine learning uses a labelled training dataset, Unsupervised Learning does not use labelled data, and Semi-Supervised Learning uses a small labelled dataset to guide a large unlabelled dataset. This paper uses four supervised learning algorithms to test the performance of the public dataset. The algorithms are Random Forest, XGBoost, K-Nearest Neighbours (k=5), and Artificial Neural Network (ANN). The remaining part of this paper is organized as follows: the next section briefly discusses the literature review, followed by the methodology; then the results and findings are presented, and the last section provides the conclusion and future work. ## 2 Literature Review This section discusses the literature that relates to this work. It begins by briefly looking at machine learning algorithms, then at network attacks that could be addressed through machine learning, and concludes by summarising the related work. ### Supervised Machine Learning There are several algorithms used to implement Supervised Machine Learning. The focus of this paper is on Random Forest (RF), eXtreme Gradient Boosting (XGB), also known as XGBoost, K-Nearest Neighbours (k=5) (KNN), and Artificial Neural Network (ANN). * Random Forest (RF): builds many uncorrelated decision trees [3]. The ensemble of decision trees provides an efficient method for classifying unlabelled data with very high performance, and it also handles regression problems. * XGBoost (XGB): uses a gradient boosting algorithm, also known as a boosted tree algorithm. It offers very high speed in addition to significant performance [4]; thus, it tends to dominate other algorithms on structured data. * K-Nearest Neighbours (k=5) (KNN): suited to large datasets with low dimensionality [5]. The value k = 5 means that any data point is classified based on the classes of its five nearest neighbours, i.e., by neighbour voting. For example, if four out of five neighbours belong to class A, then the data point is assigned to class A. * Artificial Neural Network (ANN): offers fault tolerance and rapid response; it is an artificial intelligence technique used to solve demanding problems that humans are not able to solve directly. ### Machine Learning Attacks As an AI area, machine learning is used to study network traffic, learn the types of incoming traffic, and detect any anomalous behaviour in the network. To build an intelligent security system, it is essential to increase the alertness towards malicious network behaviours or attacks. Network attacks are broadly classified into three different categories [5, 6]: * Denial-of-Service (DoS) Attacks: these attacks seek to slow down the traffic in a network and, in some cases, can result in shutting down the whole network. Some kinds of denial-of-service attacks are flooding DoS, distributed DoS, and flaw exploitation DoS. * Penetration Attacks: these attacks aim to gain access to the whole system without authorization from the network administration; consequently, the attacker can access the system files and change the states of the network. The common penetration attacks are: User-to-Root (U2R), Remote-to-Root (R2R), Remote-to-User (R2U), and Remote Disk Read (RDR) attacks. 
* Reconnaissance or Scanning Attacks: their aim is to scan networks, ports, vulnerabilities, etc., to gain information about the whole network topology, identify the computer users, the firewall types in the system, and the operating systems. They are subdivided into two types [6]: Logical Reconnaissance (i.e. anything that is done in the digital spectrum and over which, in most cases, the network admin does not have control [6]) and Physical Reconnaissance (i.e. activities that cross into what the network admin does have control over; however, some elements will never be fully protected). ### Similar Work Machine learning could be viewed as the most effective technique for IDS. It should be used in protecting the people and business integration that happens over the network/Internet. In this regard, different work has been conducted looking at its performance. For example: * the study by [7] looks at machine learning in the context of IoT. It showed that machine learning is efficient in detecting attacks, hence making it relevant in IoT cybersecurity-related research. Another research work by [8] looked at IDS in IoT infrastructures using supervised learning algorithms to identify cyber-attacks. They applied the algorithms and showed their performance using two public datasets; * the work by [9] contributed a new intrusion detection framework based on feature selection and ensemble learning techniques. They created an ensemble classifier that achieved 81.31% accuracy; * [10] built a classifier and proposed a new joint optimization algorithm using swarm intelligence to optimize the deep belief network (DBN) structure. Their classifier achieved a highest accuracy of 82.36%; * in [11] an ANN IDS was used on a public dataset. Their results showed promising performance in detecting malicious attacks in the network on a real-time basis. Their performance evaluation was based on specific network features using the four essential prediction accuracy measures: true positive, true negative, false positive, and false negative; and * the work by [12] classified the network traffic as normal or anomalous, and used machine learning techniques, k-nearest neighbours, decision tree, and support vector machine to evaluate IDS. Their evaluation used four necessary measures: accuracy, precision, sensitivity, and F1-score. This paper used DoS, U2R, Probing, and Root-to-local (R2L) as network malicious behaviours or attacks, and RF, XGB, KNN, and ANN as supervised machine learning techniques to compute the classification models. This paper adopted accuracy, precision, and F1-score as evaluation measures. Furthermore, it used the true positive, true negative, false positive, and false negative counts as the accuracy measures. ## 3 Research Approach This section presents the methodology that was followed in conducting this study. ### Methodology Fig. 1 illustrates the methodology that was adopted to conduct this study. The public dataset file, Table 1, is included as an input and then taken through pre-processing. In the pre-processing step, a feature selection technique is used since the number of attributes in the dataset is high. The pre-processing step also converts all categorical data that are in textual form into numerical form. The pre-processed data are divided into testing data and training data. The features used are summarised in Table 2. 
Subsequently, the output of the pre-processing step and the classification models/algorithms (RF, XGB, KNN, and ANN) were used to compute the classification results using the following performance measures: 1) the confusion matrix, which covers the following attack classes: * Denial-of-service (DoS): number of denial-of-service attack cases. * Probe: number of surveillance and probing cases. * Root-to-local (R2L): number of cases of unauthorized access from a remote machine to a local machine. * User-to-root (U2R): number of cases of unauthorized access to local superuser privileges by a local unprivileged user. 2) The four other performance measures, precision, recall, F1-score, and accuracy, are calculated based on the following equations: \[Precision=\frac{t_{p}}{t_{p}+f_{p}} \tag{1}\] \[Recall=\frac{t_{p}}{t_{p}+f_{n}} \tag{2}\] \[F1-score=\frac{2*Recall*Precision}{Recall+Precision} \tag{3}\] \[Accuracy=\frac{t_{p}+t_{n}}{t_{p}+t_{n}+f_{p}+f_{n}} \tag{4}\] where \(t_{p}=\) true positive, \(t_{n}=\) true negative, \(f_{p}=\) false positive, and \(f_{n}=\) false negative. Fig. 1: The adopted methodology (_Author’s adaptation_) In summary, using the research approach in Fig. 1, the following algorithm was adopted: 1. _Pre-process the data set._ 2. _Divide the data set into training data and testing data._ 3. _Build the classifier models on the training data (RF, XGB, KNN, and ANN)._ 4. _Read the test data._ 5. _Test the classifier models on the test data._ 6. _Compute and compare Normal, DoS, Probe, R2L, and U2R for all models._ ### Public Dataset This paper focuses on a public dataset that was captured for IDS. The file contains 88,180 records that are labelled as normal or attack. Normal records correspond to non-malicious incoming network traffic and the attack records correspond to malicious incoming network traffic. The attack records are classified as DoS, Probe, R2L, and U2R. The public dataset is presented in Table 1 and its features are explained in Table 2. 70% of the dataset is arranged into a training set of 70,459 samples and the remaining 30% into a testing set of 17,723 samples. ### Hyper-parameters The hyper-parameters set for each algorithm are as follows: * RF: 100 trees are built with a depth of 13. The minimum sample is 1 and the size of the hyper-parameter search is 2. * XGB: the algorithm uses 36 trees with a maximum depth of 3. * K-Nearest Neighbours: the number of neighbours is k = 5. The Euclidean distance, \(p=2\), measures the true straight-line distance between two points in Euclidean space. * ANN: the network has 10 layers, uses the rectified linear unit (ReLU) activation function and the "Adam" solver. The learning rate is Alpha = 0.001. 
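For illustration, a minimal version of this pipeline could look as follows, assuming scikit-learn and the xgboost Python package, with `X` and `y` the already label-encoded feature matrix and class labels; the mapping of the stated hyper-parameters onto library arguments (e.g., ten hidden layers of ten units each and a learning rate of 0.001 for the ANN) is an interpretation, not taken from the study's code.

```python
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, classification_report
from xgboost import XGBClassifier

# X: numeric feature matrix (88,180 x n_features), y: encoded labels
# (Normal, DoS, Probe, R2L, U2R); 70/30 train/test split as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)

models = {
    "RF": RandomForestClassifier(n_estimators=100, max_depth=13,
                                 min_samples_leaf=1, random_state=42),
    "XGB": XGBClassifier(n_estimators=36, max_depth=3),
    "KNN": KNeighborsClassifier(n_neighbors=5, p=2),        # p=2: Euclidean distance
    "ANN": MLPClassifier(hidden_layer_sizes=(10,) * 10, activation="relu",
                         solver="adam", learning_rate_init=0.001, max_iter=500),
}

for name, clf in models.items():
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    print(name)
    print(confusion_matrix(y_test, y_pred))                 # analogue of Tables 3-6
    print(classification_report(y_test, y_pred, digits=3))  # precision/recall/F1/accuracy
```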
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline A & B & C & D & E & F & G & H & I & J & L \\ \hline 17 & 9 & 491 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & N \\ \hline 42 & 9 & 146 & 0 & 0 & 0 & 0.08 & 0.15 & 0 & 0 & N \\ \hline 47 & 5 & 0 & 0 & 0 & 1 & 0.05 & 0.07 & 0 & 1 & D \\ \hline 21 & 9 & 232 & 8153 & 0 & 0.2 & 1 & 0 & 0.04 & 1 & N \\ \hline 21 & 9 & 199 & 420 & 0 & 0 & 1 & 0 & 0 & 0 & N \\ \hline 47 & 1 & 0 & 0 & 0 & 0 & 0.16 & 0.06 & 0 & 0 & D \\ \hline 47 & 5 & 0 & 0 & 0 & 1 & 0.05 & 0.06 & 0 & 1 & D \\ \hline 47 & 5 & 0 & 0 & 0 & 1 & 0.14 & 0.06 & 0 & 1 & D \\ \hline 49 & 5 & 0 & 0 & 0 & 1 & 0.09 & 0.05 & 0 & 1 & D \\ \hline 47 & 5 & 0 & 0 & 0 & 1 & 0.06 & 0.06 & 0 & 1 & D \\ \hline 47 & 1 & 0 & 0 & 0 & 0 & 0.06 & 0.06 & 0 & 0 & D \\ \hline 47 & 5 & 0 & 0 & 0 & 1 & 0.02 & 0.06 & 0 & 1 & D \\ \hline 21 & 9 & 287 & 2251 & 0 & 0 & 1 & 0 & 0.3 & 0 & N \\ \hline \end{tabular} \end{table} Table 1: The Public Dataset ## 4 Results and Discussion This section presents and discusses the results in order to conclude which algorithm best predicts an attack and is best suited for use with an IDS. After applying each of the algorithms (RF, XGB, KNN, and ANN) to the dataset, the confusion matrices are produced and presented in Tables 3, 4, 5, and 6. The rows represent the instances in an actual class and the columns represent the instances in a predicted class. \begin{table} \begin{tabular}{|c|c|c|} \hline Features & Type & Name \\ \hline A & Integer & Service \\ \hline B & Integer & Flag \\ \hline C & Integer & Src \\ \hline D & Integer & \\ \hline E & Integer & \\ \hline F & Integer & \\ \hline G & Integer & \\ \hline H & Integer & \\ \hline \end{tabular} \end{table} Table 2: Features of the public dataset Table 7 presents the evaluation metrics that indicate the performance results for the different classification models used in this study. The RF model has the best performance, with an accuracy of 99.7% and the highest F1-score of 78.6%. The XGB model has the second-best performance, with accuracy and F1-score computed to be 99.1% and 76.3%, respectively. It can be concluded that both RF and XGB are the best algorithms to be used with IDS. These results compare well with the findings of [9], whose ensemble classifier achieved an accuracy of 81.31%, and with those of [10], which reached a highest accuracy of 82.36%. Of note, KNN is also suitable for use with IDS, as it yields an accuracy of 97.6%. Since RF and XGB performed best in this paper, it was necessary to present the features that have the most predictive power (i.e. the drivers of the obtained results due to their significant impact on the output values). 
\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline & **Normal** & **DoS** & **Probe** & **R2L** & **U2R** \\ \hline **Normal** & 9243 & 66 & 92 & 11 & 0 \\ \hline **DoS** & 245 & 6237 & 33 & 0 & 0 \\ \hline **Probe** & 213 & 38 & 1407 & 0 & 0 \\ \hline **R2L** & 102 & 0 & 22 & 10 & 0 \\ \hline **U2R** & 4 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} \end{table} Table 6: ANN confusion matrix \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline & **Normal** & **DoS** & **Probe** & **R2L** & **U2R** \\ \hline **Normal** & 9030 & 10 & 85 & 198 & 89 \\ \hline **DoS** & 2 & 6503 & 7 & 3 & 0 \\ \hline **Probe** & 18 & 6 & 1629 & 2 & 3 \\ \hline **R2L** & 1 & 0 & 1 & 132 & 0 \\ \hline **U2R** & 1 & 0 & 0 & 0 & 3 \\ \hline \end{tabular} \end{table} Table 4: XGB confusion matrix \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline & **Normal** & **DoS** & **Probe** & **R2L** & **U2R** \\ \hline **Normal** & 9386 & 3 & 12 & 11 & 0 \\ \hline **DoS** & 1 & 6512 & 2 & 0 & 0 \\ \hline **Probe** & 5 & 4 & 1649 & 0 & 0 \\ \hline **R2L** & 5 & 0 & 0 & 129 & 0 \\ \hline **U2R** & 4 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} \end{table} Table 3: RF confusion matrix \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline & **Normal** & **DoS** & **Probe** & **R2L** & **U2R** \\ \hline **Normal** & 9351 & 7 & 33 & 20 & 1 \\ \hline **DoS** & 9 & 6502 & 4 & 0 & 0 \\ \hline **Probe** & 46 & 11 & 1600 & 1 & 0 \\ \hline **R2L** & 17 & 0 & 1 & 116 & 0 \\ \hline **U2R** & 4 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} \end{table} Table 5: KNN confusion matrix The feature scores are presented in Fig. 2. Training time is an important metric to consider when choosing which algorithm to use in an IDS. Table 8 shows the training time for each algorithm. Clearly, KNN, at 17 seconds, is the fastest, followed by RF and XGB, which both have an acceptable training time of around 37 seconds. Unfortunately, the accuracy of KNN is below that of RF and XGB, so it is not the best performing algorithm overall. ## 5 Conclusion and Future Work This paper discusses the classification problem of IDS using supervised machine learning algorithms (RF, XGB, KNN, ANN). A publicly available dataset that contains common attacks (DoS, Probe, R2L, and U2R) was used. Based on the prediction accuracy rate, the results show that RF performs best on multi-class IDS, with XGB and KNN taking second and third place, respectively. However, if training time is the key metric considered, then KNN would be preferred over RF and XGB. In the future, this study will concentrate on evaluating the performance of the dataset using other supervised learning algorithms such as Logistic Regression (LR) and Support Vector Machine (SVM). Furthermore, it will look at other performance measures such as log loss (an error metric that considers the predicted probabilities, where a lower log loss value indicates better algorithm performance). ## Acknowledgments The authors would like to thank the Tshwane University of Technology for financial support. The authors declare that there is no conflict of interest regarding the publication of this paper.
2301.03358
Cost-Effective Two-Stage Network Slicing for Edge-Cloud Orchestrated Vehicular Networks
In this paper, we study a network slicing problem for edge-cloud orchestrated vehicular networks, in which the edge and cloud servers are orchestrated to process computation tasks for reducing network slicing cost while satisfying the quality of service requirements. We propose a two-stage network slicing framework, which consists of 1) network planning stage in a large timescale to perform slice deployment, edge resource provisioning, and cloud resource provisioning, and 2) network operation stage in a small timescale to perform resource allocation and task dispatching. Particularly, we formulate the network slicing problem as a two-timescale stochastic optimization problem to minimize the network slicing cost. Since the problem is NP-hard due to coupled network planning and network operation stages, we develop a Two timescAle netWork Slicing (TAWS) algorithm by collaboratively integrating reinforcement learning (RL) and optimization methods, which can jointly make network planning and operation decisions. Specifically, by leveraging the timescale separation property of decisions, we decouple the problem into a large-timescale network planning subproblem and a small-timescale network operation subproblem. The former is solved by an RL method, and the latter is solved by an optimization method. Simulation results based on real-world vehicle traffic traces show that the TAWS can effectively reduce the network slicing cost as compared to the benchmark scheme.
Wen Wu, Kaige Qu, Peng Yang, Ning Zhang, Xuemin, Shen, Weihua Zhuang
2022-12-31T06:03:14Z
http://arxiv.org/abs/2301.03358v1
# Cost-Effective Two-Stage Network Slicing for Edge-Cloud Orchestrated Vehicular Networks ###### Abstract In this paper, we study a network slicing problem for edge-cloud orchestrated vehicular networks, in which the edge and cloud servers are orchestrated to process computation tasks for reducing network slicing cost while satisfying the quality of service requirements. We propose a two-stage network slicing framework, which consists of 1) _network planning_ stage in a large timescale to perform slice deployment, edge resource provisioning, and cloud resource provisioning, and 2) _network operation_ stage in a small timescale to perform resource allocation and task dispatching. Particularly, we formulate the network slicing problem as a two-timescale stochastic optimization problem to minimize the network slicing cost. Since the problem is NP-hard due to coupled network planning and network operation stages, we develop a two timescAle netWork Slicing (TAWS) algorithm by collaboratively integrating reinforcement learning (RL) and optimization methods, which can jointly make network planning and operation decisions. Specifically, by leveraging the timescale separation property of decisions, we decouple the problem into a large-timescale network planning subproblem and a small-timescale network operation subproblem. The former is solved by an RL method, and the latter is solved by an optimization method. Simulation results based on real-world vehicle traffic traces show that the TAWS can effectively reduce the network slicing cost as compared to the benchmark scheme. ## I Introduction To make autonomous driving from a mere vision to reality, future vehicular networks are required to support various Internet of vehicles (IoV) services, such as object detection, in-vehicle infotainment, and safety message dissemination [1]. Those IoV services have diversified quality of service (QoS) requirements in terms of delay, throughput, reliability, etc. Emerging network slicing is deemed as a _de-facto_ solution to support diversified IoV services in vehicular networks. Its basic idea is to construct multiple isolated logical sub-networks (i.e., slices) for different services on top of the physical network, thereby facilitating flexible, agile, and cost-effective service provisioning. Starting from the fifth-generation (5G) era, standardization efforts from the 3rd generation partnership project (3GPP) body, e.g., Releases 15-17 [2, 3, 4], and proof-of-concept systems, e.g., Orion [5], have fuelled the maturity of network slicing. In the coming 6G era, advanced network slicing techniques are expected to play an increasingly important role [6, 7, 8]. In the literature, significant research efforts have been devoted to network slicing. Ye _et al._ investigated a radio spectrum resource slicing problem, in which radio spectrum is sliced between macro base stations (MBSs) and small BSs (SBSs) [9]. To achieve efficient resource allocation, a deep learning-based algorithm was proposed to jointly allocate radio spectrum and transmit power in a slicing-based network [10]. The previous work in [11] considered the resource provisioning problem and proposed a constrained learning algorithm to solve it. However, this work differs from the existing works in several important aspects. Firstly, the existing works focus on utilizing resources on the network edge, low-cost cloud resources are yet to be considered. 
As a remedy, a certain amount of computation tasks processed at the congested BSs can be dispatched to the remote cloud, i.e., _task dispatching_, such that system cost can be reduced. Secondly, network slicing includes two stages: 1) _network planning_ stage to provision network resources for slices in the large timescale, and 2) _network operation_ stage to allocate the reserved resources to end users in the small timescale [3, 12]. The existing works mainly decouple network slicing into two independent stages, while the interaction between them is seldom considered. Hence, designing a cost-effective network slicing scheme should take cloud resources and such interaction relationship into consideration. Optimizing network slicing performance in dynamic vehicular networks faces the following _challenges_. Firstly, network planning and operation decisions are _nested_. Large-timescale network planning decisions (e.g., resource reservation), will condition small-timescale network operation decisions (e.g., resource allocation). Meanwhile, the performance achieved in the network operation stage will also affect the decision-making in the network planning stage, which is difficult to be solved by conventional optimization methods. Secondly, since vehicle traffic density varies temporal-spatially, network planning decisions need to be made to optimize long-term performance in the slice lifecycle while accommodating such network _dynamics_. Deep reinforcement learning (RL) is considered as a plausible solution for long-term stochastic optimization. In this paper, we _first_ propose a cost-effective two-stage network slicing framework for edge-cloud orchestrated vehicular networks, by considering nested network planning and operation stages and effectively leveraging cloud resources. We then apply a network slicing cost model that accounts for slice deployment, resource provision, slice configuration adjustment, and QoS satisfaction. Based on the model, we formulate the network slicing problem as a two-timescale stochastic optimization problem to minimize the network slicing cost. _Second_, to solve the problem, we develop a learning-based algorithm, named Two timescAle netWork Slicing (TAWS). The TAWS exploits the timescale separation structure of decision variables and decouples the problem into two subproblems in different timescales. Regarding the large-timescale network planning subproblem, an RL algorithm is designed to minimize network slicing cost via optimizing slice deployment, edge resource provisioning, and cloud resource provisioning. Regarding the small-timescale network operation subproblem, an optimization algorithm is designed to minimize average service delay via optimizing resource allocation and task dispatching. In addition, the achieved service delay in the network operation stage is incorporated into the reward of the RL-based network planning algorithm, thereby capturing the interaction between two stages and enabling _closed-loop_ network control. Simulation results on real-world vehicle traces demonstrate that the proposed algorithm outperforms the benchmark scheme in terms of reducing network slicing cost. The remainder of this paper is organized as follows. The system model and problem formulation are presented in Sections II and III, respectively. Section IV describes the proposed TAWS algorithm. Simulation results are given in Section V, along with the conclusion in Section VI. ## II System Model ### _Network Model_ As shown in Fig. 
1, the network slicing framework consists of several components. Physical network: A two-tier cellular network is deployed for serving on-road vehicles. The set of BSs is denoted by \(\mathcal{M}\), including the set of MBSs denoted by \(\mathcal{M}_{m}\) and the set of SBSs denoted by \(\mathcal{M}_{s}\), i.e., \(\mathcal{M}=\mathcal{M}_{m}\cup\mathcal{M}_{s}\). Each BS has a circular coverage and is equipped with an edge server. In the considered scenario, vehicles driving on the road generate computation tasks over time, which are offloaded to roadside BSs. Those tasks can be either processed at edge servers or dispatched to the remote cloud server via backbone networks. Once completed, computation results are sent back to vehicles. Network slice: Multiple network slices are constructed on top of the physical vehicular network. We consider \(K\) delay-sensitive services with differentiated delay requirements, denoted by set \(\mathcal{K}\). Let \(\theta_{k},\forall k\in\mathcal{K}\) denote the tolerable delay of service \(k\). For example, the tolerable delay of objective detection service is 100 \(ms\)[13], whereas the tolerable delay of in-vehicle infotainment can be up to several hundreds of milliseconds. Network controller: A hierarchical network control architecture is adopted, including an upper-layer software defined networking (SDN) controller that connects to all BSs, and lower-layer local network controllers located at BSs. Those controllers are in charge of network information collection and making network slicing decisions. ### _Two-Stage Network Slicing Framework_ We present a two-stage network slicing framework for the considered network. Firstly, a network planning stage operates in the large timescale (referred to as planning windows) to reserve resources at specific network nodes for the constructed slices. The duration of each planning window is denoted by \(T_{p}\). At each planning window, the SDN controller collects the average vehicle traffic density information in the considered area, based on which planning decisions are made. Secondly, the network operation stage operates in the small timescale (referred to as operation slots) to dynamically allocate the reserved resources to vehicles according to real-time vehicles' service requests and network conditions. The duration of each operation slot is denoted by \(T_{o}\). A planning window includes multiple operation slots, i.e., \(T_{p}/T_{o}\in\mathbb{Z}^{+}\). At each operation slot, the local network controller at each BS collects real-time service requests and channel conditions of its associated vehicles, based on which operation decisions are made. Decision structures in two stages are detailed respectively as follows. #### Ii-B1 Network Planning Decision Structure The planning window is indexed by \(w\in\mathcal{W}=\{1,2,...,W\}\), and planning decisions in planning window \(w\) include the following components. _Slice deployment decision_, denoted by \(\mathbf{o}^{w}\in\mathbb{R}^{M_{s}\times 1}\). Each element is a binary variable, i.e., \[o_{m}^{w}\in\{0,1\},m\in\mathcal{M}_{s}. \tag{1}\] If SBS \(m\) is activated for slice deployment, we have \(o_{m}^{w}=1\); otherwise, \(o_{m}^{w}=0\). When service demands are low, deploying slices at a selective subset of BSs can reduce network slicing cost as compared to deploying slices at all BSs while guaranteeing slices' service level agreements (SLAs). This is because running network slicing requires resource virtualization, which incurs network operating costs. 
For service continuity consideration, we assume that MBSs that cover the entire area are always activated. Note that only when a BS is activated for slice deployment, edge resources at the BS can be provisioned. _Edge resource provisioning decision_, including radio spectrum and computing resource provisioning at all BSs for all slices, denoted by \(\mathbf{B}^{w}\in\mathbb{R}^{K\times M}\) and \(\mathbf{C}^{w}\in\mathbb{R}^{K\times M}\) Fig. 1: Network slicing for edge-cloud orchestrated vehicular networks. respectively. The corresponding elements \[\{b_{k,m}^{w},c_{k,m}^{w}\}\in\mathbb{Z}^{+},\forall k\in\mathcal{K},m\in\mathcal{ M}, \tag{2}\] represent the number of subcarriers and edge virtual machine (VM) instances provisioned for slice \(k\) at BS \(m\), where \(\mathbb{Z}^{+}\) denotes the set of positive integers.1 The bandwidth of a subcarrier is denoted by \(\beta\), and the computing capability of an edge VM is denoted by \(F_{e}\). Due to the limitation of edge resources, the following capacity constraints are imposed: Footnote 1: Memory resource is also allocated to the VM instance to enable task processing, which is matched to its allocated computing resource. \[o_{m}^{w}\sum_{k\in\mathcal{K}}b_{k,m}^{w}\leq B_{m},o_{m}^{w}\sum_{k\in \mathcal{K}}c_{k,m}^{w}\leq C_{m},\forall m\in\mathcal{M}, \tag{3}\] where \(B_{m}\) and \(C_{m}\) represent the total numbers of subcarriers and VM instances at BS \(m\), respectively. _Cloud resource provisioning decision_, denoted by \(\mathbf{h}^{w}\in\mathbb{R}^{K\times 1}\). Each element \[h_{k}^{w}\in\mathbb{Z}^{+},\forall k\in\mathcal{K} \tag{4}\] denotes the number of cloud VM instances reserved for slice \(k\). The computing capability of a cloud VM is denoted by \(F_{c}\). #### Ii-B2 Network Operation Decision Structure Let \(t\in\mathcal{T}=\{1,2,...,T\}\) denote the index of operation slots within a planning window. At operation slot \(t\), the following decisions are determined for each slice \(k\). _Radio spectrum allocation decision_, denoted by \(\mathbf{y}_{k}^{t}\in\mathbb{R}^{N^{t}\times 1}\). The reserved radio spectrum at each BS is allocated to active vehicles within BS's coverage for task offloading. Due to vehicle mobility, the number of vehicles varies across time. Let \(\mathcal{N}^{t}\) denote the set of active vehicles in operation slot \(t\), and \(N^{t}=|\mathcal{N}^{t}|\). For simplicity, each vehicle associates to the nearest BS. Let \(\mathcal{N}_{m}^{t}\) denote the set of active vehicles associated to BS \(m\) at operation slot \(t\), and \(y_{k,n}^{t}\in\mathbb{R}^{+}\) represents the fraction of radio spectrum allocated to vehicle \(n\). The total amount of the allocated bandwidth should not exceed the reserved number of subcarriers at the corresponding BS, i.e., \[\sum_{n\in\mathcal{N}_{m}^{t}}y_{k,n}^{t}\leq b_{k,m}^{w},\forall m\in \mathcal{M}^{w}. \tag{5}\] Here, \(\mathcal{M}^{w}\) denotes the set of the activated BSs in window \(w\). _Task dispatching decision_, denoted by \(\mathbf{x}_{k}^{t}\in\mathbb{Z}^{M^{w}\times 1}\). The BS receives computation tasks uploaded from its associated vehicles. The task arrivals of vehicles follow an arbitrary stochastic process. Let \(a_{k,n}^{t}\) denote the number of the generated tasks of vehicle \(n\) in operation slot \(t\), and the aggregated computation workload at BS \(m\) is given by \(A_{k,m}^{t}=\sum_{n\in\mathcal{N}_{m}^{t}}a_{k,n}^{t}\). 
Processing all tasks at BSs with limited computing resources may incur prohibitively high queuing delay, and hence a portion of computation tasks can be dispatched to the remote cloud via backbone networks. Let \(x_{k,m}^{t}\) represent the number of dispatched tasks from BS \(m\) in slice \(k\), i.e., \[x_{k,m}^{t}\in\{0,1,2,...,A_{k,m}^{t}\},\forall m\in\mathcal{M}^{w}. \tag{6}\] The operation decisions impact service delay at each operation slot, which is analyzed in the following subsection. ### _Service Delay Model_ The service delay includes task offloading delay and task processing delay at either the edge or the cloud. For service \(k\), the following delay analysis is adopted. Task offloading delay: The transmission rate of one subcarrier from vehicle \(n\) to its associated BS is given by \(R_{n}^{t}=\beta\log_{2}\left(1+\frac{P_{v}g_{n}^{t}}{\beta N_{o}+\beta I}\right),\) where \(P_{v}\), \(g_{n}^{t}\), \(N_{o}\), and \(I\) represent the vehicle's transmission power, instantaneous channel gain, noise spectrum density, and interference spectrum density, respectively. With the allocated radio spectrum \(y_{k,n}^{t}b_{k,m}^{w}\), the task offloading delay of vehicle \(n\) is given by \(d_{k,n,o}^{t}=\frac{\xi_{k}}{y_{k,n}^{t}b_{k,m}^{w}R_{n}^{t}},\forall n\in\mathcal{N}_{m}^{t},\) where \(\xi_{k}\) (in bits) denotes the task data size of service \(k\). Edge processing delay: Given the task dispatching decision, \(A_{k,m}^{t}-x_{k,m}^{t}\) tasks are processed at BS \(m\). Let \(Q_{k,m}^{t}\) (in bits) denote the amount of the backlogged tasks at BS \(m\). Taking task computation delay and queuing delay into account, the edge processing delay at BS \(m\) is given by \(d_{k,m,e}^{t}=\frac{(Q_{k,m}^{t}+(A_{k,m}^{t}-x_{k,m}^{t}+1)\xi_{k}/2)\eta_{k}}{c_{k,m}^{w}F_{e}},\forall m\in\mathcal{M}^{w},\) where \(\eta_{k}\) (in cycles/bit) denotes the task computation intensity of service \(k\), and \(c_{k,m}^{w}F_{e}\) is the computing capability of BS \(m\) with \(c_{k,m}^{w}\) provisioned edge VMs. The task backlog at BS \(m\) is updated by \(Q_{k,m}^{t+1}=\left[Q_{k,m}^{t}+(A_{k,m}^{t}-x_{k,m}^{t})\xi_{k}-c_{k,m}^{w}F_{e}T_{o}/\eta_{k}\right]^{+},\) where \([x]^{+}=\max\{x,0\}\). Cloud processing delay: For BS \(m\), \(x_{k,m}^{t}\) tasks are dispatched via backbone networks and then processed at the cloud, whose delay is given by \(d_{k,m,c}^{t}=d_{r}^{t}+\frac{\xi_{k}\eta_{k}}{h_{k}^{w}F_{c}},\) where \(d_{r}^{t}\) denotes the round trip time in the backbone network. The second term represents the task processing delay in the cloud. Note that the queuing delay at the cloud is negligible since multi-core cloud servers can process different tasks in parallel. As such, the average delay for each computation task is given by \[\begin{split}& D_{k}^{t}(\mathbf{x}_{k}^{t},\mathbf{y}_{k}^{t})=\sum_{m\in\mathcal{M}^{w}}\sum_{n\in\mathcal{N}_{m}^{t}}\frac{d_{k,n,o}^{t}}{\sum_{m\in\mathcal{M}^{w}}N_{m}^{t}}\\ &+\sum_{m\in\mathcal{M}^{w}}\frac{d_{k,m,e}^{t}\left(A_{k,m}^{t}-x_{k,m}^{t}\right)+d_{k,m,c}^{t}x_{k,m}^{t}}{\sum_{m\in\mathcal{M}^{w}}A_{k,m}^{t}}.\end{split} \tag{7}\] In the above equation, the first term represents the average task offloading delay for each task, and the second term represents the average task processing delay, taking the workload distribution between the edge and cloud servers into account. By averaging over all operation slots, the average service delay is given by \(\bar{D}_{k}^{w}=\frac{1}{T}\sum_{t=1}^{T}D_{k}^{t}(\mathbf{x}_{k}^{t},\mathbf{y}_{k}^{t})\). 
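A small numerical sketch of this delay model, with illustrative parameter values (not from the paper): it evaluates the offloading delay of one vehicle, the edge processing delay including the backlog term, and the cloud processing delay of a dispatched task for a single slice and BS.

```python
def offloading_delay(xi, y_frac, b_reserved, R_subcarrier):
    """d_off = xi / (y * b * R): task size over the vehicle's uplink rate."""
    return xi / (y_frac * b_reserved * R_subcarrier)

def edge_processing_delay(Q, A, x, xi, eta, c_reserved, F_e):
    """Backlog plus half of the newly kept tasks, converted to CPU cycles."""
    workload_bits = Q + (A - x + 1) * xi / 2.0
    return workload_bits * eta / (c_reserved * F_e)

def cloud_processing_delay(d_rtt, xi, eta, h_reserved, F_c):
    """Backbone round trip plus processing on the reserved cloud VMs."""
    return d_rtt + xi * eta / (h_reserved * F_c)

# Illustrative numbers: a 0.5 Mbit task, 500 cycles/bit, 2 edge VMs at 3 GHz,
# 4 cloud VMs at 4 GHz, and a 20 ms backbone round trip.
xi, eta = 0.5e6, 500.0
print(offloading_delay(xi, y_frac=0.2, b_reserved=10, R_subcarrier=1e6))
print(edge_processing_delay(Q=1e6, A=8, x=3, xi=xi, eta=eta, c_reserved=2, F_e=3e9))
print(cloud_processing_delay(d_rtt=0.02, xi=xi, eta=eta, h_reserved=4, F_c=4e9))
```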
### _Network Slicing Cost Model_ The following network slicing cost model is adopted for slicing performance evaluation, including several components. Slice deployment cost: The cost arises because running network slices at BSs incurs the overhead of resource virtualization, and is given by \(\Phi_{d}^{w}=q_{d}\sum_{m\in\mathcal{M}_{s}}o_{m}^{w}\). Here, \(q_{d}\) denotes the unit cost of deploying network slices at a BS. Resource provisioning cost: The cost component characterizes the provisioning cost of radio spectrum resources, edge computing resources, and cloud computing resources. For simplicity, we assume the unit costs of a subcarrier, an edge VM instance, and a cloud VM instance are the same, denoted by \(q_{r}>0\). The resource provisioning cost is given by \(\Phi_{p}^{w}=q_{r}\sum_{k\in\mathcal{K}}\left(h_{k}^{w}+\sum_{m\in\mathcal{M}}\left(o_{m}^{w}b_{k,m}^{w}+o_{m}^{w}c_{k,m}^{w}\right)\right).\) Slice adjustment cost: The cost component characterizes the difference between two subsequent planning decisions, i.e., the cost for adjusting the amount of the reserved spectrum and computing resources. For computing resources, VM instances can be resized via advanced virtualization techniques in practical systems, e.g., Kubernetes [14]. Here, \(q_{s}\) represents the unit price of adjusting a unit of reserved network resources. Hence, the slice adjustment cost is given by \[\Phi_{s}^{w}=q_{s}\sum_{k\in\mathcal{K}}\left(\left[h_{k}^{w}-h_{k}^{w-1}\right]^{+}+\sum_{m\in\mathcal{M}}\mathbb{1}\left\{o_{m}^{w-1}=1\wedge o_{m}^{w}=1\right\}\left(\left[b_{k,m}^{w}-b_{k,m}^{w-1}\right]^{+}+\left[c_{k,m}^{w}-c_{k,m}^{w-1}\right]^{+}\right)\right), \tag{8}\] where \(\mathbb{1}\left\{\cdot\right\}\) is an indicator function and \(\mathbb{1}\left\{o_{m}^{w-1}=1\wedge o_{m}^{w}=1\right\}\) indicates that the slice is deployed at BS \(m\) in both the previous and current planning windows. SLA revenue: The cost component characterizes the benefit of QoS satisfaction, i.e., of the achieved service delay of each slice. The piece-wise SLA revenue function is given by \[\Omega_{k}\left(D\right)=\begin{cases}q_{b},&\text{if }D<\theta_{k}^{\prime},\\ q_{b}\left(\frac{\theta_{k}-D}{\theta_{k}-\theta_{k}^{\prime}}\right),&\text{if }\theta_{k}^{\prime}\leq D\leq\theta_{k},\\ -q_{p},&\text{if }D>\theta_{k}.\end{cases} \tag{9}\] Here, \(q_{b}>0\) is the highest unit revenue once a slice's SLA is satisfied, and \(q_{p}>0\) is the unit penalty once the slice's SLA is violated. Obviously, \(q_{p}>q_{b}\) to discourage SLA violation. In addition, \(\theta_{k}^{\prime}<\theta_{k}\) represents the threshold achieving the highest revenue. For simplicity, we set \(\theta_{k}^{\prime}=\theta_{k}/2\) in the simulation. The overall SLA revenue of all slices is given by \(\Phi_{q}^{w}=\sum_{k\in\mathcal{K}}\Omega_{k}\left(\bar{D}_{k}^{w}\right).\) Taking all cost components into account, the overall network slicing cost in the entire slice lifecycle (i.e., all planning windows) is given by \(\Phi\left(\mathbf{o}^{w},\mathbf{B}^{w},\mathbf{C}^{w},\mathbf{h}^{w},\{\mathbf{x}_{k}^{t},\mathbf{y}_{k}^{t}\}_{t\in\mathcal{T},k\in\mathcal{K}}\right)=\sum_{w\in\mathcal{W}}\left(\Phi_{d}^{w}+\Phi_{p}^{w}+\Phi_{s}^{w}-\Phi_{q}^{w}\right),\) which is adopted to evaluate network slicing performance. 
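The piece-wise SLA revenue in (9) and its role in the per-window cost can be sketched as below, with \(\theta_{k}^{\prime}=\theta_{k}/2\) as in the simulation setting; all prices and delays are illustrative placeholders, not values from the paper.

```python
def sla_revenue(D, theta, q_b=1.0, q_p=2.0):
    """Omega_k(D) from (9), with theta' = theta / 2."""
    theta_p = theta / 2.0
    if D < theta_p:
        return q_b                                     # delay well below target
    if D <= theta:
        return q_b * (theta - D) / (theta - theta_p)   # linearly decaying revenue
    return -q_p                                        # SLA violated: penalty

def window_cost(n_active_sbs, reserved_units, adjusted_units, delays, thetas,
                q_d=1.0, q_r=0.1, q_s=0.05):
    """Phi_d^w + Phi_p^w + Phi_s^w - Phi_q^w for one planning window."""
    revenue = sum(sla_revenue(D, th) for D, th in zip(delays, thetas))
    return q_d * n_active_sbs + q_r * reserved_units + q_s * adjusted_units - revenue

# Example: 3 activated SBSs, 120 reserved units, 10 adjusted units, two slices
# with average delays of 60 ms and 140 ms against targets of 100 ms and 150 ms.
print(window_cost(3, 120, 10, delays=[0.06, 0.14], thetas=[0.10, 0.15]))
```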
## III Problem Formulation The network slicing problem aims to minimize the network slicing cost via determining network planning decisions at each planning window and network operation decisions at each operation slot for each slice, which is formulated as: \[\mathbf{P}_{0}:\ \underset{\{\mathbf{o}^{w},\mathbf{B}^{w},\mathbf{C}^{w},\mathbf{h}^{w}\}_{w\in\mathcal{W}},\{\mathbf{x}_{k}^{t},\mathbf{y}_{k}^{t}\}_{t\in\mathcal{T},k\in\mathcal{K}}}{\text{min}}\ \Phi\left(\mathbf{o}^{w},\mathbf{B}^{w},\mathbf{C}^{w},\mathbf{h}^{w},\{\mathbf{x}_{k}^{t},\mathbf{y}_{k}^{t}\}_{t\in\mathcal{T},k\in\mathcal{K}}\right)\quad\text{s.t. (1)--(6)}.\] In Problem \(\mathbf{P}_{0}\), the network planning and operation decision making are coupled in two timescales, and the decisions should be jointly optimized. To address the challenge, we first decouple the problem into a large-timescale network planning subproblem and multiple small-timescale network operation subproblems. **Subproblem 1**: The _network planning subproblem_ is to minimize the network slicing cost across all the planning windows, which is formulated as: \[\mathbf{P}_{1}:\ \underset{\{\mathbf{o}^{w},\mathbf{B}^{w},\mathbf{C}^{w},\mathbf{h}^{w}\}_{w\in\mathcal{W}}}{\text{min}}\ \sum_{w\in\mathcal{W}}\left(\Phi_{d}^{w}+\Phi_{p}^{w}+\Phi_{s}^{w}-\Phi_{q}^{w}\right)\quad\text{s.t. (1)--(4)}.\] Addressing the above subproblem requires network traffic information for all planning windows, which is difficult to know _a priori_. To solve it, we leverage an RL method to design a network planning algorithm, which makes online decisions under spatially and temporally varying vehicle traffic. **Subproblem 2**: The _network operation subproblem_ is to schedule the network resources of each slice to active vehicles with random task arrivals, with the objective of minimizing the average service delay, which is formulated as: \[\mathbf{P}_{2}:\ \underset{\mathbf{x}_{k}^{t},\mathbf{y}_{k}^{t}}{\text{min}}\ D_{k}^{t}(\mathbf{x}_{k}^{t},\mathbf{y}_{k}^{t})\quad\text{s.t. (5) and (6)}.\] In the above subproblem, the radio spectrum resource allocation and task dispatching decisions jointly impact the service delay performance. To solve the problem, we analyze the subproblem's properties and design an optimization algorithm to make real-time network operation decisions. ## IV Learning-Based Network Slicing Algorithm In this section, we solve the two subproblems in Sections IV-A and IV-B, respectively. Finally, we present the TAWS algorithm for jointly optimizing planning and operation decisions in Section IV-C. ### _Network Operation Optimization_ We can observe that the radio spectrum allocation decision only impacts the offloading delay component, and the task dispatching decision only impacts the computation delay component. Moreover, both decisions are independent at each BS. Hence, the radio spectrum allocation and task dispatching decisions can be optimized individually at each BS. #### Iv-A1 Radio Spectrum Allocation Optimization From (7), the radio spectrum allocation optimization problem is equivalent to minimizing the task offloading delay at each BS, i.e., \[\mathbf{P}_{m}^{r}:\ \underset{\mathbf{y}_{k}^{t}}{\text{min}}\ \sum_{n\in\mathcal{N}_{m}^{t}}\frac{\xi_{k}}{y_{k,n}^{t}b_{k,m}^{w}R_{n}^{t}}\quad\text{s.t. (5)}.\] The objective function can be proved to be convex since its second-order derivative is positive. In addition, the constraint is convex. Hence, problem \(\mathbf{P}_{m}^{r}\) is a convex optimization problem. Using the Karush-Kuhn-Tucker conditions [15], the optimal radio spectrum resource allocation decision is \[(y_{k,n}^{t})^{\star}=\frac{\sqrt{1/R_{n}^{t}}}{\sum_{i\in\mathcal{N}_{m}^{t}}\sqrt{1/R_{i}^{t}}},\forall n\in\mathcal{N}_{m}^{t}. 
\tag{14}\] #### Iv-A2 Task Dispatching Optimization Similarly, from (7), task dispatching optimization is to minimize the task processing delay, which is formulated as: \[\mathbf{P}_{m}^{w}:\ \underset{x_{k,m}^{t}}{\text{min}}\ d_{k,m,e}^{t}\left(A_{k,m}^{t}-x_{k,m}^{t}\right)+d_{k,m,c}^{t}x_{k,m}^{t}\quad\text{s.t. (6)}.\]
straint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_const
raint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constrai
nt_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_
constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraintconstraint_constraint_constraint_constraintconstraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraintconstraint_constraint_constraintconstraint_constraint_constraintconstraint_constraint_constraintconstraint_constraint_constraint_constraintconstraint_constraint_constraintconstraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraint_constraint_constraint_constraintconstraint_constraint_constraintconstraint_constraintconstraint_constraint_constraint_constraintconstraint_constraintconstraint_constraintconstraint_constraint_constraint_constraintconstraint_constraintconstraint_constraintconstraint_co
The considered simulation area is covered by two SBSs and an MBS. Each SBS has a coverage radius of 300 m, and the MBS located in the centre covers the entire simulation area. The vehicle traffic density of the simulation area is measured in units of small regions of 250\(\times\)250 m\({}^{2}\), i.e., \(J=16\). The vehicle traces are taken from a dataset collected by the Didi Chuxing GAIA Initiative2, which contains traces from taxis equipped with GPS devices within the second ring road of Xi'an. The periods of a planning window and an operation slot are set to 10 minutes and 1 second, respectively. The period of the slice lifecycle is set to 4 hours, including 24 planning windows. The task arrivals of the two services both follow Poisson processes with different task arrival rates. We construct two slices for supporting two types of delay-sensitive services. One is an object detection service whose service delay requirement is 100 \(ms\), while the other is an in-vehicle infotainment service whose service delay requirement is 200 \(ms\). Regarding the TWAS algorithm, the numbers of neuron units in the hidden layers of both the actor and critic networks are set to 128 and 64. Important simulation parameters are summarized in Table I. Footnote 2: Didi Chuxing Dataset: [https://gaia.didchuxing.com](https://gaia.didchuxing.com). As shown in Fig. 2(a), we present the overall network slicing cost with respect to training episodes.
All simulation points are processed by a five-point moving average in order to highlight the convergence trend of the proposed algorithm. It can be seen that the proposed algorithm converges after 500 training episodes. As shown in Fig. 2(b), we compare the performance of the proposed algorithm and a short-term optimization benchmark. The basic idea of the benchmark is to minimize the network slicing cost at each individual planning window. Since planning decisions are discrete, a simple exhaustive search method is adopted to obtain the optimal one-shot planning decisions. Firstly, it can be seen that the proposed algorithm can greatly reduce the network slicing cost as compared to the benchmark. Specifically, when the task arrival rate is 2 packets per second, the proposed algorithm can reduce the network slicing cost by 23%. The reason is that the proposed algorithm takes the switching cost between two consecutive planning windows into account, while the benchmark scheme does not. Secondly, the overall network slicing cost increases with the task arrival rate, because more radio and computing resources are consumed in heavy traffic scenarios. ## VI Conclusion In this paper, we have investigated a network slicing problem in edge-cloud orchestrated vehicular networks. A two-stage network slicing algorithm, named TWAS, has been proposed to jointly make network planning and operation decisions in an online fashion. The TWAS can adapt to network dynamics on different timescales, including spatio-temporally varying vehicle traffic density and random task arrivals. Simulation results demonstrate that the TWAS can reduce the network slicing cost as compared to the conventional scheme. For future work, we aim to determine the optimal planning window size for minimizing the network slicing cost under vehicular network dynamics.
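As an aside for readers who wish to reproduce the post-processing of the training curves in Fig. 2(a) and 2(b), the following minimal Python sketch applies a five-point moving average to a per-episode cost series before plotting. It is purely illustrative: the array names and the synthetic cost curve are ours and not taken from the paper.

```python
import numpy as np

def moving_average(cost_per_episode, window=5):
    """Smooth a 1-D array of per-episode costs with a moving average.

    A window of 5 corresponds to the five-point smoothing described above.
    np.convolve zero-pads at the boundaries, so the first and last two
    points are biased low; the interior trend is unaffected.
    """
    cost = np.asarray(cost_per_episode, dtype=float)
    kernel = np.ones(window) / window
    # mode="same" keeps the output aligned with the episode index.
    return np.convolve(cost, kernel, mode="same")

# Illustrative usage with a synthetic, noisy training-cost curve.
episodes = np.arange(1000)
raw_cost = 50.0 * np.exp(-episodes / 300.0) + np.random.normal(0.0, 2.0, size=episodes.size)
smoothed_cost = moving_average(raw_cost, window=5)
```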
2309.10260
Stochastic control of the Landau-Lifshitz-Gilbert equation
We consider the stochastic Landau-Lifshitz-Gilbert equation in dimension 1. A control process is added to the effective field. We show the existence of a weak martingale solution for the resulting controlled equation. The proof uses the classical Faedo-Galerkin approximation, along with the Jakubowski version of the Skorohod Theorem. We then show pathwise uniqueness for the obtained solution, which is then coupled with the theory of Yamada and Watanabe to give the existence of a unique strong solution. We then show, using some semigroup techniques, that the obtained solution satisfies maximal regularity. We then show the existence of an optimal control. A main ingredient of the proof is using the compact embedding of a space into itself, albeit with the weak topology.
Zdzisław Brzeźniak, Soham Gokhale, Utpal Manna
2023-09-19T02:33:51Z
http://arxiv.org/abs/2309.10260v1
# Stochastic control of the Landau-Lifshitz-Gilbert equation ###### Abstract. We consider the stochastic Landau-Lifshitz-Gilbert equation in dimension \(1\). A control process is added to the effective field. We show the existence of a weak martingale solution for the resulting controlled equation. The proof uses the classical Faedo-Galerkin approximation, along with Jakubowski's version of the Skorohod Theorem. We then show pathwise uniqueness for the obtained solution, which is then coupled with the theory of Yamada and Watanabe to give the existence of a unique strong solution. We then show, using some semigroup techniques, that the obtained solution satisfies maximal regularity. We then show the existence of an optimal control. A main ingredient of the proof is using the compact embedding of a space into itself, albeit with the weak topology. That is, we use the compact embedding of the space \(L^{2}(0,T;L^{2})\) into the space \(L^{2}_{w}(0,T;L^{2})\). Key words and phrases:Landau-Lifshitz-Gilbert equation, Ferromagnetism, Stochastic control ## 1. Introduction Magnetic storage devices are widely used to store and process data. A common example would be a hard disk that is used to store computer data. Such media are nowadays comprehensively used to store large amounts of data. Another example is that of a ferromagnetic nanowire (\(d=1\)) separating domains of almost uniform magnetization \(m\). The speed of reading or writing depends upon the magnetization switching time. Hence, understanding magnetization processes and the corresponding mechanisms can help in an optimal design for the storage media and lead to faster and better devices. Magnetic devices are made up of several ferromagnetic particles. Each of them has the capacity to be magnetized in two directions. These can hence be used to store one bit of data each. Therefore it is beneficial to get more efficient switching of the magnetization states of the particles in order to get better data storage and processing. This can be controlled by adding an external control (field pulses). Weiss initiated the study of the theory of ferromagnetism, see [10] and references therein. Landau and Lifshitz [46] and Gilbert [33] developed it further. Let \(\mathcal{O}\subset\mathbb{R}\) be a bounded interval. For a temperature below the Curie temperature, the magnetization \(m\) satisfies the Landau-Lifshitz-Gilbert (LLG) equation \[\begin{cases}&\frac{\partial m}{\partial t}=\alpha_{\mathrm{g}}(m\times H_{\mathrm{eff}})-\alpha_{\mathrm{g}}\alpha_{\mathrm{d}}m\times(m\times H_{\mathrm{eff}})\text{ in }\mathcal{O}_{T}=(0,T)\times\mathcal{O},\\ &\frac{\partial m}{\partial\nu}=0\text{ on }\partial\mathcal{O}_{T}=[0,T]\times\partial\mathcal{O},\\ &m(0,\cdot)=m_{0}\text{ on }\mathcal{O}.\end{cases}\] Here \(\times\) denotes the vector product in \(\mathbb{R}^{3}\). The constants \(\alpha_{\mathrm{g}}\) and \(\alpha_{\mathrm{d}}\) are the gyromagnetic ratio and the damping parameter respectively [24]. \(H_{\mathrm{eff}}\) denotes the effective field, which is described after a short remark and some simplifying assumptions. For ferromagnetic materials at temperature below the Curie temperature, the modulus of the magnetization remains constant. For simplicity, we assume this constant to be \(1\). Let us further assume that \(\alpha_{\mathrm{g}}=1\) and denote the constant \(\alpha_{\mathrm{d}}\) by \(\alpha\). We do not use the smallness of the parameter \(\alpha\) anywhere in the calculations.
The resulting equation is \[\frac{\partial m}{\partial t}=m\times H_{\mathrm{eff}}-\alpha\,m\times(m\times H_{\mathrm{eff}}).\] Noise is added to the model to account for thermally activated phenomena in micromagnetics. Perturbing the model by a multiplicative noise, Pu and Guo in [57] prove the existence of regular martingale solutions in dimension \(2\), followed by some finite time blow-up criterion. The work [37] establishes the existence of a global weak solution to the stochastic LLG equation for any dimension \(d>0\), also showing that for dimension \(1\), the associated Cauchy problem admits a unique global smooth solution. Brzezniak and Manna in [17] discuss the stochastic LLG equation in dimension \(3\), which is driven by pure jump noise. They show the existence of a weak martingale solution, which satisfies the required constraint condition. See also [48]. Brzezniak, Manna and Mukherjee in [18] show the existence of a strong solution. Manna, Mukherjee and Panda in [49] show the existence of a strong solution with non-zero anisotropy energy. The key ingredients for both the previous results are the Doss-Sussmann transform and the Wong-Zakai approximations. Some important numerical studies include, but are not limited to, Banas, Brzezniak, Neklyudov and Prohl [5], [6], see also [7], Goldys, Le and Tran in [36], Brzezniak, Grotowski and Le in [35]. A natural question here is about what happens when the temperature is above the Curie temperature. In that case the model can be replaced by the Landau-Lifshitz-Bloch (LLB) equation. The model was proposed by Garanin in [32] in 1997. The LLB equation essentially interpolates between the LLG equation at low temperatures and the Ginzburg-Landau theory of phase transitions. Le in [47] shows the existence of a weak solution to the LLB equation. The same author with Brzezniak and Goldys in [15] show the existence and uniqueness of a solution for the stochastic LLB equation, along with the existence of invariant measures for dimensions \(1\) and \(2\). On similar lines, Jiang, Ju and Wang in [40] showed the existence of a weak martingale solution to the stochastic LLB equation. In [58], the authors establish the large deviation principle and the central limit theorem for the \(1\) dimensional stochastic Landau-Lifshitz-Bloch equation. As indicated earlier, our aim is to try and optimize the switching of magnetization by giving some external input. Following the works [28], [29], we add the control to the effective field. We now describe the inclusion of the control process in the stochastic LLG equation. Let \(u:\Omega\times[0,T]\times\mathcal{O}\to\mathbb{R}^{3}\) denote a control process. We denote by \(\mathcal{E}_{u}\) the external field energy (see [29]) corresponding to the control process \(u\), which is given by \[\mathcal{E}_{u}(m)=-\int_{\mathcal{O}}\,\langle m,u\rangle\,\,dx. \tag{1.4}\] The energy now considered is the sum of the exchange energy and the external field energy. Therefore the resulting effective field is \[H_{\text{eff}}=\Delta m+u+\zeta. \tag{1.5}\] Summarizing, the equation considered in this paper is the following: \[\begin{cases}&dm=\big{[}m\times\Delta m-\alpha\,m\times(m\times\Delta m)+m\times u-\alpha\,m\times(m\times u)\big{]}\,dt\\ &\quad+\big{(}m\times h-\alpha\,m\times(m\times h)\big{)}\circ\,dW(t),\,\,t\in[0,T],\\ &\frac{\partial m}{\partial\nu}\,=\,0,\,\text{on}\,\,\partial\mathcal{O}_{T},\\ &m(0,\cdot)=m_{0}\,\,\text{on}\,\,\mathcal{O}.\end{cases} \tag{1.6}\] Here \(h:\mathcal{O}\to\mathbb{R}^{3}\) is a bounded function and \(W\) is a real valued Wiener process.
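Although the analysis in this paper is purely functional-analytic, the controlled dynamics (1.6) can be made concrete with a small numerical sketch. The Python snippet below integrates a naive explicit discretisation of (1.6) on a uniform grid with homogeneous Neumann boundary conditions and renormalises after each step to mimic the pointwise constraint \(|m|=1\). It is only an illustration under simplifying assumptions (a scalar Wiener increment, an Euler-type treatment of the Stratonovich noise, no convergence claim); all names and parameter values are ours and not taken from the paper.

```python
import numpy as np

def neumann_laplacian(m, dx):
    """Finite-difference Laplacian with homogeneous Neumann boundary conditions.

    m has shape (N, 3); each row is the magnetisation at one grid point.
    """
    lap = np.empty_like(m)
    lap[1:-1] = (m[2:] - 2.0 * m[1:-1] + m[:-2]) / dx**2
    lap[0] = 2.0 * (m[1] - m[0]) / dx**2
    lap[-1] = 2.0 * (m[-2] - m[-1]) / dx**2
    return lap

def llg_control_step(m, u, h, alpha, dx, dt, rng):
    """One explicit step of a discretised version of the controlled dynamics (1.6).

    Illustrative only: the Stratonovich integral is treated naively.
    """
    heff = neumann_laplacian(m, dx) + u          # effective field: Delta m + u
    drift = np.cross(m, heff) - alpha * np.cross(m, np.cross(m, heff))
    g = np.cross(m, h) - alpha * np.cross(m, np.cross(m, h))
    dW = rng.normal(0.0, np.sqrt(dt))            # real-valued Wiener increment
    m_new = m + dt * drift + g * dW
    # Project back onto the unit sphere, mimicking the constraint |m| = 1.
    return m_new / np.linalg.norm(m_new, axis=1, keepdims=True)

# Example run: uniform initial state, zero control, constant h.
rng = np.random.default_rng(0)
N, dx, dt, alpha = 64, 1.0 / 63, 1e-5, 0.1
m = np.tile(np.array([1.0, 0.0, 0.0]), (N, 1))
u = np.zeros((N, 3))
h = np.tile(np.array([0.0, 0.0, 1.0]), (N, 1))
for _ in range(100):
    m = llg_control_step(m, u, h, alpha, dx, dt, rng)
```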
Kruzik and Prohl in [45] give an overview of some of the developments in analysis and numerics of ferromagnetism. The initial studies for deterministic optimal control of ferromagnetic dynamics were done in [2],[3]. Dunst, Klein, Prohl and Schafer in [28] show the existence of an optimal control subject to a one dimensional Landau-Lifshitz-Gilbert equation. Dunst, Majee, Prohl and Vallet in [29] consider the stochastic counterpart of the above problem. In [29], the authors show the existence of a weak optimal control for (1.3) along with the following cost functional (1.7) (for arbitrary but fixed \(p\geq 2\), \(K>0\)) \[J(\pi)=\mathbb{E}\left[\int_{0}^{T}\,\Big{(}|m(t)-\bar{m}(t)|^{2}_{L^{2}}+|u(t)|^{2p}_{H^{1}}\Big{)}\,dt+\Psi\big{(}m(T)\big{)}\right], \tag{1.7}\] subject to (1.3) and \[\left|u(t)\right|_{L^{2}}^{2}\leq K\text{ for a.a. }t\in[0,T],\mathbb{P}-a.s. \tag{1.8}\] Here \(\bar{m}\) is a given desired state and \(\Psi\) is a given Lipschitz continuous function on \(L^{2}\). The existence has been shown for any \(p\geq 2\) (as given above) in dimension \(d=1,2,3\). They use the smallness of the parameter \(\alpha\) and hence do not consider the terms \(m\times(m\times u)\) and \(m\times(m\times h)\). We consider the following problem in this paper. **Control problem:** Let \(\bar{m}\in L^{2}(\Omega;L^{2}(0,T;H^{1}))\) be a given desired state that takes values on the unit sphere \(\mathbb{S}^{2}\). The terminal cost is given by the function \(\Psi:\mathbb{S}^{2}\to[0,\infty)\). For a fixed \(0<T<\infty\), our aim is to minimize the cost functional \[J(\pi)=\mathbb{E}\left[\int_{0}^{T}\left(\left|m(t)-\bar{m}(t)\right|_{H^{1}}^{2}+\left|u(t)\right|_{L^{2}}^{2}\right)dt+\Psi(m(T))\right] \tag{1.9}\] over the space of admissible solutions \(\pi=\left(\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\in[0,T]},\mathbb{P},W,m,u\right)\) to the problem (1.6). **Contribution of the paper:** * The full equation has been considered. That is, we have considered the triple product term both in the drift term and the noise coefficient. Also, the control is added in the Landau-Lifshitz energy \(\mathcal{E}\), and hence the effective field \(H_{\text{eff}}\), making the control operator non-linear (in \(m\)). The noise has been added to the effective field and the full form of noise (including the triple product term) has been considered. Similar to the previous argument, the noise coefficient is no longer linear. In a sense, the complete problem has been considered. Note that the only energy considered is the exchange energy. Anisotropy and stray energy have not been considered. * We believe that our cost functional is more natural than that of Prohl _et al._, because firstly the solution \(m\) takes values in the \(H^{1}\) space and not \(L^{2}\), so controlling its \(H^{1}\)-norm is what matters. And secondly, the natural control set is the \(L^{2}\)-space and not \(H^{1}\), and hence again controlling its \(L^{2}\)-norm is what matters. Moreover, if, as one would hope to prove in the future, there exists a solution to our problem with space-time white noise, then considering the control as an \(L^{2}\)-valued function is natural from a large deviations point of view. On the other hand, Prohl _et al._ [29] consider a constraint (1.8) on the control. We will return to a similar problem in a forthcoming paper. Prohl _et al._ in [29] show the existence of an optimal relaxed control for the relaxed version of the problem.
The control is constructed using compactness properties of random Young measures on a chosen Polish space. The work is inspired mainly by [21], wherein the authors study an optimal relaxed control problem for a semilinear stochastic partial differential equation on a Banach space. In this work, we use the compact embedding of the space \(L^{2}(0,T;L^{2})\) into the space \(L^{2}_{w}(0,T;L^{2})\), which is the space \(L^{2}(0,T;L^{2})\) endowed with the weak topology. **Structure of the paper:** The paper is organized as follows. Before showing the existence of an optimal control, we show that the set of admissible solutions to the problem (3.7) is non-empty. This is done in Theorem 3.3. It shows the existence of a weak martingale solution for the given problem. The proof uses the Faedo-Galerkin approximation, see Section 4, followed by some compact embeddings in order to use the Jakubowski version of the Skorohod Theorem in Section 5. The basic motivation for the proof is taken from [31], see also [13]. Following the results in [14], we show the pathwise uniqueness for the obtained weak martingale solution in Section 7. This, combined with the theory of Yamada and Watanabe (see [39]), gives the existence of a strong solution to the problem (3.7). This is followed by showing maximal regularity in Theorem 3.5. The proof uses the analytic properties of the semigroup generated by the operator \((-\Delta)\). The existence of an optimal control is then shown in Section 9. Therein we take a minimizing sequence of admissible solutions and then show that they converge to another admissible solution, which is a minimizer of the cost functional (1.9). ## 2. Notations and Preliminaries Let the domain \(\mathcal{O}\subset\mathbb{R}\) be a bounded interval. We fix this domain throughout the paper. The letter \(C\) is used for denoting a generic constant, whose value may change from line to line. \(L^{p}\) denotes the space \(L^{p}(\mathcal{O}:\mathbb{R}^{3})\) and \(W^{k,p}\) denotes the space \(W^{k,p}(\mathcal{O}:\mathbb{R}^{3})\). \(\langle\cdot,\cdot\rangle\), \(|\cdot|\) respectively denote the standard inner product and norm in \(\mathbb{R}^{3}\). Let \(v\in L^{\infty}\), \(q\in[1,\infty]\). \(G(v):L^{q}\to L^{q}\) is defined as follows \[G(v):L^{q}\ni k\mapsto v\times k-\alpha\,v\times(v\times k)\in L^{q}. \tag{2.1}\] The following lemma shows that the above defined \(G\) is a polynomial map. Moreover, the restriction of \(G\) to the space \(L^{\infty}\cap H^{1}\) is also a polynomial map. **Lemma 2.1**.: _Let \(q\in[1,\infty]\). The map \(G:L^{\infty}\to\mathcal{L}(L^{q})\) is a polynomial map of degree 2. Hence it is of polynomial growth and is Lipschitz on balls, that is, there exists a constant \(C_{0}>0\) such that_ \[|G(v)k|_{L^{q}}\leq C_{0}(1+|v|_{L^{\infty}})|v|_{L^{\infty}}|k|_{L^{q}}.
\tag{2.2}\] _Thus,_ \[|G(v)|_{\mathcal{L}(L^{q})}\leq C_{0}(1+|v|_{L^{\infty}})|v|_{L^{\infty}}.\] _Moreover, for every \(r>0\), there exists a constant \(C_{r}>0\) such that for all \(v_{i}\in L^{\infty}\) with \(|v_{i}|_{L^{\infty}}\leq r\), we have_ \[|G(v_{1})k-G(v_{2})k|_{L^{q}}\leq C_{r}|k|_{L^{q}}|v_{1}-v_{2}|_{L^{\infty}}, \ k\in L^{q}.\] _Thus,_ \[|G(v_{1})-G(v_{2})|_{\mathcal{L}(L^{q})}\leq C_{r}|v_{1}-v_{2}|_{L^{\infty}}, \ \text{if }v_{1},v_{2}\in L^{\infty},|v_{i}|_{L^{\infty}}\leq r\ \text{for }i=1,2.\] _Finally, the restriction of the map \(G\) to the space \(H^{1}\cap L^{\infty}\) takes values in the space \(\mathcal{L}(H^{1}\cap L^{\infty})\), i.e._ \[G:H^{1}\cap L^{\infty}\to\mathcal{L}(H^{1}\cap L^{\infty}).\] _As such, this map \(G\) is also a polynomial of degree \(2\) and hence of \(C^{\infty}\) class and of quadratic growth._ Proof of Lemma 2.1.: Let us choose and fix \(q\in[1,\infty]\). **Step 1**: The map \(G:L^{\infty}\to\mathcal{L}(L^{q})\) is a polynomial of degree \(2\), because \(G\) is a linear combination of a linear map \(G_{1}:L^{\infty}\ni v\mapsto\{h\mapsto v\times h\}\in\mathcal{L}(L^{q})\) and of a homogeneous polynomial of degree \(2\), \(G_{2}:L^{\infty}\ni v\mapsto\{h\mapsto\alpha\,v\times(v\times h)\}\in\mathcal{L}(L^{q})\). Note that \(G_{1}\) is continuous because for \(v\in L^{\infty},h\in L^{q}\), we have the following. \[|G_{1}(v)h|_{L^{q}}=|v\times h|_{L^{q}}\leq|v|_{L^{\infty}}\,|h|_{L^{q}}\,.\] We define the bilinear map corresponding to \(G_{2}\) by \[\tilde{G}_{2}:L^{\infty}\times L^{\infty}\ni(v_{1},v_{2})\mapsto\{h\mapsto \alpha\,v_{1}\times(v_{2}\times h)\}\in\mathcal{L}(L^{q}).\] Note that \(\tilde{G}_{2}\) is a continuous bilinear map because for \(v_{1},v_{2}\in L^{\infty},k\in L^{q}\) we have \[|\tilde{G}_{2}(v_{1},v_{2})k|_{L^{q}} \leq|v_{1}\times(v_{2}\times k)|_{L^{q}}\] \[\leq|v_{1}|_{L^{\infty}}|(v_{2}\times k)|_{L^{q}}\leq|v_{1}|_{L^{\infty}}|v_{2}|_{L^{\infty}}|k|_{L^{q}}.\] **Step 2**: So we have proved that the map \(G:L^{\infty}\to\mathcal{L}(L^{q})\) is a polynomial of degree \(2\). It follows that \(G\) is a \(C^{\infty}\) function, Lipschitz on balls and of quadratic growth. Concerning the last claim, the inequality (2.2) is also a consequence of the above proof. **Step 3**: We have already proved that the map \(G\) is Lipschitz on balls but we can make this assertion more precise. Let \(v_{1},v_{2}\in L^{\infty}\) with \(|v_{i}|_{L^{\infty}}\leq r\) for \(i=1,2\) and \(k\in L^{q}\). \[|G_{1}(v_{1})k-G_{1}(v_{2})k|_{L^{q}} =|v_{1}\times k-v_{2}\times k|_{L^{q}}\] \[=|(v_{1}-v_{2})\times k|_{L^{q}}\] \[\leq|v_{1}-v_{2}|_{L^{\infty}}|k|_{L^{q}}.\] That the second term \(G_{2}\) is also Lipschitz on balls can be shown as follows. Let \(v_{1},v_{2}\in L^{\infty}\) with \(|v_{i}|_{L^{\infty}}\leq r\) for \(i=1,2\) and \(k\in L^{q}\). \[|G_{2}(v_{1})k-G_{2}(v_{2})k|_{L^{q}} =\alpha\,|v_{1}\times(v_{1}\times k)-v_{2}\times(v_{2}\times k)|_{L^{q}}\] \[=\alpha|v_{1}\times(v_{1}\times k)-v_{2}\times(v_{1}\times k)+v_{2}\times(v_{1}\times k)-v_{2}\times(v_{2}\times k)|_{L^{q}}\] \[\leq\alpha\,|(v_{1}-v_{2})\times(v_{1}\times k)|_{L^{q}}+\alpha\,|v_{2}\times((v_{1}-v_{2})\times k)|_{L^{q}}\] \[\leq\alpha\,|v_{1}-v_{2}|_{L^{\infty}}|v_{1}|_{L^{\infty}}|k|_{L^{q}}+\alpha\,|v_{1}-v_{2}|_{L^{\infty}}|v_{2}|_{L^{\infty}}|k|_{L^{q}}\] \[\leq 2\,\alpha\,r\,|v_{1}-v_{2}|_{L^{\infty}}|k|_{L^{q}}.\] Thus, \[|G(v_{1})k-G(v_{2})k|_{L^{q}}\leq C_{r}|v_{1}-v_{2}|_{L^{\infty}}|k|_{L^{q}},\] for some constant \(C_{r}\) depending on \(r\).
To prove the last part one can deal separately with the maps \(G_{1}\) and \(G_{2}\). We begin with \(G_{1}\). Let \(v,h\in H^{1}\cap L^{\infty}\). By Step 1 we have \[|G_{1}(v)h|_{L^{\infty}}\leq|v|_{L^{\infty}}|h|_{L^{\infty}},\] and \[|G_{1}(v)h|_{L^{2}}\leq|v|_{L^{\infty}}|h|_{L^{2}}.\] Moreover, \[|\nabla[G_{1}(v)h]|_{L^{2}} =|\nabla(v\times h)|_{L^{2}}\leq|\nabla v\times h|_{L^{2}}+|v \times\nabla h|_{L^{2}}\] \[\leq|\nabla v|_{L^{2}}|h|_{L^{\infty}}+|v|_{L^{\infty}}|\nabla h| _{L^{2}}.\] Summing up, we have proved that for some constant \(C>0\) \[|G_{1}(v)h|_{L^{\infty}}+|G_{1}(v)h|_{H^{1}} \leq C|v|_{L^{\infty}}\big{(}|h|_{L^{\infty}}+|h|_{H^{1}}\big{)}+ |v|_{H^{1}}|h|_{L^{\infty}}\] \[\leq C\big{(}|v|_{L^{\infty}}+|v|_{H^{1}}\big{)}\times\big{(}|h|_ {L^{\infty}}+|h|_{H^{1}}\big{)}.\] We used here that \[|u|_{H^{1}}^{2}:=|u|_{L^{2}}^{2}+|\nabla u|_{L^{2}}^{2}.\] In the same way one can prove that \[|G_{2}(v)h|_{L^{\infty}}+|G_{2}(v)h|_{H^{1}}\leq C\big{(}|v|_{L^{\infty}}^{2} +|v|_{L^{\infty}}|v|_{H^{1}}\big{)}^{2}\times C\big{(}|h|_{L^{\infty}}+|h|_{H^ {1}}\big{)}.\] The proof is complete. We have proved above that \(G\) is of \(C^{\infty}\) class. We can calculate the Frechet derivative \(DG(v),v\in L^{\infty}\), of \(G\) in the following way. **Proposition 2.2**.: _Assume that \(q\in[1,\infty]\). For \(v,w\in L^{\infty}\) we have_ \[DG(v)(w)=\{L^{q}\ni h\mapsto w\times h-\alpha\,\,[v\times(w\times h)+w\times( v\times h)]\in L^{q}\}\in\mathcal{L}(L^{q}). \tag{2.3}\] _Moreover, the same holds when the spaces \(L^{\infty}\) and \(L^{q}\) are replaced by the \(H^{1}\cap L^{\infty}\) as in the last part of the previous Lemma._ Proof of Proposition 2.2.: We only consider the first part since the second is completely analogous. Since \(G=G_{1}-\alpha\,G_{2}\) by the proof of previous Lemma, it is sufficient to consider \(G_{1}\) and \(G_{2}\) separately. Since \(G_{1}\) is bounded linear map, by Proposition 2.4.2, [22], \[DG_{1}(v)(w)=G_{1}(w).\] Concerning \(G_{2}\), we observed that \(G_{2}(v)=\tilde{G}_{2}(v,v)\) for all \(v\in L^{\infty}\), where \(\tilde{G}_{2}\) is the corresponding continuous bilinear map. Hence, see Theorem 2.4.3, [22], \[DG_{2}(v)(w)=\tilde{G}_{2}(v,w)+\tilde{G}_{2}(w,v).\] The result follows. ## 3. Statements of the Main results We first state some assumptions that will be required in the proof for Theorem 3.3. **Assumption 3.1**.: 1. _Let_ \(\left(\Omega,\mathcal{F},\mathbb{F},\mathbb{P}\right)\) _be a probability space which satisfies the usual hypotheses. That is,_ 1. \(\mathbb{P}\) _is complete on_ \(\left(\Omega,\mathcal{F}\right)\)_._ 2. _For every_ \(t\geq 0\)_,_ \(\mathcal{F}_{t}\) _contains every_ \(\left(\mathcal{F},\mathbb{P}\right)\)_-null set._ 3. _The filtration_ \(\mathbb{F}=\left\{\mathcal{F}_{t}\right\}_{t\in[0,T]}\) _is right continuous._ 2. \(W\) _is a real valued Wiener process defined on the above probability space with the filtration_ \(\mathbb{F}\)_._ 3. _The given function_ \(h\) _is assumed to be in the space_ \(H^{1}\)_, and is independent of time._ 4. 
_A process_ \(u\) _is an_ \(\mathbb{F}\)_- progressively measurable process such that the following inequality holds for each_ \(p\geq 1\)_,_ \[K_{p}:=\mathbb{E}\left(\int_{0}^{T}\left|u(t)\right|_{L^{2}}^{2}\,dt\right)^{p }<\infty.\] (3.1) _In particular, the trajectories of_ \(u\) _take values in_ \(L^{2}(0,T;L^{2})\)_._ Define the following operator \[\Delta: H^{1}\rightarrow\left(H^{1}\right)^{\prime}\] \[v\mapsto\Delta v,\] where for \(w\in H^{1},v\in H^{1}\), the linear map \(\Delta v\in\left(H^{1}\right)^{\prime}\) is defined by \[\left(H^{1}\right)^{\prime}\left\langle\Delta v,w\right\rangle_{H^{1}}=- \left\langle\nabla v,\nabla w\right\rangle_{L^{2}}. \tag{3.2}\] \[\left(\Delta v\right)\left(w\right):=-\left\langle\nabla v,\nabla w\right\rangle _{L^{2}}.\] Since \(v,w\in H^{1}\), \[\left|\,\,\left(H^{1}\right)^{\prime}\left\langle\Delta v,w \right\rangle_{H^{1}}\right| =\left|\left\langle\nabla v,\nabla w\right\rangle_{L^{2}}\right|\] \[\leq\left|\nabla v\right|_{L^{2}}\left|\nabla w\right|_{L^{2}}\] \[\leq\left|v\right|_{H^{1}}\left|w\right|_{H^{1}}<\infty.\] Hence for \(v\in H^{1}\), \(\Delta v\) as defined above is in \(\left(H^{1}\right)^{\prime}\). Let \(v_{1},v_{2},w\in H^{1}\), \(a_{1},a_{2}\in\mathbb{R}\). \[\left(H^{1}\right)^{\prime}\left\langle\Delta\left(a_{1}v_{1}+a_{ 2}v_{2}\right),w\right\rangle_{H^{1}} =-\left\langle\nabla\left(a_{1}v_{1}+a_{2}v_{2}\right),\nabla w \right\rangle_{L^{2}}\] \[=-\left\langle\nabla a_{1}v_{1},\nabla w\right\rangle_{L^{2}}+ \left\langle\nabla a_{2}v_{2},\nabla w\right\rangle_{L^{2}}\] \[=a_{1}\,\left(H^{1}\right)^{\prime}\left\langle\Delta v_{1},w \right\rangle_{H^{1}}+a_{2}\,\left(H^{1}\right)^{\prime}\left\langle\Delta v _{2},w\right\rangle_{H^{1}}.\] Hence \(\Delta\) is a linear operator from the space \(H^{1}\) to its dual \(\left(H^{1}\right)^{\prime}\). Define an operator \(A\) (Neumann Laplacian) on its domain \(D(A)\subset L^{2}\) to \(L^{2}\) as follows. \[\begin{cases}D(A)&:=\left\{v\in H^{2}:\nabla(v)(x)=0,\text{ for }x\in\partial \mathcal{O}\right\},\\ Av&:=-\Delta v,\text{ for }v\in D(A).\end{cases} \tag{3.3}\] It is known that the operator \(A\) is a self-adjoint operator in \(L^{2}\). We define another operator \(A_{1}\) by \[A_{1}=I_{L^{2}}+A, \tag{3.4}\] where \(I_{L^{2}}\) denotes the identity operator on \(L^{2}\). It is also known that \(\left(A_{1}\right)^{-1}\) is compact. Also, the space \(D(A_{1}^{\frac{1}{2}})\) equipped with the graph norm coincides with the space \(H^{1}\). For \(v_{1},v_{2},v_{3}\in H^{1}\), we interpret terms \(\Delta v_{1}\), \(v_{1}\times\Delta v_{2}\) and \(v_{1}\times\left(v_{2}\times\Delta v_{3}\right)\) as elements of \(\left(H^{1}\right)^{\prime}\). Let \(\phi\in H^{1}\). (1) \[{}_{(H^{1})^{\prime}}\left\langle v_{2}\times\Delta v_{1},\phi\right\rangle_{H^{1}} =-\left\langle\nabla\left(\phi\times v_{2}\right),\nabla v_{1} \right\rangle_{L^{2}}.\] (3.5) (2) \[{}_{(H^{1})^{\prime}}\left\langle v_{3}\times\left(v_{2}\times\Delta v_{1} \right),\phi\right\rangle_{H^{1}} =-\left\langle\nabla\left(\left(\phi\times v_{3}\right)\times v_{ 2}\right),\nabla v_{1}\right\rangle_{L^{2}}.\] (3.6) The above equalities can be obtained from the divergence theorem in case \(v_{1}\in D(A)\). **Note:** We require that the magnetization is saturated at each point \(t\in[0,T]\), that is, the constraint condition (3.9) is satisfied by the process \(m\). For that, the initial data \(m_{0}\), see (3.7) should lie on the unit sphere \(\mathbb{S}^{2}\). 
Towards that, we denote by \(W^{1,2}(\mathcal{O}:\mathbb{S}^{2})\) the space of all \(v\in W^{1,2}(\mathcal{O}:\mathbb{R}^{3})\) such that \(|v(x)|_{\mathbb{R}^{3}}=1\) for Leb.-a.a. \(x\in\mathcal{O}\). We now recall the problem that will be considered (that is (1.6)). \[\begin{cases}&dm=\big{[}m\times\Delta m+m\times u-\alpha\,m\times(m\times u)- \alpha\,m\times(m\times\Delta m)\,\big{]}dt\\ &\quad+G(m)\circ dW(t),\ t\in[0,T],\\ &\frac{\partial m}{\partial\nu}=0,\ \text{on}\ \partial\mathcal{O}_{T},\\ &m(0,\cdot)=m_{0}\ \text{on}\ \mathcal{O}.\end{cases} \tag{3.7}\] Here \[G(m)=m\times h-\alpha\,m\times(m\times h).\] Note that the stochastic term is understood in the Stratonovich sense. It can also be understood in the Ito sense by adding a correction term, see, for example, [25], [51]. The resulting equation is \[dm(t)= \bigg{[}m(t)\times\Delta m(t)-\alpha\,m(t)\times(m(t)\times \Delta m(t))+m(t)\times u(t)-\alpha\,m(t)\times(m(t)\times u(t))\] \[+\frac{1}{2}\big{[}DG(m(t))\big{]}\big{[}G\big{(}m(t)\big{)}\big{]} \bigg{]}dt+G\big{(}m(t)\big{)}\,dW(t). \tag{3.8}\] **Definition 3.2** (Weak martingale solution).: _Assume \(T>0\). Let the function \(h\) and a control \(u\) be given as in Assumption 3.1. A weak martingale solution of (3.7) is a tuple_ \[\pi^{\prime}=(\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{F}^{\prime}, \mathbb{P}^{\prime},W^{\prime},m^{\prime},u^{\prime})\] _such that_ 1. \((\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{P}^{\prime})\) _is a probability space satisfying the usual hypotheses._ \(W^{\prime}\) _is a real valued_ \(\mathbb{F}^{\prime}\)_-adapted Wiener process._ 2. \(m^{\prime}\) _is_ \(H^{1}\)_-valued progressively measurable process such that for_ \(\mathbb{P}^{\prime}\)_-a.s._ \(\omega\in\Omega^{\prime}\)_,_ \(m^{\prime}(\omega,\cdot)\in C([0,T];L^{2})\)_._ 3. _The process_ \(m^{\prime}\) _satisfies the constraint condition. That is_ \[\left|m^{\prime}(t,x)\right|_{\mathbb{R}^{3}}=1,\ \text{for Leb. a.a.}\ x\in\mathcal{O},\ \text{for all}\ t\in[0,T],\ \mathbb{P}^{\prime}\text{-a.s.}\] (3.9) 4. \(u^{\prime}\) _is a control process satisfying the assumptions in Assumption_ 3.1 _and has the same law on the space_ \(L^{2}(0,T;L^{2})\) _as that of the process_ \(u\)_._ 5. _There exist constants_ \(C_{1},C_{2}>0\) _such that for each_ \(p\geq 1\)_,_ (a)__ \[\mathbb{E}^{\prime}\sup_{t\in[0,T]}\left|m^{\prime}(t)\right|_{H^{1}}^{2p}\leq C _{1}+KC_{2},\] (3.10) (b)__ \[\mathbb{E}^{\prime}\left(\int_{0}^{T}\left|m^{\prime}(t)\times\Delta m^{\prime }(t)\right|_{L^{2}}^{2}\,dt\right)^{p}\leq C_{1}+KC_{2}.\] (3.11) _Here_ \(\mathbb{E}^{\prime}\) _denotes the expectation in_ \((\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{P}^{\prime})\)_._ 6. _The paths of_ \(m^{\prime}(\omega^{\prime})\) _are continuous taking values in_ \(X^{\beta}\) _for any_ \(\beta<\frac{1}{2}\) _for_ \(\mathbb{P}^{\prime}\)_-a.s._ \(\omega^{\prime}\in\Omega^{\prime}\)_. The space_ \(X^{\beta}\) _is the domain of the operator_ \(D(A_{1}^{\beta})\)_. More about this is given in Section_ 4_._ _._ 7. 
_For every_ \(\phi\in H^{1}(\mathcal{O})\) _and every_ \(t\in[0,T]\)_, the following equality holds_ \(\mathbb{P}^{\prime}\)_-a.s._ \[\left\langle m^{\prime}(t),\phi\right\rangle_{L^{2}}= \left\langle m_{0},\phi\right\rangle_{L^{2}}+\int_{0}^{t}\left\langle \nabla m^{\prime}(s),m^{\prime}(s)\times\nabla\phi\right\rangle_{L^{2}}\,ds\] \[+\int_{0}^{t}\left\langle\nabla m^{\prime}(s),\nabla(\phi\times m ^{\prime}(s))\times m^{\prime}(s)\right\rangle_{L^{2}}\,ds\] \[+\int_{0}^{t}\left\langle m^{\prime}(s)\times u^{\prime}(s),\phi \right\rangle_{L^{2}}\,ds\] \[-\alpha\,\int_{0}^{t}\left\langle m^{\prime}(s)\times(m^{\prime} (s)\times u^{\prime}(s)),\phi\right\rangle_{L^{2}}\,ds\] \[+\frac{1}{2}\int_{0}^{t}\left\langle[DG(m(s))]\left(G\big{(}m(s) \big{)}\right],\phi\right\rangle_{L^{2}}\,ds\] \[+\int_{0}^{t}\left\langle G(m^{\prime}(s)),\phi\right\rangle_{L^ {2}}\circ dW^{\prime}(s).\] (3.12) Now we state the existence theorem for a weak martingale solution for the problem (3.7). **Theorem 3.3** (Existence of a weak martingale solution).: _Let the assumptions in Assumption 3.1 hold. Let \(u\) be a given control process satisfying Assumption 3.1. Let the initial data \(m_{0}\) be in \(W^{1,2}(\mathcal{O},\mathbb{S}^{2})\). Then the problem (3.7) admits a weak martingale solution_ \[\pi^{\prime}=(\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{F}^{\prime},\mathbb{ P}^{\prime},W^{\prime},m^{\prime},u^{\prime})\] _as in Definition 3.2._ The proof of Theorem 3.3 has its motivation mainly from the papers [13] and [31]. The way to deal with the triple product term in the noise is similar to the work [14]. The proof begins with the Faedo-Galerkin approximation (Section 4), followed by obtaining uniform energy estimates. Then some compactness results, see [61], [62], etc. are used to show the tightness of the laws of the finite dimensional approximations on appropriate spaces. Prokhorov's theorem, followed by the Jakubowski version of the Skorohod Theorem are then applied (in Section 5) to show the convergence of the sequence of approximates (possibly along a subsequence) and hence the existence of a weak martingale solution. An application of the Ito Lemma in Section 6 yields the constraint condition (3.9). As a corollary of the existence result (Theorem 3.3), we can prove that equation (3.7) makes sense in the strong (P.D.E.) sense in \(L^{2}\). This is formally stated in Corollary 7.1. The next theorem states that the solutions to the problem (3.7) are pathwise unique. Using this and the theory of Yamada and Watanabe, we also show the existence and uniqueness of a strong solution. The main result of the section is Theorem 3.4. **Theorem 3.4** (Pathwise uniqueness).: _Let us assume that process \(u\) is a control process such that the Assumption 3.1 holds. Let \((\Omega,\mathcal{F},\mathbb{P},W,m_{1},u)\) and \((\Omega,\mathcal{F},\mathbb{P},W,m_{2},u)\) be two weak martingale solutions to (3.7) (with the same initial data \(m_{0}\)), corresponding to a given control process \(u\), as in Definition 3.2 and satisfying the properties stated in Theorem 3.3. Then_ \[m_{1}(t)=m_{2}(t)\,\,\mathbb{P}-a.s.\] _for each \(t\in[0,T]\)._ The existence of a unique strong solution to the problem (3.7) is shown as a consequence, see Theorem 7.5 to the above theorem. It follows from the pathwise uniqueness and the theory of Yamada and Watanabe, see [39]. Section 8 deals with the proof of Theorem 3.5. Here we show that the obtained solution takes values in \(D(A_{1})\). 
We first write the obtained equation in the mild form. Towards this, Corollary 7.3 shows that whenever \(m\) satisfies the constraint condition, we have the following equality in \((H^{1})^{\prime}\): \[m\times(m\times\Delta m)=-\Delta m-|\nabla m|_{\mathbb{R}^{3}}^{2}m.\] (This follows from the vector triple product formula \(a\times(b\times c)=\langle a,c\rangle\,b-\langle a,b\rangle\,c\) together with the identity \(\langle m,\Delta m\rangle_{\mathbb{R}^{3}}=-|\nabla m|_{\mathbb{R}^{3}}^{2}\), which holds pointwise when \(|m|_{\mathbb{R}^{3}}=1\).) Therefore the equations (3.7) and (9.1) are equivalent. The proof mainly follows by using the ultracontractivity and the maximal regularity properties of the semigroup generated by the operator \(A\). **Theorem 3.5** (Maximal regularity).: _Let \(\mathcal{O}\subset\mathbb{R}\) be bounded. Let the probability space and initial data, along with the given control process \(u\), be as given in Theorem 3.3. Also assume that the process \(u\) satisfies the assumption (3.1) for \(p=2\). Then there exists a unique strong solution \(m\) which satisfies the properties mentioned in Theorem 3.3. Moreover, there exists a constant \(C>0\) such that_ \[\mathbb{E}\left(\int_{0}^{T}|\nabla m(t)|_{L^{4}}^{4}\,dt+\int_{0}^{T}|A_{1}m(t)|_{L^{2}}^{2}\,dt\right)\leq C. \tag{3.13}\] Section 9 shows that the problem (3.7) admits an optimal control corresponding to the cost functional (1.9). Let \(\mathcal{U}_{ad}(m_{0},T)\) denote the space of all admissible solutions of the problem (3.7) (to be detailed in Section 9). The idea for the proof is to show that the space \(\mathcal{U}_{ad}(m_{0},T)\) is non-empty. This gives us a minimizing sequence of admissible solutions. This sequence is then shown to converge (in a suitable sense) to an admissible solution, which is a minimizer of the cost functional (9.4). **Definition 3.6** (Optimal control).: _Let the law of the initial data \(m_{0}\) be as in Theorem 3.3. Let \(h\in H^{1}\) be fixed. An admissible solution of the problem (3.7)_ \[\pi^{\prime}=(\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{F}^{\prime},\mathbb{P}^{\prime},W^{\prime},m^{\prime},u^{\prime})\in\mathcal{U}_{ad}(m_{0},T)\] _is said to be an optimal control for the problem (3.7) if and only if_ \[J(\pi^{\prime})=\inf_{\pi\in\mathcal{U}_{ad}(m_{0},T)}J(\pi), \tag{3.14}\] _i.e._ \[J(\pi^{\prime})=\min_{\pi\in\mathcal{U}_{ad}(m_{0},T)}J(\pi). \tag{3.15}\] The infimum in (3.14) easily exists. The main difficulty lies in showing the existence of a minimum. **Note:** Since we also show the existence of a strong solution, one can also consider the formulation of the control problem using strong solutions instead of weak martingale solutions. We choose to use strong martingale solutions (Definition 9.1) instead. Some more details are given in Remark 9.5. **Theorem 3.7** (Existence of optimal control).: _Let \((\Omega,\mathcal{F},\mathbb{F},\mathbb{P})\) be a probability space satisfying the usual conditions, see Assumption 3.1. Let \(W\) be a real valued Wiener process on the space \((\Omega,\mathcal{F},\mathbb{F},\mathbb{P})\). Let the initial data \(m_{0}\) be as in Theorem 3.3 and the function \(h\) satisfy Assumption 3.1. Let \(u\) be a given control process satisfying (4) of Assumption 3.1, in particular satisfying (3.1) for \(p=4\). Then there exists an optimal control for the problem (3.7) according to Definition 3.6._ ## 4. Faedo-Galerkin approximation Let \(A=-\Delta\) be the Neumann Laplacian operator defined in (3.3). Let \(\{e_{i}\}_{i\in\mathbb{N}}\) be an orthonormal basis of \(L^{2}\) consisting of eigen functions of \(A\) (for example refer to [30] page 335 Theorem 1). Recall the operator \(A_{1}=I_{L^{2}}+A\) defined in (3.4). For \(\beta\geq 0\), let us define the space \[X^{\beta}=\mathrm{dom}\left(A_{1}^{\beta}\right).
\tag{4.1}\] Without the loss of generality, if \(\mathcal{O}=(0,1)\), then it is known that \[X^{\beta}=\begin{cases}\left\{v\in H^{2\beta}(0,1;\mathbb{R}^{3})\,;v^{\prime} (1)=v^{\prime}(0)=0\right\},\text{ if }2\beta>\frac{1}{2},\\ H^{2\beta},\text{ if }2\beta\leq\frac{1}{2}.\end{cases} \tag{4.2}\] Its dual space is denoted by \(X^{-\beta}\). For \(\beta=0\), we have \[X^{0}=L^{2}.\] Let \(H_{n}\) denote the linear span of \(\{e_{1},\ldots,e_{n}\}\). Let \(P_{n}\) denote the orthogonal projection \[P_{n}:L^{2}\to H_{n}.\] Since \(e_{i}\) is an eigen function of the operator \(A\) for each \(i\in\mathbb{N}\), we can prove that \(e_{i}\in D(A)\). Therefore, the space \(H_{n}\subset D(A)\). The given control process \(u\) induces the following measurable map, see, for example, Proposition 3.19, Section 3.7 in [25] \[u:\Omega\to L^{2}\left(0,T;L^{2}\right).\] Define \(u_{n}\) as the projection of \(u\) under the projection \(P_{n}\). That is for a.a. \(t\in[0,T]\), \[u_{n}(t):=P_{n}\big{(}u(t)\big{)},\ \mathbb{P}-\text{a.s.}\] Hence note that for \(p\geq 1\) \[\mathbb{E}\left(\int_{0}^{T}|u_{n}(t)|_{L^{2}}^{2}\,dt\right)^{p} =\mathbb{E}\left(\int_{0}^{T}|P_{n}(u(t))|_{L^{2}}^{2}\,dt\right) ^{p}\] \[\leq\mathbb{E}\left(\int_{0}^{T}|u(t)|_{L^{2}}^{2}\,dt\right)^{p} \leq K_{p}. \tag{4.3}\] **Remark 4.1**.: _[Equivalence of norms on \(H_{n}\)] Let us recall that \(H_{n}\) is a finite dimensional vector space. On this vector space we can consider many norms. For instance, the space \(H_{n}\) is a subspace of \(L^{2}\). Hence we can endow the space \(H_{n}\) with the norm inherited from the space \(L^{2}\). This norm on \(H_{n}\) will be denoted by \(|\cdot|_{L^{2}}\)._ _More generally, we can endow the space \(H_{n}\) with the norm inherited from every space \(X\) provided \(H_{n}\) is a subspace of \(X\). This norm on \(H_{n}\) will be denoted by \(|\cdot|_{X}\)._ _For instance, we can take \(X\) to be \(D(A)\) or any Sobolev space \(H^{\theta,p}\) for \(\theta\geq 0\) and \(p\geq 2\). Since, \(H^{1}=H^{1,2}\hookrightarrow L^{\infty}\) and \(H_{n}\hookrightarrow H^{1}\), \(H_{n}\) is also a subspace of the space \(L^{\infty}\) and hence we can consider on \(H_{n}\) the norm inherited from the space \(L^{\infty}\). This norm on \(H_{n}\) will be denoted by \(|\cdot|_{L^{\infty}}\)._ _Finally, let us point out that since all norms on a finite dimensional vector space are equivalent, see e.g. Exercise 1 in section V.9 of the book [27], we infer that for any two norms \(|\cdot|_{X}\) and \(|\cdot|_{Y}\) on \(H_{n}\) there exists a constant \(C_{n}=C_{n}(X,Y)\) such that_ \[\frac{1}{C_{n}}|m|_{X}\leq|m|_{Y}\leq C_{n}|m|_{X},\ \ \text{for every $m\in H_{n}$.}\] _In particular, this holds for the norms \(|\cdot|_{L^{2}}\) and \(|\cdot|_{L^{\infty}}\)._ _Also, for \(v\in H_{n}\),_ \[v=\sum_{i=1}^{n}\left\langle v,e_{i}\right\rangle_{L^{2}}e_{i}. \tag{4.4}\] _Let \(\|\cdot\|\) denote some (\(L^{p}\),\(H^{1}\), etc.) norm on the space \(H_{n}\). 
Then there exists a constant \(C_{n}>0\) such that_ \[\|v\|=\|\sum_{i=1}^{n}\left\langle v,e_{i}\right\rangle_{L^{2}}e _{i}\|\] \[\leq \sum_{i=1}^{n}\left|\left\langle v,e_{i}\right\rangle_{L^{2}} \right|\|e_{i}\|\] \[\leq \sum_{i=1}^{n}\left|v\right|_{L^{2}}\left|e_{i}\right|_{L^{2}} \left\|e_{i}\right\|\] \[\leq C_{n}\left|v\right|_{L^{2}}.\] _Also,_ \[\Delta v= \Delta\left(\sum_{i=1}^{n}\left\langle v,e_{i}\right\rangle_{L^{2}}e _{i}\right)\] \[= \sum_{i=1}^{n}\left\langle v,e_{i}\right\rangle_{L^{2}}\Delta e_{i}\] \[= \sum_{i=1}^{n}\left\langle v,e_{i}\right\rangle_{L^{2}}\left(- \lambda_{i}\right)e_{i}.\] _Here \(\lambda_{i}\) denote the eigen values corresponding to \(e_{i}\). Here, by \(\Delta\) we mean the operator \(A\) defined in (3.3). Therefore from the above equality, there exists a constant \(C_{n}>0\) such that_ \[\left\|\Delta v\right\|\leq \sum_{i=1}^{n}\left\langle v,e_{i}\right\rangle_{L^{2}}\left| \lambda_{i}\right|\left\|e_{i}\right\|\] \[\leq C_{n}\left|v\right|_{L^{2}}.\] **Lemma 4.2**.: 1. _Consider a function_ \(f:H_{n}\to H_{n}\)_. If_ \(f\) _is Lipschitz continuous on each closed, bounded ball_ \(B\subset H_{n}\)_, then_ \(f\) _is locally Lipshitz continuous on_ \(H_{n}\)_._ 2. _For a locally Lipschitz continuous function_ \(f:H_{n}\to H_{n}\)_, the image of a closed, bounded ball_ \(B\subset H_{n}\) _under_ \(f\) _is compact, and as a result, bounded in_ \(H_{n}\)_._ Proof of Lemma 4.2.: **Proof of (1)**: The space \(H_{n}\) is a finite dimensional vector space. Hence \(H_{n}\) is locally compact. Therefore, \(f\) is locally Lipschitz on \(H_{n}\) if and only if \(f\) is Lipschitz on all compact subsets of \(H_{n}\). For a given compact subset \(K\) of \(H_{n}\), we can choose a closed and bounded ball \(B\subset H_{n}\) (which is again compact) large enough so that \(K\subset B\subset H_{n}\). Therefore in order to show the local Lipschitz continuity of \(f\) on \(H_{n}\), it suffices to show that \(f\) is Lipschitz continuous on all closed and bounded balls in \(H_{n}\). This concludes the proof of (1). **Proof of (2):** That \(f\) is locally Lipschitz continuous implies that \(f\) is also continuous on \(H_{n}\). As mentioned in the proof of (1) above, closed and bounded balls are compact in \(H_{n}\). Therefore the image of \(B\) under \(f\) is a compact, and hence also a bounded subset of \(H_{n}\). **Remark 4.3**.: _[A Remark on locally Lipschitz functions on \(H_{n}\)] The aim of the following calculations is to show that if \(f,g\) are locally Lipschitz continuous functions on the finite dimensional space \(H_{n}\), then their product \(h_{1}:=fg\) and composition \(h_{2}:=f\circ g\) is also locally Lipshitz continuous on \(H_{n}\). Using Lemma 4.2, to show that the functions \(h_{1},h_{2}\) are locally Lipschitz continuous on \(H_{n}\), it suffices to show that they are Lipschitz continuous on all closed and bounded balls \(B\subset H_{n}\). Towards that, let the functions \(f,g:H_{n}\to H_{n}\) be locally Lipschitz continuous on \(H_{n}\). Fix an arbitrary closed and bounded ball \(B\subset H_{n}\). Since \(g\) is continuous and \(B\) is compact, the set \(g(B)\subset H_{n}\) is compact. Being locally Lipschitz continuous on \(H_{n}\), both \(f\) and \(g\) are Lipschitz continuous on the ball \(B\). Let \(C_{f,B}\) and \(C_{g,B}\) denote their respective Lipschitz constants. 
Similarly, let \(C_{f,g(B)}\) denote the Lipschitz constant for the function \(f\) on \(g(B)\)._ **Claim 1:**: _The map_ \[h_{1}:=fg:H_{n}\ni v\mapsto f(v)g(v)\in H_{n}\] _is Lipschitz on \(B\)._ _Brief proof of Claim 1: : Let_ \(v_{1},v_{2}\in B\)_. Therefore_ \[\left|f(v_{1})g(v_{1})-f(v_{2})g(v_{2})\right|_{L^{2}} \leq \left|f(v_{1})g(v_{1})-f(v_{2})g(v_{1})\right|_{L^{2}}+\left|f(v_ {2})g(v_{1})-f(v_{2})g(v_{2})\right|_{L^{2}}\] \[\leq \left|\left[f(v_{1})-f(v_{2})\right]g(v_{1})\right|_{L^{2}}+ \left|f(v_{2})\left[g(v_{1})-g(v_{2})\right]\right|_{L^{2}}\] \[\leq \left|f(v_{1})-f(v_{2})\right|_{L^{2}}\left|g(v_{1})\right|_{L^{ \infty}}+\left|f(v_{2})\right|_{L^{\infty}}\left|g(v_{1})-g(v_{2})\right|_{L^{2}}\] \[\leq C_{f,B}\left|g(v_{1})\right|_{L^{\infty}}\left|v_{1}-v_{2}\right|_ {L^{2}}+C_{g,B}\left|f(v_{2})\right|_{L^{\infty}}\left|v_{1}-v_{2}\right|_{L^{2}}\] \[\leq\left|v\times\Delta w\right|_{L^{2}}\] \[\leq\left|v\right|_{L^{4}}\left|\Delta w\right|_{L^{4}}\] \[\leq C_{n}\left|v\right|_{L^{2}}\left|w\right|_{L^{2}}.\] Therefore the map \[H_{n}\ni v\mapsto f(v,v)\in H_{n}\] is a polynomial of degree \(2\) on \(H_{n}\), and as a consequence, is locally Lipschitz. We observe that \(F_{n}^{1}(v)=f(v,v)\). Therefore \(F_{n}^{1}\) is locally Lipschitz. The proof for \(G_{n}\) can be given as follows. Consider the map \(f_{1}:H_{n}\to H_{n}\), given by \[f_{1}(v) =P_{n}(v\times h). \tag{4.5}\] \[\left|f_{1}(v)\right|_{L^{2}} \leq\left|P_{n}(v\times h)\right|_{L^{2}}\] \[\leq\left|v\times h\right|_{L^{2}}\] \[\leq\left|h\right|_{L^{\infty}}\left|v\right|_{L^{2}}.\] \(h\in H^{1}\hookrightarrow L^{\infty}\) implies that the map \(f_{1}\) is a bounded linear map. As a consequence, \(f_{1}\) is Lipschitz continuous on \(H_{n}\). Now, define the map \(f_{2}:H_{n}\times H_{n}\to H_{n}\), given by \[f_{2}(v,w)=P_{n}\big{(}v\times\big{(}w\times h\big{)}\big{)}\,.\] Clearly, \(f_{2}\) is a bilinear map. Further, by Remark 4.1, there exists a constant \(C_{n}>0\) such that \[\left|f_{2}(v,w)\right|_{L^{2}}= \left|v\times(w\times h)\right|_{L^{2}}\] \[\leq \left|v\right|_{L^{4}}\left|w\right|_{L^{4}}\left|h\right|_{L^{ \infty}}\] \[\leq C_{n}\left|v\right|_{L^{2}}\left|w\right|_{L^{2}}\left|h\right|_{L^ {\infty}}.\] Therefore \(h\in H^{1}\hookrightarrow L^{\infty}\) implies that \(f_{2}\) is bilinear bounded. The map \[H_{n}\ni v\mapsto f_{2}(v,v)\in H_{n}\] is therefore a homogeneous polynomial of degree \(2\) on \(H_{n}\), and as a consequence, is locally Lipschitz. We now observe that \(G_{n}(v)\) is a linear combination of \(f_{1}(v)\) and \(f_{2}(v,v)\). Therefore \(G_{n}\) is locally Lipschitz on \(H_{n}\). Let \(\psi_{0}:\mathbb{R}\rightarrow[0,1]\) be a function of \(C_{c}^{1}(\mathbb{R})\) class such that \[\psi_{0}(x)=\begin{cases}1\text{ if }|x|\leq|h|_{L^{\infty}}+1,\\ 0\text{ if }|x|\geq|h|_{L^{\infty}}+2.\end{cases} \tag{4.6}\] Define a function \(\psi_{n}:H_{n}\rightarrow\mathbb{R}\) the following formula. \[\psi_{n}(v)=\psi_{0}\big{(}\left|v\right|_{L^{\infty}}\big{)}\,\psi_{0}\big{(} \left|P_{n}\left(v\times h\right)\right|_{L^{\infty}}\big{)}\,\psi_{0}\big{(} \left|P_{n}\left(v\times(v\times h)\right)\right|_{L^{\infty}}\big{)},\,\,\,v \in H_{n}. 
\tag{4.7}\] **Lemma 4.5**.: _The function_ \[\psi_{n}:H_{n}\ni v\mapsto\psi_{0}\big{(}\left|v\right|_{L^{\infty}}\big{)} \,\psi_{0}\big{(}\left|P_{n}\left(v\times h\right)\right|_{L^{\infty}}\big{)} \,\psi_{0}\big{(}\left|P_{n}\left(v\times(v\times h)\right)\right|_{L^{\infty }}\big{)}\in\mathbb{R}\] _is locally Lipschitz._ Proof of Lemma 4.5.: We define the following auxiliary functions. \[f_{1}:H_{n}\ni v\mapsto P_{n}\left(v\times h\right)\in H_{n}\] \[f_{2}:H_{n}\times H_{n}\ni\left(v,w\right)\mapsto P_{n}\big{(}v \times(w\times h)\big{)}\in H_{n}.\] \[f_{3}:H_{n}\ni v\mapsto\left|v\right|_{L^{\infty}}\in\mathbb{R}\] \[\beta:\mathbb{R}^{3}\ni\left(x_{1},x_{2},x_{3}\right)\mapsto x_{1 }x_{2}x_{3}\in\mathbb{R}\] The map \(f_{1}\) is a linear map from \(H_{n}\) to \(H_{n}\). Moreover, for \(v\in H_{n}\), since \(P_{n}\) is orthonormal projection from \(L^{2}\) to \(H_{n}\) and hence **a contraction** and \(H^{1}\hookrightarrow L^{\infty}\). \[\left|f_{1}(v)\right|_{L^{2}}= \left|P_{n}(v\times h)\right|_{L^{2}}\leq\left|v\times h\right|_{ L^{2}}\] \[\leq \left|v\right|_{L^{2}}\left|h\right|_{L^{\infty}}\leq C\left|v \right|_{L^{2}}\left|h\right|_{H^{1}}.\] Therefore \(f_{1}\) is a bounded linear map, and hence also Lipschitz continuous on \(H_{n}\). Similarly we can treat the map \(f_{2}\). Clearly this map is bilinear. It is also well defined. To see this, let \(v,w\in H_{n}\). Since \(P_{n}\) is a contraction, by Holder's inequality, Remark 4.1 and the continuous embedding \(H^{1}\hookrightarrow L^{\infty}\) we have the following sequence of inequalities. \[\left|f_{2}(v,w)\right|_{L^{2}}= \left|P_{n}\big{(}v\times(w\times h)\big{)}\right|_{L^{2}}\] \[\leq \left|v\times(w\times h)\right|_{L^{2}}\] \[\leq \left|v\right|_{L^{4}}\left|w\right|_{L^{4}}\left|h\right|_{L^{ \infty}}\] \[\leq C_{n}\left|v\right|_{L^{2}}\left|w\right|_{L^{2}}\left|h\right|_{H ^{1}}.\] \(v,w\in H_{n}\) and \(h\in H_{1}\) implies that the right hand side of the above inequality is finite. Therefore the map \[H_{n}\ni v\mapsto f_{2}(v,v)\in H_{n}\] is a homogeneous polynomial of degree \(2\) on \(H_{n}\). Therefore it is analytic, and in particular is locally Lipschitz. By Remark 4.1, the map \(f_{3}\) is well defined and there exists a constant \(C_{n}>0\) such that the following inequality holds. \[\left|f_{3}(v)\right|_{\mathbb{R}}=\left|v\right|_{L^{\infty}}\leq C_{n}\left|v \right|_{L^{2}},\,\,\,v\in H_{n}.\] Moreover, for \(v_{1},v_{2}\in H_{n}\), triangle inequality and also by Remark 4.1, \[\left|f_{3}(v_{1})-f_{3}(v_{2})\right|_{\mathbb{R}}=\left|v_{1}-v_{2}\right|_{L ^{\infty}}\leq C_{n}\left|v_{1}-v_{2}\right|_{L^{2}}.\] Therefore \(f_{3}\) is Lipschitz continuous on \(H_{n}\). The map \(\beta\) is a trilinear map from \(\mathbb{R}^{3}\) to \(\mathbb{R}\), with \[\left|\beta(x_{1},x_{2},x_{3})\right|_{\mathbb{R}}=\left|x_{1}\right|_{ \mathbb{R}}\left|x_{2}\right|_{\mathbb{R}}\left|x_{3}\right|_{\mathbb{R}},\, \,\text{for }(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}.\] Therefore \(\beta\) is locally Lipschitz. So far we have shown that the maps \(f_{1},f_{2},f_{3}\) are locally Lipschitz. The map \(\psi_{0}\) is assumed to be of \(C_{0}^{\infty}\) class, and hence is bounded and locally Lipschitz. We can therefore conclude that the map \(\tilde{\psi}_{n}\), given by \[\tilde{\psi}_{n}:H_{n}\ni v\mapsto\big{(}(\psi_{0}\circ f_{3})\left(v\right),(\psi_{0}\circ f_{3}\circ f_{1})\left(v\right),(\psi_{0}\circ f_{3}\circ f_{ 2})\left(v,v\right)\big{)}\in\mathbb{R}^{3}\] is locally Lipschitz. 
The map \(\psi_{n}\) can be written as a composition of the functions described so far, as follows. \[\psi_{n}(v)=\Big{(}\beta\circ\tilde{\psi}_{n}\Big{)}\left(v\right)=\beta \big{(}(\psi_{0}\circ f_{3})\left(v\right),(\psi_{0}\circ f_{3}\circ f_{1}) \left(v\right),(\psi_{0}\circ f_{3}\circ f_{2})\left(v,v\right)\big{)}.\] Hence \(\psi_{n}\), as a composition of \(\beta\) with \(\tilde{\psi}_{n}\), is locally Lipschitz. **Lemma 4.6**.: _For each \(n\), the map_ \[\left[DG_{n}\right]\left(G_{n}\right):H_{n}\ni v\mapsto\big{[}DG_{n}(v)\big{]} \big{(}G_{n}(v)\big{)}\in H_{n}\] _is locally Lipschitz._ Proof of Lemma 4.6.: We show the local Lipschitz continuity by first observing that the map \(\left[DG_{n}\right]\left(G_{n}\right)\) is a composition of two maps \(DG_{n}\) and \(G_{n}\), and then showing that both are locally Lipshitz in \(H_{n}\). That \(G_{n}\) is locally Lipschitz has been shown in the previous lemma (Lemma 4.4). Define an auxiliary map \(f_{1}:H_{n}\to H_{n}\), given by \[f_{1}(v)=P_{n}\left(v\times h\right). \tag{4.8}\] Also define the maps \(f_{2},f_{3}:H_{n}\times H_{n}\to H_{n}\), given by \[f_{2}(v,w)=P_{n}\left(v\times(w\times h)\right) \tag{4.9}\] and \[f_{3}(v,w)=P_{n}\left(w\times(v\times h)\right). \tag{4.10}\] Note that the map \(f_{2}\) is similar to the map \(f_{2}\) defined in Lemma 4.4 and is therefore locally Lipschitz on \(H_{n}\). Although \(f_{3}\) is not exactly the same as the map \(f_{2}\), following the similar line of calculations we can show that \(f_{3}\) is also locally Lipschitz on \(H_{n}\). We now observe, see Proposition 2.2, that \(DG_{n}(v)\) is a linear combination of \(f_{1}(v),f_{2}(v,v)\) and \(f_{3}(v,v)\), and is therefore locally Lipschitz. By Remark 4.3, the map \(\left[DG\right](G)\), being a composition of two locally Lipschitz maps is locally Lipschitz. **The Approximated Equation.** The approximated equation in \(H_{n}\) is as follows. For \(n\in\mathbb{N}\), \[m_{n}(t)= P_{n}(m_{0})+\int_{0}^{t}P_{n}\big{(}m_{n}(s)\times\Delta m_{n}(s) \big{)}\,ds-\alpha\,\int_{0}^{t}P_{n}\big{[}m_{n}(s)\times\big{(}m_{n}(s) \times\Delta m_{n}(s)\big{)}\big{]}\,ds\] \[+\int_{0}^{t}P_{n}\left(m_{n}(s)\times u_{n}(s)\right)ds-\alpha\, \int_{0}^{t}P_{n}\left[m_{n}(s)\times\big{(}m_{n}(s)\times u_{n}(s)\big{)} \right]\,ds\] \[+\frac{1}{2}\int_{0}^{t}\big{[}DG_{n}\big{(}m_{n}(s)\big{)}\big{]} \big{[}G_{n}\big{(}m_{n}(s)\big{)}\big{]}\,ds+\int_{0}^{t}G_{n}\big{(}m_{n}(s) \big{)}\,dW(s),\ t\in[0,T]. \tag{4.11}\] Let \(s\in[0,T]\) and \(n\in\mathbb{N}\). Using Proposition 2.2, we can compute the correction term \(DG(m_{n})[G(m_{n})]\) in (4.11) as given by the following. Note that we suppress the argument \((s)\) for brevity. \[DG_{n}\big{(}m_{n}\big{)}\big{(}G_{n}(m_{n})\big{)}= P_{n}\big{(}P_{n}(m_{n}\times h)\times h\big{)}-\alpha\,P_{n} \bigg{(}P_{n}\big{(}m_{n}\times(m_{n}\times h)\big{)}\times h\bigg{)}\] \[-\alpha\,P_{n}\big{(}P_{n}\left(m_{n}\times h\right)\times(m_{n} \times h)\big{)}\] \[+P_{n}\bigg{(}m_{n}\times\big{(}P_{n}(m_{n}\times h)\times h \big{)}\bigg{)}\] \[-\alpha\,P_{n}\bigg{(}P_{n}\big{(}m_{n}\times(m_{n}(s)\times h) \big{)}\times(m_{n}\times h\big{)}\bigg{)}\] \[-\alpha\,P_{n}\bigg{(}m_{n}\times\big{(}P_{n}(m_{n}\times\big{(}m _{n}\times h\big{)}\big{)}\times h\big{)}\bigg{)}. \tag{4.12}\] **The Truncated Approximated Equation:** Let \(n\in\mathbb{N}\). We incorporate the cut-off \(\psi_{n}\) defined in (4.7) into the equation (4.11). 
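For concreteness, a cut-off of the type (4.6), from which \(\psi_{n}\) is built, can be realised explicitly. The following small Python sketch is purely illustrative and is not used anywhere in the analysis; the cubic smoothstep transition and the parameter names are our own choices.

```python
import numpy as np

def psi0(x, r1, r2):
    """C^1 cut-off in the spirit of (4.6): equal to 1 for |x| <= r1 and 0 for |x| >= r2.

    On the transition region r1 < |x| < r2 we use the cubic smoothstep
    s(t) = 3 t^2 - 2 t^3, whose derivative vanishes at t = 0 and t = 1,
    so psi0 is C^1, takes values in [0, 1] and has compact support.
    (Illustrative construction; any C^1 bump with these plateaus works.)
    """
    t = np.clip((np.abs(x) - r1) / (r2 - r1), 0.0, 1.0)
    return 1.0 - (3.0 * t**2 - 2.0 * t**3)

# Example with |h|_{L^infty} = 1, i.e. r1 = 2 and r2 = 3 as in (4.6).
xs = np.array([0.0, 1.5, 2.0, 2.5, 3.0, 4.0])
print(psi0(xs, r1=2.0, r2=3.0))   # [1.  1.  1.  0.5 0.  0. ]
```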
**Note:** In the sections that follow, we will replace the notation \(\psi_{n}\) with \(\psi\) for brevity. \[m_{n}(t)= P_{n}(m_{0})+\int_{0}^{t}P_{n}\big{(}m_{n}(s)\times\Delta m_{n}(s) \big{)}\,ds-\alpha\,\int_{0}^{t}P_{n}\left[m_{n}(s)\times\big{(}m_{n}(s)\times \Delta m_{n}(s)\big{)}\right]ds\] \[+\frac{1}{2}\int_{0}^{t}\psi\big{(}m_{n}(s)\big{)}^{2}\big{[}DG_ {n}\big{(}m_{n}(s)\big{)}\big{]}\big{[}G_{n}\big{(}m_{n}(s)\big{)}\big{]}\,ds\] \[+\int_{0}^{t}\psi\big{(}m_{n}(s)\big{)}G_{n}\big{(}m_{n}(s)\big{)} \,dW(s),\ t\in[0,T]. \tag{4.13}\] By Lemma 4.4, Lemma 4.5 and Lemma 4.6, we infer that the coefficients \(F_{n}^{i},\ i=1,\ldots 4\) and \(G_{n}\) are locally Lipschitz on \(H_{n}\), for each \(n\in\mathbb{N}\). Note that in order to prove that the solution to a stochastic differential equation is global, it is not sufficient that the coefficients are only locally Lipschitz. Global existence can be given by the one-sided linear growth, which in turn is given by the following inequality. For all \(v\in H_{n}\), we have the following. \[\big{\langle}F_{n}^{i}(v),v\big{\rangle}_{L^{2}}=0=\langle G_{n}(v),v\rangle_ {L^{2}}\,.\] Combining local Lipschitz regularity and one sided linear growth with Theorem 10.6 in [23], the problem (4.13) admits a unique solution. We now state a lemma that will be used in the calculations that follow. **Lemma 4.7**.: 1. _The following equality holds for all_ \(w\in H_{n}\)_._ \[\big{\langle}\big{[}DG_{n}(w)\big{]}\big{(}G_{n}(w)\big{)},w\big{\rangle}_{L ^{2}}=-\left|G(w)\right|_{L^{2}}^{2}.\] (4.14) 2. _There exists a constant_ \(C>0\) _such that for all_ \(n\in\mathbb{N}\)_, the following inequality holds._ \[\big{|}\big{\langle}\psi(w)^{2}\,\big{[}DG_{n}(w)\big{]}\big{(}G_{n}(w)\big{)},\Delta w\big{\rangle}_{L^{2}}\big{|}\leq C\big{(}1+\left|w\right|_{H^{1}} \big{)}\left|v\right|_{H^{1}},\ w\in H_{n}.\] (4.15) Proof of Lemma 4.7.: For a proof of (1), we refer the reader to Corollary B.3 in [14]. **Proof of (2):** From the proof of Lemma 4.6, we observe that the map \(\left[DG\right](G)\) is a sum of polynomial maps of degree \(2\) and \(3\). The equality in (4.12) gives the precise form of the term considered. The main idea of the proof is the following. \[\big{|}\big{\langle}\psi(v)^{2}\,\left[DG_{n}(v)\right]\big{(}G(v)\big{)}, \Delta v\big{\rangle}_{L^{2}}\big{|}=\big{|}\big{\langle}\psi(v)^{2}\,\nabla \big{[}[DG_{n}(v)]\big{(}G(v)\big{)}\big{]},\nabla v\big{\rangle}_{L^{2}} \big{|} \tag{4.16}\] By using the product rule for derivatives, the term \(\nabla\big{[}\big{[}DG_{n}(v)\big{]}\big{(}G(v)\big{)}\big{]}\) can be split into terms of two types. The first type, wherein the derivative is on \(v\) and the second type where the derivative is on the term \(h\). For the first type, we use Holder's inequality with \(L^{2}\) norm on the gradient term and \(L^{\infty}\) norm on the remaining term/s. For the second type, the \(L^{2}\) norm is applied on one of the terms containing \(v\), while all other terms get the \(L^{\infty}\) norm. The cut-off function \(\Psi\) ensures that the \(L^{\infty}\) norm is taken care of. As an example, we show the calculations here for the first term from (4.12). Rest follow suite. Let \(w\in H^{1}\). First we observe the following. There exists a constant \(C>0\) such that \[\left|w\times h\right|_{H^{1}}\leq C\left|w\right|_{H^{1}}\left|h\right|_{H^{1 }}.\] Now, let \(v\in H_{n}\). Since \(H_{n}\subset H^{1}\), the above inequality also holds for \(w\) replaced by \(v\). 
In the following sequence of inequalities, we use \(C\) to denote a generic constant that is positive and independent of \(n\in\mathbb{N}\). The constant \(C\) can depend on \(\left|h\right|_{H^{1}}\), and the value of \(C\) may change from line to line. \[\left|\psi(v)^{2}\left\langle P_{n}\left(P_{n}\left(v\times h \right)\times h\right),\Delta v\right\rangle_{L^{2}}\right|= \left|\psi(v)^{2}\left\langle P_{n}\left(v\times h\right) \times h,\Delta v\right\rangle_{L^{2}}\right|\] \[= \left|\psi(v)^{2}\left\langle\nabla\left(P_{n}\left(v\times h \right)\times h\right),\nabla v\right\rangle_{L^{2}}\right|\] \[\leq \left|\psi(v)^{2}\left\langle\nabla\big{(}P_{n}\left(v\times h \right)\big{)}\times h,\nabla v\right\rangle_{L^{2}}\right|\] \[+\left|\psi(v)^{2}\left\langle P_{n}\left(v\times h\right) \right\rangle\right|_{L^{2}}\left|h\right|_{L^{\infty}}\left|\nabla v\right|_{L ^{2}}\] \[+\psi(v)^{2}\left|P_{n}\left(v\times h\right)\right|_{L^{\infty}} \left|\nabla h\right|_{L^{2}}\left|\nabla v\right|_{L^{2}}\] \[\leq C\,\psi(v)^{2}\left|P_{n}\left(v\times h\right)\right|_{H^{1}} \left|h\right|_{H^{1}}\left|v\right|_{H^{1}}\] \[+\psi(v)^{2}\left|P_{n}\left(v\times h\right)\right|_{L^{\infty}} \left|h\right|_{H^{1}}\left|\nabla v\right|_{L^{2}}\] \[\leq C\,\psi(v)^{2}\left|v\times h\right|_{H^{1}}\left|v\right|_{H^{1}}\] \[+C\,\psi(v)^{2}\left|P_{n}\left(v\times h\right)\right|_{L^{\infty }}\left|\nabla v\right|_{L^{2}}\] \[\leq C\,\psi(v)^{2}\left|v\right|_{H^{1}}^{2}+\psi(v)^{2}\left|P_{n} \left(v\times h\right)\right|_{L^{\infty}}\left|v\right|_{H^{1}}.\] By the definition of the cut-off function \(\psi_{0}\), we have \[\psi(v)^{2}\left|P_{n}\left[\left(v\times h\right)\right]\right|_{L^{\infty}} \leq\left|h\right|_{L^{\infty}}+2. \tag{4.17}\] Therefore, combining the calculations given above, there exists a constant \(C>0\) such that \[\left|\psi(v)^{2}\left\langle P_{n}\left(P_{n}\left(v\times h\right)\times h \right),\Delta v\right\rangle_{L^{2}}\right|\leq C\left(1+\left|v\right|_{H^{1 }}\right)\left|v\right|_{H^{1}}. \tag{4.18}\] For \(n\in\mathbb{N}\), let \(m_{n}=\left(m_{n}(t)\right)_{t\in[0,T]}\) be the solution to the problem (4.13). We now obtain some uniform energy estimates which will be used to show tightness of the laws of the processes \(m_{n}\) on a suitable space. **Lemma 4.8**.: _We have the following bounds._ 1. _The following equality holds for each_ \(n\in\mathbb{N}\)_,_ \[\left|m_{n}(t)\right|_{L^{2}}^{2}=\left|m_{n}(0)\right|_{L^{2}}^{2}\text{ for each }t\in[0,T],\ \mathbb{P}-a.s.\] (4.19) 2. _There exists a constant_ \(C>0\) _such that for all_ \(n\in\mathbb{N}\)_, the following inequalities hold._ 1. 2. \[\mathbb{E}\left[\sup_{t\in[0,T]}\left|m_{n}(t)\right|_{H^{1}}^{2}\right]\leq C,\] (4.20) 2. 2. \[\mathbb{E}\left[\int_{0}^{T}\left|m_{n}(t)\times\Delta m_{n}(t)\right|_{L^{2}}^{ 2}\,dt\right]\leq C.\] (4.21) Proof of Lemma 4.8.: The bounds are obtained by applying the Ito formula followed by using the Burkholder-Davis-Gundy inequality (Lemma C.4) for the terms with the stochastic integral. Then we apply the Gronwall lemma to obtain the required bounds. Similar ideas have been used in [14], see Lemma 3.3, [13] among others. **Proof of the bound** (4.19): Let us choose and fix \(n\in\mathbb{N}\). We define a function \(\phi_{1}:H_{n}\to\mathbb{R}\) by \[\phi_{1}(v)=\frac{1}{2}|v|_{L^{2}}^{2},\text{ for }v\in H_{n}. \tag{4.22}\] Note that the Ito formula applied is for finite dimensional (Euclidean spaces) and hence \(H_{n}\) is required instead of \(L^{2}\). 
For \(v,v_{1},v_{2}\in H_{n}\), we have \[\phi_{1}^{\prime}(v)(v_{2})=\left\langle v,v_{2}\right\rangle_{L^{2}},\] and \[\phi_{1}^{\prime\prime}(v)(v_{1},v_{2})=\left\langle v_{1},v_{2}\right\rangle _{L^{2}}.\] Let \(m_{n}=\left(m_{n}(t)\right)_{t\in[0,T]}\) be the solution of (4.13). Applying the Ito formula to \(\phi_{1}\) gives us the following equation for all \(t\in[0,T]\), \(\mathbb{P}\)-a.s. \[\phi_{1}(m_{n}(t))= \phi_{1}(P_{n}(m_{0}))+\int_{0}^{t}\left\langle P_{n}\big{(}m_{n} (s)\times\Delta m_{n}(s)\big{)},m_{n}(s)\right\rangle_{L^{2}}\,ds\] \[-\alpha\,\int_{0}^{t}\left\langle P_{n}\bigg{(}m_{n}(s)\times \big{(}m_{n}(s)\times\Delta m_{n}(s)\big{)}\bigg{)},m_{n}(s)\right\rangle_{L^ {2}}\,ds\] \[+\int_{0}^{t}\left\langle P_{n}\big{(}m_{n}(s)\times u_{n}(s) \big{)},m_{n}(s)\right\rangle_{L^{2}}\,ds\] \[-\alpha\,\int_{0}^{t}\psi\big{(}m_{n}(s)\big{)}\left\langle P_{n }\bigg{(}m_{n}(s)\times\big{(}m_{n}(s)\times u_{n}(s)\big{)}\bigg{)},m_{n}(s) \right\rangle_{L^{2}}ds\] \[+\frac{1}{2}\int_{0}^{t}\left\langle\psi\big{(}m_{n}(s)\big{)}^{ 2}\left[DG_{n}\big{(}m_{n}(s)\big{)}\right]\!\big{(}G_{n}\big{(}m_{n}(s)\big{)} \big{)},m_{n}(s)\right\rangle_{L^{2}}\,ds\] \[+\frac{1}{2}\int_{0}^{t}\psi\big{(}m_{n}(s)\big{)}^{2}\left|G_{n} \big{(}m_{n}(s)\big{)}\right|_{L^{2}}^{2}\,ds\] \[= \frac{1}{2}\left|P_{n}(m_{0})\right|_{L^{2}}^{2}+\sum_{i=1}^{7}C_ {i}I_{i}(t),\ t\in[0,T]. \tag{4.23}\] Here \(C_{i},i=1,\ldots,7\) are the constants accompanying the integrals. For each \(n\in\mathbb{N}\), the projection operator is self-adjoint. That is for \(v_{1},v_{2}\in L^{2}\), the following holds. \[\left\langle P_{n}v_{1},v_{2}\right\rangle_{L^{2}}=\left\langle v_{1},P_{n}v_{ 2}\right\rangle_{L^{2}}. \tag{4.24}\] Also, \(P_{n}^{2}=P_{n}\) along with the self-adjoint property implies that \[\left\langle P_{n}v_{1},P_{n}v_{2}\right\rangle_{L^{2}}=\left\langle P_{n}^{2}v _{1},v_{2}\right\rangle_{L^{2}}=\left\langle P_{n}v_{1},v_{2}\right\rangle_{L ^{2}}. \tag{4.25}\] The above mentioned properties will be frequently used in the calculations that follow. Another property of vectors that will be used frequently is the following: \[\left\langle a,a\times b\right\rangle_{\mathbb{R}^{3}}=0\text{ for }a,b\in \mathbb{R}^{3}. \tag{4.26}\] Using the properties mentioned above, we can show that for \(i=1,2,3,4,6\). \[I_{i}(t)=0. \tag{4.27}\] In particular, we observe that the terms \(I_{3},I_{4}\) with the control operator do not contribute to the calculations since they are \(0\). We show the calculations for \(I_{1}\). Rest follow suite. Let \(v\in H_{n}\). \[\left\langle P_{n}\left(v\times\Delta v\right),v\right\rangle_{L^{2}} =\left\langle v\times\Delta v,P_{n}v\right\rangle_{L^{2}}\] \[=\left\langle v\times\Delta v,v\right\rangle_{L^{2}}\] \[=\int_{\mathcal{O}}\left\langle v(x)\times\Delta v(x),v(x) \right\rangle_{\mathbb{R}^{3}}\,dx=0.\] Replacing \(v\) in the above setup by \(m_{n}(s)\) and then integrating gives us the desired result. Notice that we have used only the properties of the projection operator \(P_{n}\) on \(L^{2}\) and the \(\mathbb{R}^{3}\) inner product properties. \(\psi\) is a scalar valued function and hence does not contribute to the \(\mathbb{R}^{3}\) inner product. Moreover, \(\psi\) is not a function of the space variable, and hence can be taken out of the \(L^{2}\) inner product as well. Therefore a similar result follows for \(I_{i},i=2,3,4,6\). 
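The cancellations above rest only on the pointwise identity (4.26), combined, as shown, with the projection properties (4.24)–(4.25), so they can be confirmed numerically on any discretisation. The following minimal Python sketch is purely illustrative and not part of the argument; the grid, the finite-difference Laplacian and the test fields \(v\), \(u\) are our own choices. Each printed pairing vanishes up to rounding error because the integrand \(\left\langle v(x),v(x)\times w(x)\right\rangle_{\mathbb{R}^{3}}\) is identically zero.

```python
import numpy as np

# Illustrative check of the cancellations behind I_1, ..., I_4 (I_6 is analogous,
# with h in place of u): the integrand <v, v x w>_{R^3} vanishes pointwise, so
# any quadrature of the corresponding L^2 pairing is at rounding-error level.
rng = np.random.default_rng(0)
N = 200
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

# Smooth R^3-valued test fields on the grid (arbitrary choices).
v = np.stack([np.cos(2 * np.pi * x), np.sin(3 * np.pi * x), x * (1.0 - x)], axis=1)
u = rng.standard_normal((N, 3))

# Second-order finite-difference Laplacian with homogeneous Neumann ends.
lap_v = np.empty_like(v)
lap_v[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
lap_v[0] = 2.0 * (v[1] - v[0]) / dx**2      # ghost point enforcing v'(0) = 0
lap_v[-1] = 2.0 * (v[-2] - v[-1]) / dx**2   # ghost point enforcing v'(1) = 0

def l2_inner(a, b):
    """Riemann-sum approximation of the L^2(0,1; R^3) inner product."""
    return float(np.sum(a * b) * dx)

print(l2_inner(v, np.cross(v, lap_v)))                 # term of type I_1: ~ 0
print(l2_inner(v, np.cross(v, np.cross(v, lap_v))))    # term of type I_2: ~ 0
print(l2_inner(v, np.cross(v, u)))                     # term of type I_3: ~ 0
print(l2_inner(v, np.cross(v, np.cross(v, u))))        # term of type I_4: ~ 0
```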
**Calculation for \(I_{5}\) and \(I_{7}\):** By equality (1) of Lemma 4.7, we have \[I_{5}(t)= \int_{0}^{t}\left\langle\psi\big{(}m_{n}(s)\big{)}^{2}\,\left[ DG_{n}\big{(}m_{n}(s)\big{)}\right]\big{[}G_{n}\big{(}m_{n}(s)\big{)}\right],m_{n}(s )\right\rangle_{L^{2}}\,ds\] \[= -\int_{0}^{t}\psi\big{(}m_{n}(s)\big{)}^{2}\,\big{|}G_{n}\big{(} m_{n}(s)\big{)}\big{|}_{L^{2}}^{2}\,\,ds=-I_{7}(t).\] Observe that in the equality (4.23), we have \(C_{5}=C_{7}=\frac{1}{2}\). Note that the term \(C_{7}I_{7}(t)\) in (4.23) is the term arising from the application of the Ito formula. Therefore, using equality (1) in Lemma 4.7, we have the following. \[C_{5}I_{5}(t)+C_{7}I_{7}(t)=0. \tag{4.28}\] Combining the above calculations with the equation (4.23), we get \[\frac{1}{2}|m_{n}(t)|_{L^{2}}^{2}=\frac{1}{2}|m_{n}(0)|_{L^{2}}^{2}.\] That is \[|m_{n}(t)|_{L^{2}}^{2}=|m_{n}(0)|_{L^{2}}^{2}.\] This holds for each \(t\in[0,T]\) and \(n\in\mathbb{N}\). This concludes the proof of the bound (4.19). Note that by the definition of \(m_{n}(0)\) and the projection operator \(P_{n}\), \[|m_{n}(0)|_{L^{2}}=|P_{n}(m(0))|_{L^{2}}\leq|m(0)|_{L^{2}}.\] Thus, there exists a constant \(C>0\) such that \[\mathbb{E}\sup_{n\in\mathbb{N}}\sup_{t\in[0,T]}|m_{n}(s)|_{L^{2}}^{2}\leq \mathbb{E}\left|m(0)\right|_{L^{2}}^{2}\leq C. \tag{4.29}\] **Proof of bound** (4.20): This can be shown by applying the Ito formula to the function \(\phi_{2}:H_{n}\to\mathbb{R}\) defined by \[\phi_{2}(v)=\frac{1}{2}|\nabla v|_{L^{2}}^{2}\,\,\text{for}\,\,v\in H^{1}. \tag{4.30}\] For each \(v,v_{1},v_{2}\in H_{n}\), \[\phi_{2}^{\prime}(v_{1})(v_{2})=\left\langle\nabla v_{1},\nabla v_{2}\right\rangle _{L^{2}}=\left\langle v_{1},-\Delta v_{2}\right\rangle_{L^{2}}=\left\langle v _{1},Av_{2}\right\rangle_{L^{2}}. \tag{4.31}\] Application of the Ito formula gives the following equation for all \(t\in[0,T]\)\(\mathbb{P}\)-a.s. \[\phi_{2}\big{(}m_{n}(t)\big{)}= \,\phi_{2}\big{(}P_{n}(m_{0})\big{)}+\int_{0}^{t}\left\langle P_{n }\big{(}m_{n}(s)\times\Delta m_{n}(s)\big{)},(-\Delta)m_{n}(s)\right\rangle_{L ^{2}}\,ds\] \[-\alpha\,\int_{0}^{t}\left\langle P_{n}\Big{(}m_{n}(s)\times\big{(}m_ {n}(s)\times\Delta m_{n}(s)\big{)}\Big{)},(-\Delta)m_{n}(s)\right\rangle_{L^{2}}ds\] \[+\int_{0}^{t}\left\langle P_{n}\big{(}m_{n}(s)\times u_{n}(s) \big{)},(-\Delta)m_{n}(s)\right\rangle_{L^{2}}ds\] \[-\alpha\,\int_{0}^{t}\psi\big{(}|m_{n}(s)|_{L^{\infty}}\big{)} \left\langle P_{n}\bigg{(}m_{n}(s)\times\big{(}m_{n}(s)\times u_{n}(s)\big{)} \bigg{)},(-\Delta)m_{n}(s)\right\rangle_{L^{2}}ds\] \[+\frac{1}{2}\int_{0}^{t}\left\langle\psi\big{(}m_{n}(s)\big{)}^ {2}\left[DG_{n}\big{(}m_{n}(s)\big{)}\right]\big{(}G_{n}\big{(}m_{n}(s)\big{)} \big{)}\,,(-\Delta)m_{n}(s)\right\rangle_{L^{2}}ds\] \[+\int_{0}^{t}\psi\big{(}m_{n}(s)\big{)}\left\langle G_{n}\big{(} m_{n}(s)\big{)},(-\Delta)m_{n}(s)\right\rangle_{L^{2}}dW(s)\] \[+\frac{1}{2}\int_{0}^{t}\left|\nabla G_{n}\big{(}m_{n}(s)\big{)} \right|_{L^{2}}^{2}ds\] \[= \frac{1}{2}\left|m_{0}\right|_{L^{2}}^{2}+\sum_{i=1}^{7}C_{i}J_{ i}(t). \tag{4.32}\] Here \(C_{i},i=1,\ldots,7\) are the constants accompanying the integrals. We show induvidual calculations for the terms. **Calculation for \(J_{1}\).** Let \(v\in H_{n}\). 
Then \[\left\langle P_{n}\left(v\times\Delta v\right),\Delta v\right\rangle_{L^{2}} =\left\langle v\times\Delta v,P_{n}\Delta v\right\rangle_{L^{2}}\] \[=\left\langle v\times\Delta v,\Delta v\right\rangle_{L^{2}}\] \[=\int_{\mathcal{O}}\left\langle v(x)\times\Delta v(x),\Delta v( x)\right\rangle_{\mathbb{R}^{3}}\,dx=0.\] Therefore replacing \(v\) by \(m_{n}(s)\) and integrating gives us the following. \[\int_{0}^{t}\left\langle P_{n}\big{(}m_{n}(s)\times\Delta m_{n}(s)\big{)},(- \Delta)\,m_{n}(s)\right\rangle_{L^{2}}ds=0. \tag{4.33}\] Working similar to the previous calculation, we have **Calculation for \(J_{2}\).** \[\int_{0}^{t}\left\langle P_{n}\bigg{(}m_{n}(s)\times\big{(}m_{n} (s)\times\Delta m_{n}(s)\big{)}\bigg{)},(-\Delta)\,m_{n}(s)\right\rangle_{L^{ 2}}ds\] \[\quad=\int_{0}^{t}\left\langle m_{n}(s)\times\Delta m_{n}(s),m_{n }(s)\times\Delta m_{n}(s)\right\rangle_{L^{2}}ds\] \[\quad=\int_{0}^{t}\left|m_{n}(s)\times\Delta m_{n}(s)\right|_{L^{ 2}}^{2}ds.\] Note that in the equation (4.13), the coefficient \((-\alpha)\) is negative. This along with the above equality can enable us to take this term on the left hand side after applying the Ito Lemma. **Calculation for \(J_{3}\).** Let \(\varepsilon>0\). For \(t\in[0,T]\), we have the following by Holder's inequality followed by Young's inequality. \[\left|J_{3}(t)\right|= \left|\int_{0}^{t}\left\langle P_{n}\big{(}m_{n}(s)\times u_{n}(s )\big{)},(-\Delta)\,m_{n}(s)\right\rangle_{L^{2}}ds\right|\] \[\leq \int_{0}^{t}\left|\left\langle P_{n}\big{(}m_{n}(s)\times u_{n}( s)\big{)},(-\Delta)\,m_{n}(s)\right\rangle_{L^{2}}\right|\,ds\] \[= \int_{0}^{t}\left|\left\langle m_{n}(s)\times u_{n}(s),P_{n} \Delta m_{n}(s)\right\rangle_{L^{2}}\right|\,ds\] \[= \int_{0}^{t}\left|\left\langle m_{n}(s)\times u_{n}(s),\Delta m_{ n}(s)\right\rangle_{L^{2}}\right|\,ds\] \[= \int_{0}^{t}\left|\left\langle u_{n}(s),m_{n}(s)\times\Delta m_{n}(s) \right\rangle_{L^{2}}\right|\,ds\] \[\leq \frac{\varepsilon}{2}\int_{0}^{t}|m_{n}(s)\times\Delta m_{n}(s)|_ {L^{2}}^{2}\,ds+\frac{C(\varepsilon)}{2}\int_{0}^{t}|u_{n}(s)|_{L^{2}}^{2}\,ds.\] **Calculation for \(J_{4}\).** Working similar to the above calculation, there exists a constant \(C(\varepsilon)\) such that for each \(t\in[0,T]\), \[\int_{0}^{t}|\left\langle P_{n}\bigg{(}m_{n}(s)\times\big{(}m_{n} (s)\times u_{n}(s)\big{)}\psi\big{(}m_{n}(s)\big{)},\Delta m_{n}(s)\Big{)}_{L^ {2}}\right|\,ds\] \[\leq\int_{0}^{t}|\left\langle m_{n}(s)\times\big{(}m_{n}(s) \times u_{n}(s)\big{)}\psi(m_{n}(s)),\Delta m_{n}(s)\right\rangle_{L^{2}}|\,ds\] \[\leq\frac{\varepsilon}{2}\frac{C(\varepsilon)}{2}\int_{0}^{t}|m_ {n}(s)\times\Delta m_{n}(s)|_{L^{2}}^{2}\,ds+\int_{0}^{t}\psi(m_{n}(s))^{2}|m _{n}(s)\times u_{n}(s)|_{L^{2}}^{2}\,ds\] \[\leq\frac{\varepsilon}{2}\int_{0}^{t}|m_{n}(s)\times\Delta m_{n}( s)|_{L^{2}}^{2}\,ds+\frac{C(\varepsilon)C(h)}{2}\int_{0}^{t}|u_{n}(s)|_{L^{2}} ^{2}\,ds.\] The second last inequality follows from Young's inequality. We observe that \[|m_{n}(s)\times u_{n}(s)|_{L^{2}}^{2}\leq|m_{n}(s)|_{L^{\infty}}^{2}\,|u_{n}(s )|_{L^{2}}^{2}\,.\] Also by the definition of the bump function (cut-off) \(\psi\) in (4.6), we have \[\psi(m_{n}(s))^{2}\,|m_{n}(s)|_{L^{\infty}}^{2}\leq\left(|h|_{L^{\infty}}+2 \right)^{2}.\] **Calculation for \(J_{5}\).** By Lemma 4.7, we have the following. 
\[|J_{5}(t)|\leq \int_{0}^{t}\left|\left\langle\psi\big{(}m_{n}(s)\big{)}^{2}\, \left[DG_{n}\big{(}m_{n}(s)\big{)}\right]\big{[}G_{n}\big{(}m_{n}(s)\big{)} \right],\Delta m_{n}(s)\right\rangle_{L^{2}}\right|\,ds\] \[\leq C\int_{0}^{t}\left(1+|m_{n}(s)|_{H^{1}}\right)|m_{n}(s)|_{H^{1}} \,ds.\] Since \(T<\infty\), the term on the right hand side of the above inequality can be replaced by square of \(|m_{n}(s)|_{H^{1}}\). That is, there exists a constants \(C_{1},C_{2}>0\) (which may depend on \(T\), but not on \(n\in\mathbb{N}\)) such that \[|J_{5}(t)|\leq C_{1}+C_{2}\int_{0}^{t}|m_{n}(s)|_{H^{1}}^{2}\,\,ds. \tag{4.34}\] The integral \(J_{7}\) can be bounded similarly. What remain now are the terms that constitute the noise. **Calculations related to the noise term \(J_{6}\).** The idea for bounding these terms is to use the Burkholder-Davis-Gundy inequality. With that in view, we present some calculations that will be required when bounding the terms (more precisely their expectation). The map \(G_{n}\) can be expressed as a sum of two maps \(G_{n}^{1},G_{n}^{2}\) on \(H_{n}\). \[G_{n}^{1}(v)=P_{n}(v\times h), \tag{4.35}\] \[G_{n}^{2}(v)=P_{n}\big{(}v\times(v\times h)\big{)}.\] (4.36) \[G_{n}=G_{n}^{1}-\alpha\,G_{n}^{2} \tag{4.37}\] Now, \[J_{6}(t)= \int_{0}^{t}\psi(m_{n}(s))\left\langle G_{n}\big{(}m_{n}(s) \big{)},(-\Delta)\,m_{n}(s)\right\rangle_{L^{2}}\,dW(s)\] \[= \int_{0}^{t}\psi\big{(}m_{n}(s)\big{)}\left\langle G_{n}^{1} \big{(}m_{n}(s)\big{)}-\alpha\,G_{n}^{2}\big{(}m_{n}(s)\big{)},(-\Delta)\,m_{n} (s)\right\rangle_{L^{2}}\,dW(s)\] \[= \int_{0}^{t}\psi\big{(}m_{n}(s)\big{)}\,\big{\langle}G_{n}^{1} \big{(}m_{n}(s)\big{)},(-\Delta)\,m_{n}(s)\big{\rangle}_{L^{2}}\ dW(s)\] \[-\alpha\,\int_{0}^{t}\psi\big{(}m_{n}(s)\big{)}\,\big{\langle}G_{n }^{2}\big{(}m_{n}(s)\big{)},(-\Delta)\,m_{n}(s)\big{\rangle}_{L^{2}}\ dW(s)\] \[= \int_{0}^{t}\psi\big{(}m_{n}(s)\big{)}\,\big{\langle}P_{n}\,(m_{n }(s)\times h)\,,(-\Delta)\,m_{n}(s)\big{\rangle}_{L^{2}}\ dW(s)\] \[-\alpha\,\int_{0}^{t}\psi\big{(}m_{n}(s)\big{)}\,\big{\langle}P_{ n}\big{(}m_{n}(s)\times(m_{n}(s)\times h)\big{)},(-\Delta)\,m_{n}(s)\big{\rangle}_{L^ {2}}\ dW(s)\] \[= \int_{0}^{t}\psi\big{(}m_{n}(s)\big{)}\,\big{\langle}\nabla\, \big{(}m_{n}(s)\times h\big{)}\,,\nabla m_{n}(s)\big{\rangle}_{L^{2}}\ dW(s)\] \[-\alpha\,\int_{0}^{t}\psi\big{(}m_{n}(s)\big{)}\,\big{\langle} \nabla\,\big{[}m_{n}(s)\times\big{(}m_{n}(s)\times h\big{)}\big{]}\,,\nabla m _{n}(s)\big{\rangle}_{L^{2}}\ dW(s).\] The plan now is to apply the Burkholder-Davis-Gundy inequality. Prior to that, we will establish the following estimates: By the product rule for derivatives, followed by the use of Holder's inequality, we get a constant \(C(h)>0\) such that for any \(t\in[0,T]\), \[\int_{0}^{t}\bigl{|}\psi(m_{n}(s))\,\big{\langle}\nabla m_{n}(s),\nabla(m_{n} (s)\times h)\big{\rangle}_{L^{2}}\bigr{|}^{2}\,ds\leq C(h)\int_{0}^{t}|m_{n}( s)|_{H^{1}}^{4}\,ds.\] Similarly, The following inequality holds for \(t\in[0,T]\). \[\int_{0}^{t}|\psi\big{(}m_{n}(s)\big{)}\langle\nabla m_{n}(s), \nabla\big{[}m_{n}(s)\times\big{(}m_{n}(s)\times h\big{)}\big{]} \rangle_{L^{2}}|^{2}\,ds\] \[\leq C(h)\int_{0}^{t}|m_{n}(s)|_{H^{1}}^{4}\ ds.\] The details for these calculations are similar to the proof of Lemma 4.7. Effectively, we have shown that there exists a constant \(C>0\) such that \[\left|\int_{0}^{t}\big{\langle}\psi\big{(}m_{n}(s)\big{)}G_{n} \big{(}m_{n}(s)\big{)},\Delta m_{n}(s)\big{\rangle}_{L^{2}}^{2}\ ds\right| \leq C(h)\int_{0}^{t}|m_{n}(s)|_{H^{1}}^{4}\ ds. 
\tag{4.38}\] Let \(\varepsilon>0\). By the Burkholder-Davis-Gundy inequality, see Lemma C.4, we deduce \[\mathbb{E}\sup_{t\in[0,T]}\left|\int_{0}^{t}\big{\langle}\psi \big{(}m_{n}(s)\big{)}G_{n}\big{(}m_{n}(s)\big{)},\Delta m_{n}(s)\big{\rangle} _{L^{2}}^{2}\ dW(s)\right|\] \[\leq C(h)\mathbb{E}\left(\int_{0}^{T}|\nabla m_{n}(s)|_{H^{1}}^{4} \,ds\right)^{\frac{1}{2}}\] \[\leq C(h)\mathbb{E}\left[\int_{0}^{T}|m_{n}(s)|_{H^{1}}^{2}\,|m_{n}( s)|_{H^{1}}^{2}\ ds\right]^{\frac{1}{2}}\] \[\leq C(h)\mathbb{E}\left[\left(\sup_{t\in[0,T]}|m_{n}(t)|_{H^{1}}^{2 }\right)^{\frac{1}{2}}\left(\int_{0}^{T}|m_{n}(s)|_{H^{1}}^{2}\ ds\right)^{ \frac{1}{2}}\right]\text{(By the Cauchy-Schwartz inequality)}\] \[\leq \frac{\varepsilon}{2}\mathbb{E}\left[\sup_{t\in[0,T]}|m_{n}(t)|_{H^{1}}^{2 }\right]+\frac{C(\varepsilon)C(h)^{2}}{2}\mathbb{E}\left[\int_{0}^{T}|m_{n}(s)| _{H^{1}}^{2}\ ds\right].\text{(By the Young's inequality)}\] The first term on the right hand side of the above inequality has the coefficient \(\frac{\varepsilon}{2}\). This \(\varepsilon>0\) is chosen later such that the coefficient on the left hand side remains positive. We combine all the above inequalities (except the calculations for \(J_{6}\)) with the equation (4.32) to get \[|\nabla m_{n}(t)|_{L^{2}}^{2}+ (\alpha\,-\varepsilon)\int_{0}^{t}|m_{n}(s)\times\Delta m_{n}(s)| _{L^{2}}^{2}\,ds\leq|m_{n}(0)|_{H^{1}}^{2}+\frac{C(\varepsilon)}{2}[C(h)+1] \int_{0}^{t}|u_{n}(s)|_{L^{2}}^{2}\,ds \tag{4.39}\] \[+ C(h)\int_{0}^{t}|m_{n}(s)|_{H^{1}}^{2}\,ds+C(h)\bigg{|}\int_{0}^ {t}\big{\langle}\psi\big{(}m_{n}(s)\big{)}G_{n}\big{(}m_{n}(s)\big{)},\Delta m _{n}(s)\big{\rangle}_{L^{2}}\ dW(s)\bigg{|}.\] Choose \(\varepsilon\) small enough such that \(\alpha\,-\varepsilon>0\). For instance, \(\varepsilon=\frac{\alpha}{2}\) works here. This implies that the second term on the left hand side of the above inequality is non-negative. Therefore we can remove that term, still keeping the inequality intact. We take \(\sup_{t\in[0,T]}\) of both sides of the resulting inequality, followed by taking the expectation to get \[\mathbb{E}\sup_{t\in[0,T]}|\nabla m_{n}(t)|_{L^{2}}^{2}\leq \mathbb{E}|m_{n}(0)|_{H^{1}}^{2}+\frac{C(\varepsilon)}{2}[C(h)+1]\mathbb{E} \int_{0}^{T}|u_{n}(s)|_{L^{2}}^{2}\,ds\] \[+C(h)\mathbb{E}\int_{0}^{T}|m_{n}(s)|_{H^{1}}^{2}\,ds+C(h) \mathbb{E}\sup_{t\in[0,T]}\bigg{|}\int_{0}^{t}\big{\langle}\psi(m_{n}(s))G_{ n}(m_{n}(s)),\Delta m_{n}(s)\big{\rangle}_{L^{2}}\ dW(s)\bigg{|}.\] Combining all the constants into a suitable constant \(C>0\) and replacing \(|m_{n}(s)|_{H^{1}}\) (inside the integrals) by \(\sup_{r\in[0,s]}|m_{n}(r)|_{H^{1}}\) gives \[\mathbb{E}\sup_{t\in[0,T]}|\nabla m_{n}(t)|_{L^{2}}^{2}\leq\mathbb{E}|m_{n}(0) |_{H^{1}}^{2}+C(K)+\varepsilon\mathbb{E}\sup_{t\in[0,T]}|m_{n}(t)|_{H^{1}}^{2} +C\mathbb{E}\int_{0}^{T}\sup_{r\in[0,s]}|m_{n}(r)|_{H^{1}}^{2}\ ds.\] Here again, \(\varepsilon=\min\{\frac{\alpha}{2},\frac{1}{2}\}\) is chosen in order to keep the coefficient on the left hand side positive. We observe the following. For \(v\in H^{1}\), \[|v|_{H^{1}}^{2}=|v|_{L^{2}}^{2}+|\nabla v|_{L^{2}}^{2}\,.\] Therefore applying the bound (4.29), we replace the left hand side by the full \(H^{1}\) norm by adding a constant (using Lemma 4.9 and finiteness of \(T\)) to the right hand side. 
The resulting inequality is \[(1-\varepsilon)\,\mathbb{E}\sup_{t\in[0,T]}|m_{n}(t)|_{H^{1}}^{2}\leq\mathbb{ E}|m_{n}(0)|_{H^{1}}^{2}+C(K)+C\mathbb{E}\int_{0}^{T}\sup_{r\in[0,s]}|m_{n}(r)| _{H^{1}}^{2}\ ds.\] Using Fubini's theorem and then applying the Gronwall Lemma, the assumption on the initial data \(m_{0}\) implies that there exists a constant \(C>0\) such that \[\mathbb{E}\sup_{t\in[0,T]}|m_{n}(t)|_{H^{1}}^{2}\leq C. \tag{4.40}\] This concludes the proof of bound (4.20). **Proof of bound (4.21)**: Going back to the inequality (4.39) (with \(\varepsilon=\min\{\frac{\alpha}{2},\frac{1}{2}\}\)), we observe that the first term on the right hand side is non-negative. Hence the term can be neglected without changing the inequality. We take the supremum over \([0,T]\), followed by the expectation of both sides. In particular, the bound (4.20) implies that there exists a constant \(C>0\) such that \[\mathbb{E}\int_{0}^{T}|m_{n}(t)\times\Delta m_{n}(t)|_{L^{2}}^{2}\ dt\leq C. \tag{4.41}\] This concludes the proof of bound (4.21), and hence the proof of Lemma 4.8. Having shown some energy estimates, we show that the \(p\)-th order moments for the approximations are also bounded. **Lemma 4.9**.: _Assume \(p\geq 1\). There exists a constant \(C>0\) such that for all \(n\in\mathbb{N}\), the following bounds hold_ \[\mathbb{E}\left[\sup_{r\in[0,T]}|m_{n}(r)|_{H^{1}}^{2p}\right]\leq C_{p}, \tag{4.42}\] \[\mathbb{E}\left[\int_{0}^{T}|m_{n}(s)\times\Delta m_{n}(s)|_{L^{2}}^{2}\,ds \right]^{p}\leq C_{p}, \tag{4.43}\] \[\mathbb{E}\left[\left(\int_{0}^{T}|m_{n}(s)\times(m_{n}(s)\times\Delta m_{n} \left(s\right))|_{L^{2}}^{2}\,ds\right)^{p}\right]\leq C_{2p}, \tag{4.44}\] \[\mathbb{E}\left[\left(\int_{0}^{T}|m_{n}(s)\times u_{n}(s)|_{L^{2}}^{2}\,ds \right)^{p}\right]\leq C_{2p}, \tag{4.45}\] \[\mathbb{E}\left[\left(\int_{0}^{t}|m_{n}(s)\times(m_{n}(s)\times u_{n}(s))|_{ L^{2}}^{2}\,ds\right)^{p}\right]\leq C_{4p}. \tag{4.46}\] The quantity written at the base of the constants \(C\) represents the regularity of \(u\) that is required for the bound with \(p\) to hold on the left hand side. For instance in the bound (4.45), the constant \(C\) depends on \(K_{2p}\) and in the bound (4.46) the constant depends on \(K_{4p}\). In particular for \(p=1\), we require the bound \(K_{4}\). Proof of Lemma 4.9.: The proof uses the calculations from the previous lemma (Lemma 4.8). The idea is to again apply Ito's formula and obtain (4.52). The same bounds as before will be used, except for the stochastic integral term. We write some more calculations regarding the individual terms that are not given previously. **Proof of the bound** (4.42): Let \(p\geq 1\). As before, let \(\varepsilon>0\). We recall the following inequality established in the calculations for \(J_{6}\) in the proof for (4.20) in the previous lemma. \[\left|\int_{0}^{t}\left\langle\psi\big{(}m_{n}(s)\big{)}G_{n}\big{(}m_{n}(s) \big{)},\Delta m_{n}(s)\right\rangle_{L^{2}}^{2}\,ds\right|\leq C\int_{0}^{t}|m _{n}(s)|_{H^{1}}^{4}\,\,ds. 
\tag{4.47}\] By Lemma C.4, followed by Cauchy-Schwartz inequality and Young's inequality and then Jensen's inequality, there exists constant \(C,C(\varepsilon)\) (which can depend on \(h\)) such that \[\mathbb{E}\sup_{t\in[0,T]}\left|\int_{0}^{t}\left\langle\psi(m_{ n}(s))G_{n}\big{(}m_{n}(s)\big{)},\Delta m_{n}(s)\right\rangle_{L^{2}}\,dW(s) \right|^{p}\] \[\leq C\mathbb{E}\left(\int_{0}^{t}\left\langle\psi\big{(}m_{n}(s) \big{)}G_{n}\big{(}m_{n}(s)\big{)},\Delta m_{n}(s)\right\rangle_{L^{2}}^{2}\, ds\right)^{\frac{p}{2}}\] \[\leq C\mathbb{E}\left(\int_{0}^{T}|m_{n}(s)|_{H^{1}}^{4}\,\,ds\right) ^{\frac{p}{2}}\] \[\leq C\mathbb{E}\left[\int_{0}^{T}|m_{n}(s)|_{H^{1}}^{2}\,|m_{n}(s)| _{H^{1}}^{2}\,\,ds\right]^{\frac{p}{2}}\] \[\leq C\mathbb{E}\left[\left(\sup_{t\in[0,T]}|m_{n}(t)|_{H^{1}}^{2} \right)^{\frac{p}{2}}\left(\int_{0}^{T}|m_{n}(s)|_{H^{1}}^{2}\,\,ds\right)^{ \frac{p}{2}}\right]\text{(By Cauchy-Schwartz inequality)}\] \[\leq \frac{\varepsilon}{2}\mathbb{E}\left[\sup_{t\in[0,T]}|m_{n}(t)|_{ H^{1}}^{2p}\right]+\frac{C(\varepsilon)C^{2}}{2}\mathbb{E}\left[\int_{0}^{T}|m_{n}(s)| _{H^{1}}^{2p}\,\,ds\right].\text{(By Young's inequality)}\] We recall the inequality (4.39) and restate it here. \[|\nabla m_{n}(t)|_{L^{2}}^{2}+ (\alpha\,-\varepsilon)\int_{0}^{t}|m_{n}(s)\times\Delta m_{n}(s)|_{ L^{2}}^{2}\,ds\leq|m_{n}(0)|_{H^{1}}^{2}+\frac{C(\varepsilon)}{2}[C(h)+1]\int_{0}^{t} |u_{n}(s)|_{L^{2}}^{2}\,ds\] \[+C(h)\int_{0}^{t}|m_{n}(s)|_{H^{1}}^{2}\,ds+C(h)\bigg{|}\int_{0}^{t }\big{\langle}\psi\big{(}m_{n}(s)\big{)}G_{n}\big{(}m_{n}(s)\big{)},\Delta m_{ n}(s)\big{\rangle}_{L^{2}}\;dW(s)\bigg{|}. \tag{4.48}\] Choose \(\varepsilon>0\) such that \[\alpha\,-\varepsilon>0. \tag{4.49}\] Therefore the second term on the left hand side of the resulting inequality is non-negative. Hence the inequality remains the same even if that term is neglected. We raise both sides of the inequality (4.39) (after choosing \(\varepsilon=\min\{\frac{1}{2},\frac{\alpha}{2}\}\)) to power \(p\geq 1\) and use Jensen's inequality (to bring the power \(p\) inside the time integral) to get \[\sup_{t\in[0,T]}|\nabla m_{n}(t)|_{L^{2}}^{2p}\leq C(p)\bigg{[}|m_ {n}(0)|_{H^{1}}^{2p}+\frac{C(\varepsilon)^{2}}{2}[C(h)+1]^{p}\left(\int_{0}^{ T}|u_{n}(s)|_{L^{2}}\,ds\right)^{p}\] \[+C(h)\int_{0}^{T}|m_{n}(s)|_{H^{1}}^{2p}\,ds+C(h)\bigg{|}\int_{0}^ {t}\big{\langle}\psi\big{(}m_{n}(s)\big{)}\nabla G_{n}\big{(}m_{n}(s)\big{)},\nabla m_{n}(s)\big{\rangle}_{L^{2}}\;dW(s)\bigg{|}^{p}.\] In the steps that follow, the constant \(C(p)\) is absorbed into the existing constants. We take the expectation of both sides to get \[\mathbb{E}\sup_{t\in[0,T]}|\nabla m_{n}(t)|_{L^{2}}^{2p}\leq C(p)^{p}\mathbb{E}|m_{n}(0)|_{H^{1}}^{2p}+\frac{C(\varepsilon)^{p}}{2^{p} }[C(h)+1]^{p}\mathbb{E}\left(\int_{0}^{T}|u_{n}(s)|_{L^{2}}^{2}\,ds\right)^{p}\] \[+C(h)^{p}\mathbb{E}\int_{0}^{T}|m_{n}(s)|_{H^{1}}^{2p}\,ds\] \[+C(h)^{p}\mathbb{E}\bigg{|}\int_{0}^{t}\big{\langle}\psi\big{(}m_ {n}(s)\big{)}\nabla G_{n}\big{(}m_{n}(s)\big{)},\nabla m_{n}(s)\big{\rangle}_ {L^{2}}\;dW(s)\bigg{|}^{p}.\] The inequalities established at the start of this proof enable us to write the following inequality. \[\mathbb{E}\sup_{t\in[0,T]}|\nabla m_{n}(t)|_{L^{2}}^{2p}\leq C(p)^{p}\mathbb{E}\,|m_{n}(0)|_{H^{1}}^{2p}+C(h)\mathbb{E}\int_{0}^{T}|m_{n} (s)|_{H^{1}}^{2p}\,ds. \tag{4.50}\] The constants \(C,C(h)\) may depend on \(p,K,h,\varepsilon,m_{0}\) but not on \(n\) and may vary from line to line. Since \(m_{n}(0)=P_{n}m_{0}\), the following holds. 
\[\mathbb{E}\,|m_{n}(0)|_{H^{1}}^{2p}\leq\mathbb{E}\,|m_{0}|_{H^{1}}^{2p}\,. \tag{4.51}\] By the assumptions on the initial data \(m_{0}\), the right hand side of the above inequality is finite, thus allowing the first term on the right hand side of the previous inequality to be bounded by a constant. The idea now is to add the term \(\mathbb{E}\sup_{t\in[0,T]}|m_{n}(t)|_{L^{2}}^{2p}\) to both sides of the inequality, as done in the proof of the bound (4.20) in Lemma 4.8. The left hand can therefore be replaced by \(\mathbb{E}\sup_{t\in[0,T]}|m_{n}(t)|_{H^{1}}^{2p}\). On the right hand side, we use the bound (4.19) to bound the added term by a constant. Hence the above inequality implies that there exists constants \(C_{1},C_{2}>0\) such that \[\mathbb{E}\sup_{t\in[0,T]}|m_{n}(t)|_{H^{1}}^{2p}\leq C_{1}+C_{2}\mathbb{E} \int_{0}^{T}|m_{n}(s)|_{H^{1}}^{2p}\,ds. \tag{4.52}\] We now use the Gronwall's inequality to get a constant \(C_{p}>0\) such that \[\mathbb{E}\sup_{t\in[0,T]}|m_{n}(t)|_{H^{1}}^{2p}\leq C_{p},\ n\in\mathbb{N}. \tag{4.53}\] This completes the proof of the bound (4.42). **Proof for the bound (4.43)**: Consider the inequality (4.39). The first term on the left hand side is non-negative. Therefore the following inequality also holds. \[(\alpha\,-\varepsilon)\int_{0}^{t}|m_{n}(s)\times\Delta m_{n}(s)|_{L ^{2}}^{2}ds \leq|m_{n}(0)|_{H^{1}}^{2}+\frac{C(\varepsilon)}{2}[C(h)+1]\int_{0}^{t}|u_ {n}(s)|_{L^{2}}^{2}ds\] \[\quad+C(h)\int_{0}^{t}|m_{n}(s)|_{H^{1}}^{2}ds+C(h)\int_{0}^{t}|m_ {n}(s)|_{H^{1}}^{2}dW(s).\] We have chosen \(\varepsilon\leq\frac{\alpha}{2}\). Multiplying by a suitable constant, raising the power of both sides to \(p\geq 1\), followed by taking the expectation of both sides gives \[\mathbb{E}\int_{0}^{T}|m_{n}(s)\times\Delta m_{n}(s)|_{L^{2}}^{2 p}ds\] \[\leq C(p)\bigg{[}\mathbb{E}|m_{n}(0)|_{H^{1}}^{2p}+\frac{C( \varepsilon)^{p}}{2^{p}}[C(h)+1]^{p}C(p)^{p}\mathbb{E}\left(\int_{0}^{T}|u_{n} (s)|_{L^{2}}^{2}\,ds\right)^{p}\] \[\quad+C(h)^{p}\mathbb{E}\int_{0}^{T}|m_{n}(s)|_{H^{1}}^{2p}ds+C(h )^{p}\mathbb{E}\left(\int_{0}^{T}|m_{n}(s)|_{H^{1}}^{2}\,dW(s)\right)^{p} \bigg{]}\] \[\leq C(p)\mathbb{E}|m_{n}(0)|_{H^{1}}^{2p}+C(\varepsilon,h,p,K)+C (h,p,T)\mathbb{E}\sup_{s\in[0,T]}|m_{n}(s)|_{H^{1}}^{2p}.\] The inequality follows from the Jensen's inequality, combining the constants and using the Burkholder-Davis-Gundy inequality inequality. Also the constant \(K\) arises due to (4) in Assumption (3.1) on the control process \(u\). Thus, there exists a constant \(C_{p}>0\) such that \[\mathbb{E}\left(\int_{0}^{T}|m_{n}(s)\times\Delta m_{n}(s)|_{L^{ 2}}^{2}ds\right)^{p}\leq C_{p},\ n\in\mathbb{N}. \tag{4.54}\] **Proof of the bound (4.44):** The proof is done using the bounds (4.53) and (4.54), along with the continuous embedding \(H^{1}\hookrightarrow L^{\infty}\). \[\int_{0}^{T}|m_{n}(s)\times(m_{n}(s)\times\Delta m_{n}(s))|_{L^{ 2}}^{2}\,ds \leq\int_{0}^{T}|m_{n}(s)|_{L^{\infty}}^{2}|m_{n}(s)\times\Delta m _{n}(s)|_{L^{2}}^{2}\,ds\] \[\leq C\int_{0}^{T}|m_{n}(s)|_{H^{1}}^{2}|m_{n}(s)\times\Delta m _{n}(s)|_{L^{2}}^{2}\,ds\] \[\leq C\sup_{r\in[0,T]}(|m_{n}(r)|_{H^{1}}^{2})\int_{0}^{T}|m_{n}( s)\times\Delta m_{n}(s)|_{L^{2}}^{2}\,ds.\] The proof follows by raising the power to \(p\), taking the expectation of both sides and using the bounds (4.53) and (4.54). 
By the above inequality followed by the Cauchy-Schwartz inequality, we get \[\mathbb{E}\left(\int_{0}^{T}|m_{n}(s)\times(m_{n}(s)\times\Delta m _{n}(s))|_{L^{2}}^{2}\,ds\right)^{p}\] \[\leq C\mathbb{E}\left(\sup_{s\in[0,T]}(|m_{n}(s)|_{H^{1}}^{2}) \int_{0}^{T}|m_{n}(s)\times\Delta m_{n}(s)|_{L^{2}}^{2}\,ds\right)^{p}\] \[\leq C\left[\mathbb{E}\left(\int_{0}^{T}|m_{n}(s)\times\Delta m_{ n}(s)|_{L^{2}}^{2}\,ds\right)^{2p}\right]^{\frac{1}{2}}\left[\mathbb{E}\sup_{s\in[0,T]}|m _{n}(s)|_{H^{1}}^{4p}\right]^{\frac{1}{2}}.\] The last inequality follows by the Holder's inequality. Thus there exists a constant \(C>0\) such that for each \(n\in\mathbb{N}\), the following inequality holds: \[\mathbb{E}\left[\left(\int_{0}^{T}|m_{n}(s)\times(m_{n}(s)\times\Delta m_{n}(s)) |^{2}_{L^{2}}\right)^{p}\right]\leq C_{2p}. \tag{4.55}\] This completes the proof of the bound (4.44). **Proof of the bounds (4.45) and (4.46):** Let \(s\in[0,T]\). There exists a constant \(C>0\), independent of \(n,s\) such that \[|m_{n}(s)\times u_{n}(s)|^{2}_{L^{2}}\leq|m_{n}(s)|^{2}_{L^{\infty}}|u_{n}(s)|^ {2}_{L^{2}}\leq C|m_{n}(s)|^{2}_{H^{1}}|u_{n}(s)|^{2}_{L^{2}}.\] The above inequality uses the following continuous embedding: \[H^{1}\hookrightarrow L^{\infty}.\] Therefore by Holder's inequality, followed by Jensen's inequality we have \[\mathbb{E}\left[\int_{0}^{T}|m_{n}(s)\times u_{n}(s)|^{2}_{L^{2} }\,ds\right]^{p} \leq C\mathbb{E}\left[\int_{0}^{T}|m_{n}(s)|^{2}_{H^{1}}|u_{n}(s )|^{2}_{L^{2}}\,ds\right]^{p}\] \[\leq C\mathbb{E}\left[\sup_{s\in[0,T]}|m_{n}(s)|^{2p}_{H^{1}} \left(\int_{0}^{T}|u_{n}(s)|^{2}_{L^{2}}\,ds\right)^{p}\right]\] \[\text{(By Cauchy-Schwartz inequality)} \leq C\left(\mathbb{E}\sup_{s\in[0,T]}|m_{n}(s)|^{4p}_{H^{1}} \right)^{\frac{1}{2}}\left[\mathbb{E}\left(\int_{0}^{T}|u_{n}(s)|^{2}_{L^{2} }\,ds\right)^{2p}\right]^{\frac{1}{2}}.\] Thus by the bound (4.42), there exists a constant \(C_{2p}>0\) such that \[\mathbb{E}\left[\left(\int_{0}^{T}|m_{n}(s)\times u_{n}(s)|^{2}_{L^{2}}\,ds \right)^{p}\right]\leq C_{2p},\ \forall n\in\mathbb{N}. \tag{4.56}\] For the bound (4.46), \[|m_{n}(s)\times(m_{n}(s)\times u_{n}(s))|^{2}_{L^{2}} \leq|m_{n}(s)|^{2}_{L^{\infty}}\left|m_{n}(s)\times u_{n}(s) \right|^{2}_{L^{2}}\] \[\leq C\left|m_{n}(s)\right|^{2}_{H^{1}}\left|m_{n}(s)\times u_{n} (s)\right|^{2}_{L^{2}}\ \text{By }H^{1}\hookrightarrow L^{\infty}\] \[\leq C\left|m_{n}(s)\times u_{n}(s)\right|^{2}_{L^{2}}\sup_{s\in [0,t]}|m_{n}(s)|^{2}_{H^{1}}\,.\] Hence by the Holder's inequality, \[\mathbb{E}\left[\int_{0}^{T}|m_{n}(s)\times(m_{n}(s)\times u_{n} (s))|^{2}_{L^{2}}\ ds\right]^{p}\] \[\leq C\mathbb{E}\left[\sup_{s\in[0,T]}|m_{n}(s)|^{2}_{H^{1}}\int _{0}^{T}|m_{n}(s)\times u_{n}(s)|^{2}_{L^{2}}\ ds\right]^{p}\] \[\leq C\mathbb{E}\left[\sup_{s\in[0,t]}|m_{n}(s)|^{2p}_{H^{1}} \left(\int_{0}^{t}|m_{n}(s)\times u_{n}(s)|^{2}_{L^{2}}\ ds\right)^{p}\right]\] \[\leq C\left(\mathbb{E}\sup_{s\in[0,T]}|m_{n}(s)|^{4p}_{H^{1}} \right)^{\frac{1}{2}}\left(\mathbb{E}\left[\int_{0}^{T}|m_{n}(s)\times u_{n} (s)|^{2}_{L^{2}}\ ds\right]^{2p}\right)^{\frac{1}{2}}.\] Therefore there exists a constant \(C_{4p}>0\) such that \[\mathbb{E}\left[\int_{0}^{T}\left|m_{n}(s)\times\left(m_{n}(s)\times u_{n}(s) \right)\right|^{2}_{L^{2}}\ ds\right]^{p}\leq C_{4p}. \tag{4.57}\] This concludes the proof of Lemma 4.9. Having shown some uniform energy estimates, we now proceed to show some more uniform bounds for \(m_{n}\). 
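We also remark that the passage from \(m_{0}\) (respectively \(u\)) to \(P_{n}m_{0}\) (respectively \(u_{n}\)) in estimates such as (4.3), (4.29) and (4.51) only uses that \(P_{n}\) is the orthogonal projection onto a span of eigenfunctions of \(A\): it is self-adjoint, cf. (4.24)–(4.25), and a contraction for both the \(L^{2}\) and the \(H^{1}\) norm. These elementary properties can be confirmed numerically; the sketch below is purely illustrative, working with the scalar cosine eigenbasis of the Neumann Laplacian on \((0,1)\), a trapezoidal quadrature and test functions of our own choosing (\(P_{n}\) acts componentwise on \(\mathbb{R}^{3}\)-valued fields).

```python
import numpy as np

# Neumann-Laplacian eigenbasis on (0,1):  e_0 = 1,  e_k = sqrt(2) cos(k pi x),
# with A e_k = (k pi)^2 e_k.  P_n projects onto span{e_0, ..., e_{n-1}}.
N, n = 2000, 8
x = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1)); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid weights

def ip(f, g):                      # approximate L^2(0,1) inner product
    return float(np.sum(w * f * g))

basis = [np.ones_like(x)] + [np.sqrt(2.0) * np.cos(k * np.pi * x) for k in range(1, n)]

def P(f):                          # orthogonal projection onto H_n
    return sum(ip(f, e) * e for e in basis)

def h1(f):                         # |f|_{H^1} = (|f|_{L^2}^2 + |f'|_{L^2}^2)^{1/2}
    df = np.gradient(f, x)
    return np.sqrt(ip(f, f) + ip(df, df))

f = np.abs(x - 0.3)                # test function with a kink, so the truncation is visible
g = x**2 * (1.0 - x)

print(ip(P(f), g) - ip(f, P(g)))                    # self-adjointness (4.24): ~ 0
print(np.sqrt(ip(f, f)) - np.sqrt(ip(P(f), P(f))))  # nonnegative: L^2 contraction
print(h1(f) - h1(P(f)))                             # nonnegative: H^1 contraction
```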
**Lemma 4.10**.: _For each \(\gamma\in(0,\frac{1}{2})\) and \(p\geq 2\), then there exists a constant \(C>0\) such that for all \(n\in\mathbb{N}\), the following estimate holds._ \[\mathbb{E}\left[\left|m_{n}\right|_{W^{\gamma,p}(0,T;L^{2})}^{2}\right]\leq C. \tag{4.58}\] Proof of Lemma 4.10.: We show that each term on the right hand side of the approximate equation (4.13) satisfy the above bound and hence the process \(m\) satisfies the bound as well. The bounds established in the Lemma 4.9 imply that each integrand (except the Ito integrals) on the right hand side of (4.13) is uniformly bounded in the space \(L^{2}(\Omega;L^{2}(0,T;L^{2}))\), which implies that the integrals lie in the space \(L^{2}(\Omega;W^{1,2}(0,T;L^{2}))\). This combined with the continuous embedding (By Corollary 18, [62]) \[W^{1,2}(0,T;L^{2})\hookrightarrow W^{\gamma,p}(0,T;L^{2}). \tag{4.59}\] concludes the inequality. What remain are the Ito integrals. For those terms, we use Lemma C.2. Again, the bounds in Lemma 4.9 make sure that the required hypotheses are satisfied. We now recall a notation that will be used in the following sections. The space \(L^{2}_{w}(0,T;L^{2})\) denotes the space \(L^{2}(0,T;L^{2})\) endowed with the weak topology. **Lemma 4.11**.: _Let \(p\geq 1\) and \(q\geq 2\). The set of laws \(\left(\mathcal{L}(m_{n},u_{n})\right)_{n\in\mathbb{N}}\) is tight on the space \(L^{p}(0,T;L^{q})\cap C(0,T;L^{2})\times L^{2}_{w}(0,T;L^{2})\). Also, the law of the Wiener process \(W\) is tight on the space \(C([0,T];\mathbb{R})\)._ Proof.: First we show that the laws of \(m_{n}\) are concentrated inside a ball in the space \(L^{p}(0,T;H^{1})\cap W^{\gamma,p}(0,T;L^{2})\). Let \(R\geq 0\). By Chebyshev's inequality and the bounds established in Lemma 4.8 and Lemma 4.10, there exists a constant \(C>0\) such that \[\mathbb{P}\left(|m_{n}|_{L^{\infty}(0,T;H^{1})\cap W^{\gamma,p}(0,T;L^{2})}>R\right) \leq\mathbb{P}\left(|m_{n}|_{L^{\infty}(0,T;H^{1})}>\frac{R}{2} \right)+\mathbb{P}\left(|m_{n}|_{W^{\gamma,p}(0,T;L^{2})}>\frac{R}{2}\right)\] \[\text{(By Chebyshev's inequality)} \leq\frac{4}{R^{2}}\mathbb{E}\left|m_{n}\right|_{L^{\infty}(0,T ;H^{1})}^{2}+\frac{4}{R^{2}}\mathbb{E}\left|m_{n}\right|_{W^{\gamma,p}(0,T;L^ {2})}^{2}\] \[\text{(By Lemma 
4.8 and Lemma 4.10)}\leq\frac{C}{R^{2}}. \tag{4.60}\] For \(R\geq 0\), let \[B_{R}:=\left\{v\,:\,|v|_{L^{\infty}(0,T;H^{1})\cap W^{\gamma,p}(0,T;L^{2})}\leq R\right\}.\] Then for each \(R\geq 0\), \(B_{R}\) is a closed ball in \(L^{\infty}(0,T;H^{1})\cap W^{\gamma,p}(0,T;L^{2})\). Therefore \(B_{R}\) has compact closure in \(L^{p}(0,T;L^{q})\cap C([0,T];L^{2})\). For \(n\in\mathbb{N}\), \[\left(\mathcal{L}(m_{n})\right)(B_{R})=\mathbb{P}\left(\omega\in\Omega:\left|m_{n}(\omega)\right|_{L^{\infty}(0,T;H^{1})\cap W^{\gamma,p}(0,T;L^{2})}\leq R\right).\] The right hand side will be denoted by \(\mathbb{P}(B_{R})\). Let \(\tilde{B}_{R}\) denote the closure of \(B_{R}\) in \(L^{p}(0,T;L^{q})\cap C([0,T];L^{2})\). Similarly, \[\left(\mathcal{L}(m_{n})\right)(\tilde{B}_{R})=\mathbb{P}\left(\omega\in\Omega:m_{n}(\omega)\in\tilde{B}_{R}\right).\] The right hand side will be denoted by \(\mathbb{P}(\tilde{B}_{R})\). Note that \(\tilde{B}_{R}\) is a compact subset of \(L^{p}(0,T;L^{q})\cap C([0,T];L^{2})\). By (4.60), we have \[1-\mathbb{P}(\tilde{B}_{R})\leq 1-\mathbb{P}(B_{R})\leq\frac{C}{R^{2}}.\] Let \(\varepsilon>0\) be given. Choosing \(R\) large enough in (4.60) \(\left(R>\sqrt{\frac{C}{\varepsilon}}\right)\), we get \[1-\mathbb{P}(\tilde{B}_{R})<\varepsilon,\] giving the required tightness of the laws of \(m_{n}\). 
For showing the tightness of the laws of \(u_{n}\), we observe that by Chebyshev's inequality and the assumption on \(u\) there exists a constant \(C>0\) such that, \[\mathbb{P}(|u_{n}|_{L^{2}(0,T;L^{2})}>R)\leq\frac{1}{R^{2}}\mathbb{E}(|u_{n}| _{L^{2}(0,T;L^{2})}^{2})\leq\frac{C}{R^{2}}.\] The right hand side of the above inequality, and hence the left hand side can be made as small as desired by choosing \(R\) large enough. By the Banach Alaoglu theorem, a closed ball in the space \(L^{2}(0,T;L^{2})\) has compact closure in the space \(L^{2}_{w}(0,T;L^{2})\). Thus the sequence of laws \((\mathcal{L}(u_{n}))_{n\in\mathbb{N}}\) is tight on the space \(L^{2}_{w}(0,T;L^{2})\). \(\mathcal{L}(W)\) is a probability measure on \(C([0,T];\mathbb{R})\) and hence is tight on \(C([0,T];\mathbb{R})\). This concludes the proof of Lemma 4.11. In the sections that follow we choose and fix \(p=q=4\). ## 5. Proof of Theorem 3.3: Proof of the existence of a weak martingale solution We have so far shown the tightness of laws. Since the space \(L^{4}(0,T;L^{4})\cap C([0,T];L^{2})\times C([0,T];\mathbb{R})\times L^{2}_{w} (0,T;L^{2})\) is not a metric space, we cannot proceed by applying the Prokhorov Theorem (Theorem II.6.7, [54]) followed by the Skorohod Theorem (Theorem 6.7). We, instead, obtain convergence by using the Jakubowski version of the Skorohod Theorem. **Proposition 5.1**.: _There exists a sequence \((m^{\prime}_{n},W^{\prime}_{n},u^{\prime}_{n})\) of \(L^{4}(0,T;L^{4})\cap C([0,T];L^{2})\times C([0,T];\mathbb{R})\times L^{2}_{w }(0,T;L^{2})\)-valued random variables defined on a probability space \((\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{P}^{\prime})\) such that for each \(n\in\mathbb{N}\)\((m^{\prime}_{n},W^{\prime}_{n},u^{\prime}_{n})\) and \((m_{n},W,u_{n})\) have the same laws on \(L^{4}(0,T;L^{4})\cap C([0,T];L^{2})\times C([0,T];\mathbb{R})\times L^{2}_{w }(0,T;L^{2})\). Further, there exists a random variable \((m^{\prime},W^{\prime},u^{\prime})\) defined on \((\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{P}^{\prime})\) such that the law of \((m^{\prime},W^{\prime},u^{\prime})\) is equal to \(\nu\) on \(L^{4}(0,T;L^{4})\cap C([0,T];L^{2})\times C([0,T];\mathbb{R})\times L^{2}_{w} (0,T;L^{2})\). Moreover, the following convergences hold \(\mathbb{P}^{\prime}\)-a.s. as \(n\) goes to infinity._ \[m^{\prime}_{n}\to m^{\prime}\text{ in }L^{4}(0,T;L^{4})\cap C([0,T];L^{2}), \tag{5.1}\] \[W^{\prime}_{n}\to W^{\prime}\text{ in }C([0,T];\mathbb{R}), \tag{5.2}\] \[u^{\prime}_{n}\to u^{\prime}\text{ in }L^{2}_{w}(0,T;L^{2}). \tag{5.3}\] Proof.: This result follows from the Jakubowski version of the Skorohod Theorem, see Theorem 3.11 in [19], or Theorem A.1 in [20]. For using the result, it is required that the space \(\mathcal{X}=L^{4}(0,T;L^{4})\cap C([0,T];L^{2})\times C([0,T];\mathbb{R})\times L ^{2}_{w}(0,T;L^{2})\) satisfy certain hypothesis. That is, \(\mathcal{X}\) must be a topological space such that there exists a sequence \(\{f_{n}\}_{n\in\mathbb{N}}\) of continuous functions, \(f_{n}:\mathcal{X}\to\mathbb{R}\), that separates the points of \(\mathcal{X}\). A proof for this can be done similarly to the proof of Corollary 3.2 in [19]. For the sake of completion, we present some details here. It is now sufficient to show that each space in the Cartesian product \(\mathcal{X}\) satisfies the required hypothesis. \(L^{4}(0,T;L^{4})\cap C([0,T];L^{2})\) and \(C([0,T];\mathbb{R})\) are separable, complete metric spaces and hence satisfy the hypothesis. 
Regarding the space \(L^{2}_{w}(0,T;L^{2})\), let \(\{w_{n}\}_{n\in\mathbb{N}}\) be a countable dense set in \(L^{2}(0,T;L^{2})\). Such a set exists since the space \(L^{2}(0,T;L^{2})\) is separable. For \(n\in\mathbb{N}\), consider the functions \[f_{n}(v)=\int_{0}^{T}\left\langle v(t),w_{n}(t)\right\rangle_{L^{2}}\,dt.\] We claim that the functions defined above separate points of \(L^{2}_{w}(0,T;L^{2})\). Towards that, let us choose and fix \(v_{1},v_{2}\in L^{2}_{w}(0,T;L^{2})\) such that \(f_{n}(v_{1})=f_{n}(v_{2})\) for each \(n\in\mathbb{N}\). Since the set \(\{w_{n}\}_{n\in\mathbb{N}}\) is dense in \(L^{2}(0,T;L^{2})\), the equalities \(f_{n}(v_{1})=f_{n}(v_{2})\) for every \(n\in\mathbb{N}\) imply that \(v_{1}=v_{2}\). Equivalently, if \(v_{1}\neq v_{2}\), then there exists at least one \(n\in\mathbb{N}\) such that \(f_{n}(v_{1})\neq f_{n}(v_{2})\). Hence the functions \(f_{n}\) separate points of \(L^{2}_{w}(0,T;L^{2})\). **Remark 5.2**.: _As a consequence of Proposition 5.1, we have that the laws of \(u^{\prime}\) and \(u\) are equal. To see this, first recall that \(u_{n}=P_{n}(u)\) and that \(u_{n}\to u,\ \mathbb{P}\)-a.s. in \(L^{2}(0,T;L^{2})\). Proposition 5.1 implies that the processes \(u_{n}\) and \(u^{\prime}_{n}\) have the same laws on the space \(L^{2}_{w}(0,T;L^{2})\). This, combined with the \(\mathbb{P}^{\prime}\)-a.s. convergence of \(u^{\prime}_{n}\) to \(u^{\prime}\), gives the desired result._ As a corollary of Proposition 5.1, we have the following. **Corollary 5.3**.: \[m^{\prime}\in C([0,T];L^{2}),\ \mathbb{P}^{\prime}-a.s.\] Proof of Corollary 5.3.: The proof follows from the \(\mathbb{P}^{\prime}\)-a.s. convergence of the processes \(m^{\prime}_{n}\) to \(m^{\prime}\) in the space \(C([0,T];L^{2})\). **Remark 5.4**.: _The processes \(m^{\prime},u^{\prime}\) obtained in Proposition 5.1 are Borel measurable. Let the filtration \(\mathbb{F}^{\prime}=\{\mathcal{F}^{\prime}_{t}\}_{t\in[0,T]}\) be defined by_ \[\mathcal{F}^{\prime}_{t}=\sigma\left\{m^{\prime}(s),u^{\prime}(s),W^{\prime}(s):0\leq s\leq t\right\}.\] _Hence \(m^{\prime},u^{\prime}\) are \(\mathbb{F}^{\prime}\)-adapted. Thus, the processes \(m^{\prime}\) and \(u^{\prime}\) have progressively measurable modifications, see Proposition 1.12 in [43]. From now on, these progressively measurable modifications will be considered._ **Remark 5.5**.: _By the Kuratowski Theorem (Lemma C.10; see Theorem 1.1 in [64]), the Borel subsets of \(C([0,T];H_{n})\) are Borel subsets of \(L^{4}(0,T;L^{4})\cap C([0,T];L^{2})\). Hence we can assume that \(m^{\prime}_{n}\) takes values in \(C([0,T];H_{n})\) and that the laws of \(m_{n}\) and \(m^{\prime}_{n}\) are equal on \(C([0,T];H_{n})\)._ _The same can be said about the laws of the processes \(u_{n}\) and \(u^{\prime}_{n}\) on the space \(L^{2}(0,T;L^{2})\)._ The processes \(m^{\prime}_{n}\) satisfy the same estimates that are satisfied by \(m_{n}\), \(n\in\mathbb{N}\). In particular, we have the following proposition. **Proposition 5.6**.: _For each \(p\geq 1\), there exists a constant \(C>0\) such that for all \(n\in\mathbb{N}\), the following bounds hold:_ \[|m^{\prime}_{n}(t)|_{L^{2}}^{2}=|m^{\prime}_{n}(0)|_{L^{2}}^{2}\text{ for each }t\in[0,T]\ \mathbb{P}^{\prime}-a.s. 
\tag{5.4}\] \[\mathbb{E}^{\prime}\left[\sup_{s\in[0,T]}|m^{\prime}_{n}(s)|_{H^{1}}^{2p}\right]\leq C, \tag{5.5}\] \[\mathbb{E}^{\prime}\left(\int_{0}^{T}|m^{\prime}_{n}(s)\times\Delta m^{\prime}_{n}(s)|_{L^{2}}^{2}\,ds\right)^{p}\leq C, \tag{5.6}\] \[\mathbb{E}^{\prime}\left[\left(\int_{0}^{T}|m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times\Delta m^{\prime}_{n}(s))|_{L^{2}}^{2}\,ds\right)^{p}\right]\leq C, \tag{5.7}\] \[\mathbb{E}^{\prime}\left[\left(\int_{0}^{T}|m^{\prime}_{n}(s)\times u^{\prime}_{n}(s)|_{L^{2}}^{2}\,ds\right)^{p}\right]\leq C, \tag{5.8}\] \[\mathbb{E}^{\prime}\left[\left(\int_{0}^{T}|m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times u^{\prime}_{n}(s))|_{L^{2}}^{2}\,ds\right)^{p}\right]\leq C. \tag{5.9}\] Proof.: The proposition follows from Lemma 4.9 and Remark 5.5. By the bounds established in Proposition 5.6 above, in particular for \(p=1\), there exist processes \(\mathbb{Y},\mathbb{Z}\in L^{2}(\Omega^{\prime}:L^{2}(0,T;L^{2}))\) such that \[m^{\prime}_{n}\times\Delta m^{\prime}_{n}\to\mathbb{Y}\text{ weakly in }L^{2}(\Omega^{\prime}:L^{2}(0,T;L^{2})), \tag{5.10}\] \[m^{\prime}_{n}\times(m^{\prime}_{n}\times\Delta m^{\prime}_{n})\to\mathbb{Z}\text{ weakly in }L^{2}(\Omega^{\prime}:L^{2}(0,T;L^{2})). \tag{5.11}\] In Lemma 5.11 and Lemma 5.12, we show that for \(\phi\in L^{4}\left(\Omega^{\prime};L^{4}(0,T;H^{1})\right)\) \[\lim_{n\to\infty}\mathbb{E}^{\prime}\int_{0}^{T}\left\langle m^{\prime}_{n}(s)\times\Delta m^{\prime}_{n}(s)-m^{\prime}(s)\times\Delta m^{\prime}(s),\phi(s)\right\rangle_{L^{2}}\,ds=0, \tag{5.12}\] and \[\lim_{n\to\infty}\mathbb{E}^{\prime}\int_{0}^{T}\left\langle m^{\prime}_{n}(s)\times\left(m^{\prime}_{n}(s)\times\Delta m^{\prime}_{n}(s)\right)-m^{\prime}(s)\times\left(m^{\prime}(s)\times\Delta m^{\prime}(s)\right),\phi(s)\right\rangle_{L^{2}}\,ds=0. \tag{5.13}\] Since the space \(L^{4}\left(\Omega^{\prime};L^{4}(0,T;H^{1})\right)\) is dense in the space \(L^{2}\left(\Omega^{\prime};L^{2}(0,T;L^{2})\right)\), by uniqueness of limits, we can conclude that \[\mathbb{Y}=m^{\prime}\times\Delta m^{\prime}, \tag{5.14}\] and \[\mathbb{Z}=m^{\prime}\times\left(m^{\prime}\times\Delta m^{\prime}\right). \tag{5.15}\] #### Structure of the remaining section We briefly describe the contents of the remainder of the section. The broad aim is to show the convergence of each term on the right hand side of the approximated equation (4.13) to the corresponding term of the limit equation, that is, to show that the obtained limit process \(m^{\prime}\) satisfies the equation (3.7). Lemma 5.7 gives some bounds on the obtained limit processes \(m^{\prime}\) and \(u^{\prime}\). Lemma 5.8 shows that the paths of the process \(m^{\prime}\) are continuous in \(H^{1}\) (with the weak topology) and in \(X^{\beta}\), for \(\beta<\frac{1}{2}\). Lemma 5.9 shows the convergence of \(m^{\prime}_{n}\) to \(m^{\prime}\) in the space of continuous functions with values in \(L^{2}\). Lemma 5.10 shows some convergence results for the bump function \(\psi\). Lemma 5.13 shows the convergence of the terms containing the control process to the respective terms. Similarly, Lemma 5.14 shows the convergence for the terms containing \(G_{n}\). All these results are then collectively used to show that the process \(m^{\prime}\) is a weak martingale solution for the problem (3.7). **Lemma 5.7**.: _We have the following bounds._ 1. \[\sup_{0\leq t\leq T}|m^{\prime}(t)|_{L^{2}}\leq|m_{0}|_{L^{2}}\,\,\mathbb{P}^{\prime}-\text{a.s.}\] (5.16) 2. 
_There exists a constant_ \(C>0\) _such that_ \[\mathbb{E}^{\prime}\sup_{0\leq t\leq T}|m^{\prime}(t)|_{H^{1}}^{2p}\leq C.\] (5.17) 3. \[\mathbb{E}^{\prime}\left(\int_{0}^{T}|u^{\prime}(t)|_{L^{2}}^{2}\right)^{p}\leq K _{p}.\] (5.18) Proof.: For the first inequality, recall that the process \(m^{\prime}_{n}\) converges to \(m^{\prime}\) in \(C([0,T];L^{2})\)\(\mathbb{P}^{\prime}\)-a.s. Hence \(\mathbb{P}^{\prime}\)-a.s., \[\sup_{0\leq t\leq T}|m^{\prime}(t)|_{L^{2}}\leq\liminf_{n\to\infty}\sup_{0 \leq t\leq T}|m^{\prime}_{n}(t)|_{L^{2}}\leq|m_{0}|_{L^{2}}\,. \tag{5.19}\] This concludes the proof of the first inequality (5.16). Moreover, by the Fatou Lemma, see [59] \[\mathbb{E}^{\prime}\sup_{0\leq t\leq T}|m^{\prime}(t)|_{L^{2}}^{2}\leq\liminf _{n\to\infty}\mathbb{E}^{\prime}\sup_{0\leq t\leq T}|m^{\prime}_{n}(t)|_{L^{ 2}}^{2}\leq C. \tag{5.20}\] For the second inequality, extend the definition of the norm \(|\cdot|_{H^{1}}\) to the domain \(L^{2}\) as follows: \[|v|_{H^{1}}=\begin{cases}|v|_{H^{1}},&\text{ if }v\in H^{1},\\ \infty,&\text{ if }v\in L^{2},v\notin H^{1}.\end{cases} \tag{5.21}\] The extended maps are lower semicontinuous. Therefore pointwise convergence of \(m^{\prime}_{n}\) to \(m^{\prime}\) in \(C([0,T];L^{2})\) implies that \(\mathbb{P}^{\prime}\)-a.s. \[\sup_{0\leq t\leq T}|m^{\prime}(t)|_{H^{1}}\leq\liminf_{n\to\infty}\sup_{0 \leq t\leq T}|m^{\prime}_{n}(t)|_{H^{1}}.\] Thus by Fatou's Lemma followed by (5.5), there exists a constant \(C>0\) such that \[\mathbb{E}^{\prime}\sup_{0\leq t\leq T}|m^{\prime}(t)|_{H^{1}}^{ 2p} \leq\mathbb{E}^{\prime}\liminf_{n\to\infty}\sup_{0\leq t\leq T}|m^{ \prime}_{n}(t)|_{H^{1}}^{2p}\] \[\leq\liminf_{n\to\infty}\mathbb{E}^{\prime}\sup_{0\leq t\leq T}|m^ {\prime}_{n}(t)|_{H^{1}}^{2p}\leq C.\] That is, \[\mathbb{E}^{\prime}\sup_{0\leq t\leq T}|m^{\prime}(t)|_{H^{1}}^{2p}\leq C. \tag{5.22}\] The sequence \(u^{\prime}_{n}\) converges to \(u^{\prime}\) in \(L^{2}(0,T;L^{2})\)\(\mathbb{P}^{\prime}\)-a.s. Hence by the Fatou Lemma, \[\mathbb{E}^{\prime}\,|u^{\prime}|_{L^{2}(0,T;L^{2})}^{2p}\leq\liminf_{n\to \infty}\mathbb{E}^{\prime}\,|u^{\prime}_{n}|_{L^{2}(0,T;L^{2})}^{2p}\leq K_{p}. \tag{5.23}\] This concludes the proof of the Lemma (5.7). The uniform bound (5.5), along with the continuous embedding \(H^{1}\hookrightarrow L^{4}\) implies that the sequence \(\{m^{\prime}_{n}\}_{n\in\mathbb{N}}\) is uniformly integrable in \(L^{4}(\Omega^{\prime};L^{4}(0,T;L^{4}))\). Hence by the Vitali Convergence Theorem, see for example Section 6, Exercise 10 in [59], \[\mathbb{E}^{\prime}\int_{0}^{T}|m^{\prime}_{n}(t)-m^{\prime}|_{L^{4}}^{4}dt \to 0. \tag{5.24}\] Note that \[\mathbb{E}^{\prime}\left[\int_{0}^{T}|u^{\prime}_{n}(t)|_{L^{2}}^{2}dt\right] ^{4}\leq C, \tag{5.25}\] for some constant \(C\) independent of \(n\). Weak convergence of \(u^{\prime}_{n}\) to \(u^{\prime}\) pointwise (from (5.3)) implies that for any \(\phi\in L^{2}(\Omega^{\prime};L^{2}(0,T;L^{2}))\), we have \[\int_{0}^{t}\left\langle u^{\prime}_{n}(s,\omega^{\prime})-u^{\prime}(s,\omega ^{\prime}),\phi\right\rangle_{L^{2}}\,ds\to 0, \tag{5.26}\] as \(n\) tends to infinity. Using the bounds in (3.1) for \(p=4\), we can prove (see the proof of (9.44) in Section 9 for a similar calculation) that there exists a constant \(C>0\) such that \[\mathbb{E}^{\prime}\left|\int_{0}^{t}\left\langle u^{\prime}_{n}(s)-u^{\prime}(s ),\phi\right\rangle_{L^{2}}\,ds\right|^{\frac{4}{3}}\leq C. 
\tag{5.27}\] In fact, the following holds for any \(1\leq q\leq\frac{4}{3}\) \[\mathbb{E}^{\prime}\left|\int_{0}^{t}\left\langle u^{\prime}_{n}(s)-u^{\prime }(s),\phi\right\rangle_{L^{2}}\,ds\right|^{q}\leq C. \tag{5.28}\] Therefore in particular for \(q=1\), we have \[\mathbb{E}^{\prime}\left|\int_{0}^{t}\left\langle u^{\prime}_{n}(s)-u^{\prime }(s),\phi\right\rangle_{L^{2}}\,ds\right|\leq C, \tag{5.29}\] giving (5.30). Therefore, we have the following convergence as a result of the pointwise convergence of \(u^{\prime}_{n}\) to \(u^{\prime}\) and the Vitali Convergence Theorem. \[u^{\prime}_{n}\to u^{\prime}\text{ weakly in }L^{2}(\Omega^{\prime}:L^{2}(0,T;L^{2})), \tag{5.30}\] as \(n\) goes to infinity. **Lemma 5.8**.: _There exists an event \(\Omega^{\prime\prime}\subset\Omega^{\prime}\) of full \(\mathbb{P}^{\prime}\)-measure such that for every \(\omega^{\prime}\in\Omega^{\prime\prime}\), the following assertions hold:_ 1. _The path of_ \(m^{\prime}(\omega^{\prime})\) _is continuous in_ \(H^{1}\) _with the weak topology._ 2. _The path of_ \(m^{\prime}(\omega^{\prime})\) _is continuous in_ \(X^{\beta}\) _(with the norm topology) for_ \(\beta<\frac{1}{2}\)_._ Proof of Lemma 5.8.: The inequality (5.22) holds in \(L^{2}(\Omega^{\prime})\). Hence there exists a subset \(\Omega^{\prime\prime}\in\mathcal{F}^{\prime}_{0}\) of full \(\mathbb{P}^{\prime}\)-measure such that \[\sup_{t\in[0,T]}|m^{\prime}(t)(\omega^{\prime})|_{H^{1}}<\infty,\ \forall\, \omega^{\prime}\in\Omega^{\prime\prime}.\] Hence the quantity \(\sup_{t\in[0,T]}|m^{\prime}(t)(\omega^{\prime})|_{H^{1}}\) is finite, on an event \(\Omega^{\prime\prime}\subset\Omega^{\prime}\) of full \(\mathbb{P}^{\prime}\)-measure. Fix \(\omega^{\prime}\in\Omega^{\prime\prime}\). **Idea of the proof:** To show the continuity of the process \(m^{\prime}(\omega^{\prime})\) at \(t\in[0,T]\), we consider a sequence \(\left(t_{n}\right)_{n\in\mathbb{N}}\) in \([0,T]\) that converges to \(t\). We then show that the sequence \(\left(m^{\prime}(t_{n})(\omega^{\prime})\right)_{n\in\mathbb{N}}\) converges in the appropriate topology to \(m^{\prime}(t)(\omega^{\prime})\) as \(n\) goes to infinity. **Proof of (1):** Let \((t_{n})_{n\in\mathbb{N}}\) be a sequence in \([0,T]\) such that \[t_{n}\to t\in[0,T],\] in \([0,T]\). The sequence \(\left(m^{\prime}(t_{n})(\omega^{\prime})\right)_{n\in\mathbb{N}}\) is bounded in \(H^{1}\) and hence has a weakly convergent subsequence, say \(\left(m^{\prime}(t_{n_{k}})(\omega^{\prime})\right)_{n\in\mathbb{N}}\) such that \[m^{\prime}(t_{n_{k}})(\omega^{\prime})\to z \tag{5.31}\] weakly in \(H^{1}\) for some \(z\in H^{1}\). The space \(H^{1}\) is compactly embedded in the space \((H^{1})^{\prime}\). Hence as \(k\) goes to infinity (possibly along a subsequence), \[m^{\prime}(t_{n_{k}})(\omega^{\prime})\to z\text{ in }(H^{1})^{\prime}. \tag{5.32}\] That \(m^{\prime}(\omega^{\prime})\) is continuous with values in \((H^{1})^{\prime}\) implies that \[m^{\prime}(t_{n_{k}})(\omega^{\prime})\to m^{\prime}(t)(\omega^{\prime}) \tag{5.33}\] in \((H^{1})^{\prime}\). Hence by the uniqueness of limit, \[z=m^{\prime}(t)(\omega^{\prime}).\] From the above argument we can conclude that every subsequence of \(m^{\prime}(t_{n})(\omega^{\prime})\) thus has a further subsequence which converges to the same limit \(m^{\prime}(t)(\omega^{\prime})\). Hence \[m^{\prime}(t_{n})(\omega^{\prime})\to m^{\prime}(t)(\omega^{\prime})\text{ weakly in }H^{1}.\] The sequence of arguments can be repeated for any \(t\in[0,T]\). 
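We have used above the following standard fact, recorded here for the reader's convenience and stated without proof: if \((x_{n})_{n\in\mathbb{N}}\) is a sequence in a topological space \(X\) and \(x\in X\) is such that every subsequence of \((x_{n})_{n\in\mathbb{N}}\) admits a further subsequence converging to \(x\), then \[x_{n}\to x\text{ in }X.\]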
Hence \(m^{\prime}(\omega^{\prime})\) is continuous in \(H^{1}\) with the weak topology. **Sketch of a proof of (2).**\(H^{1}\) is compactly embedded in \(X^{\beta}\) for \(\beta<\frac{1}{2}\), see Lemma C.5 in Appendix C. Hence \[m^{\prime}(t_{n_{k}})(\omega^{\prime})\to m^{\prime}(t)(\omega^{\prime})\text{ in }X^{\beta}.\] Replicating the above arguments, the continuity of \(m^{\prime}(\omega^{\prime})\) in \(X^{\beta}\) can be shown. **Lemma 5.9**.: _We have the following convergence._ \[\mathbb{E}^{\prime}\sup_{t\in[0,T]}|m^{\prime}_{n}(t)-m^{\prime}(t)|_{L^{2}}^{ 8}\to 0\text{ as }n\to\infty.\] Proof of Lemma 5.9.: We use the following inequality in the subsequent calculations. For \(v\in H^{1}\), \[|v|_{L^{2}}\leq|v|_{H^{1}}^{\frac{1}{2}}|v|_{(H^{1})^{\prime}}^{\frac{1}{2}} \tag{5.34}\] **Sketch of a proof of the inequality (5.34).** Use the Gelfand triple \(H^{1}\hookrightarrow L^{2}\hookrightarrow(H^{1})^{\prime}\) and hence \[\left\langle a,b\right\rangle_{L^{2}}=\ _{H^{1}}\left\langle a,b\right\rangle_{(H^{1})^{ \prime}}\] for \(a\in H^{1}\) and \(b\in L^{2}\). Thus \[|\left\langle a,b\right\rangle_{L^{2}}|=|\ _{H^{1}}\left\langle a,b \right\rangle_{(H^{1})^{\prime}}|\leq|a|_{H^{1}}^{\frac{1}{2}}|b|_{(H^{1})^{ \prime}}^{\frac{1}{2}}.\] \[\mathbb{E}^{\prime}\sup_{t\in[0,T]}|m^{\prime}_{n}(t)-m^{\prime}( t)|_{L^{2}}^{8} \leq C\mathbb{E}^{\prime}\sup_{t\in[0,T]}\left[|m^{\prime}_{n}(t)- m^{\prime}(t)|_{(H^{1})^{\prime}}^{4}|m^{\prime}_{n}(t)-m^{\prime}(t)|_{H^{1}}^{4}\right]\] \[\leq C\mathbb{E}^{\prime}\sup_{t\in[0,T]}\left[(|m^{\prime}_{n}(t )|_{H^{1}}^{4}+|m^{\prime}(t)|_{H^{1}}^{4})|m^{\prime}_{n}(t)-m^{\prime}(t)|_{( H^{1})^{\prime}}^{4}\right]\] \[\leq C\left[\left(\mathbb{E}^{\prime}\sup_{t\in[0,T]}|m^{\prime} _{n}(t)|_{H^{1}}^{8}+\mathbb{E}^{\prime}\sup_{t\in[0,T]}|m^{\prime}(t)|_{H^{1} }^{8}\right)^{\frac{1}{2}}\right]\boldsymbol{\cdot}\] \[\quad\left(\mathbb{E}^{\prime}\sup_{t\in[0,T]}|m^{\prime}_{n}(t)- m^{\prime}(t)|_{(H^{1})^{\prime}}^{4}\right)^{\frac{1}{2}}\] \[\leq C\left(\mathbb{E}^{\prime}\sup_{t\in[0,T]}|m^{\prime}_{n}(t) -m^{\prime}(t)|_{(H^{1})^{\prime}}^{4}\right)^{\frac{1}{2}}.\] The above calculation uses the inequality (5.34) followed by Cauchy-Schwartz inequality and concluding with applying the bounds (5.4) and (5.5). Moreover, there exists a constant \(C>0\) such that \(\mathbb{P}^{\prime}\)-a.s. \[\sup_{t\in[0,T]}|m^{\prime}_{n}(t)-m^{\prime}(t)|_{(H^{1})^{\prime}}^{8} \leq\sup_{t\in[0,T]}|m^{\prime}_{n}(t)|_{(H^{1})^{\prime}}^{8}+ \sup_{t\in[0,T]}|m^{\prime}_{n}(t)|_{(H^{1})^{\prime}}^{8}\] \[\leq C\left(\sup_{t\in[0,T]}|m^{\prime}_{n}(t)|_{H^{1}}^{8}+\sup _{t\in[0,T]}|m^{\prime}_{n}(t)|_{H^{1}}^{8}\right).\] The last step along with the bound (5.5) gives us a bound for using the Lebesgue convergence theorem, thus concluding the proof. **Lemma 5.10**.: _We have the following convergences._ \[\lim_{n\to\infty}\mathbb{E}^{\prime}\sup_{t\in[0,T]}\left|\psi_{0}\left(|m^{ \prime}_{n}(t)|_{L^{\infty}}\right)-\psi_{0}\left(|m^{\prime}(t)|_{L^{\infty} }\right)\right|^{4}=0. \tag{1}\] \[\lim_{n\to\infty}\mathbb{E}^{\prime}\sup_{t\in[0,T]}\big{|}\psi_{0}\big{(}|P_{n} \left(m^{\prime}_{n}(t)\times h\right)|_{L^{\infty}}\big{)}-\psi_{0}\left(|m^{ \prime}(t)\times h|_{L^{\infty}}\right)\big{|}^{4}=0. 
\tag{2}\] \[\lim_{n\to\infty}\mathbb{E}^{\prime}\sup_{t\in[0,T]}\big{|}\psi_{0}\big{(}|P_{n} \big{(}m^{\prime}_{n}(t)\times(m^{\prime}_{n}(t)\times h)\big{)}\big{|}_{L^{ \infty}}\big{)}-\psi_{0}\big{(}|m^{\prime}(t)\times(m^{\prime}(t)\times h)|_{L^ {\infty}}\big{)}\big{|}^{4}=0. \tag{3}\] _In particular,_ \[\lim_{n\to\infty}\mathbb{E}^{\prime}\sup_{t\in[0,T]}\big{|}\psi\big{(}m^{ \prime}_{n}(t)\big{)}-\psi\big{(}m^{\prime}(t)\big{)}\big{|}^{4}=0. \tag{5.35}\] Proof of Lemma 5.10.: We state a couple of results that will be used in the proof that follows. 1. Since, \(\psi_{0}\in C^{1}_{c}(D)\), the following holds for any \(x,y\in\mathbb{R}\). \[|\psi_{0}(x)-\psi_{0}(y)|\leq\sup_{z\in\mathbb{R}}|D\psi(z)||x-y|.\] 2. We refer the reader to Theorem 5.8, [1], for the following inequality. \[|v|_{L^{\infty}}\leq C|v|_{L^{2}}^{\frac{1}{2}}|v|_{H^{1}}^{\frac{1}{2}},\ v\in H^{1}.\] (5.36) We show the first convergence in Lemma 5.10. The other two can be shown similarly. \[\mathbb{E}^{\prime}\sup_{t\in[0,T]}\big{|}\psi_{0}\left(|m^{\prime }_{n}(t)|_{L^{\infty}}\right)-\psi_{0}\big{(}|m^{\prime}(t)|_{L^{\infty}} \big{)}\big{|}^{4} \leq C\mathbb{E}^{\prime}\sup_{t\in[0,T]}||m^{\prime}_{n}(t)|_{L^{ \infty}}-|m^{\prime}(t)|_{L^{\infty}}^{4}\] \[\leq C\mathbb{E}^{\prime}\sup_{t\in[0,T]}|m^{\prime}_{n}(t)-m^{ \prime}(t)|_{L^{\infty}}^{4}\] \[\leq C\mathbb{E}^{\prime}\sup_{t\in[0,T]}\Big{(}|m^{\prime}_{n}(t )-m^{\prime}(t)|_{L^{2}}^{2}\left|m^{\prime}_{n}(t)-m^{\prime}(t)\right|_{H^{1 }}^{2}\Big{)}\] \[\leq C\mathbb{E}^{\prime}\sup_{t\in[0,T]}|m^{\prime}_{n}(t)-m^{ \prime}(t)|_{L^{2}}^{2}\left(|m^{\prime}_{n}(t)|_{H^{1}}^{2}+|m^{\prime}(t)|_{ H^{1}}^{2}\right)\] \[\leq\left(\mathbb{E}^{\prime}\sup_{t\in[0,T]}|m^{\prime}_{n}(t)-m^ {\prime}(t)|_{L^{2}}^{4}\right)^{\frac{1}{2}}\boldsymbol{\cdot}\] \[\left(\mathbb{E}^{\prime}\sup_{t\in[0,T]}\left(|m^{\prime}_{n}(t) |_{H^{1}}^{4}+|m^{\prime}(t)|_{H^{1}}^{4}\right)\right)^{\frac{1}{2}}.\] The above calculation uses triangle inequality in the first step followed by (5.36) and concluding with the Holder inequality. By the bounds (5.4), (5.5) and Lemma 5.9, the right hand side of the above inequality goes to \(0\) as \(n\) goes to infinity. **Lemma 5.11**.: _Let \(\phi\in L^{4}(\Omega^{\prime};L^{4}(0,T;H^{1}))\). Then_ \[\lim_{n\to\infty}\mathbb{E}^{\prime}\int_{0}^{T}\left\langle m^{\prime}_{n}(s )\times\Delta m^{\prime}_{n}(s),\phi(s)\right\rangle_{L^{2}}\,ds=\mathbb{E}^{ \prime}\int_{0}^{T}\left\langle m^{\prime}(s)\times\Delta m^{\prime}(s),\phi (s)\right\rangle_{L^{2}}\,ds. \tag{5.37}\] Proof of Lemma 5.11.: By the uniform bounds (5.4) and (5.5), there exists a subsequence of \((m^{\prime}_{n})_{n\in\mathbb{N}}\) (denoted by the same sequence) such that \[\nabla m^{\prime}_{n}\to\nabla m^{\prime}\ \text{weakly in}\ L^{2}\big{(}\Omega^{ \prime};(L^{2}(0,T);L^{2})\big{)}, \tag{5.38}\] as \(n\) goes to infinity. Now, the use of Holder's inequality and Agmon's inequality gives us the following set of inequalities. 
\[\left|\mathbb{E}^{\prime}\int_{0}^{T}\left\langle m^{\prime}_{n}(s)\times \Delta m^{\prime}_{n}(s),\phi(s)\right\rangle_{L^{2}}-\left\langle m^{\prime}(s )\times\Delta m^{\prime}(s),\phi(s)\right\rangle_{L^{2}}\,ds\right|\] \[\leq C\mathbb{E}^{\prime}\sup_{t\in[0,T]}|m^{\prime}_{n}(s)|_{H^{1}} \sup_{s\in[0,T]}\left(|m^{\prime}_{n}(s)|_{H^{1}}+m^{\prime}(s)|_{H^{1}}\right)^{ \frac{1}{2}}\left(\int_{0}^{T}|\phi(s)|_{H^{1}}^{2}\ ds\right)^{\frac{1}{2}}\] \[\quad\quad\bullet\left(\int_{0}^{T}|m^{\prime}_{n}(s)-m^{\prime}( s)|_{L^{2}}\,ds\right)^{\frac{1}{2}}\] \[+\left|\mathbb{E}^{\prime}\int_{0}^{T}\left\langle\nabla(m^{ \prime}_{n}(s)-m^{\prime}(s)),\nabla\phi\times m^{\prime}(s)\right\rangle_{L^{2 }}\,ds\right|.\] The bounds (5.4) and (5.5) along with the convergence of \(m^{\prime}_{n}\) imply that the first term in the above inequality goes to \(0\) as \(n\) goes to \(\infty\). Due to the continuous embedding \(H^{1}\hookrightarrow L^{\infty}\), there exists a constant \(C>0\) such that \[\mathbb{E}^{\prime}\int_{0}^{T}\left|\nabla\phi(s)\times m^{ \prime}(s)\right|_{L^{2}}^{2} ds \leq\mathbb{E}^{\prime}\int_{0}^{T}\left|\nabla\phi(s)\right|_{L^ {2}}^{2}|m^{\prime}(s)|_{L^{\infty}}^{2}\ ds\] \[\leq C\mathbb{E}^{\prime}\left[\left(\sup_{t\in[0,T]}|m^{\prime}( s)|_{H^{1}}^{2}\right)\int_{0}^{T}\left|\nabla\phi(s)\right|_{L^{2}}^{2}ds\right]\] \[\leq C\left[\mathbb{E}^{\prime}\left(\sup_{t\in[0,T]}|m^{\prime}( s)|_{H^{1}}^{4}\right)\right]^{\frac{1}{2}}\left[\mathbb{E}^{\prime}\left(\int_{0}^{T} |\nabla\phi(s)|_{L^{2}}^{2}\,ds\right)^{2}\right]^{\frac{1}{2}}<\infty.\] The above inequality along with the bound on \(|m^{\prime}|_{H^{1}}\) implies that the second term also goes to \(0\) as \(n\) goes to \(\infty\). The right hand side of the above inequality goes to \(0\) as \(n\) goes to \(\infty\), thus concluding the proof. **Lemma 5.12**.: _Let \(\phi\in L^{4}(\Omega^{\prime};L^{4}(0,T;H^{1}))\). Then_ \[\lim_{n\to\infty} \mathbb{E}^{\prime}\int_{0}^{T}\left\langle m_{n}(s)\times(m^{ \prime}_{n}(s)\times\Delta m^{\prime}_{n}(s)),\phi\right\rangle_{L^{2}}\,ds\] \[=\mathbb{E}^{\prime}\int_{0}^{T}\left\langle m^{\prime}(s)\times (m^{\prime}(s)\times\Delta m^{\prime}(s)),\phi\right\rangle_{L^{2}}\,ds.\] Proof of Lemma 5.12.: By the triangle inequality, we have \[\left|\mathbb{E}^{\prime}\int_{0}^{T}\left\langle m_{n}(s)\times (m^{\prime}_{n}(s)\times\Delta m^{\prime}_{n}(s)),\phi\right\rangle_{L^{2}}ds- \left\langle m^{\prime}(s)\times(m^{\prime}(s)\times\Delta m^{\prime}(s)), \phi\right\rangle_{L^{2}}\,ds\right|\] \[\leq \bigg{|}\mathbb{E}^{\prime}\int_{0}^{T}\left\langle(m^{\prime}_ {n}(s)-m^{\prime}(s))\times(m^{\prime}_{n}(s)\times\Delta m^{\prime}_{n}(s)), \phi\right\rangle_{L^{2}}\,ds\bigg{|}\] \[+\left|\mathbb{E}^{\prime}\int_{0}^{T}\left\langle m^{\prime}(s) \times(m^{\prime}_{n}(s)\times\Delta m^{\prime}_{n}(s)-m^{\prime}(s)\times \Delta m^{\prime}(s)),\phi\right\rangle_{L^{2}}\,ds\bigg{|}. \tag{5.39}\] The first term of (5.3) goes to \(0\) as \(n\) goes to infinity as follows. 
\[\left|\mathbb{E}^{\prime}\int_{0}^{T}\left\langle(m^{\prime}_{n} (s)-m^{\prime}(s))\times(m^{\prime}_{n}(s)\times\Delta m^{\prime}_{n}(s)),\phi (s)\right\rangle_{L^{2}}\,ds\right|\] \[\leq\mathbb{E}^{\prime}\int_{0}^{T}\left|\left\langle(m^{\prime} _{n}(s)-m^{\prime}(s))\times(m^{\prime}_{n}(s)\times\Delta m^{\prime}_{n}(s)), \phi(s)\right\rangle_{L^{2}}\right|ds\] \[\leq\left(\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(s) -m^{\prime}(s)\right|_{L^{4}}^{4}\,ds\right)^{\frac{1}{4}}\left(\mathbb{E}^{ \prime}\int_{0}^{T}\left|\phi(s)\right|_{L^{4}}^{4}\,ds\right)^{\frac{1}{4}} \left(\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(s)\times\Delta m^{ \prime}_{n}(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\] \[\leq C\left(\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(s )-m^{\prime}(s)\right|_{L^{4}}^{4}ds\right)^{\frac{1}{2}}. \tag{5.40}\] By the convergence in (5.24) the right hand side of the above inequality (5.40) goes to \(0\) as \(n\) goes to infinity. The above bound uses the inequality (5.36) followed by the use of the generalized Holder inequality. More precisely, \[|v_{1}v_{2}v_{3}|_{L^{1}}\leq|v_{1}|_{L^{4}}|v_{2}|_{L^{4}}|v_{3}|_{L^{2}}\text { for }v_{1},v_{2}\in L^{4},v_{3}\in L^{2}.\] For the second term, we have the following. \[\left|\mathbb{E}^{\prime}\int_{0}^{T}\left\langle m^{\prime}(s) \times(m^{\prime}_{n}(s)\times\Delta m^{\prime}_{n}(s)-m^{\prime}(s)\times \Delta m^{\prime}(s)),\phi(s)\right\rangle_{L^{2}}\,ds\right|\] \[= \bigg{|}\mathbb{E}^{\prime}\int_{0}^{T}\left\langle(m^{\prime}_{n} (s)\times\Delta m^{\prime}_{n}(s)-m^{\prime}(s)\times\Delta m^{\prime}(s)),m^{ \prime}(s)\times\phi(s)\right\rangle_{L^{2}}\,ds\bigg{|}. \tag{5.41}\] For \(v_{1},v_{2}\in H^{1}\), using the continuous embedding \(H^{1}\hookrightarrow L^{\infty}\), we can show that there exists a constant \(C>0\) such that \[\left|v_{1}v_{2}\right|_{H^{1}}\leq C\left|v_{1}\right|_{H^{1}}\left|v_{2} \right|_{H^{1}}. \tag{5.42}\] Therefore, \[\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}(s)\times\phi(s) \right|_{H^{1}}^{2}\,ds \leq C\,\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}(s)\right|_ {H^{1}}^{2}\left|\phi(s)\right|_{H^{1}}^{2}\,ds\] \[\leq C\,\mathbb{E}^{\prime}\left[\sup_{s\in[0,T]}\left|m^{\prime} (s)\right|_{H^{1}}^{2}\int_{0}^{T}\left|\phi(s)\right|_{H^{1}}^{2}\,ds\right]\] \[\leq \mathbb{E}^{\prime}\int_{0}^{T}\left|\psi(m^{\prime}(s))\left\langle \left(m^{\prime}_{n}(s)-m^{\prime}(s)\right)\times\left(m^{\prime}_{n}(s)\times u ^{\prime}_{n}(s)\right),\phi\right\rangle_{L^{2}}\left|\,ds\right.\] \[+\left|\mathbb{E}^{\prime}\int_{0}^{T}\psi(m^{\prime}(s))\left\langle m ^{\prime}(s)\times\left((m^{\prime}_{n}(s)-m^{\prime}(s)\right)\times u^{ \prime}_{n}(s)\right),\phi\right\rangle_{L^{2}}\left|\,ds\right|\] \[+\left|\mathbb{E}^{\prime}\int_{0}^{T}\psi(m^{\prime}(s))\left\langle m ^{\prime}(s)\times\left(m^{\prime}(s)\times(u^{\prime}_{n}(s)-u^{\prime}(s) )\right),\phi\right\rangle_{L^{2}}\left.ds\right|. \tag{5.44}\] **Claim:** All the three terms on the right hand side of the above inequality go to \(0\) as \(n\) goes to infinity. We use the assumption on \(\phi\) along with the fact that the space \(H^{1}\) is continuously embedded into the space \(L^{\infty}\). By (5.24), the sequence \(m^{\prime}_{n}\) converges to \(m^{\prime}\) in \(L^{4}\left(\Omega^{\prime};L^{4}\left(0,T;L^{4}\right)\right)\). 
Hence for the first term on the right hand side of (5.44), it is sufficient to show that \((m^{\prime}_{n}\times u^{\prime}_{n})\times\phi\in L^{\frac{4}{3}}\left(\Omega ^{\prime};L^{\frac{4}{3}}\left(0,T;L^{\frac{4}{3}}\right)\right)\). Note that \[\mathbb{E}^{\prime}\int_{0}^{T}\left|\,(m^{\prime}_{n}(s)\times u ^{\prime}_{n}(s))\times\phi\right|_{L^{\frac{4}{3}}}^{\frac{4}{3}}ds\] \[\leq\mathbb{E}^{\prime}\int_{0}^{T}|m^{\prime}_{n}(s)|_{L^{ 4}}^{\frac{4}{3}}|u_{n}(s)|_{L^{2}}^{\frac{4}{3}}|\phi|_{L^{\infty}}^{\frac{4}{ 3}}\,ds\] \[\leq C\mathbb{E}^{\prime}|\phi|_{H^{1}}^{\frac{4}{3}}\int_{0}^{T}|m_{ n}^{\prime}(s)|_{H^{1}}^{4}\,|u_{n}^{\prime}(s)|_{L^{2}}^{\frac{4}{3}}\ ds\ (\text{Since}\ H^{1}\hookrightarrow L^{\infty}\hookrightarrow L^{4})\] \[\leq C\left(\mathbb{E}^{\prime}\,|\phi|_{H^{1}}^{4}\right)^{\frac {1}{3}}\left(\mathbb{E}^{\prime}\int_{0}^{T}|m_{n}^{\prime}(s)|_{H^{1}}^{4}\ ds\right)^{\frac{1}{3}}\left(\mathbb{E}^{\prime}\left( \int_{0}^{T}|u_{n}^{\prime}(s)|_{L^{2}}^{2}\ ds\right)^{2}\right)^{\frac{1}{3}}.\] The right hand side of the above inequality is finite by the bounds (5.17) and (5.25). The second term follows similarly. The third term goes to zero due to the cut-off function and the weak convergence (5.30). Hence all the three terms on the right hand side of the inequality (5.44) go to \(0\) as \(n\) goes to infinity and the claim holds. The following proposition proves the convergence of the terms corresponding to \(G_{n}(m_{n}^{\prime})\). **Lemma 5.14**.: \[\lim_{n\to\infty}\mathbb{E}^{\prime}\sup_{s\in[0,T]}|G_{n}(m_{n}^{\prime}(s) )-G(m^{\prime}(s))|_{L^{2}}^{2}=0.\] Proof of Lemma 5.14.: The proof follows from Lemma 5.9 and Lemma 5.10. Define the following \(L^{2}\)-valued random variables \(\{M_{n}(t)\}_{t\in[0,T]}\) and \(\{M_{n}^{\prime}(t)\}_{t\in[0,T]}\) on \((\Omega,\mathbb{F},\mathbb{P})\) and \((\Omega^{\prime},\mathbb{F}^{\prime},\mathbb{P}^{\prime})\), respectively by \[M_{n}(t):= m_{n}(t)-m_{n}(0)-\int_{0}^{t}\left[F_{n}^{1}(m_{n}(s))-\alpha\,F_{ n}^{2}(m_{n}(s))+F_{n}^{3}(m_{n}(s))\right.\] \[\left.+\frac{1}{2}\psi\big{(}m_{n}(s)\big{)}^{2}\left[DG\big{(}m_ {n}(s)\big{)}\right]\left[G_{n}\big{(}m_{n}(s)\big{)}\right]\bigg{]}\,ds, \tag{5.45}\] and \[M_{n}^{\prime}(t):= m_{n}^{\prime}(t)-m_{n}^{\prime}(0)-\int_{0}^{t}[F_{n}^{1}(m_{n}^ {\prime}(s))-\alpha\,F_{n}^{2}(m_{n}^{\prime}(s))+F_{n}^{3}(m_{n}^{\prime}(s))\] \[+\frac{1}{2}\psi\big{(}m_{n}^{\prime}(s)\big{)}^{2}\left[DG\big{(} m_{n}^{\prime}(s)\big{)}\right]\left[G_{n}\big{(}m_{n}^{\prime}(s)\big{)} \right]\bigg{]}\,ds, \tag{5.46}\] The aim here is to show that for each \(t\in[0,T]\), \(M_{n}^{\prime}(t)\) converges in some sense to \(M^{\prime}(t)\), where \(M^{\prime}(t)\) is defined as \[M^{\prime}(t):=m^{\prime}(t)-m_{0}^{\prime}-\int_{0}^{t}\left[m^ {\prime}(s)\times\Delta m^{\prime}(s)-\alpha\,m^{\prime}(s)\times(m^{\prime}(s )\times\Delta m^{\prime}(s))+m^{\prime}(s)\times u^{\prime}(s)\right.\] \[\left.-\alpha\,\psi(m^{\prime}(s))m^{\prime}(s)\times\big{(}m^{ \prime}(s)\times u^{\prime}(s)\big{)}+\frac{1}{2}\psi\big{(}m^{\prime}(s) \big{)}^{2}\left[DG\big{(}m_{n}^{\prime}(s)\big{)}\right]\left[G\big{(}m_{n}^{ \prime}(s)\big{)}\right]\right]ds. \tag{5.47}\] The main contents of the remainder of this section will be as follows: 1. Showing the convergence of \(M_{n}^{\prime}(t)\) to \(M^{\prime}(t)\) in some sense (Lemma 5.15). 2. Showing that the process \(W^{\prime}\), obtained as a limit of Wiener processes \(W_{n}^{\prime}\) is a Wiener process (Lemma 5.16). 3. 
Showing that the limit \(M^{\prime}\) is indeed an Ito's integral (with respect to the process \(W^{\prime}\)) as required. This will be done in two steps: first we prove Lemma 5.17, which shows that \(M_{n}^{\prime}\) converges to the required stochastic integral and then comparing this with Lemma 5.15 gives us the required result. **Lemma 5.15**.: _For \(\phi\in L^{4}(\Omega;H^{1})\), and \(t\in[0,T]\)_ \[\mathbb{E}^{\prime}\left\langle M_{n}^{\prime}(t),\phi\right\rangle_{L^{2}} \to\mathbb{E}^{\prime}\left\langle M^{\prime}(t),\phi\right\rangle_{L^{2}}\ \text{as}\ n\to\infty.\] Proof of Lemma 5.15.: We show the convergence of the terms individually. The previously stated lemmata, viz. Lemma 5.9, Lemma 5.10, Lemma 5.11, Lemma 5.13, Lemma 5.12, Lemma 5.14 show the convergence of some of the terms. The terms that remain are the ones corresponding to the Stratonovich to Ito correction term. The convergence follows from the convergence described in Lemma 5.9 and Lemma 5.10. We show the calculations for one term. Rest of the terms follow similarly. **Claim:** \[\lim_{n\to\infty}\mathbb{E}^{\prime}\int_{0}^{T}\left[\left|\psi^{2}(m^{\prime}_ {n}(s))P_{n}\big{(}P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times h)) \times(m^{\prime}_{n}(s)\times h)\big{)}\right.\right.\] \[\left.\left.\qquad-\psi^{2}\big{(}m^{\prime}(s)\big{)}m^{\prime}(s)\times(m^{ \prime}(s)\times h)\times(m^{\prime}(s)\times h)\right|_{(H^{1})^{\prime}}^{2} \right]ds=0.\] Let \(v_{1},v_{2},w_{1},w_{2}\in H_{n}\). Then \[\left.\left|\psi(v_{1})w_{1}-\psi(v_{1})w_{1}\right|_{L^{2}}\leq\left|\left[ \psi(v_{1})-\psi(v_{2})\right]w_{1}\right|_{L^{2}}+\left|\psi(v_{2})\left[w_{ 1}-w_{2}\right]\right|_{L^{2}}.\] The convergence in the claim can be seen into two parts, one with the convergence for the cut-off and one with the convergence for the remaining term. For the convergence of the cut-off function, we have Lemma 5.10. We therefore continue with the remaining part. Note that the function \(\psi\) need not be written here since it takes values in \([0,1]\) and hence does not affect the inequalities. The convergence can be split up into the following parts. 
\[\left.\begin{aligned} &\left|P_{n}(m^{\prime}_{n}(s)\times(m^{ \prime}_{n}(s)\times h))\times(m^{\prime}_{n}(s)\times h))-m^{\prime}(s) \times(m^{\prime}(s)\times h)\times(m^{\prime}(s)\times h)\right|_{(H^{1})^{ \prime}}\\ \leq&\left|P_{n}(P_{n}(m^{\prime}_{n}(s)\times(m^{ \prime}_{n}(s)\times h))\times(m^{\prime}_{n}(s)\times h))-P_{n}(m^{\prime}(s) \times(m^{\prime}(s)\times h)\times(m^{\prime}(s)\times h))\right|_{(H^{1})^{ \prime}}\\ &+\left|P_{n}(m^{\prime}(s)\times(m^{\prime}(s)\times h)\times(m^ {\prime}(s)\times h))-m^{\prime}(s)\times(m^{\prime}(s)\times h)\times(m^{ \prime}(s)\times h)\right|_{(H^{1})^{\prime}}\\ \leq&\left|P_{n}(P_{n}(m^{\prime}_{n}(s)\times(m^{ \prime}_{n}(s)\times h))\times(m^{\prime}_{n}(s)\times h)-m^{\prime}(s)\times( m^{\prime}(s)\times h)\times(m^{\prime}(s)\times h)\right|_{(H^{1})^{\prime}}\\ \leq&\left|P_{n}(m^{\prime}_{n}(s)\times(m^{\prime} (s)\times h))\times(m^{\prime}_{n}(s)\times h)-m^{\prime}(s)\times(m^{\prime}( s)\times h)\times(m^{\prime}(s)\times h)\right|_{(H^{1})^{\prime}}\\ &+\left|P_{n}(m^{\prime}(s)\times(m^{\prime}(s)\times h)\times(m^ {\prime}(s)\times h))-m^{\prime}(s)\times(m^{\prime}(s)\times h)\times(m^{ \prime}(s)\times h)\right|_{(H^{1})^{\prime}}.\end{aligned}\right.\] Thus, \[\mathbb{E}^{\prime}\int_{0}^{T}\left|P_{n}(m^{\prime}_{n}(s)\times( m^{\prime}_{n}(s)\times h))\times(m^{\prime}_{n}(s)\times h)-m^{\prime}(s) \times(m^{\prime}(s)\times h)\times(m^{\prime}(s)\times h)\right|_{(H^{1})^{ \prime}}ds\] \[\leq \mathbb{E}^{\prime}\int_{0}^{T}\left|P_{n}(m^{\prime}_{n}(s) \times(m^{\prime}_{n}(s)\times h))\times(m^{\prime}_{n}(s)\times h)-(m^{\prime} (s)\times(m^{\prime}(s)\times h))\times(m^{\prime}_{n}(s)\times h)\right.\] \[+(m^{\prime}(s)\times(m^{\prime}(s)\times h))\times(m^{\prime}_{n }(s)\times h)-m^{\prime}(s)\times(m^{\prime}(s)\times h)\times(m^{\prime}(s) \times h)\right|_{(H^{1})^{\prime}}ds\] \[\leq \mathbb{E}^{\prime}\int_{0}^{T}\left|(P_{n}(m^{\prime}_{n}(s) \times(m^{\prime}_{n}(s)\times h))-(m^{\prime}(s)\times(m^{\prime}(s)\times h)) \right)\times(m^{\prime}_{n}(s)\times h)\right|_{(H^{1})^{\prime}}ds\] \[+\mathbb{E}^{\prime}\int_{0}^{T}\left|(m^{\prime}(s)\times(m^{ \prime}(s)\times h))\times(m^{\prime}_{n}(s)\times h)-m^{\prime}(s)\times(m^{ \prime}(s)\times h)\times(m^{\prime}(s)\times h)\right|_{(H^{1})^{\prime}}\] Using the following inequality \[\left.\left|v_{1}v_{2}v_{3}\right|_{(H^{1})^{\prime}}\leq C|v_{1}v_{2}v_{3}|_{L^ {1}}\leq C|v_{1}|_{L^{2}}|v_{2}|_{L^{2}}|v_{3}|_{L^{\infty}}, \tag{5.48}\] (for \(v_{1},v_{2}\in L^{2}\) and \(v_{3}\in L^{\infty}\)) we observe that for \(s\in[0,T]\) and \(n\in\mathbb{N}\), \[\left.\begin{aligned} &\left|(P_{n}(m^{\prime}_{n}(s)\times(m^{ \prime}_{n}(s)\times h))-(m^{\prime}(s)\times(m^{\prime}(s)\times h)))\times(m^{ \prime}_{n}(s)\times h)\right|_{(H^{1})^{\prime}}\\ &\leq\left|P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times h) )-(m^{\prime}(s)\times(m^{\prime}(s)\times h))\right|_{L^{2}}|m^{\prime}_{n}(s)| _{L^{2}}|h|_{L^{\infty}}\\ &\leq C(h)\sup_{s\in[0,T]}\left|P_{n}(m^{\prime}_{n}(s)\times(m^{ \prime}_{n}(s)\times h))-(m^{\prime}(s)\times(m^{\prime}(s)\times h))\right|_{L^ {2}}\sup_{s\in[0,T]}|m^{\prime}_{n}(s)|_{L^{2}}.\end{aligned}\right.\] The right hand side of the above inequality goes to \(0\) as \(n\) goes to infinity. This follows from the argument mentioned next along with the use of Lebesgue dominated convergence theorem, which is again justified in the following steps. 
Using the fact that \(P_{n}\) is a projection operator on \(L^{2}\) and the Holder inequality, we get \[\mathbb{E}^{\prime}\sup_{s\in[0,T]}\left|P_{n}(m^{\prime}_{n}(s)\times(m^{ \prime}_{n}(s)\times h))\right|_{L^{2}}\leq\mathbb{E}^{\prime}\sup_{s\in[0,T]} \left|m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times h)\right|_{L^{2}}\] \[\leq C\oplus_{s\in[0,T]}|m^{\prime}_{n}(s)|_{L^{2}}|m^{\prime}_{n}(s) |_{L^{2}}|m^{\prime}_{n}(s)|_{H^{1}}|h|_{L^{\infty}}\] \[\leq C|h|_{L^{\infty}}|m(0)|_{L^{2}}\mathbb{E}^{\prime}\sup_{s\in[0, T]}|m^{\prime}_{n}(s)|_{H^{1}}.\] This along with the bound (5.5) give us a uniform bound for using the Lebesgue Dominated Convergence Theorem. \[|(m^{\prime}(s)\times(m^{\prime}(s)\times h))\times(m^{\prime}_{n }(s)\times h-m^{\prime}(s)\times h)|_{(H^{1})^{\prime}}\] \[\leq|(m^{\prime}(s)\times(m^{\prime}(s)\times h))|_{L^{2}}|m^{ \prime}_{n}(s)-m^{\prime}(s)|_{L^{2}}|h|_{L^{\infty}}\] \[\leq\sup_{s\in[0,T]}|m^{\prime}(s)|_{L^{2}}\sup_{s\in[0,T]}|m^{ \prime}(s)|_{L^{\infty}}\sup_{s\in[0,T]}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^ {2}}|h|_{L^{\infty}}\] \[\leq C\sup_{s\in[0,T]}|m^{\prime}(s)|_{L^{2}}\sup_{s\in[0,T]}|m^{ \prime}(s)|_{H^{1}}\sup_{s\in[0,T]}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{2}}|h |_{L^{\infty}}\] \[\leq CC(h)|m_{0}|_{L^{2}}\sup_{s\in[0,T]}|m^{\prime}_{n}(s)-m^{ \prime}(s)|_{L^{2}}.\] Thus, \[\mathbb{E}^{\prime}|(m^{\prime}(s)\times(m^{\prime}(s)\times h)) \times(m^{\prime}_{n}(s)\times h-m^{\prime}(s)\times h)|_{(H^{1})^{\prime}}^{2}\] \[\qquad\leq CC(h)|m_{0}|_{L^{2}}^{2}\mathbb{E}^{\prime}\sup_{s\in [0,T]}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{2}}^{2}.\] The right hand side of the above inequality goes to \(0\) by Lemma 5.9. Hence \[\lim_{n\to\infty}\mathbb{E}^{\prime}\left|(P_{n}(m^{\prime}_{n}(s)\times(m^{ \prime}_{n}(s)\times h))-(m^{\prime}(s)\times(m^{\prime}(s)\times h)))\times( m^{\prime}_{n}(s)\times h)\right|_{(H^{1})^{\prime}}\,ds=0\] and \[\lim_{n\to\infty}\mathbb{E}^{\prime}\int_{0}^{T}|(m^{\prime}(s)\times(m^{ \prime}(s)\times h))\times(m^{\prime}_{n}(s)\times h-m^{\prime}(s)\times h)|_{ (H^{1})^{\prime}}\,ds=0.\] Concerning the remaining term, the calculations can be done as follows. For \(s\in[0,T]\), \[\lim_{n\to\infty}|P_{n}(m^{\prime}(s)\times(m^{\prime}(s)\times h))-m^{\prime }(s)\times(m^{\prime}(s)\times h)|=0.\] The above pointwise convergence and the uniform bound \[\mathbb{E}^{\prime}\int_{0}^{T}|m^{\prime}(s)\times(m^{\prime}(s)\times h)|_{( H^{1})^{\prime}}\,ds\leq C(h)\mathbb{E}^{\prime}|m_{0}|_{L^{2}}^{2}\] together with the Lebesgue Dominated Convergence Theorem gives \[\lim_{n\to\infty}\mathbb{E}^{\prime}\int_{0}^{T}\big{|}P_{n}\big{(}m^{\prime }(s)\times(m^{\prime}(s)\times h)\times\big{(}m^{\prime}(s)\times h\big{)} \big{)}-m^{\prime}(s)\times\big{(}m^{\prime}(s)\times h\big{)}\times\big{(}m^ {\prime}(s)\times h\big{)}\big{|}_{(H^{1})^{\prime}}\,ds\] \[\qquad=0.\] Combining the above calculations with the Lemma 5.10 justifies the claim. We now show that the driving process \(W^{\prime}\) is a Wiener process. **Lemma 5.16**.: _The process \(W^{\prime}\) is a Wiener process on the space \((\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{P}^{\prime})\). Also, \(W^{\prime}_{n}(t)-W^{\prime}_{n}(s)\) is independent of the \(\sigma\)- algebra generated by \(m^{\prime}_{n}(r),u^{\prime}(r),W^{\prime}_{n}(r)\) for \(0\leq r\leq s<t\)._ Proof of Lemma 5.16.: \(W^{\prime}_{n}\) converges to \(W^{\prime}\) in \(C([0,T];\mathbb{R})\)\(\mathbb{P}\)-a.s. Hence, \(W^{\prime}\in C([0,T];\mathbb{R})\)\(\mathbb{P}\)-a.s. 
That is, \(W^{\prime}\) thus has almost surely continuous trajectories. We proceed as follows: First show that \(W^{\prime}_{n}\) is a Wiener process on \((\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{P}^{\prime})\) for each \(n\in\mathbb{N}\). Recall that the processes \(W^{\prime}_{n}\) and \(W\) have the same laws on the space \(C([0,T];\mathbb{R})\). Let \(\phi_{i},\zeta_{i},\,i=1,\ldots,k\) be continuous and bounded real valued functions on \((H^{1})^{\prime}\). Let \(\psi,\psi_{i}\), \(i=1,\ldots,k\) be continuous and bounded real valued functions on \(\mathbb{R}\). Let \(0<r_{1}<\cdots<r_{k}\leq s\leq t\), \(0<s_{1}<\cdots<s_{k}\leq s\leq t\). Now for each \(n\in\mathbb{N}\) \[\mathbb{E}^{\prime}\left[\prod_{j=1}^{k}\phi_{j}\big{(}m_{n}^{ \prime}(r_{j})\big{)}\prod_{j=1}^{k}\zeta_{j}\big{(}u_{n}^{\prime}(r_{j})\big{)} \prod_{j=1}^{k}\psi_{j}\big{(}W_{n}^{\prime}(s_{j})\big{)}\psi\big{(}W_{n}^{ \prime}(t)-W_{n}^{\prime}(s)\big{)}\right]\] \[=\mathbb{E}\left[\prod_{j=1}^{k}\phi_{j}\big{(}m_{n}(r_{j})\big{)} \prod_{j=1}^{k}\zeta_{j}\big{(}u_{n}(r_{j})\big{)}\prod_{j=1}^{k}\psi_{j} \big{(}W(s_{j})\big{)}\right]\mathbb{E}\left[\psi\big{(}W(t)-W(s)\big{)}\right]\] \[=\mathbb{E}^{\prime}\left[\prod_{j=1}^{k}\phi_{j}\big{(}m_{n}^{ \prime}(r_{j})\big{)}\prod_{j=1}^{k}\zeta_{j}\big{(}u_{n}^{\prime}(r_{j}) \big{)}\prod_{j=1}^{k}\psi_{j}\big{(}W_{n}^{\prime}(s_{j})\big{)}\right] \mathbb{E}^{\prime}\left[\psi\big{(}W_{n}^{\prime}(t)-W_{n}^{\prime}(s)\big{)} \right].\] Thus, \(W_{n}^{\prime}(t)-W_{n}^{\prime}(s)\) is independent of the \(\sigma\)- algebra generated by \(m_{n}^{\prime}(r),u_{n}^{\prime}(r),W_{n}^{\prime}(r)\) for \(r\leq s\). Taking the limit as \(n\) goes to infinity, we get \[\lim_{n\to\infty}\mathbb{E}^{\prime}\left[\prod_{j=1}^{k}\phi_{j }\big{(}m_{n}^{\prime}(r_{j})\big{)}\prod_{j=1}^{k}\zeta_{j}\big{(}u_{n}^{ \prime}(r_{j})\big{)}\prod_{j=1}^{k}\psi_{j}\big{(}W_{n}^{\prime}(s_{j})\big{)} \psi\big{(}W_{n}^{\prime}(t)-W_{n}^{\prime}(s)\big{)}\right]\\ =\lim_{n\to\infty}\mathbb{E}^{\prime}\left[\prod_{j=1}^{k}\phi_{ j}\big{(}m_{n}^{\prime}(r_{j})\big{)}\prod_{j=1}^{k}\zeta_{j}\big{(}u_{n}^{ \prime}(r_{j})\big{)}\prod_{j=1}^{k}\psi_{j}\big{(}W_{n}^{\prime}(s_{j})\big{)} \right]\mathbb{E}^{\prime}\left[\psi\big{(}W_{n}^{\prime}(t)-W_{n}^{\prime}(s) \big{)}\right].\] By Lebesgue dominated convergence theorem, we have \[\mathbb{E}^{\prime}\left[\prod_{j=1}^{k}\phi_{j}\big{(}m^{\prime }(r_{j})\big{)}\prod_{j=1}^{k}\zeta_{j}(u^{\prime}\big{(}r_{j})\big{)}\prod_{j =1}^{k}\psi_{j}\big{(}W^{\prime}(s_{j})\big{)}\psi\big{(}W^{\prime}(t)-W^{ \prime}(s)\big{)}\right]\\ =\mathbb{E}^{\prime}[\prod_{j=1}^{k}\phi_{j}\big{(}m^{\prime}(r_{ j})\big{)}\prod_{j=1}^{k}\zeta_{j}\big{(}u^{\prime}(r_{j})\big{)}\prod_{j=1}^{k} \psi_{j}\big{(}W^{\prime}(s_{j})\big{)}]\mathbb{E}^{\prime}\left[\psi\big{(}W ^{\prime}(t)-W^{\prime}(s)\big{)}\right].\] Thus, \(W^{\prime}(t)-W^{\prime}(s)\) is independent of the \(\sigma\)- algebra generated by \(m^{\prime}(r),u^{\prime}(r),W^{\prime}(r)\) for \(r\leq s\leq t\). Now, let \(k\in\mathbb{N}\), \(s_{0}=0<s_{1}<\cdots<s_{k}\leq T\). For \((t_{1},\ldots t_{k})\in\mathbb{R}^{k}\). 
Then for each \(n\in\mathbb{N}\), we have \[\mathbb{E}^{\prime}\left[e^{i\sum_{j=1}^{k}t_{j}\big{(}W_{n}^{ \prime}(s_{j})-W_{n}^{\prime}(s_{j-1})\big{)}}\right] =\mathbb{E}\left[e^{i\sum_{j=1}^{k}t_{j}\big{(}W(s_{j})-W(s_{j-1} )\big{)}}\right]\] \[=e^{-\frac{1}{2}\sum_{j=1}^{k}t_{j}^{2}\big{(}s_{j}-s_{j-1}\big{)}}.\] Thus \[\lim_{n\to\infty}\mathbb{E}^{\prime}\left[e^{i\sum_{j=1}^{k}t_{j} \big{(}W_{n}^{\prime}(s_{j})-W_{n}^{\prime}(s_{j-1})\big{)}}\right]=\lim_{n\to \infty}e^{-\frac{1}{2}\sum_{j=1}^{k}t_{j}^{2}(s_{j}-s_{j-1})}\] and by the Lebesgue dominated convergence theorem, \[\mathbb{E}^{\prime}\left[e^{i\sum_{j=1}^{k}t_{j}\big{(}W^{\prime}(s_{j})-W^{ \prime}(s_{j-1})\big{)}}\right]=e^{-\frac{1}{2}\sum_{j=1}^{k}t_{j}^{2}(s_{j}-s_ {j-1})}.\] Hence, the increments are normally distributed. **Lemma 5.17**.: _For each \(t\in[0,T]\), \(M^{\prime}_{n}(t)\) converges to \(\int_{0}^{t}\psi\left(m^{\prime}(s)\right)G\big{(}m^{\prime}(s)\big{)}\,dW^{ \prime}(s)\) in \(L^{2}(\Omega^{\prime};(H^{1})^{\prime})\). In particular,_ \[M^{\prime}(t)=\int_{0}^{t}\psi(m^{\prime}(s))G(m^{\prime}(s))\,dW^{\prime}(s), \ \mathbb{P}^{\prime}-a.s. \tag{5.49}\] _Idea of proof of Lemma 5.17._ We first give a brief idea of the proof in mainly two steps. We then go on to justify the steps. 1. Let us choose and fix \(t\in[0,T]\) and \(n\in\mathbb{N}\). We show that \[M^{\prime}_{n}(t)=\int_{0}^{t}\psi(m^{\prime}_{n}(s))G\big{(}m^{\prime}_{n}(s )\big{)}\,dW^{\prime}_{n}(s),\ \mathbb{P}^{\prime}-a.s.\] (5.50) 2. Again, Let us choose and fix \(t\in[0,T]\). Then, using step (1) we show that \(M^{\prime}_{n}(t)\) converges to \(\int_{0}^{t}\psi(m^{\prime}(s))G(m^{\prime}(s))\,dW^{\prime}(s)\) as \(n\to\infty\) in \(L^{2}(\Omega^{\prime};(H^{1})^{\prime})\), and hence, in particular, weakly in \(L^{\frac{3}{3}}(\Omega^{\prime};(H^{1})^{\prime})\). From Lemma 5.15, we know that \(M^{\prime}_{n}(t)\) converges to \(M^{\prime}(t)\) weakly in \(L^{\frac{4}{3}}(\Omega^{\prime};(H^{1})^{\prime})\). Combining this convergence with the convergence from step (2), we have, \[M^{\prime}(t)=\int_{0}^{t}\psi(m^{\prime}(s))G(m^{\prime}(s))\,dW^{\prime}(s),\ \mathbb{P}^{\prime}-a.s. \tag{5.51}\] Proof of Lemma 5.17.: **Proof of Step 1:** Let \(k,n\in\mathbb{N}\), and let \(t\in[0,T]\). Let \(\mathcal{P}_{k}:=\left\{s_{j}^{k}:s_{j}^{k}=\frac{jT}{k},j=0,\ldots,k\right\}\) be a partition of \([0,T]\). **Claim**: \[M^{\prime}_{n}(t)=\int_{0}^{t}\psi(m^{\prime}_{n}(s))G\big{(}m^{\prime}_{n}(s )\big{)}\,dW^{\prime}_{n}(s). \tag{5.52}\] For \(n\in\mathbb{N}\) and \(t\in[0,T]\), consider the following random variables. \[M_{n}(t)-\sum_{j=0}^{k-1}\psi(m_{n}(s_{j}^{k}))G_{n}\big{(}m_{n}(s_{j}^{k}) \big{)}\big{(}W(s_{j+1}^{k}\wedge t\big{)}-W(s_{j}^{k}\wedge t)\big{)}, \tag{5.53}\] and \[M^{\prime}_{n}(t)-\sum_{j=0}^{k-1}\psi(m^{\prime}_{n}(s))G_{n}\big{(}m^{\prime }_{n}(s_{j}^{k})\big{)}\big{(}W^{\prime}_{n}(s_{j+1}^{k}\wedge t)-W^{\prime}_{ n}(s_{j}^{k}\wedge t)\big{)}. \tag{5.54}\] **Sub-claim:** For each \(t\in[0,T]\) and \(n\in\mathbb{N}\), we have the following convergence. The random variable \[\sum_{j=0}^{k-1}\psi(m_{n}(s_{j}^{k}\wedge t))G_{n}\big{(}m_{n}(s _{j}^{k}\wedge t)\big{)}\big{(}W(s_{j+1}^{k}\big{)}-W(s_{j}^{k})\big{)}\] \[=\int_{0}^{t}\chi_{[s_{j}^{k},s_{j+1}^{k}]}(s)\psi(m_{n}(s_{j}^{k }\wedge t))G_{n}(m_{n}(s_{j}^{k}\wedge t))\,dW(s), \tag{5.55}\] converges to the random variable \[\int_{0}^{t}\psi(m_{n}(s))G_{n}(m_{n}(s))\,dW(s), \tag{5.56}\] in the space \(L^{2}(\Omega;L^{2})\) as \(k\to\infty\). 
By the equality in (5.45), we, therefore, have the first variable to be \(0\) (in the limit as \(k\to\infty\)) \(\mathbb{P}^{\prime}\)-a.s. Proof of the sub-claim.: Firstly, for any \(f\in C([0,T];H_{n})\), we have the following \[\lim_{k\to\infty}\int_{0}^{t}\Big{|}\chi_{[s_{j}^{k},s_{j+1}^{k})}(s)f(s_{j}^{ k}\wedge t)-f(s)\Big{|}_{L^{2}}^{2}\ ds=0. \tag{5.57}\] Now, observe that \(\psi(m_{n})G_{n}(m_{N})\in C([0,T];H_{n})\). Therefore for \(f(\cdot)=\psi(\cdot)G_{n}(m_{n}(\cdot))\in C([0,T];H_{n})\), we have \[\lim_{k\to\infty}\int_{0}^{t}\left|\chi_{[s^{k}_{j},s^{k}_{j+1})}(s)\psi(m_{n}(s ^{k}_{j}\wedge t))G_{n}(m_{n}(s^{k}_{j}\wedge t))-\psi(m_{n}(s))G(m_{n}(s)) \right|^{2}_{L^{2}}\,ds=0,\ \mathbb{P}-a.s. \tag{5.58}\] Moreover, by Lemma 4.9, there exists a constant \(C\) independent of \(k\) such that \[\mathbb{E}\left[\int_{0}^{t}\left|\chi_{(s^{k}_{j},s^{k}_{j+1})}( s)\psi(m_{n}(s^{k}_{j}\wedge t))G_{n}(m_{n}(s^{k}_{j}\wedge t))-\psi(m_{n}(s))G(m_{n }(s))\right|^{2}_{L^{2}}\,ds\right]^{2} \tag{5.59}\] \[\leq 4\mathbb{E}\int_{0}^{t}\left|\chi_{[s^{k}_{j},s^{k}_{j+1})}(s) \psi(m_{n}(s^{k}_{j}\wedge t))G_{n}(m_{n}(s^{k}_{j}\wedge t))\right|^{4}_{L^{2 }}\,ds+4\mathbb{E}\int_{0}^{t}\left|\psi(m_{n}(s))G(m_{n}(s))\right|^{4}_{L^{2 }}\,ds\leq C. \tag{5.60}\] Therefore by the Vitali Convergence Theorem, we have the following convergence. \[\lim_{k\to\infty}\mathbb{E}\int_{0}^{t}\left|\chi_{[s^{k}_{j},s^{k}_{j+1})}(s) \psi(m_{n}(s^{k}_{j}\wedge t))G_{n}(m_{n}(s^{k}_{j}\wedge t))-\psi(m_{n}(s))G( m_{n}(s))\right|^{2}_{L^{2}}\,ds=0. \tag{5.61}\] In order to prove the claim, we consider the following difference. By the Ito isometry, we have \[\mathbb{E}\left|\int_{0}^{t}\chi_{[s^{k}_{j},s^{k}_{j+1})}(s)\psi (m_{n}(s^{k}_{j}\wedge t))G_{n}(m_{n}(s^{k}_{j}\wedge t))-\psi(m_{n}(s))G_{n}( m_{n}(s))\,dW(s)\right|^{2}_{L^{2}}\] \[=\mathbb{E}\int_{0}^{t}\left|\chi_{[s^{k}_{j},s^{k}_{j+1})}(s) \psi(m_{n}(s^{k}_{j}\wedge t))G_{n}(m_{n}(s^{k}_{j}\wedge t))-\psi(m_{n}(s))G_ {n}(m_{n}(s))\right|^{2}_{L^{2}}\,ds. \tag{5.62}\] The right hand side, and hence the left hand side of the above inequality converges to \(0\) as \(k\to\infty\). This completes the proof of the sub-claim. Note that the two random variables in (5.53) and (5.54) are obtained by applying measurable transformations to \(m_{n},m^{\prime}_{n},W^{\prime}_{n}\) and \(W\) and hence have the same distributions. Strong convergence of \(M_{n}(t)\) implies convergence of the corresponding laws. Since the random variables in (5.53) and (5.54) have the same laws, the laws of \(M^{\prime}_{n}(t)\) also converge to the law of some random variable, the law of which is the same as that of the law of the limit of \(M_{n}(t)\). But since \(M_{n}(t)-\int_{0}^{t}\psi(m_{n}(s))G_{n}(m_{n}(s))\,dW(s)=0,\ \mathbb{P}\)-a.s. (because \(m_{n}\) is a solution to (4.13)), we have \[\lim_{k\to\infty}\left[M^{\prime}_{n}(t)-\int_{0}^{t}\chi_{[s^{k}_{j},s^{k}_{j+ 1})}(s)\psi(m_{n}(s^{k}_{j}\wedge t))G_{n}(m_{n}(s^{k}_{j}\wedge t))\,dW^{ \prime}_{n}(s)\right]=0,\ \mathbb{P}^{\prime}-a.s. \tag{5.63}\] Thus, \[M^{\prime}_{n}(t)=\int_{0}^{t}\psi(m^{\prime}_{n}(s))G\big{(}m^{\prime}_{n}(s) \big{)}\,dW^{\prime}_{n}(s),\ \mathbb{P}^{\prime}-a.s.\] Hence the claim is shown. This concludes step 1. **Proof of Step 2:** In the second step, we have to show the convergence of \(M^{\prime}_{n}(t)\) to the stochastic integral \(\int_{0}^{t}\psi(m^{\prime}(s))G(m^{\prime}(s))\,dW^{\prime}(s)\) as \(n\to\infty\). 
In step 1, we have shown that \(M^{\prime}_{n}(t)=\int_{0}^{t}\psi(m^{\prime}_{n}(s))G(m^{\prime}_{n}(s))\,dW^ {\prime}_{n}(s),\mathbb{P}^{\prime}\)-a.s. Now, some standard adding and subtracting, along with the triangle inequality, gives us the following inequality. \[\mathbb{E}^{\prime}\left|\int_{0}^{t}\psi(m^{\prime}_{n}(s))G_{n} \big{(}m^{\prime}_{n}(s)\big{)}\,dW^{\prime}_{n}(s)-\int_{0}^{t}\psi(m^{\prime}( s))G\big{(}m^{\prime}(s)\big{)}\,dW^{\prime}(s)\right|^{2}_{(H^{1})^{\prime}}\] \[\leq \mathbb{E}^{\prime}\left[\left|\int_{0}^{t}\psi(m^{\prime}_{n}(s) )G_{n}\big{(}m^{\prime}_{n}(s)\big{)}\,dW^{\prime}_{n}(s)-\int_{0}^{t}\psi(m^{ \prime}(s))G_{n}\big{(}m^{\prime}_{n}(s)\big{)}\,dW^{\prime}_{n}(s)\right|^{2}_{( H^{1})^{\prime}}\right]\] \[+\mathbb{E}^{\prime}\left[\left|\int_{0}^{t}\psi(m^{\prime}(s))G_{n} \big{(}m^{\prime}_{n}(s)\big{)}\,dW^{\prime}_{n}(s)-\int_{0}^{t}\psi(m^{\prime}( s))G\big{(}m^{\prime}(s)\big{)}\,dW^{\prime}_{n}(s)\right|^{2}_{(H^{1})^{\prime}}\right]\] \[+\mathbb{E}^{\prime}\left[\left|\sum_{j=0}^{k-1}\psi(m^{\prime}(s_{j} ^{k}))G(m^{\prime}(s_{j}^{k}))\,\left(W^{\prime}(s_{j+1}^{k})-W^{\prime}(s_{j+1}^ {k})\right)-\int_{0}^{t}\psi(m^{\prime}(s))G(m^{\prime}(s))\,dW^{\prime}(s) \right|_{(H^{1})^{\prime}}^{2}\right].\] Since the mentioned sums approximate the corresponding Ito integrals in \(L^{2}(\Omega^{\prime};(H^{1})^{\prime})\), the first and the third term converge to \(0\) as \(k\to\infty\). Convergence of the processes \(W^{\prime}_{n}\) to \(W\) along with uniform integrability implies that the second term goes to \(0\) as \(n\) goes to infinity. Combining the convergences concludes step 2, and hence the proof of the lemma. ## 6. Continuation of the proof of Theorem 3.3: verification of the constraint condition After showing the existence of a solution to the equation, we now have to show that the obtained process \(m\) satisfies the constraint condition (3.9). We use the Ito formula version from the paper of Pardoux [53], Theorem 1.2. For \(t\in[0,T]\), consider the equation in \((H^{1})^{\prime}\) \[m^{\prime}(t)=m_{0}^{\prime}+\int_{0}^{t}\left[m^{\prime}(s)\times\Delta m^{ \prime}(s)-\alpha\,m^{\prime}(s)\times\left(m^{\prime}(s)\times\Delta m^{ \prime}(s)\right)+m^{\prime}(s)\times u^{\prime}(s)\right.\] \[\phi_{4}(m^{\prime}(t))= \phi_{4}(m_{0})+\int_{0}^{t}\left\langle m^{\prime}(s)\times\Delta m ^{\prime}(s),m^{\prime}(s)\right\rangle_{L^{2}}\,ds\] \[-\alpha\,\int_{0}^{t}\left\langle m^{\prime}(s)\times\big{(}m^{ \prime}(s)\times\Delta m^{\prime}(s)\big{)},m^{\prime}(s)\right\rangle_{H^{1}} \,ds\] \[+\int_{0}^{t}\left\langle m^{\prime}(s)\times u^{\prime}(s),m^{\prime}(s) \right\rangle_{H^{1}}\,ds\] \[-\alpha\int_{0}^{t}\left\langle\psi(m^{\prime}(s))m^{\prime}(s) \times\big{(}m^{\prime}(s)\times u^{\prime}(s)\big{)},m^{\prime}(s)\right\rangle _{H^{1}}\,ds\] \[+\frac{1}{2}\int_{0}^{t}\left\langle\psi^{2}(m^{\prime}(s)) \left[DG\big{(}m^{\prime}(s)\big{)}\right]\big{[}G\big{(}m^{\prime}(s)\big{)} \big{]},m^{\prime}(s)\right\rangle_{L^{2}}\,ds\] \[+\frac{1}{2}\int_{0}^{t}\psi^{2}(m^{\prime}(s))\left[\phi_{4}^{ \prime\prime}(m^{\prime}(s))\right]\left\langle\big{(}G(m^{\prime}(s)),G(m^{ \prime}(s))\big{)}\right\rangle_{L^{2}}\,ds\] \[+\int_{0}^{t}\left\langle\psi(m^{\prime}(s))G\big{(}m^{\prime}(s )\big{)},m^{\prime}(s)\right\rangle_{L^{2}}\,dW^{\prime}(s)\] \[= \phi_{4}(m_{0})+\sum_{i=1}^{7}I_{i}(t). \tag{6.1}\] Our first observation for the integrals on the right hand side of (6.1) is that \[I_{i}(t)=0,\text{ for }i=1,2,3,4,\text{ and }7. 
\tag{6.2}\] We give a brief justification of (6.2). We mainly use the fact that for vectors \(a,b\in\mathbb{R}^{3}\), we have \[\left\langle a\times b,a\right\rangle_{\mathbb{R}^{3}}=0.\] For any \(p\geq 1\), the above equality gives \[{}_{L^{p^{\prime}}}\left\langle a\times b,a\right\rangle_{L^{p}}=0, \tag{6.3}\] with \({}_{L^{p^{\prime}}}\langle\cdot,\cdot\rangle_{L^{p}}\) denoting the \(L^{p}\) duality pairing. Observe that not all the inner products on the right hand side of (6.1) are \(L^{2}\) inner products. To use the above equality, we replace the \((H^{1})^{\prime}-H^{1}\) duality pairing by an \(L^{p}\) duality pairing for some convenient \(p\). To see this, first note that the space \(H^{1}\) is compactly embedded into the spaces \(L^{4}\) and \(L^{6}\). Therefore, the \((H^{1})^{\prime}-H^{1}\) duality pairing can be replaced by the \(L^{\frac{4}{3}}-L^{4}\) (for \(I_{2},I_{3}\)) and \(L^{\frac{6}{5}}-L^{6}\) (for \(I_{4}\)) duality pairings. For the triple product term \(m^{\prime}\times(m^{\prime}\times\Delta m^{\prime})\) (inside the integral \(I_{2}\)), note that \[\left|m^{\prime}\times(m^{\prime}\times\Delta m^{\prime})\right|_{L^{\frac{4}{3}}}\leq C\left|m^{\prime}\right|_{L^{4}}\left|m^{\prime}\times\Delta m^{\prime}\right|_{L^{2}}.\] The same can be said about \(m^{\prime}\times u^{\prime}\) for \(I_{3}\). For \(m^{\prime}\times(m^{\prime}\times u^{\prime})\) (inside the integral \(I_{4}\)), note that \[\left|m^{\prime}\times(m^{\prime}\times u^{\prime})\right|_{L^{\frac{6}{5}}}\leq C\left|m^{\prime}\right|_{L^{6}}^{2}\left|u^{\prime}\right|_{L^{2}}.\] Now, the terms that remain are \(I_{5},I_{6}\). Note that \[\left[\phi_{4}^{\prime\prime}(m^{\prime})\right]\left((G(m^{\prime}),G(m^{\prime}))\right)=\left\langle G(m^{\prime}),G(m^{\prime})\right\rangle_{L^{2}}=\left|G(m^{\prime})\right|_{L^{2}}^{2}.\] Moreover, the following equality holds by Lemma B.2 in [14]: \[\left\langle[DG(m^{\prime})]\left(G(m^{\prime})\right),m^{\prime}\right\rangle_{L^{2}}=-\left|G(m^{\prime})\right|_{L^{2}}^{2}.\] Therefore, \[I_{5}(t)+I_{6}(t)=0,\ \forall t\in[0,T].\] Hence, the equality (6.1) reduces to \[\phi_{4}\big{(}m^{\prime}(t)\big{)}=\phi_{4}(m_{0}),\] for each \(t\in[0,T]\). That is, \[\int_{\mathcal{O}}\phi(x)|m^{\prime}(t,x)|_{\mathbb{R}^{3}}^{2}\,dx=\int_{\mathcal{O}}\phi(x)|m_{0}(x)|_{\mathbb{R}^{3}}^{2}\,dx. \tag{6.4}\] Now, the equality (6.4) holds for all \(\phi\in C_{c}^{\infty}(\mathcal{O})\). Hence, we have the following: \[|m^{\prime}(t,x)|_{\mathbb{R}^{3}}^{2}=|m_{0}(x)|_{\mathbb{R}^{3}}^{2}=1,\ \text{Leb. a.a. }x\in\mathcal{O}\text{ for all }t\in[0,T],\ \mathbb{P}^{\prime}\text{-a.s.} \tag{6.5}\] Thus, the constraint condition (3.9) is satisfied. **Remark 6.1**.: _Now that the constraint condition has been verified, we observe that the cut-off \(\psi\) only takes the value \(1\), and hence can be removed from the equation. This completes the proof of existence of a weak martingale solution to the problem (3.7), as per Definition 3.2._

## 7. Proof of Theorems 3.4 and 7.5 about the pathwise uniqueness and the existence of a unique strong solution

For this section, let us fix a probability space \(\left(\Omega,\mathcal{F},\mathbb{P}\right)\) and a Wiener process \(W\) on this space, as in Definition 3.2. The existence theorem (Theorem 3.3) states that the process \(m\) satisfies the equation (3.7) with the help of a test function. 
The following result, which is a corollary of Theorem 3.3, states that the equation also makes sense in the strong (PDE) form. **Corollary 7.1**.: _Let us assume that the process \(u\) is a control process such that (3.1) holds. Let \(\left(\Omega,\mathcal{F},\mathbb{P},W,m,u\right)\) be a weak martingale solution of (3.7) corresponding to the control process \(u\), satisfying the properties stated in Theorem 3.3. Then the following equation is satisfied in the strong (PDE) sense in the space \(L^{2}\) for each \(t\in\left[0,T\right]\)._ \[m(t) =\int_{0}^{t}m(s)\times\Delta m(s)\,ds-\alpha\,\int_{0}^{t}m(s) \times\left(m(s)\times u(s)\right)ds-\alpha\,\int_{0}^{t}m(s)\times\left(m(s) \times\Delta m(s)\right)\,ds\] \[+\int_{0}^{t}m(s)\times u(s)\,ds+\frac{1}{2}\int_{0}^{t}\left[DG \left(m(s)\right)\right]\left[G\!\left(m\left(s\right)\right)\right]\,ds+ \int_{0}^{t}G\!\left(m(s)\right)dW(s),\mathbb{P}-a.s. \tag{7.1}\] Proof of Corollary 7.1.: The proof of the above corollary follows once we note that each of the integrands of the equality lies in the space \(L^{2}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)\). This can be verified by using the bounds established in the previous Section 6, Lemma 5.6 and Lemma 5.7. By Theorem 3.3, the process \(m\times\Delta m\) lies in the space \(L^{2}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)\)\(\mathbb{P}\)-a.s. By the constraint condition (3.9) \[\mathbb{E}\int_{0}^{T}\left|m(t)\times\left(m(t)\times\Delta m(t) \right)\right|_{L^{2}}^{2}\,dt \leq C\mathbb{E}\int_{0}^{T}\left|m(t)\right|_{L^{\infty}}^{2} \left|m(t)\times\Delta m(t)\right|_{L^{2}}^{2}\,dt\] \[=\mathbb{E}\int_{0}^{T}\left|m(t)\times\Delta m(t)\right|_{L^{2} }^{2}\,dt<\infty. \tag{7.2}\] Hence the process \(m\times\left(m\times\Delta m\right)\) also lies in the space \(L^{2}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)\)\(\mathbb{P}\)-a.s. Arguing similarly, we say that by the constraint condition (3.9) and part (4) in the Assumption 3.1 on the process \(u\), \[\mathbb{E}\int_{0}^{T}\left|m(t)\times u(t)\right|_{L^{2}}^{2}\,dt \leq\mathbb{E}\int_{0}^{T}\left|m(t)\right|_{L^{\infty}}^{2}\, \left|u(t)\right|_{L^{2}}^{2}\,dt\] \[=\mathbb{E}\int_{0}^{T}\left|u(t)\right|_{L^{2}}^{2}\,dt<\infty. \tag{7.3}\] Again from the constraint condition (3.9) and the above inequality, \[\mathbb{E}\int_{0}^{T}\left|m(t)\times\left(m(t)\times u(t)\right) \right|_{L^{2}}^{2}\,dt \leq C\mathbb{E}\int_{0}^{T}\left|m(t)\right|_{L^{\infty}}^{2}\, \left|m(t)\times u(t)\right|_{L^{2}}^{2}\,dt\] \[=C\mathbb{E}\int_{0}^{T}\left|m(t)\times u(t)\right|_{L^{2}}^{2} \,dt\text{ By \eqref{eq:m_p_p_}}\] \[<\infty. \tag{7.4}\] We recall that \[G(m)=m\times h-\alpha\,m\times(m\times h).\] It is thus sufficient to verify the above inequality for the two terms individually. We also recall that \(h\) is assumed to be in \(H^{1}\). The continuous embedding \(H^{1}\hookrightarrow L^{\infty}\) implies that there exists a constant \(C>0\) such that \[\left|h\right|_{L^{\infty}}\leq C\left|h\right|_{H^{1}}<\infty.\] Thus, \[\mathbb{E}\int_{0}^{T}\left|m(t)\times h\right|_{L^{2}}^{2}\,dt \leq\mathbb{E}\int_{0}^{T}\left|m(t)\right|_{L^{2}}^{2}\left|h \right|_{L^{\infty}}^{2}\,dt\] \[\leq T\left|h\right|_{L^{\infty}}^{2}\mathbb{E}\sup_{t\in[0,T]} \left|m(t)\right|_{L^{2}}^{2}<\infty. \tag{7.5}\] The right hand side of the last inequality is finite because of the constraint condition. 
Similarly, \[\mathbb{E}\int_{0}^{T}\left|m(t)\times(m(t)\times h)\right|_{L^{2}}^{2}\,dt\leq \mathbb{E}\int_{0}^{T}\left|m(t)\right|_{L^{\infty}}\left|m(t)\times h\right|_ {L^{2}}^{2}\,dt<\infty. \tag{7.6}\] The right hand side of the above inequality is finite by the constraint condition (3.9) and the assumption on \(h\). Hence \(G(m)\) takes values in the space \(L^{2}\left(0,T;L^{2}\right)\)\(\mathbb{P}\)-a.s. What remains is to verify the bounds for the correction term, that is to show that the term \(\left(DG(m)\right)\left(G(m)\right)\) also lies in the space \(L^{2}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)\), \(\mathbb{P}\)-a.s. Recall that Proposition 2.2 shows that the correction term is locally Lipschitz. Also, by the definition of the term \(\left[DG(m)\right]\left(G(m)\right)\), we have \[\left[DG(0)\right]\left(G(0)\right)=0.\] The constraint condition (3.9) implies that the process \(m\) takes values in the unit ball in the space \(L^{\infty}\). Hence there exists a constant \(C>0\) such that \[\left|DG\big{(}m(t)\big{)}\big{[}G\big{(}m(t)\big{)}\big{]}\right|_{L^{2}}\leq C \left|m(t)\right|_{L^{2}}.\] Hence \[\mathbb{E}\int_{0}^{T}\left|DG\big{(}m(t)\big{)}\big{[}G\big{(}m(t)\big{)} \big{]}\right|_{L^{2}}^{2}\,dt\leq C\mathbb{E}\int_{0}^{T}\left|m(t)\right|_{ L^{2}}^{2}\,dt<\infty.\] The right hand side of the last inequality is finite by Theorem (3.3). This concludes the proof of Corollary 7.1. Before we start the proof of the Theorem 3.4, we state a proposition, followed by a corollary that will be used for the proof. **Proposition 7.2**.: _Let \(v\in H^{1}\). Further assume that_ \[\left|v(x)\right|_{\mathbb{R}^{3}}=1\text{ for Leb. a.a. }x\in D. \tag{7.7}\] _Then the following equality holds in \((H^{1})^{\prime}\)._ \[v\times(v\times\Delta v)=-\Delta v-\left|\nabla v\right|_{\mathbb{R}^{3}}^{2}v. \tag{7.8}\] Proof of Proposition 7.2.: We begin by verifying that each side of equality (7.8) belongs to the space \((H^{1})^{\prime}\). By the equality in (3.2), we now show that \(\Delta v\) takes values in the space \(\big{(}H^{1}\big{)}^{\prime}\). Let \(\phi\in H^{1}\). Then \[\big{|}\,_{(H^{1})^{\prime}}\left\langle\Delta v,\phi\right\rangle _{H^{1}}\big{|} =\left|-\left\langle\nabla v,\nabla\phi\right\rangle_{L^{2}}\right|\] \[=\left|\left\langle\nabla v,\nabla\phi\right\rangle_{L^{2}}\right|\] \[\leq\left|\nabla v\right|_{L^{2}}\left|\nabla\phi\right|_{L^{2}}.\] The assumptions on \(v\) and \(\phi\) imply that the right hand side, and hence the left hand side of the above inequality, is finite. The second term on the right hand side of (7.8) is interpreted as follows: \[\left\langle\left|\nabla v\right|_{\mathbb{R}^{3}}^{2}v,\phi\right\rangle_{H^ {1}}=\int_{\mathcal{O}}\left|\nabla v(x)\right|_{\mathbb{R}^{3}}^{2}\left\langle v (x),\phi(x)\right\rangle_{\mathbb{R}^{3}}\,dx. \tag{7.9}\] To show that the right hand side of the above equality makes sense, we observe that \(\phi\in H^{1}\) implies that \(\phi\in L^{\infty}\). This along with the equality (7.7) \[\left|\int_{\mathcal{O}}\left|\nabla v(x)\right|_{\mathbb{R}^{3}}^{2}\left\langle v (x),\phi(x)\right\rangle_{\mathbb{R}^{3}}\,dx\right|\leq C\int_{\mathcal{O}} \left|\nabla v(x)\right|_{\mathbb{R}^{3}}^{2}\,dx.\] The right hand side of the above inequality is finite since \(v\in H^{1}\). The left hand side of the equality (7.8) is in \((H^{1})^{\prime}\) by the way the triple product is understood in (3.6). 
Hence both the terms on the right hand side of the equality (7.8) belong to the space \((H^{1})^{\prime}\). We now proceed to show the equality. Let \(\phi\in H^{1}\). The proof uses the following identity in \(\mathbb{R}^{3}\): \[a\times(b\times c)=b\,\langle a,c\rangle_{\mathbb{R}^{3}}-c\,\langle a,b\rangle_{\mathbb{R}^{3}}\,,\ a,b,c\in\mathbb{R}^{3}. \tag{7.10}\] By (3.6), we have \[{}_{(H^{1})^{\prime}}\left\langle v\times(v\times\Delta v),\phi\right\rangle_{H^{1}} =\langle v\times\nabla(\phi\times v),\nabla v\rangle_{L^{2}}\] \[=\langle v\times(\nabla\phi\times v),\nabla v\rangle_{L^{2}}+\langle v\times(\phi\times\nabla v),\nabla v\rangle_{L^{2}}\] \[=\left\langle\nabla\phi|v|_{\mathbb{R}^{3}}^{2}-v\left\langle v,\nabla\phi\right\rangle_{\mathbb{R}^{3}},\nabla v\right\rangle_{L^{2}}+\left\langle\left\langle\nabla v,v\right\rangle_{\mathbb{R}^{3}}\phi-\nabla v\left\langle\phi,v\right\rangle_{\mathbb{R}^{3}},\nabla v\right\rangle_{L^{2}}\ \left(\text{by (7.10)}\right)\] \[=\left\langle\nabla\phi|v|_{\mathbb{R}^{3}}^{2},\nabla v\right\rangle_{L^{2}}-\left\langle\nabla v\left\langle\phi,v\right\rangle_{\mathbb{R}^{3}},\nabla v\right\rangle_{L^{2}}\ \left(\text{by (7.11)}\right)\] \[=\left\langle\nabla\phi,\nabla v\right\rangle_{L^{2}}-\left\langle\nabla v\left\langle\phi,v\right\rangle_{\mathbb{R}^{3}},\nabla v\right\rangle_{L^{2}}.\ (\text{by (7.7)})\] In view of the equalities (3.2) and (7.9), the right hand side of the above equality equals \[-\Delta v-|\nabla v|_{\mathbb{R}^{3}}^{2}\,v\] in \((H^{1})^{\prime}\). The following equality has been used in the calculations above: \[\langle v,\nabla v\rangle_{\mathbb{R}^{3}}=\frac{1}{2}\nabla|v|_{\mathbb{R}^{3}}^{2}=0. \tag{7.11}\] The right hand side of the above equality is \(0\) since by (7.7), \(|v|_{\mathbb{R}^{3}}^{2}\) is constant. Hence \[v\times(v\times\Delta v)=-\Delta v-|\nabla v|_{\mathbb{R}^{3}}^{2}v.\] This concludes the proof of Proposition 7.2.

We have the following result as a corollary of the above proposition.

**Corollary 7.3**.: _Let \((\Omega,\mathcal{F},\mathbb{P},W,m,u)\) be a weak martingale solution of (3.7) corresponding to the control process \(u\), as in Corollary 7.1. Then the following equality holds in \((H^{1})^{\prime}\) for every \(t\in[0,T]\)_ \[m(t)\times(m(t)\times\Delta m(t))=-\Delta m(t)-|\nabla m(t)|_{\mathbb{R}^{3}}^{2}m(t),\ \mathbb{P}-a.s.\]

Proof of Corollary 7.3.: To prove this corollary, it is sufficient to show that the process \(m\) satisfies the assumptions in Proposition 7.2. Theorem 3.3 implies that, in particular, for each \(t\in[0,T]\), \(m(t)\in H^{1}\), \(\mathbb{P}\)-a.s. Also, the constraint condition (3.9) implies that \(|m(t,x)|_{\mathbb{R}^{3}}=1\), Leb-a.a. \(x\in D\) for all \(t\in[0,T]\), \(\mathbb{P}\)-a.s. Hence the corollary follows by applying Proposition 7.2 to \(m(t)\) for each \(t\in[0,T]\).

Using the above mentioned corollary, we proceed to prove the pathwise uniqueness.

Proof of Theorem 3.4.: Let us choose and fix a control process \(u\) satisfying Assumption 3.1 and two weak martingale solutions \((\Omega,\mathcal{F},\mathbb{P},W,m_{1},u)\) and \((\Omega,\mathcal{F},\mathbb{P},W,m_{2},u)\) corresponding to \(u\) as in Definition 3.2 and satisfying the properties stated in Theorem 3.3.
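Before turning to the details, we record the structure of the argument: writing \(m=m_{1}-m_{2}\), the estimates below combine to give an inequality of the form \[\left|m(t)\right|_{L^{2}}^{2}\leq\int_{0}^{t}\Phi(s)\left|m(s)\right|_{L^{2}}^{2}\,ds+\int_{0}^{t}\left\langle G\big{(}m_{1}(s)\big{)}-G\big{(}m_{2}(s)\big{)},m(s)\right\rangle_{L^{2}}\,dW(s),\] with a non-negative process \(\Phi\) that is \(\mathbb{P}\)-a.s. integrable on \([0,T]\) (see (7.19) and (7.20)); pathwise uniqueness then follows from the exponential-weight, Gronwall-type argument in (7.21)–(7.25).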
Let us first observe that in view of Corollary 7.3, for each \(i=1,2\), the following identity holds in \((H^{1})^{\prime}\): \[m_{i}(t)= \,\alpha\,\int_{0}^{t}\Delta m_{i}(s)\,ds+\alpha\,\int_{0}^{t}|\nabla m_{i}(s)|_{\mathbb{R}^{3}}^{2}m_{i}(s)\,ds\] \[+\int_{0}^{t}m_{i}(s)\times\Delta m_{i}(s)\,ds+\int_{0}^{t}m_{i}(s)\times u(s)\,ds-\alpha\,\int_{0}^{t}m_{i}(s)\times(m_{i}(s)\times u(s))\,ds\] \[+\frac{1}{2}\int_{0}^{t}\big{[}DG\big{(}m_{i}(s)\big{)}\big{]}\,\big{[}G\big{(}m_{i}(s)\big{)}\big{]}\ ds+\int_{0}^{t}G\big{(}m_{i}(s)\big{)}\,dW(s), \tag{7.12}\] for all \(t\in[0,T]\), \(\mathbb{P}\)-a.s. The above equation is the same as the equation in Corollary 7.1, except that the triple product term is expressed as a sum of two terms. The equality holds in \((H^{1})^{\prime}\), and rewriting the triple product in this way does not change the equation. It is thus sufficient to show that both of these integrands individually lie in the space \(L^{2}\left(0,T;(H^{1})^{\prime}\right)\). Following the arguments in Proposition 7.2, we can prove that for \(v\in L^{2}(0,T;H^{1})\) and \(t\in[0,T]\), \[\int_{0}^{t}\,{}_{(H^{1})^{\prime}}\left\langle\Delta m_{i}(s),v(s)\right\rangle_{H^{1}}\,ds=-\int_{0}^{t}\left\langle\nabla m_{i}(s),\nabla v(s)\right\rangle_{L^{2}}\,ds.\] Thus by the Cauchy-Schwarz inequality, \[\left|\int_{0}^{t}\left\langle\nabla m_{i}(s),\nabla v(s)\right\rangle_{L^{2}}\,ds\right|\leq\left(\int_{0}^{t}\left|\nabla m_{i}(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\left(\int_{0}^{t}\left|\nabla v(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}<\infty.\] The right hand side of the above inequality is finite because of the assumptions on \(m_{i}\) and \(v\). We now show that the remaining (second) term also takes values in the space \((H^{1})^{\prime}\). \[\int_{0}^{T}\left||\nabla m_{i}(t)|_{\mathbb{R}^{3}}^{2}\,m_{i}(t)\right|_{(H^{1})^{\prime}}^{2}dt \leq\int_{0}^{T}\left|\nabla m_{i}(t)\right|_{L^{2}}^{2}\left|\nabla m_{i}(t)\right|_{L^{2}}^{2}\left|m_{i}(t)\right|_{L^{\infty}}^{2}\,dt\] \[\leq\left(\int_{0}^{T}\left|\nabla m_{i}(t)\right|_{L^{2}}^{4}\,dt\right)^{\frac{1}{2}}\left(\int_{0}^{T}\left|\nabla m_{i}(t)\right|_{L^{2}}^{4}\,dt\right)^{\frac{1}{2}}\] \[=\int_{0}^{T}\left|\nabla m_{i}(t)\right|_{L^{2}}^{4}\,dt\] \[\leq CT\sup_{t\in[0,T]}\left|\nabla m_{i}(t)\right|_{L^{2}}^{4}.\] Hence \[\mathbb{E}\left[\int_{0}^{T}\left||\nabla m_{i}(t)|_{\mathbb{R}^{3}}^{2}\,m_{i}(t)\right|_{(H^{1})^{\prime}}^{2}\,dt\right]\leq C\mathbb{E}\left[\sup_{t\in[0,T]}\left|\nabla m_{i}(t)\right|_{L^{2}}^{4}\right]<\infty.\] The last inequality follows from Theorem 3.3. This justifies the writing of equation (7.12). Define a process \(m\) by \[m(t)=m_{1}(t)-m_{2}(t)\text{ for }t\in[0,T].\] We now consider the equation (7.12) satisfied by each \(m_{i}\) for \(i=1,2\). To get the equation satisfied by the process \(m\), we take the difference of (7.12) for \(i=1\) and \(i=2\) and simplify to obtain the equality (7.13) below.
\[m(t)=\alpha\,\int_{0}^{t}\Delta m(s)\,ds+\alpha\,\int_{0}^{t}| \nabla m_{1}(s)|_{\mathbb{R}^{3}}^{2}m(s)\,ds\] \[+\alpha\,\int_{0}^{t}\left\langle\big{(}\nabla m_{1}(s)-\nabla m _{2}(s)\big{)},\big{(}\nabla m_{1}(s)+\nabla m_{2}(s)\big{)}\right\rangle_{ \mathbb{R}^{3}}m_{2}(s)\,ds\] \[+\int_{0}^{t}m(s)\times\Delta m_{1}(s)\,ds+\int_{0}^{t}m_{2}(s) \times\Delta m(s)\,ds+\int_{0}^{t}m(s)\times u(s)\,ds\] \[-\alpha\,\bigg{[}\int_{0}^{t}m(s)\times(m_{1}(s)\times u(s))\,ds +\int_{0}^{t}m_{2}(s)\times\big{(}m(s)\times u(s)\big{)}\,ds\bigg{]}+\int_{0}^ {t}V_{n}(s)\,ds\] \[+\int_{0}^{t}(m(s)\times h)\,dW(s)-\alpha\,\bigg{[}\int_{0}^{t}m( s)\times\big{(}m_{1}(s)\times h\big{)}\,dW(s)+\int_{0}^{t}m_{2}(s)\times\big{(}m(s) \times h\big{)}\,dW(s)\bigg{]}, \tag{7.13}\] where \[\int_{0}^{t}V_{n}(s)\,ds=\int_{0}^{t}(m(s)\times h)\,ds+\frac{1}{2}\int_{0}^{ t}((m(s)\times h)\times h)\,ds-\frac{1}{2}\alpha\,\bigg{[}\int_{0}^{t}(m(s) \times(m_{1}(s)\times h))\times h\,ds\] \[+\int_{0}^{t}\left(m_{2}(s)\times(m(s)\times h)\right)\times h\,ds\bigg{]}\] \[-\frac{1}{2}\alpha\left[\int_{0}^{t}m(s)\times((m_{1}(s)\times h) \times h)\,ds+\int_{0}^{t}m_{2}(s)\times((m(s)\times h)\times h)\,ds\right]\] \[+\frac{1}{2}\alpha^{2}\bigg{[}\int_{0}^{t}(m(s)\times(m_{1}(s) \times h))\times(m_{1}(s)\times h)\,ds\] \[+\int_{0}^{t}(m_{2}(s)\times(m(s)\times h))\times(m_{1}(s)\times h )\,ds+\int_{0}^{t}(m_{2}(s)\times(m_{2}(s)\times h))\times(m(s)\times h)\,ds \bigg{]}\] \[+\frac{1}{2}\alpha^{2}\bigg{[}\int_{0}^{t}m(s)\times((m_{1}(s) \times(m_{1}(s)\times h))\times h)\,ds\] \[+\int_{0}^{t}m_{2}(s)\times((m(s)\times(m_{1}(s)\times h))\times h )\,ds+\int_{0}^{t}m_{2}(s)\times((m_{2}(s)\times(m(s)\times h))\times h) \bigg{]}\,ds \tag{7.14}\] For convenience of notation, let us write equation (7.13) as \[m(t)=\sum_{i=1}^{9}\int_{0}^{t}C_{i}\,z_{i}(s)\,ds+\sum_{i=1}^{12}\int_{0}^{t} C_{i}\,z_{i}(s)\,dW(s). \tag{7.15}\] Here \(C_{i},\ i=1,\ldots,12\) are constants accompanying the integrals. Consider the function \(\phi_{5}:L^{2}\to\mathbb{R}\) defined by \[v\mapsto\frac{1}{2}|v|_{L^{2}}^{2}.\] Consider the process \(m\) defined above. We apply the Ito formula [53] to \(\phi_{5}\). That the integrands on the right hand side of the equation (7.15) satisfy the conditions mentioned in [53] can be verified as done in section 6. Applying the Ito formula gives us the following equation: \[\frac{1}{2}\left|m(t)\right|_{L^{2}}^{2}= \frac{1}{2}\left|m(0)\right|_{L^{2}}^{2}+\sum_{i=1}^{9}\int_{0}^{t }C_{i}\left\langle z_{i}(s),m(s)\right\rangle_{L^{2}}\,ds+\sum_{i=10}^{12} \int_{0}^{t}C_{i}\left\langle z_{i}(s),m(s)\right\rangle_{L^{2}}\,dW(s)\] \[+\frac{1}{2}\int_{0}^{t}\big{|}G\big{(}m(s)\big{)}\big{|}_{L^{2}} ^{2}\,ds, \tag{7.16}\] for all \(t\in[0,T]\)\(\mathbb{P}\)-a.s. Let us denote the last term on the right hand side of the above equality by \(Z_{13}\). Note that since \(m_{1}\) and \(m_{2}\) have the same initial data, \(m(0)=0\), \(\mathbb{P}=a.s.\) For the sake of simplicity, we write some calculations separately and then combining them gives the desired result. 
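Before doing so, we note that the splitting of the difference terms in (7.13) and (7.14) (for instance, the appearance of \(m\times\Delta m_{1}\) together with \(m_{2}\times\Delta m\)) rests on the elementary bilinearity identity \[a_{1}\times b_{1}-a_{2}\times b_{2}=(a_{1}-a_{2})\times b_{1}+a_{2}\times(b_{1}-b_{2}),\qquad a_{1},a_{2},b_{1},b_{2}\in\mathbb{R}^{3},\] applied with \(a_{i}=m_{i}\) and \(b_{i}=\Delta m_{i}\), \(u\), or \(h\), and iterated for the nested cross products.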
**Calculation for \(z_{1}\).** For each \(t\in[0,T]\), the following equality holds \(\mathbb{P}^{\prime}\)-a.s., see (3.2) \[\int_{0}^{t}\left\langle\Delta m(s),m(s)\right\rangle_{(H^{1})^{\prime}}\,ds =-\int_{0}^{t}\left|\nabla m(s)\right|^{2}\,ds.\] The negative sign here implies that this term goes to the left hand side of the equality with a positive coefficient and hence can be used to balance the other \(\int_{0}^{t}\left|\nabla m(s)\right|_{L^{2}}^{2}ds\) terms coming from some of the other estimates. **Calculations for the terms \(z_{2}\) and \(z_{3}\).** The bound on the terms is calculated below. By Holder's inequality, \[\int_{0}^{t}\left\langle\left|\nabla m_{1}(s)\right|_{\mathbb{R} ^{3}}^{2}m(s),m(s)\right\rangle_{L^{2}}\,ds \leq C\int_{0}^{t}\left|\nabla m_{1}\right|_{L^{2}}^{2}\left|m \right|_{L^{\infty}}^{2}\,ds\] \[\text{(By Agmon's inequality)} \leq C\int_{0}^{t}\left|\nabla m_{1}\right|_{L^{2}}^{2}\left|m \right|_{L^{2}}\left|m\right|_{H^{1}}\,ds\] \[\leq C\int_{0}^{t}|\nabla m_{1}|_{L^{2}}^{2}\,|m|_{L^{2}}\,\left[|m|_ {L^{2}}+|\nabla m|_{L^{2}}\right]ds\] \[\leq C\int_{0}^{t}|\nabla m_{1}|_{L^{2}}^{2}\,|m|_{L^{2}}^{2}\,\,ds +C\int_{0}^{t}|\nabla m_{1}|_{L^{2}}^{2}\,|m|_{L^{2}}\,|\nabla m|_{L^{2}}\,\,ds\] \[\text{(By Young's inequality)} \leq C\int_{0}^{t}|\nabla m_{1}|_{L^{2}}^{2}\,|m|_{L^{2}}^{2}\,\,ds +C^{2}\frac{C(\varepsilon)}{2}\int_{0}^{t}|\nabla m_{1}|_{L^{2}}^{4}\,|m(s)|_{ L^{2}}^{2}\,\,ds\] \[+\frac{\varepsilon}{2}\int_{0}^{t}|\nabla m(s)|_{L^{2}}^{2}\,\,ds.\] Here \(\varepsilon>0\) will be chosen later. The above sequence of inequalities uses the inequality (5.36) along with Young's inequality. \[\int_{0}^{t}\left\langle\left\langle\nabla m_{1}(s),\nabla m(s) \right\rangle_{\mathbb{R}^{3}}m_{2}(s),m(s)\right\rangle_{L^{2}}\,ds \leq\int_{0}^{t}|\nabla m_{1}(s)|_{L^{2}}\,|m_{2}(s)|_{L^{ \infty}}\left|\nabla m(s)\right|_{L^{2}}\left|m(s)\right|_{L^{\infty}}\,\,ds\] (Since \(\left|m_{2}(s)\right|_{L^{\infty}}=1\), Agmon's inequality) \leq C\int_{0}^{t}|\nabla m_{1}(s)|_{L^{2}}\,|\nabla m(s)|_{L^{2 }}\,|m(s)|_{L^{2}}^{\frac{1}{2}}\,|m(s)|_{H^{1}}^{\frac{1}{2}}\,\,ds\] \[\leq C\int_{0}^{t}|\nabla m_{1}(s)|_{L^{2}}\,|\nabla m(s)|_{L^{2} }\,|m(s)|_{L^{2}}^{\frac{1}{2}}\,\bigg{[}\,|m(s)|_{L^{2}}^{\frac{1}{2}}\] \[\quad+\left|\nabla m(s)\right|_{L^{2}}^{\frac{1}{2}}\bigg{]}\,ds\] \[\leq C\int_{0}^{t}|\nabla m_{1}(s)|_{L^{2}}\,|\nabla m(s)|_{L^{2} }\,|m(s)|_{L^{2}}\,\,ds\] \[\quad+C\int_{0}^{t}|\nabla m_{1}(s)|_{L^{2}}\,|m(s)|_{L^{2}}^{ \frac{1}{2}}\,|\nabla m(s)|_{L^{2}}^{\frac{3}{2}}\,ds\] \[\text{(By Young's inequality for $p=q=2$)} \leq C\frac{C(\varepsilon)}{2}\int_{0}^{t}|\nabla m_{1}(s)|_{L^{2 }}^{2}\,|m(s)|_{L^{2}}^{2}\,\,ds\] \[+\frac{\varepsilon}{2}\int_{0}^{t}|\nabla m(s)|_{L^{2}}^{2}\,\,ds\] \[(\text{By Young's inequality for $p=4,q=\frac{4}{3}$}) +C^{4}\frac{C(\varepsilon)}{4}\int_{0}^{t}|\nabla m_{1}(s)|_{L^{2 }}^{4}\,|m(s)|_{L^{2}}^{2}\,\,ds\] \[+\frac{3\varepsilon}{4}\int_{0}^{t}|\nabla m(s)|_{L^{2}}^{2}\,\,ds\] \[=C(\varepsilon)\int_{0}^{t}|m(s)|_{L^{2}}^{2}\,\Big{[}|\nabla m_{1 }(s)|_{L^{2}}^{2}+|\nabla m_{1}(s)|_{L^{2}}^{4}\Big{]}\,\,ds\] \[+\frac{5\varepsilon}{4}\int_{0}^{t}|\nabla m(s)|_{L^{2}}^{2}\,\,ds.\] Similarly, \[\int_{0}^{t}\left\langle\left\langle\nabla m_{2}(s),\nabla m(s) \right\rangle_{\mathbb{R}^{3}}m_{2}(s),m(s)\right\rangle_{L^{2}}ds\leq C(\varepsilon)\int_{0}^{t}|m(s)|_{L^{2}}^{2}\,\Big{[}|\nabla m _{2}(s)|_{L^{2}}^{2}+|\nabla m_{2}(s)|_{L^{2}}^{4}\Big{]}\,\,ds\] \[+\frac{5\varepsilon}{4}\int_{0}^{t}|\nabla 
m(s)|_{L^{2}}^{2}\,\,ds.\] **Note:** All the constants have been condensed into \(C(\varepsilon)\). Hence \[\int_{0}^{t}\left\langle\left|\nabla m_{1}(s)\right|_{\mathbb{R}^ {3}}^{2}m_{1}(s)-|\nabla m_{2}(s)|_{\mathbb{R}^{3}}^{2}m_{2}(s),m(s)\right\rangle _{L^{2}}\,ds\] \[\leq C(\varepsilon)\int_{0}^{t}|m(s)|_{L^{2}}^{2}\,\Big{[}| \nabla m_{1}(s)|_{L^{2}}^{2}+|\nabla m_{1}(s)|_{L^{2}}^{4}+|\nabla m_{2}(s)|_{L^ {2}}^{2}+|\nabla m_{2}(s)|_{L^{2}}^{4}\Big{]}\,\,ds\] \[+\frac{\varepsilon}{2}\int_{0}^{t}\left|\nabla m(s)\right|_{L^{2}}^{2}\, ds+\frac{\varepsilon}{2}\int_{0}^{t}\left|\nabla m(s)\right|_{L^{2}}^{2}\,ds.\] Here \(\varepsilon>0\) will be chosen later. The second equality is basically the way \(m\times\Delta m\) is interpreted (as an element of \((H^{1})^{\prime}\)). The fourth inequality comes from the use of Young's \(\varepsilon\) inequality. Combining the constants into one constant \(C(\varepsilon)\), we get \[\left|\int_{0}^{t}\left\langle z_{4}(s)+z_{5}(s),m(s)\right\rangle _{L^{2}}ds\right|\leq C(\varepsilon)\int_{0}^{t}\left[\left|\nabla m_{2}(s) \right|_{L^{2}}^{2}\right.\] \[\left.+\left|\nabla m_{2}(s)\right|_{L^{2}}^{4}\,\right]\left|m(s )\right|_{L^{2}}^{2}\,ds+\varepsilon\int_{0}^{t}\left|\nabla m(s)\right|_{L^ {2}}^{2}\,ds. \tag{7.17}\] Here the constants depending on \(\varepsilon\) are combined into one constant suitable \(C(\varepsilon)\). **Calculation for \(z_{6}\).** Concerning the first term with the control process \(u\), that is \(z_{6}\), we observe that \[\int_{0}^{t}\left\langle z_{6}(s),m(s)\right\rangle_{L^{2}}\,ds=\int_{0}^{t} \left\langle m(s)\times u(s),m(s)\right\rangle_{L^{2}}\,ds=0.\] **Calculation for \(z_{7},z_{8}\).** For the remaining terms (with the control process \(u\)), the following estimate can be done. By Holder's inequality, followed first by Agmon's inequality and then by Young's inequality implies that for \(\varepsilon>0\), there exists constants \(C,C(\varepsilon)\) such that for \(t\in[0,T]\), \[\int_{0}^{t} \big{|}\big{\langle}m_{1}(s)\times\big{(}m_{1}(s)\times u(s) \big{)}-m_{2}(s)\times\big{(}m_{2}(s)\times u(s)\big{)},m(s)\big{\rangle}_{L^{ 2}}\,|\,ds\] \[\leq C\int_{0}^{t}\left(1+\left|u(s)\right|_{L^{2}}^{2}\right)|m( s)|_{L^{2}}^{2}\ ds+\frac{\varepsilon}{2}\int_{0}^{t}\left|\nabla m(s)\right|_{L^{ 2}}^{2}\,ds.\] The terms that remain are the terms corresponding to the noise term, that is \(G(m)\) (\(z_{10},z_{11},z_{12}\)), Ito to Stratonovich correction term \(DG(m)(G(m))\), i.e. (\(z_{9}\)), along with the last term on the right hand side of (7.16), i.e. \(Z_{13}\). **Calculations for the terms \(z_{9}\) and \(Z_{13}\).** By Lemma 2.1 and Proposition 2.2, both \(z_{9},Z_{13}\) are locally Lipschitz. Hence it is sufficient to show that the processes \(m_{1}\) and \(m_{2}\) lie in a ball in the space \(L^{2}\). In this direction, by the continuous embedding \(L^{\infty}\hookrightarrow L^{2}\) and the Theorem 3.3, there exists a constant \(C>0\) such that \[|m_{i}|_{L^{2}}\leq C|m_{i}|_{L^{\infty}}\leq 2C. \tag{7.18}\] for \(i=1,2\). The processes \(m_{1}(s)\) and \(m_{2}(s)\) thus take values in a ball in \(L^{2}\). 
Hence there exists a constant \(C_{1}>0\) such that for each \(s\in[0,T]\), \[|G(m_{1}(s))-G(m_{2}(s))|_{L^{2}}\leq C_{1}|m_{1}(s)-m_{2}(s)|_{L^{2}}=C_{1}|m(s)|_{L^{2}}.\] Similarly, there exists another constant \(C_{2}>0\) such that for each \(s\in[0,T]\), \[\big{|}DG\big{(}m_{1}(s)\big{)}\left[G\big{(}m_{1}(s)\big{)}\right]-DG\big{(}m_{2}(s)\big{)}\left[G\big{(}m_{2}(s)\big{)}\right]\big{|}_{L^{2}} \leq C_{2}\left|G(m_{1}(s))-G\big{(}m_{2}(s)\big{)}\right|_{L^{2}}\] \[\leq C_{1}C_{2}|m_{1}(s)-m_{2}(s)|_{L^{2}}\] \[=C_{1}C_{2}|m(s)|_{L^{2}}.\] Hence by the Cauchy-Schwarz inequality and the above estimate, we have \[\int_{0}^{T}\left\langle G(m_{1}(s))-G(m_{2}(s)),m(s)\right\rangle_{L^{2}}\,ds \leq\int_{0}^{T}\left|G(m_{1}(s))-G(m_{2}(s))\right|_{L^{2}}\left|m(s)\right|_{L^{2}}\,ds\] \[\leq C_{1}\int_{0}^{T}\left|m(s)\right|_{L^{2}}^{2}\,ds.\] Similarly, \[\int_{0}^{t}\left\langle DG\big{(}m_{1}(s)\big{)}\left[G\big{(}m_{1}(s)\big{)}\right]-DG\big{(}m_{2}(s)\big{)}\left[G\big{(}m_{2}(s)\big{)}\right],m(s)\right\rangle_{L^{2}}\,ds\] \[\leq C_{1}C_{2}\int_{0}^{t}\left|m(s)\right|_{L^{2}}^{2}\,ds.\] Regarding the correction term that appears after the use of the Ito formula, by the locally Lipschitz continuity of \(G\), there exists a constant \(C>0\) such that \[\int_{0}^{t}\left|G(m_{1}(s))-G(m_{2}(s))\right|_{L^{2}}^{2}\,ds\leq C\int_{0}^{t}\left|m(s)\right|_{L^{2}}^{2}\,ds.\] Now we combine (7.16) and the above mentioned estimates. We collect the integrals with similar integrands. While doing this, we also combine the corresponding constants to simplify the presentation. Thus there exists a constant \(C>0\) such that \[|m(t)|_{L^{2}}^{2}+(\alpha\,-4\varepsilon)\int_{0}^{t}\left|\nabla m(s)\right|_{L^{2}}^{2}\,ds\leq|m(0)|_{L^{2}}^{2}\] \[+\int_{0}^{t}\left|m(s)\right|_{L^{2}}^{2}\left[C+C\bigg{(}\left|\nabla m_{1}(s)\right|_{L^{2}}^{2}+\left|\nabla m_{1}(s)\right|_{L^{2}}^{4}\right.\] \[\left.+\left|\nabla m_{2}(s)\right|_{L^{2}}^{2}+\left|\nabla m_{2}(s)\right|_{L^{2}}^{4}\bigg{)}+\left|u(s)\right|_{L^{2}}+\left|u(s)\right|_{L^{2}}^{2}\bigg{]}\,ds\] \[+\int_{0}^{t}\left\langle G\big{(}m_{1}(s)\big{)}-G\big{(}m_{2}(s)\big{)},m(s)\right\rangle_{L^{2}}\,dW(s).\] We choose \(\varepsilon>0\) such that \(\alpha\,-4\varepsilon>0\). We recall that the processes \(m_{1}\) and \(m_{2}\) have the same initial condition \(m_{0}\). Hence \(\left|m(0)\right|_{L^{2}}=0\). Also, by the choice of \(\varepsilon\), the term \((\alpha\,-4\varepsilon)\int_{0}^{t}\left|\nabla m(s)\right|_{L^{2}}^{2}\,ds\) is non-negative. Let \(C>0\) be a constant. For \(t\in[0,T]\), let \[\Phi_{C}(t)=C+C\left(\left|\nabla m_{1}(t)\right|_{L^{2}}^{2}+\left|\nabla m_{1}(t)\right|_{L^{2}}^{4}+\left|\nabla m_{2}(t)\right|_{L^{2}}^{2}+\left|\nabla m_{2}(t)\right|_{L^{2}}^{4}\right)+\left|u(t)\right|_{L^{2}}+\left|u(t)\right|_{L^{2}}^{2}. \tag{7.19}\] Hence \[\left|m(t)\right|_{L^{2}}^{2}\leq\int_{0}^{t}\Phi_{C}(s)\left|m(s)\right|_{L^{2}}^{2}\,ds+\int_{0}^{t}\left\langle G\big{(}m_{1}(s)\big{)}-G\big{(}m_{2}(s)\big{)},m(s)\right\rangle_{L^{2}}\,dW(s). \tag{7.20}\] The application of the Ito formula gives \[\left|m(t)\right|_{L^{2}}^{2}e^{-\int_{0}^{t}\Phi_{C}(s)\,ds}\leq\int_{0}^{t}e^{-\int_{0}^{s}\Phi_{C}(r)\,dr}\left\langle G\big{(}m_{1}(s)\big{)}-G\big{(}m_{2}(s)\big{)},m(s)\right\rangle_{L^{2}}\,dW(s). \tag{7.21}\] Some details of this calculation are given in Appendix B. A similar idea has been used in [14, 60]. By the definition of \(\Phi_{C}\), \(\Phi_{C}(t)\geq 0\) for each \(t\in[0,T]\), \(\mathbb{P}\)-a.s.,
and the bounds established in Theorem 3.3 imply that for any \(t\in[0,T]\), \[\int_{0}^{t}\Phi_{C}(s)\,ds<\infty,\ \mathbb{P}-\text{a.s.} \tag{7.22}\] Hence \(\mathbb{P}-\)a.s., \[e^{-\int_{0}^{t}\Phi_{C}(s)\,ds}\leq 1. \tag{7.23}\] The mapping \(G\) is Lipschitz on balls. The processes \(m_{1},m_{2}\) satisfy the constraint condition (3.9), and hence are uniformly bounded. Hence the processes \(m\) is also uniformly bounded. This implies that the stochastic integral on the right hand side of the inequality (7.21) is a martingale. Thus taking the expectation on both the sides of the inequality (7.21), we get \[\mathbb{E}\left|m(t)\right|_{L^{2}}^{2}e^{-\int_{0}^{t}\Phi_{C}(s)\,ds}\leq \mathbb{E}\int_{0}^{t}e^{-\int_{0}^{s}\Phi_{C}(r)\,dr}\left\langle G\big{(}m_ {1}(s)\big{)}-G\big{(}m_{2}(s)\big{)},m(s)\right\rangle_{L^{2}}\,dW(s)=0. \tag{7.24}\] Hence \[\mathbb{E}\left|m(s)\right|_{L^{2}}^{2}e^{-\int_{0}^{t}\Phi_{C}(s)\,ds}\leq 0.\] But for each \(t\in[0,T]\), \(e^{-\int_{0}^{t}\Phi_{C}(s)\,ds}\geq 0\). Hence \[\left|m(t)\right|_{L^{2}}^{2}=0\ \mathbb{P}-\text{a.s.} \tag{7.25}\] This concludes the proof of Theorem 3.4. We now define what we mean by a strong solution to the problem (3.7). **Definition 7.4** (Strong solution).: _The problem (3.7) is said to admit a strong solution if the following holds: Let \((\Omega,\mathbb{F},\mathcal{F},\mathbb{P})\) be a filtered probability space along with initial data \(m_{0}\) and a control process \(u\) on the space, satisfying Assumption 3.1. Then there exists an \(\mathbb{F}\)-adapted process \(m\) on the said probability space such that the tuple \((\Omega,\mathbb{F},\mathcal{F},\mathbb{P},W,m,u)\) is a weak martingale solution to the problem (3.7) according to Definition 3.2._ The existence of a strong solution now follows as a consequence, which is stated in the following result. **Theorem 7.5**.: _The problem (3.7) for a given initial data \(m_{0}\) and a control process \(u\), both satisfying the assumptions mentioned in the Theorem 3.3, has a pathwise unique strong solution as defined in Definition 7.4. Moreover, the strong solution is unique in law._ Proof of Theorem 7.5.: To prove the existence of a strong solution, we apply Theorem 2 from [52], which is a special case of Theorem 12.1 in the same reference. First, Theorem 3.3 ensures that the problem (3.7) admits a weak martingale solution for initial data and control process satisfying Assumption 3.1. Further, Theorem 3.4 ensures that the obtained solution is pathwise unique in the following sense. Let \((\Omega,\mathbb{F},\mathcal{F},\mathbb{P},m_{1},u,W)\) and \((\Omega,\mathbb{F},\mathcal{F},\mathbb{P},m_{2},u,W)\) be two weak martingale solutions corresponding to the same initial data \(m_{0}\) and control \(u\), on the same probability space. Let \(m_{1}\) and \(m_{2}\) satisfy the bounds in (5) of Definition 3.2. Then for each \(t\in[0,t]\), we have \(m_{1}(t)=m_{2}(t),\ \mathbb{P}-a.s.\). Let \(C_{0}([0,T];\mathbb{R})\) denote the space \[\left\{v\in C([0,T];\mathbb{R}):v(0)=0\right\}.\] By part (3) of Theorem 12.1, Theorem 13.2 and Lemma E, [52], there exists a Borel measurable map \[J:C_{0}([0,T];\mathbb{R})\to C([0,T];L^{2})\cap L^{2}(0,T;H^{1})\] such that the following holds. Let \((\Omega,\mathcal{F},\mathbb{F},\mathbb{P})\) be a given filtered probability space along with a control process \(u\), all satisfying Assumption 3.1. Let \(W=(W(t))_{t\in[0,T]}\) be an arbitrary real valued Wiener process on the said space. Let \(m=J\circ W\). 
That is, \[m:\Omega\ni\omega\mapsto J(W(\omega))\in C([0,T];L^{2})\cap L^{2}(0,T;H^{1}).\] Then, the tuple \((\Omega,\mathcal{F},\mathbb{F},\mathbb{P},W,m,u)\) is a weak martingale solution to the problem (3.7) on the space \((\Omega,\mathcal{F},\mathbb{F},\mathbb{P})\). Therefore, given a filtered probability space \((\Omega,\mathbb{F},\mathcal{F},\mathbb{P})\) along with initial data \(m_{0}\) and a control process \(u\) on the space, satisfying Assumption 3.1, we have shown that there exists a \(\mathbb{F}\)-adapted process \(m\) such that the tuple \((\Omega,\mathbb{F},\mathcal{F},\mathbb{P},W,m,u)\) is a weak martingale solution to the problem (3.7), thus showing the existence of a strong solution according to Definition 7.4. ## 8. Further regularity: Proof of Theorem 3.5 So far we have shown that there exists a strong solution to the problem (3.7) with the initial condition and the given control satisfying the assumptions given in Theorem 3.3. This section is dedicated to proving further regularity for the above mentioned strong solution. Recall that by definition \[Av=-\Delta v\ \text{for}\ v\in D(A),\] and \[D(A)=\left\{v\in H^{2}:\frac{\partial v}{\partial\nu}=0\ \text{on}\ \partial\mathcal{O}\right\},\] where \(\nu\) denotes the outward pointing normal vector and \(\partial\mathcal{O}\) denotes the boundary of \(\mathcal{O}\). In other words, the domain of \(A\) is the subspace of elements of \(H^{2}\) that satisfy the Neumann boundary condition. We also recall that \[A_{1}=I_{L^{2}}+A.\] Here \(I_{L^{2}}\) denotes the identity operator on the space \(L^{2}\). Thus showing the bound for \(\Delta m\) should be enough since \(m\) is already bounded in \(L^{2}\). The existence of the process \(m\) is guaranteed by Theorem 7.5. What remains to show is that \(m\) satisfies the inequality (3.13). Let \(\{e^{-tA}\}_{t\in[0,T]}\) denote the semigroup generated by the operator \(A\). The solution \(m\) to the problem (3.7) can be written in mild form, see for example, Section 6 in [25], or the proof of first part of Theorem 9.15 in [56], as \[m(t)= e^{-\alpha\,tA}m_{0}+\alpha\,\int_{0}^{t}e^{-\alpha(t-s)A}(|\nabla m(s)| _{\mathbb{R}^{3}}^{2})m(s)\,ds+\int_{0}^{t}e^{-\alpha(t-s)A}\left(m(s)\times \Delta m(s)\right)\,ds\] \[-\alpha\,\int_{0}^{t}e^{-\alpha(t-s)A}\left[m(s)\times\left(m(s) \times u(s)\right)\right]\,ds+\int_{0}^{t}e^{-\alpha(t-s)A}\big{[}\big{(}m(s) \times u(s)\big{)}\big{]}\,ds\] \[+\alpha\,\int_{0}^{t}e^{-\alpha(t-s)A}\big{(}m(s)\times(m(s) \times h)\big{)}\,dW(s)+\int_{0}^{t}e^{-\alpha(t-s)A}(m(s)\times h)\,dW(s)\] \[+\frac{1}{2}\int_{0}^{t}e^{-\alpha(t-s)A}\left[DG\big{(}m(s)\big{)} \right]G\big{(}m(s)\big{)}\,ds. \tag{8.1}\] **Idea of the proof of (3.13):** The proof will primarily consist of two steps. Step 1 shows the bound on the first term in the inequality (3.13). We consider the above mentioned mild formulation (8.1). Instead of showing the bound directly on the process \(m\), the bound will be shown on each term on the right hand side of (8.1). Step 2 will use the bound so obtained to show a bound on the second term in the inequality (3.13). The following properties of the operators \(A,A_{1}\) will be used throughout the proof. 1. \(e^{-tA}\) is ultracontractive, see Section 7.2.7 in [4]. That is, for \(1\leq p\leq q\leq\infty\), there exists a constant \(C>0\) such that \[\big{|}e^{-tA}f\big{|}_{L^{q}}\leq\frac{C}{t^{\frac{1}{2}\left(\frac{1}{p}- \frac{1}{q}\right)}}\,|f|_{L^{p}}\,\text{ for }f\in L^{p},\ t>0.\] (8.2) 2. 
\(A\) has the maximal regularity property. Let \(f\in L^{2}\left(0,T;L^{2}\right)\) and \[v(t)=\int_{0}^{t}e^{-(t-s)A}f(s)\,ds,\quad t\in[0,T].\] Then we have \[\int_{0}^{t}\left|Av(t)\right|_{L^{2}}^{2}\,dt\leq C\int_{0}^{t} \left|f(t)\right|_{L^{2}}^{2}\,dt.\] (8.3) 3. The operator \(A_{1}=I+A\) generates a semigroup (denoted by \(e^{-tA_{1}}\)), see Theorem 1.1 in [55]. Thus using (8.2) for \(f\in L^{p}\) and \(t>0\), we get \[\big{|}e^{-tA_{1}}f\big{|}_{L^{q}} =\big{|}e^{-tA}e^{-tI}f\big{|}_{L^{q}}\] \[\leq C\,\big{|}e^{-tA}f\big{|}_{L^{q}}\] \[\leq\frac{C}{t^{\frac{1}{2}\left(\frac{1}{p}-\frac{1}{q}\right)}} \,|f|_{L^{p}}\,.\] (8.4) 4. The operators \(A^{\delta}e^{-tA}\) and \(A_{1}^{\delta}e^{-tA_{1}}\) are bounded on \(L^{2}\), see Theorem 6.13 in [55]. Moreover, there exists a constant \(C>0\) such that \[\big{|}A^{\delta}e^{-tA}\big{|}\leq\frac{C}{t^{\delta}}\] (8.5) and \[\big{|}A_{1}^{\delta}e^{-tA_{1}}\big{|}\leq\frac{C}{t^{\delta}}.\] (8.6) Here \(\big{|}A^{\delta}e^{-tA}\big{|}\) and \(\big{|}A_{1}^{\delta}e^{-tA_{1}}\big{|}\) denote the operator norms of \(A^{\delta}e^{-tA}\) and \(A_{1}^{\delta}e^{-tA_{1}}\) respectively. **Step 1:** We show that \[\mathbb{E}\int_{0}^{T}|\nabla m(t)|_{L^{4}}^{4}\,dt<\infty. \tag{8.7}\] The following Sobolev embedding holds for \(\delta\in\left(\frac{5}{8},\frac{3}{4}\right)\), see Lemma C.1. \[X^{\delta}\hookrightarrow W^{1,4}.\] It is thus sufficient to prove the following stronger estimate to show (8.7). \[\mathbb{E}\int_{0}^{T}\left|A_{1}^{\delta}m(t)\right|_{L^{2}}^{4}\,dt<\infty. \tag{8.8}\] We recall that for \(v\in X^{\delta}=D(A_{1}^{\delta})\), \[\left|v\right|_{X^{\delta}}=\left|A_{1}^{\delta}v\right|_{L^{2}}.\] The step will be further divided into \(3\) sub steps. The first dealing with the first two terms appearing in the equality (8.1). In the second sub step, we consider a function \(f\) satisfying certain bounds and show the bounds for this \(f\). The idea is that the remaining terms in (8.1) (except the terms with the stochastic integral) fall into this category and hence it suffices to show the calculations for \(f\). The third sub step deals with the terms that contain the stochastic integral. **Sub step 1:** Consider the first term \(e^{-tA}m_{0}\). \[\begin{split}|A_{1}^{\delta}e^{-tA}m_{0}|_{L^{2}}^{4}& =|\left(I+A\right)^{\delta}e^{tI_{L^{2}}}e^{-t(A+I)}m_{0}|_{L^{2}}^ {4}\ (\text{Since }A_{1}=I_{L^{2}}+A)\\ &\leq Ce^{t}|A_{1}^{\delta}e^{-tA_{1}}m_{0}|_{L^{2}}^{4}\ (\text{Since }\left|e^{tI_{L^{2}}}\right|\leq Ce^{t})\\ &\leq Ce^{T}|A_{1}^{\delta-\frac{1}{2}}e^{-tA_{1}}A_{1}^{\frac{1} {2}}m_{0}|_{L^{2}}^{4}\ (\text{Since }\delta=\delta-\frac{1}{2}+\frac{1}{2})\\ &\leq\frac{C}{t^{4(\frac{2\delta}{2})}}\left|A_{1}^{\frac{1}{2}}m _{0}\right|_{L^{2}}^{4}\ (\text{By (\ref{eq:1}))}\\ &\leq\frac{C}{t^{4\delta-2}}\left|m_{0}\right|_{H^{1}}^{4}.\ (\text{Since }\left|A_{1}^{\frac{1}{2}}\cdot\right|_{L^{2}}=\left|\cdot\right|_{H^{1}}) \end{split}\] Hence \[\int_{0}^{T}|A_{1}^{\delta}e^{-tA}m_{0}|_{L^{2}}^{4}\,dt\leq|m_{0}|_{H^{1}}^{ 4}\int_{0}^{T}\frac{C}{t^{4\delta-2}}\,dt.\] Since \(\delta<\frac{3}{4}\), the integral on the right hand side of the above inequality is finite. Hence there exists a constant \(C>0\) such that \[\int_{0}^{T}|A_{1}^{\delta}e^{-tA}m_{0}|_{L^{2}}^{4}\,dt\leq C. \tag{8.9}\] And hence \[\mathbb{E}\int_{0}^{T}|A_{1}^{\delta}e^{-tA}m_{0}|_{L^{2}}^{4}\,dt\leq C. \tag{8.10}\] For the second term, first we observe the following. Let \(t\in[0,T]\). 
\[\int_{\mathcal{O}}\left|\nabla m(t,x)\right|_{\mathbb{R}^{3}}^{2}\left|m(t,x)\right|_{\mathbb{R}^{3}}\,dx=\int_{\mathcal{O}}\left|\nabla m(t,x)\right|_{\mathbb{R}^{3}}^{2}\,dx\leq\left|m(t)\right|_{H^{1}}^{2}.\] Hence \[\sup_{t\in[0,T]}\int_{\mathcal{O}}\left|\nabla m(t,x)\right|_{\mathbb{R}^{3}}^{2}\left|m(t,x)\right|_{\mathbb{R}^{3}}\,dx\leq\sup_{t\in[0,T]}\left|m(t)\right|_{H^{1}}^{2}.\] For simplicity of notation, let \(g(s)=\left|\nabla m(s)\right|_{\mathbb{R}^{3}}^{2}m(s)\). \[\begin{split}\left|A_{1}^{\delta}e^{-(t-s)A}g(s)\right|_{L^{2}}&\leq C\left|A_{1}^{\delta}e^{-(t-s)A_{1}}g(s)\right|_{L^{2}}\\ &=C\left|A_{1}^{\delta}e^{-\frac{(t-s)}{2}A_{1}}e^{-\frac{(t-s)}{2}A_{1}}g(s)\right|_{L^{2}}\\ &\leq C\left|A_{1}^{\delta}e^{-\frac{(t-s)}{2}A_{1}}\right|\left|e^{-\frac{(t-s)}{2}A_{1}}g(s)\right|_{L^{2}}\\ &\leq C\left|A_{1}^{\delta}e^{-\frac{(t-s)}{2}A_{1}}\right|\frac{1}{(t-s)^{\frac{1}{4}}}\left|g(s)\right|_{L^{1}}\ (\text{by (8.4) with }p=1,q=2)\\ &\leq\frac{C}{\left(t-s\right)^{\delta+\frac{1}{4}}}\left|g(s)\right|_{L^{1}}\ (\text{by (8.6)})\\ &\leq\frac{C}{\left(t-s\right)^{\delta+\frac{1}{4}}}\left|m(s)\right|_{H^{1}}^{2}.\end{split}\] Therefore, \[\int_{0}^{T}\left|\int_{0}^{t}A_{1}^{\delta}e^{-(t-s)A}g(s)\,ds\right|_{L^{2}}^{4}\,dt\leq C\sup_{s\in[0,T]}\left|m(s)\right|_{H^{1}}^{8}\int_{0}^{T}\left(\int_{0}^{t}\frac{1}{\left(t-s\right)^{\delta+\frac{1}{4}}}\,ds\right)^{4}\,dt.\] Since \(\delta<\frac{3}{4}\), that is \(\delta+\frac{1}{4}<1\), the integral on the right hand side is finite. Hence there exists a constant \(C>0\) such that \[\mathbb{E}\int_{0}^{T}\left|\int_{0}^{t}A_{1}^{\delta}e^{-(t-s)A}g(s)\,ds\right|_{L^{2}}^{4}\,dt\leq C. \tag{8.11}\] **Sub step 2:** Consider a function \(f\in L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)\). There exist constants \(C_{1},C_{2}>0\) such that \[\begin{split}\left|A_{1}^{\delta}e^{-(t-s)A}f(s)\right|_{L^{2}}&=\left|A_{1}^{\delta}e^{-(t-s)A_{1}}e^{(t-s)I_{L^{2}}}f(s)\right|_{L^{2}}\\ &\leq\left|A_{1}^{\delta}e^{-(t-s)A_{1}}\right|\left|e^{(t-s)I_{L^{2}}}\right|\left|f(s)\right|_{L^{2}}\\ &\leq C_{1}\left|A_{1}^{\delta}e^{-(t-s)A_{1}}\right|\left|f(s)\right|_{L^{2}}\ \left(\text{since }\left|e^{(t-s)I_{L^{2}}}\right|\leq C_{1}\right)\\ &\leq\frac{C_{2}}{\left(t-s\right)^{\delta}}\left|f(s)\right|_{L^{2}}.\ (\text{by (8.6)})\end{split}\] Therefore, replacing the constants \(C_{1},C_{2}\) above by a suitable constant \(C\), we get \[\int_{0}^{T}\left(\int_{0}^{t}\left|A_{1}^{\delta}e^{-(t-s)A}f(s)\right|_{L^{2}}\,ds\right)^{4}\,dt\leq C\int_{0}^{T}\left(\int_{0}^{t}\frac{1}{\left(t-s\right)^{\delta}}\left|f(s)\right|_{L^{2}}\,ds\right)^{4}\,dt.\] Using Young's convolution inequality for \(p=\frac{4}{3}\) and \(q=2\), we get \[\int_{0}^{T}\left(\int_{0}^{t}\frac{1}{\left(t-s\right)^{\delta}}\left|f(s)\right|_{L^{2}}\,ds\right)^{4}\,dt\leq\left(\int_{0}^{T}s^{-\frac{4\delta}{3}}\,ds\right)^{3}\left(\int_{0}^{T}\left|f(s)\right|_{L^{2}}^{2}\,ds\right)^{2}.\] That \(\delta<\frac{3}{4}\) implies \(\frac{4\delta}{3}<1\). Hence the first integral on the right hand side of the above inequality is finite.
Hence \[\int_{0}^{T}\left(\int_{0}^{t}\frac{1}{\left(t-s\right)^{\delta}}\left|f(s)\right|_{L^{2}}\,ds\right)^{4}\,dt\leq C\left(\int_{0}^{T}\left|f(s)\right|_{L^{2}}^{2}\,ds\right)^{2}.\] Therefore \[\mathbb{E}\int_{0}^{T}\left(\int_{0}^{t}\frac{1}{\left(t-s\right)^{\delta}}\left|f(s)\right|_{L^{2}}\,ds\right)^{4}\,dt\leq C\mathbb{E}\left(\int_{0}^{T}\left|f(s)\right|_{L^{2}}^{2}\,ds\right)^{2}<\infty.\] Now consider the remaining terms on the right hand side of the equality (8.1), except for the terms with the Ito integral. By Theorem 3.3, the solution \(m\) takes values on the unit sphere in \(\mathbb{R}^{3}\). By the bounds mentioned in Theorem 3.3 and the Assumption 3.1 on the control process \(u\), we have \[m\times\Delta m\in L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right). \tag{8.12}\] The constraint condition (3.9) implies that \[m\times(m\times\Delta m)\in L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right). \tag{8.13}\] The assumption on \(u\) (Assumption 3.1), along with the constraint condition (3.9) and the assumption on the function \(h\), implies that \[m\times u\in L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right), \tag{8.14}\] \[m\times(m\times u)\in L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right), \tag{8.15}\] and \[DG\left(m\right)\left(G(m)\right)\in L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right). \tag{8.16}\] Note that the Assumption 3.1 has been applied here for \(p=2\). Hence each of the integrands (except for the terms with the Ito integral) takes values in \(L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)\). Hence by replacing \(f\) in the above calculations by the integrands, one can show that each of the terms also satisfies the required bounds. **Sub step 3:** What remains now is the Ito integral term. Recall that by Proposition 2.2 and the bound on the process \(m\) in Theorem 3.3, \[\mathbb{E}\int_{0}^{T}\left|G(m(t))\right|_{H^{1}}^{4}\,dt\leq C\mathbb{E}\int_{0}^{T}\left|m(t)\right|_{H^{1}}^{4}\,dt<\infty. \tag{8.17}\] \[\begin{split}\mathbb{E}\left|\int_{0}^{t}A_{1}^{\delta}e^{-(t-s)A_{1}}G(m(s))\,dW(s)\right|_{L^{2}}^{4}&\leq C\mathbb{E}\left(\int_{0}^{t}\left|A_{1}^{\delta}e^{-(t-s)A_{1}}G(m(s))\right|_{L^{2}}^{2}\,ds\right)^{2}\ \text{(see Proposition 7.3 in [25])}\\ &\leq C\mathbb{E}\left(\int_{0}^{t}\left|A_{1}^{\delta-\frac{1}{2}}e^{-(t-s)A_{1}}A_{1}^{\frac{1}{2}}G(m(s))\right|_{L^{2}}^{2}\,ds\right)^{2}\ \left(\text{since }\delta=\delta-\tfrac{1}{2}+\tfrac{1}{2}\right)\\ &\leq C\mathbb{E}\left(\int_{0}^{t}\frac{1}{(t-s)^{2\delta-1}}\left|A_{1}^{\frac{1}{2}}G(m(s))\right|_{L^{2}}^{2}\,ds\right)^{2}.\ (\text{by (8.6)})\end{split}\] Since \(\delta<\frac{3}{4}\), we have \(2\delta-1<\frac{1}{2}\), so the kernel \((t-s)^{-(2\delta-1)}\) is integrable on \([0,T]\). Arguing as in Sub step 2 (using Young's convolution inequality), using (8.17) and the equality \(\left|A_{1}^{\frac{1}{2}}\cdot\right|_{L^{2}}=\left|\cdot\right|_{H^{1}}\), we obtain \[\mathbb{E}\int_{0}^{T}\left|\int_{0}^{t}A_{1}^{\delta}e^{-(t-s)A_{1}}G(m(s))\,dW(s)\right|_{L^{2}}^{4}\,dt<\infty.\] Combining the three sub steps establishes (8.8), and hence (8.7). This completes Step 1. **Step 2:** We now show that \[\mathbb{E}\int_{0}^{T}\left|A_{1}m(t)\right|_{L^{2}}^{2}\,dt<\infty.\] Since \(A_{1}=I_{L^{2}}+A\) and the process \(m\) is bounded in \(L^{2}\) by Theorem 3.3, it suffices to bound \(\mathbb{E}\int_{0}^{T}\left|\Delta m(t)\right|_{L^{2}}^{2}\,dt\). The constraint condition (3.9) implies that for Leb-a.a. \(x\in\mathcal{O}\) and all \(t\in[0,T]\), \[\left|m(t,x)\times\Delta m(t,x)\right|_{\mathbb{R}^{3}}^{2}+\left|\left\langle m(t,x),\Delta m(t,x)\right\rangle_{\mathbb{R}^{3}}\right|^{2}=\left|\Delta m(t,x)\right|_{\mathbb{R}^{3}}^{2}.\] Hence to show the bound on the second term, it is sufficient to show the corresponding bound on the two terms on the left hand side of the above equality. For the second term, \[\mathbb{E}\int_{0}^{T}\int_{\mathcal{O}}\left|\left\langle m(t,x),\Delta m(t,x)\right\rangle_{\mathbb{R}^{3}}\right|^{2}\,dx\,dt=\mathbb{E}\int_{0}^{T}\int_{\mathcal{O}}\left|\nabla m(t,x)\right|_{\mathbb{R}^{3}}^{4}\,dx\,dt.\] The right hand side of the above equality is finite because of the bound (8.7) in Step 1. This, along with the bound in Theorem 3.3 (for the first term), concludes the proof of the bound on the second term. Hence the proof of the inequality (3.13), and therefore of Theorem 3.5, is complete.

**Lemma 8.1**.: _The process \(m\) lies in the space \(C\left(\left[0,T\right];H^{1}\right)\,\mathbb{P}-\text{a.s.}\)._

We postpone the proof of this lemma to Appendix A.

## 9.
Proof of Theorem 3.7 : Optimal control The objective of this section is to show that there exists an optimal control to the problem (3.7), with an appropriate admissibility criterion. We fix a probability space \((\Omega,\mathcal{F},\mathbb{P})\) as in Section 3. _Outline of the section:_ We start by giving an equivalent equation (9.1) to equation (3.7). We follow it up with the definition of a _strong martingale solution_ to the problem in Definition 9.1. Assumption 9.3 outlines the assumption that is required on the control processes. The class \(\mathcal{U}_{ad}(m_{0},T)\) of admissible solutions is then defined. This is followed by a proof for Theorem 3.7. For the remainder of this section, we will consider the following equation. For \(t\in[0,T]\) \[\begin{split} m(t)=&\int_{0}^{t}m(s)\times\Delta m( s)\,ds-\alpha\,\int_{0}^{t}m(s)\times(m(s)\times u(s))\,ds\\ &+\alpha\,\int_{0}^{t}\Delta m(s)\,ds+\alpha\,\int_{0}^{t}|\nabla m (s)|_{\mathbb{R}^{3}}^{2}m(s)\,ds\\ &+\int_{0}^{t}m(s)\times u(s)\,ds+\frac{1}{2}\int_{0}^{t}\left[ DG\left(m(s)\right)\right]\left(G(m\left(s\right))\right)\,ds+\int_{0}^{t}G(m(t))\, dW(t),\ \mathbb{P}-a.s.\end{split} \tag{9.1}\] Recall that by Corollary 7.3, the equation (3.7) and the above equation (9.1) are equivalent in \((H^{1})^{\prime}\), since \(m\) satisfies the constraint condition. **Definition 9.1** (Strong martingale solution).: _Let the initial data \(m_{0}\), the function \(h\) and time \(T\) be fixed. A strong martingale solution of (9.1) is a tuple_ \[\pi=(\Omega,\mathcal{F},\mathbb{P},W,m,u)\] _such that \(\pi\) is a weak martingale solution as in Definition 3.2 and the process \(m\) satisfies the additional regularity property (3.13), i.e._ \[\mathbb{E}\left(\int_{0}^{T}|\nabla m(t)|_{L^{4}}^{4}\,dt+\int_{0}^{T}|A_{1}m (t)|_{L^{2}}^{2}\,dt\right)<\infty. \tag{9.2}\] **Remark 9.2**.: _A weak martingale solution is defined for the problem (3.7). By Corollary 7.3 the equations (3.7) and (9.1) are equivalent in \((H^{1})^{\prime}\). Hence the above definition makes sense. Hence Theorem 7.5 implies that the problem (9.1), with the initial data \(m_{0}\) has a strong solution corresponding to any control process satisfying (3.1)._ **Assumption 9.3** (Admissibility criterion for the control process).: _We say that a given control process \(u\) satisfies the admissibility criterion if for \(p\geq 1\) and a given constant \(K_{p}>0\),_ \[\mathbb{E}\left(\int_{0}^{T}\left|u(t)\right|_{L^{2}}^{2}\,dt\right)^{p}\leq K _{p}. \tag{9.3}\] _In particular, we assume (9.3) for \(p=4\)._ We now describe the class of admissible solutions over which the cost function will be minimized. Let us fix the law of the initial data \(m_{0}\) such that it satisfies the assumptions in Theorem 3.3. Also fix the function \(h\in H^{1}\). Fix \(T<\infty\). Consider a tuple \(\pi=(\Omega,\mathcal{F},\mathbb{F},\mathbb{P},W,m,u)\) which is a strong martingale solution to (9.1) as defined in Definition 9.1. Let the control process \(u\) also satisfy the Assumption 9.3 for \(p=4\). Hence the process \(m\) satisfies the bounds mentioned in Theorem 3.3. Such a tuple \(\pi\) will be called an _admissible solution_ and the space of all such admissible solutions will be denoted by \(\mathcal{U}_{ad}(m_{0},T)\). 
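For example, any deterministic control \(u\in L^{2}(0,T;L^{2})\) (in particular \(u\equiv 0\)) satisfies the admissibility criterion (9.3) for every \(p\geq 1\) with \(K_{p}=\left(\int_{0}^{T}|u(t)|_{L^{2}}^{2}\,dt\right)^{p}\), since the quantity inside the expectation is then non-random: \[\mathbb{E}\left(\int_{0}^{T}\left|u(t)\right|_{L^{2}}^{2}\,dt\right)^{p}=\left(\int_{0}^{T}\left|u(t)\right|_{L^{2}}^{2}\,dt\right)^{p}<\infty.\] Provided such a \(u\) also satisfies Assumption 3.1, the strong solution given by Theorem 7.5, together with the regularity established in Theorem 3.5, yields an element of \(\mathcal{U}_{ad}(m_{0},T)\); compare the non-emptiness observation at the beginning of the proof of Theorem 3.7 below.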
**Remark 9.4**.: _Even if the tuples are strong martingale solutions, the equations still make sense in \((H^{1})^{\prime}\), (and even in \(L^{2}\), see Corollary 7.1) due to the regularity proved in Theorem 3.5._ We recall the optimal control problem here for the reader's convenience. The cost functional is defined as follows. Let \(\pi=(\Omega,\mathcal{F},\mathbb{F},\mathbb{P},W,m,u)\in\mathcal{U}_{ad}(m_{0},T)\). Assume that the terminal cost \(\Psi\) is continuous on \(L^{2}\). For a given process (desired state) \(\bar{m}\in L^{2}(\Omega;L^{2}(0,T;\mathcal{S}^{2}))\) \[J(\pi)=\mathbb{E}\left[\int_{0}^{T}\left(|m(t)-\bar{m}(t)|_{H^{1}}^{2}+|u(t)|_ {L^{2}}^{2}\right)\,dt+\Psi\left(m(T)\right)\right]. \tag{9.4}\] Our aim is to minimize the above mentioned cost functional over the space \(\mathcal{U}_{ad}(m_{0},T)\). Stated formally, the optimal control problem is to find an admissible solution \(\pi^{*}\in\mathcal{U}_{ad}(m_{0},T)\) such that \[J(\pi^{*})=\inf_{\pi\in\mathcal{U}_{ad}(m_{0},T)}J(\pi). \tag{9.5}\] Let us denote the infimum of the cost functional by \(\Lambda\). That is \[\inf_{\pi\in\mathcal{U}_{ad}(m_{0},T)}J(\pi)=\Lambda. \tag{9.6}\] Idea of the proof of Theorem 3.7.: First, we show that the set of admissible solutions is non-empty. Hence the infimum \(\Lambda\) is finite. This implies the existence of a minimizing sequence \(\{\pi_{n}\}_{n\in\mathbb{N}}\). Lemma 9.6 and Lemma 9.7 show that the minimizing sequence \(\{\pi_{n}\}_{n\in\mathbb{N}}\) is uniformly bounded. Lemma 9.8 shows that the minimizing sequence is bounded in the maximal regular space. Further, Lemma 9.9 shows that the sequence of laws of \((m_{n},u_{n})\) are tight on the space \(L^{2}(0,T;H^{1})\cap C([0,T];L^{2})\times L^{2}_{w}(0,T;L^{2})\). In Proposition 9.10, we use the Jakubowski's version of the Skorohod Theorem to obtain another sequence \(\{(m^{\prime}_{n},u^{\prime}_{n})\}_{n\in\mathbb{N}}\) of processes, along with random variables \(m^{\prime},u^{\prime},W^{\prime}\), possibly on a different probability space \((\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{F}^{\prime},\mathbb{P}^{\prime})\). As before, we denote the tuple \(\{\pi^{\prime}_{n}\}_{n\in\mathbb{N}}:=(\Omega^{\prime},\mathcal{F}^{\prime}, \mathbb{F}^{\prime},\mathbb{P}^{\prime},m^{\prime}_{n},u^{\prime}_{n},W^{ \prime}_{n})\) and \(\{\pi^{\prime}\}_{n\in\mathbb{N}}:=(\Omega^{\prime},\mathcal{F}^{\prime}, \mathbb{P}^{\prime},\mathbb{P}^{\prime},m^{\prime},u^{\prime},W^{\prime})\). Proposition 9.10 further gives us pointwise convergence of the processes \(m^{\prime}_{n},u^{\prime}_{n}\) and \(W^{\prime}_{n}\) to their corresponding limits in \(\pi^{\prime}\), in appropriate spaces. Lemma 9.13, Lemma 9.14 and Lemma 9.15 establish uniform bounds on the newly obtained processes \(m^{\prime}_{n},n\in\mathbb{N}\) and \(m^{\prime}\). Then arguing similarly to Section 5, we show that the obtained tuple \(\pi^{\prime}\) is a strong martingale solution of the problem (9.1). A main difference in the calculations is that in Section 5 we consider processes that have values in finite dimensional spaces, whereas that cannot be assumed here. One needs to be careful while applying the Kuratowski Theorem. Some more details are given in Remark 9.12. Moreover, we go on to show that the obtained tuple \(\pi^{\prime}\) is an admissible solution. Then we show that the infimum for the cost \(J\) is attained at \(\pi^{\prime}\), thus showing the existence of an optimal control and completing the proof. 
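A standard observation used when comparing the cost of the limit \(\pi^{\prime}\) with the infimum \(\Lambda\) is the weak lower semicontinuity of the quadratic control cost: if \(u^{\prime}_{n}\to u^{\prime}\) weakly in \(L^{2}(0,T;L^{2})\), \(\mathbb{P}^{\prime}\)-a.s. (as in (9.30) below), then by the weak lower semicontinuity of the norm and Fatou's lemma, \[\mathbb{E}^{\prime}\int_{0}^{T}\left|u^{\prime}(t)\right|_{L^{2}}^{2}\,dt\leq\liminf_{n\to\infty}\mathbb{E}^{\prime}\int_{0}^{T}\left|u^{\prime}_{n}(t)\right|_{L^{2}}^{2}\,dt.\]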
**Remark 9.5**.: _Before we begin with the proof of Theorem 3.7, we make a small comment. Theorem 7.5, combined with Remark 9.2 gives us the existence of a strong solution for the problem (9.1), which is stated in Theorem 7.5. That is, given a filtered probability space \((\Omega,\mathcal{F},\mathbb{F},\mathbb{P})\), a Wiener process \(W\), an initial data and a control process \(u\) on the given space, there exists a process \(m\) which is a solution of the problem (9.1). The optimization problem can then be posed by fixing the given probability space and Wiener process, and then finding a tuple \((m^{*},u^{*})\) such that:_ 1. \(m^{*}\) _is a solution of the problem (_9.1_) corresponding to the control process_ \(u^{*}\) _._ 2. _The tuple_ \((m^{*},u^{*})\) _minimizes the cost (_9.4_) on the given probability space._ _This could be one way of formulating the problem. But, as of now, this does not contribute significantly to the overall progression of the problem and hence has not been considered._ Proof of Theorem 3.7.: Theorem 3.3 along with Theorem 3.5 shows that the space \(\mathcal{U}_{ad}(m_{0},T)\) is non-empty. Hence \(\Lambda<\infty\). Hence there exists a minimizing sequence \(\{\pi_{n}\}_{n\in\mathbb{N}}\) of strong martingale solutions, \[\pi_{n}=(\Omega_{n},\mathcal{F}_{n},\mathbb{F}_{n},\mathbb{P}_{n},W_{n},m_{n}, u_{n}).\] That is \[\lim_{n\to\infty}J(\pi_{n})=\Lambda. \tag{9.7}\] Since \(\pi_{n}\) is a minimizing sequence, there exists a constant \(R>0\) such that for each \(n\in\mathbb{N}\), \[J(\pi_{n})\leq R. \tag{9.8}\] Hence there exists a constant \(C>0\) such that for any \(n\in\mathbb{N}\), \[\mathbb{E}^{n}\int_{0}^{T}|m_{n}(t)|_{H^{1}}^{2}\ dt\leq C \tag{9.9}\] and \[\mathbb{E}^{n}\int_{0}^{T}|u_{n}(t)|_{L^{2}}^{2}\ dt\leq K_{1}. \tag{9.10}\] Here \(\mathbb{E}^{n}\) denotes the expectation with respect to the probability space \((\Omega_{n},\mathcal{F}_{n},\mathbb{P}_{n})\). Before we continue with the main line of the proof we formulate and prove some essential auxiliary results. **Lemma 9.6**.: _There exists a constant \(C>0\) such that for each \(n\in\mathbb{N}\), the following bounds hold._ \[\mathbb{E}^{n}\int_{0}^{T}|m_{n}(t)|_{H^{1}}^{2}\ dt\leq C, \tag{9.11}\] \[\mathbb{E}^{n}\sup_{t\in[0,T]}|m_{n}(t)|_{H^{1}}^{4}\leq C, \tag{9.12}\] \[\mathbb{E}^{n}\int_{0}^{T}|m_{n}(s)\times\Delta m_{n}(s)|_{L^{2}}^{2}\ ds\leq C, \tag{9.13}\] \[\mathbb{E}^{n}\int_{0}^{T}|m_{n}(s)\times(m_{n}(s)\times\Delta m_{n}(s))|_{L^ {2}}^{2}\ ds\leq C, \tag{9.14}\] \[\mathbb{E}^{n}\int_{0}^{T}|m_{n}(s)\times u_{n}(s)|_{L^{2}}^{2}\ ds\leq C, \tag{9.15}\] \[\mathbb{E}^{n}\int_{0}^{t}|m_{n}(s)\times(m_{n}(s)\times u_{n}(s))|_{L^{2}}^{ 2}\ ds\leq C. \tag{9.16}\] Proof of Lemma 9.6.: The first inequality (9.11) follows from the fact that \(\pi_{n}\) is a minimizing sequence and the inequality (9.9). The following equation is satisfied by the process \(m_{n}\) for all \(t\in[0,T]\) \[m_{n}(t) =m_{n}(0)+\int_{0}^{t}m_{n}(s)\times\Delta m_{n}(s)\,ds-\alpha\, \int_{0}^{t}m_{n}(s)\times(m_{n}(s)\times\Delta m_{n}(s))\ ds\] \[+\int_{0}^{t}m_{n}(s)\times u_{n}(s)\,ds-\alpha\,\int_{0}^{t}m_{n} (s)\times(m_{n}(s)\times u_{n}(s))\ ds\] \[+\frac{1}{2}\int_{0}^{t}\left[DG(m_{n}(s))\right](G(m_{n}(s)))\ ds+ \int_{0}^{t}G(m_{n}(s))\,dW_{n}(s),\ \mathbb{P}_{n}-a.s. \tag{9.17}\] Let \(\bar{\phi}:H^{1}\to\mathbb{R}\) be given by \[\bar{\phi}(v)=\frac{1}{2}\left|\nabla v\right|_{L^{2}}^{2}. \tag{9.18}\] We now apply the Ito Lemma for the above function. 
The calculations are similar to the proofs of Lemma 4.9 and Lemma 4.10, and hence are skipped. A difference is that the calculations here are in infinite dimensions, for which we apply the Ito formula from [53]. It is therefore sufficient to show that the integrands on the right hand side of the equality (9.17) lie in appropriate spaces, see [53], so that the Ito formula can be applied. Theorem 3.3 implies that the terms \(m_{n}\times\Delta m_{n},m_{n}\times(m_{n}\times\Delta m_{n})\in M^{2}(0,T;L^{2})\). For the definition of the space, see Section 6, see also [53]. By the constraint condition (3.9), \[\mathbb{E}^{n}\int_{0}^{T}\left|m_{n}(t)\times u_{n}(t)\right|_{L^{2}}^{2}\,dt \leq\mathbb{E}^{n}\int_{0}^{T}\left|u_{n}(t)\right|_{L^{2}}^{2}\,dt<\infty.\] The last inequality holds by (9.10). Similarly, the constraint condition (3.9) implies that \[\mathbb{E}^{n}\int_{0}^{T}\left|m_{n}(t)\times\big{(}m_{n}(t) \times u_{n}(t)\big{)}\right|_{L^{2}}^{2}\,dt\leq\mathbb{E}^{n}\int_{0}^{T} \left|m_{n}(t)\times u_{n}(t)\right|_{L^{2}}^{2}\,dt<\infty.\] By the assumption \(h\in H^{1}\), the embedding \(H^{1}\hookrightarrow L^{\infty}\) and by the constraint condition (3.9),we have \[\mathbb{E}^{n}\int_{0}^{T}\left|\big{[}DG(m_{n}(t))\big{]}\big{[}G \big{(}m_{n}(t)\big{)}\big{]}\right|_{L^{2}}^{2}\,dt<\infty.\] Hence \(m_{n}\times u_{n},\ m_{n}\times(m_{n}\times u_{n}),\ \big{[}DG\big{(}m_{n}\big{)} \big{]}\big{[}G\big{(}m_{n}\big{)}\big{]}\in M^{2}(0,T;L^{2})\). Also, by the constraint condition implies that \[\mathbb{E}^{n}\int_{0}^{T}\left|G(m_{n}(t))\right|_{L^{2}}^{2}\,dt<\infty.\] Hence \(G(m_{n})\in M^{2}(0,T;L^{2})\). The inequalities (9.12), (9.13) then follow by applying the Ito formula. The inequalities (9.14), (9.15) and (9.16) follow from the assumption on \(u_{n}\) and the constraint condition (3.9). **Lemma 9.7**.: _Let \(\gamma\in\big{(}0,\frac{1}{2}\big{)}\) and \(p\geq 2\). Then there exists a constant \(C>0\) such that for each \(\mathbb{N}\), the following bound holds._ \[\mathbb{E}^{n}\left[\left|m_{n}\right|_{W^{\gamma,p}(0,T;L^{2})}^{2}\right] \leq C. \tag{9.19}\] Proof of Lemma 9.7.: The proof is similar to the proof of Lemma 4.10. The idea of the proof is to show a stronger bound (in \(W^{1,2}(0,T;L^{2})\)) for the terms without the stochastic intergral, as done in the proof of Lemma 4.10. Then use the embedding \[W^{1,2}(0,T;L^{2})\hookrightarrow W^{\gamma,p}(0,T;L^{2}), \tag{9.20}\] to conclude the bound. For the stochastic integral, the proof is similar to the proof in Lemma 4.10, using Lemma C.2. Combining the bound (9.12) in Lemma 9.6 along with the Lemma 9.7, we have that the sequence \(\{m_{n}\}_{n\in\mathbb{N}}\) is bounded in the space \(L^{2}(\Omega;L^{\infty}(0,T;H^{1}))\cap L^{2}(\Omega;W^{\gamma,p}(0,T;L^{2}))\). That each \(m_{n}\) satisfies (3.13) follows from Theorem 3.5. The aim here is to show that the bound is uniform in \(n\in\mathbb{N}\). **Lemma 9.8**.: _There exists a constant \(C>0\) such that for all \(n\in\mathbb{N}\),_ \[\mathbb{E}\left(\int_{0}^{T}\left|\nabla m_{n}(t)\right|_{L^{4}}^{4}dt+\int_{ 0}^{T}\left|A_{1}m_{n}(t)\right|_{L^{2}}^{2}dt\right)\leq C. \tag{9.21}\] _Idea of the proof of Lemma 9.8._ That \(m_{n}\) is a strong martingale solution for each \(n\in\mathbb{N}\) implies that the left hand side of the inequality (9.21) is finite for each \(n\in\mathbb{N}\). The aim of this lemma is to show that the constant on the right hand side is independent of \(n\). 
One can verify from the proof of Theorem 3.5 that the bounds on the right hand side depends only on \(\mathbb{E}\left|u\right|_{L^{2}(0,T;L^{2})}^{2p}\), the initial data \(m_{0}\) and the fixed time \(T\). By the Assumption 3.1 and the fact that \(\{\pi_{n}\}_{n\in\mathbb{N}}\) is a minimizing sequence, we can conclude the lemma. _An outline of the proof of Lemma 9.8._ To prove the lemma, we will follow Step 1 and Step 2 (Section 8) of the proof of Theorem 3.5 and show that the bound on the right hand side does not depend on \(n\). In that direction, first we recall that by Lemma 9.6, the bounds on \(m_{n},u_{n}\) are independent of \(n\). We now recall Step 1 in the proof of Theorem 3.5. The bound on \(\mathbb{E}\int_{0}^{T}\left|A_{1}^{\delta}m_{n}(t)\right|_{L^{2}}^{2}\,dt\) depends only on the choice of \(\delta\) and the \(L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)\) norm of the functions on the right hand side of (9.1). Following the above arguments, we can show that the required bounds do not depend on \(n\in\mathbb{N}\). For the Ito integral term, We observe that the bound depends on the time \(T\), the choice of \(\delta\) and the norm \(\mathbb{E}\int_{0}^{T}\left|G(m_{n}(t))\right|_{H^{1}}^{2}\,dt\), which again depends on the norm \(\mathbb{E}\int_{0}^{T}\left|m_{n}(t)\right|_{H^{1}}^{2}\,dt\), the constraint condition and the fixed function \(h\). Hence, from the above arguments, this bound also does not depend on \(n\in\mathbb{N}\). Going back to Step 2 of the proof of Theorem 3.5, we observe that it is sufficient to bound the term \(m_{n}\times\Delta m_{n}\), along with Step 1 to complete the proof of (9.21). Hence combining the arguments above, we conclude that the bound (9.21) is independent of \(n\in\mathbb{N}\). From the bounds established in Lemma 9.8, we can prove that the sequence \(\{m_{n}\}_{n\in\mathbb{N}}\) is bounded in the space \(L^{2}(\Omega;L^{2}(0,T;H^{2})\cap L^{2}(\Omega;W^{\gamma,p}(0,T;L^{2}))\). We use the uniform bounds to show that the sequence of laws of \(m_{n}\) is tight on the space \(L^{2}(0,T;H^{1})\cap C([0,T];L^{2})\). Similarly, we use the uniform bound on the sequence of control processes \(u_{n}\) to talk about tightness of laws on a suitable space. This is outlined in the following lemma. **Lemma 9.9**.: _The sequence of laws of \(\{(m_{n},u_{n})\}_{n\in\mathbb{N}}\) is tight on the space \(L^{2}(0,T;H^{1})\cap C([0,T];L^{2})\times L^{2}_{w}(0,T;L^{2})\)._ Proof of Lemma 9.9.: The proof will be similar to the proof of Lemma 4.11. This lemma shows tightness on a smaller (more regular) space than the previous counterpart. For completion, we give some details here. We show calculations for the sequence \(\{m_{n}\}_{n\in\mathbb{N}}\). Tightness for the sequence of laws of \(\{u_{n}\}_{n\in\mathbb{N}}\) follows similar to Lemma 4.11. The main idea is to show that the laws of \(m_{n},n\in\mathbb{N}\) are concentrated inside a ball in the space \(L^{\infty}(0,T;H^{1})\cap L^{2}(0,T;H^{2})\cap W^{\gamma,p}(0,T;L^{2})\), which is compactly embedded into the space \(L^{2}(0,T;H^{1})\cap C([0,T];L^{2})\). Towards that, let \(r\in\mathbb{R}\) be arbitrary and fixed. 
\[\mathbb{P}_{n}\left(\left|m_{n}\right|_{L^{\infty}(0,T;H^{1})\cap L^{2}(0,T;H^{2})\cap W^{\gamma,p}(0,T;L^{2})}\geq r\right)\] \[\leq\mathbb{P}_{n}\left(\left|m_{n}\right|_{L^{\infty}(0,T;H^{1})}\geq\frac{r}{3}\right)+\mathbb{P}_{n}\left(\left|m_{n}\right|_{L^{2}(0,T;H^{2})}\geq\frac{r}{3}\right)+\mathbb{P}_{n}\left(\left|m_{n}\right|_{W^{\gamma,p}(0,T;L^{2})}\geq\frac{r}{3}\right)\] \[\leq\frac{9}{r^{2}}\mathbb{E}^{n}\left|m_{n}\right|_{L^{\infty}(0,T;H^{1})}^{2}+\frac{9}{r^{2}}\mathbb{E}^{n}\left|m_{n}\right|_{L^{2}(0,T;H^{2})}^{2}+\frac{9}{r^{2}}\mathbb{E}^{n}\left|m_{n}\right|_{W^{\gamma,p}(0,T;L^{2})}^{2}\] \[\leq\frac{C}{r^{2}}. \tag{9.22}\] The second-to-last inequality follows from the Chebyshev inequality. For the last inequality, Lemma 9.6, Lemma 9.7 and Lemma 9.8 imply the existence of a constant \(C>0\) used in the inequality. Observe that the right hand side of the above inequality, and hence the left hand side, can be made as small as desired by choosing \(r\) large enough. Let \[B_{r}:=\left\{v\in L^{\infty}(0,T;H^{1})\cap L^{2}(0,T;H^{2})\cap W^{\gamma,p}(0,T;L^{2})\right.\] \[:|v|_{L^{\infty}(0,T;H^{1})\cap L^{2}(0,T;H^{2})\cap W^{\gamma,p}(0,T;L^{2})}\geq r\bigg{\}}. \tag{9.23}\] Let \(\varepsilon>0\) be given. In order to show tightness of the laws, it suffices to show that there exists a compact set \(B^{\varepsilon}\subset L^{2}(0,T;H^{1})\cap C([0,T];L^{2})\) such that for each \(n\in\mathbb{N}\), \[\mathbb{P}_{n}\left(B^{\varepsilon}\right)>1-\varepsilon. \tag{9.24}\] In (9.22), we choose \(r\) such that \(r^{2}>\frac{C}{\varepsilon}\). Therefore \[\mathbb{P}_{n}\left(B_{r}\right)\leq\frac{C}{r^{2}}<\varepsilon. \tag{9.25}\] Let \(B^{\varepsilon}\) denote the closure of the complement of this \(B_{r}\). Therefore for each \(n\in\mathbb{N}\), we have \[\mathbb{P}_{n}\left(B^{\varepsilon}\right)\geq 1-\mathbb{P}_{n}\left(B_{r}\right)>1-\varepsilon. \tag{9.26}\] By Lemma C.7 and Lemma C.9, for \(\gamma p>1\), the set \(B^{\varepsilon}\) is a compact subset of \(L^{2}(0,T;H^{1})\cap C([0,T];L^{2})\). Hence the sequence of laws \(\left\{\mathcal{L}(m_{n})\right\}_{n\in\mathbb{N}}\) is tight on the space \(L^{2}(0,T;H^{1})\cap C([0,T];L^{2})\). The proof for the tightness of the sequence of laws of \(u_{n}\) on the space \(L^{2}_{w}(0,T;L^{2})\) is similar to the proof of Lemma 4.11. Note that each strong martingale solution has its own Wiener process. The processes \(W_{n}\) have the same laws on \(C([0,T];\mathbb{R})\). Hence it is sufficient to show that the law of \(W_{n}\) is tight on the space \(C([0,T];\mathbb{R})\) for any \(n\in\mathbb{N}\). Let \(n\in\mathbb{N}\). Since the space \(C([0,T];\mathbb{R})\) is a Radon space, every Borel probability measure on it is tight. Hence, given \(\varepsilon>0\) there exists a compact set \(K_{\varepsilon}\subset C([0,T];\mathbb{R})\) such that \[\mathbb{P}_{n}\left(W_{n}\in K_{\varepsilon}\right)\geq 1-\varepsilon. \tag{9.27}\] Since \(W_{n}\) and \(W_{k}\) have the same laws on the space \(C([0,T];\mathbb{R})\), for any \(n,k\in\mathbb{N}\), \[\mathbb{P}_{n}\left(W_{n}\in K_{\varepsilon}\right)=\mathbb{P}_{k}\left(W_{k}\in K_{\varepsilon}\right)\geq 1-\varepsilon. \tag{9.28}\] Hence the sequence of laws of \(\left\{W_{n}\right\}_{n\in\mathbb{N}}\) is tight on the space \(C([0,T];\mathbb{R})\). Now that we have shown the tightness, we proceed as done in Section 5.
**Proposition 9.10**.: _There exists a probability space \((\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{P}^{\prime})\) and a sequence \((m^{\prime}_{n},u^{\prime}_{n},W^{\prime}_{n})\) of \(L^{2}(0,T;H^{1})\cap C([0,T];L^{2})\times L^{2}_{w}(0,T;L^{2})\times C([0,T];\mathbb{R})\)-valued random variables, along with random variables \((m^{\prime},u^{\prime},W^{\prime})\) defined on \(\Omega^{\prime}\) such that for each \(n\in\mathbb{N}\), the law of \((m_{n},u_{n},W_{n})\) equals the law of \((m^{\prime}_{n},u^{\prime}_{n},W^{\prime}_{n})\) on \(L^{2}(0,T;H^{1})\cap C([0,T];L^{2})\times L^{2}_{w}(0,T;L^{2})\times C([0,T];\mathbb{R})\) and the following convergences hold \(\mathbb{P}^{\prime}\)-a.s. as \(n\) goes to infinity._ \[m^{\prime}_{n}\to m^{\prime}\text{ in }L^{2}(0,T;H^{1})\cap C([0,T];L^{2}), \tag{9.29}\] \[u^{\prime}_{n}\to u^{\prime}\text{ in }L^{2}_{w}(0,T;L^{2}), \tag{9.30}\] \[W^{\prime}_{n}\to W^{\prime}\text{ in }C([0,T];\mathbb{R}). \tag{9.31}\] Proof of Proposition 9.10.: The proof, similar to the proof of Proposition 5.1, follows from the Jakubowski version of the Skorohod Theorem, see Theorem 3.11 in [19]. **Remark 9.11**.: _The processes \(m^{\prime}\) and \(u^{\prime}\) obtained in Proposition 9.10 are Borel measurable. Let the filtration \(\mathbb{F}^{\prime}=(\mathcal{F}^{\prime}_{t})_{t\in[0,T]}\) be defined by_ \[\mathcal{F}^{\prime}_{t}=\sigma\{m^{\prime}(s),u^{\prime}(s),W^{\prime}(s):0\leq s\leq t\}.\] _Hence \(m^{\prime},u^{\prime}\) are \(\mathbb{F}^{\prime}\)-adapted. Thus, the processes \(m^{\prime}\) and \(u^{\prime}\) have progressively measurable modifications, see Proposition 1.12, [43]. From now on, these progressively measurable modifications will be considered._ **Remark 9.12**.: _This remark is written in the same spirit as that of Remark 5.5. The main difference between Remark 5.5 and this Remark 9.12 is that we cannot use the finite dimensionality of the spaces \(H_{n}\) here. Let us show how we need to modify the previous argument. First, we discuss the laws of \(m_{n},n\in\mathbb{N}\), and next we discuss the laws of \(u_{n},n\in\mathbb{N}\)._ 1. _Note that the spaces_ \(C([0,T];L^{2})\)_,_ \(C([0,T];H^{1})\)_,_ \(L^{2}(0,T;H^{1})\)_,_ \(L^{4}(0,T;W^{1,4})\)_, and_ \(L^{2}(0,T;H^{2})\) _are Polish spaces. In particular, since the embedding of_ \(C([0,T];H^{1})\) _into the space_ \(C([0,T];L^{2})\cap L^{2}(0,T;H^{1})\) _is continuous and injective, by using the Kuratowski Theorem, Lemma_ C.10_, we infer that the Borel subsets of_ \(C([0,T];H^{1})\) _are also the Borel subsets of_ \(C([0,T];L^{2})\cap L^{2}(0,T;H^{1})\)_. Now, since by Lemma_ 8.1 \(\mathbb{P}_{n}\left\{m_{n}\in C([0,T];H^{1})\right\}=1\) _for each_ \(n\) _and_ \(m_{n}\) _and_ \(m^{\prime}_{n}\) _have the same laws on_ \(C([0,T];L^{2})\cap L^{2}(0,T;H^{1})\) _and_ \(C([0,T];H^{1})\) _is a Borel subset of_ \(C([0,T];L^{2})\cap L^{2}(0,T;H^{1})\)_, we deduce the following_ \[\mathbb{P}^{\prime}\left\{m^{\prime}_{n}\in C([0,T];H^{1})\right\}=1,\text{ for each }n\in\mathbb{N}.\] _Arguing similarly (i.e. using the continuous embedding of the spaces_ \(C([0,T];H^{1})\)_,_ \(L^{2}(0,T;H^{1})\)_,_ \(L^{4}(0,T;W^{1,4})\)_, and_ \(L^{2}(0,T;H^{2})\) _into the space_ \(C([0,T];L^{2})\cap L^{2}(0,T;H^{1})\)_), we can prove that the processes_ \(m^{\prime}_{n},n\in\mathbb{N}\) _satisfy the same bounds as the processes_ \(m_{n},n\in\mathbb{N}\)_, in particular the bounds (1), (2) and (3) in Lemma_ 4.9_._ 2.
_Regarding the control processes_ \(u_{n}\) _and_ \(u^{\prime}_{n}\)_, we have the following. Firstly, the space_ \(L^{2}_{w}(0,T;L^{2})\) _is the space_ \(L^{2}(0,T;L^{2})\) _endowed with the weak topology, which is weaker than the norm topology. Therefore every open set in_ \(L^{2}_{w}(0,T;L^{2})\) _is also an open set in_ \(L^{2}(0,T;L^{2})\)_. Therefore, the Borel sigma-algebra corresponding to_ \(L^{2}_{w}(0,T;L^{2})\) _is contained in the Borel sigma-algebra corresponding to_ \(L^{2}(0,T;L^{2})\)_. In other words, Borel subsets of_ \(L^{2}_{w}(0,T;L^{2})\) _are also Borel subsets of_ \(L^{2}(0,T;L^{2})\)_. Moreover, by Theorem 7.19 in [66], see also page number 112 in [12], we infer that the Borel sigma algebras corresponding to_ \(L^{2}_{w}(0,T;L^{2})\) _and_ \(L^{2}(0,T;L^{2})\) _are equal._ _By Proposition 9.10, we infer that for each_ \(n\in\mathbb{N}\)_, the law of the process_ \(u^{\prime}_{n}\) _is equal to the law of the process_ \(u_{n}\) _on the space_ \(L^{2}_{w}(0,T;L^{2})\)_. In particular, the following holds for any constant_ \(K>0\)_._ \[\mathbb{P}\left\{\left|u_{n}\right|_{L^{2}(0,T;L^{2})}\leq K\right\}=\mathbb{P}^{\prime}\left\{\left|u^{\prime}_{n}\right|_{L^{2}(0,T;L^{2})}\leq K\right\}.\] _Hence we infer that the processes_ \(u^{\prime}_{n}\) _satisfy the same bounds as the processes_ \(u_{n}\)_._ The processes \(m^{\prime}_{n}\) and \(u^{\prime}_{n}\), therefore, satisfy the same bounds as the processes \(m_{n}\) and \(u_{n}\) respectively, for each \(n\in\mathbb{N}\). We state this in the following two lemmata. **Lemma 9.13**.: _There exists a constant \(C>0\) such that for all \(n\in\mathbb{N}\), the following bounds hold._ \[\mathbb{E}^{\prime}\sup_{t\in[0,T]}\left|m^{\prime}_{n}(t)\right|^{2}_{H^{1}}\leq C, \tag{9.32}\] \[\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(s)\times\Delta m^{\prime}_{n}(s)\right|^{2}_{L^{2}}\,ds\leq C, \tag{9.33}\] \[\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(s)\times\left(m^{\prime}_{n}(s)\times\Delta m^{\prime}_{n}(s)\right)\right|^{2}_{L^{2}}\,ds\leq C, \tag{9.34}\] \[\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(s)\times u^{\prime}_{n}(s)\right|^{2}_{L^{2}}\,ds\leq C, \tag{9.35}\] \[\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(s)\times\left(m^{\prime}_{n}(s)\times u^{\prime}_{n}(s)\right)\right|^{2}_{L^{2}}\,ds\leq C. \tag{9.36}\] Proof of Lemma 9.13.: The proof of this lemma is similar to the proof of Proposition 5.6. It follows from the bounds established in Lemma 9.6. We now use Lemma 9.8 along with Remark 9.12 to get the following lemma. **Lemma 9.14**.: _There exists a constant \(C>0\) such that for all \(n\in\mathbb{N}\),_ \[\mathbb{E}^{\prime}\left(\int_{0}^{T}\left|m_{n}^{\prime}(t)\right|_{W^{1,4}}^{4}\,dt+\int_{0}^{T}\left|m_{n}^{\prime}(t)\right|_{H^{2}}^{2}dt\right)\leq C. \tag{9.37}\] Proof of Lemma 9.14.: The proof follows from Lemma 9.8 and Remark 9.12. Having shown uniform estimates for the sequence \(\{m_{n}^{\prime}\}_{n\in\mathbb{N}}\), we show similar bounds for the limit process \(m^{\prime}\). **Lemma 9.15**.: _The process \(m^{\prime}\) satisfies the following bounds._ 1. \[\sup_{0\leq t\leq T}\left|m^{\prime}(t)\right|_{L^{2}}\leq\left|m_{0}\right|_{L^{2}},\ \mathbb{P}^{\prime}-\text{a.s.}\] (9.38) 2. \[\mathbb{E}^{\prime}\sup_{0\leq t\leq T}\left|m^{\prime}(t)\right|_{H^{1}}^{4}<\infty,\] (9.39) 3. \[\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}(t)\right|_{W^{1,4}}^{4}\,dt<\infty,\] (9.40) 4.
\[\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}(t)\right|_{H^{2}}^{2}\,dt<\infty.\] (9.41) Proof.: The proof is essentially similar to the proof of Lemma 5.7. A sketch for the proofs of the last two inequalities is given here. For the last inequality, we first extend the norm \(\left|\cdot\right|_{H^{2}}\) to the space \(H^{1}\) as follows. \[\left|v\right|_{H^{2}}=\begin{cases}&\left|v\right|_{H^{2}},\ \text{ if }\ v\in H^{2},\\ &\infty,\ \text{if }\ v\in H^{1}\text{ and }v\notin H^{2}.\end{cases}\] This extended norm is lower semicontinuous. Therefore the following holds for each \(t\in[0,T]\). \[\left|m^{\prime}(t)\right|_{H^{2}}^{2}\leq\liminf_{n\to\infty}\left|m_{n}^{\prime}(t)\right|_{H^{2}}^{2}.\] Hence by the Fatou Lemma, \[\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}(t)\right|_{H^{2}}^{2}\,dt\leq\liminf_{n\to\infty}\mathbb{E}^{\prime}\int_{0}^{T}\left|m_{n}^{\prime}(t)\right|_{H^{2}}^{2}\,dt.\] The bound in Lemma 9.14 implies that the right hand side of the above inequality is finite. This concludes the proof. For the second-to-last inequality, we extend the norm \(\left|\cdot\right|_{L^{4}(0,T;W^{1,4})}\) to the space \(L^{2}(0,T;L^{2})\) as follows \[\left|v\right|_{L^{4}(0,T;W^{1,4})}=\begin{cases}&\left|v\right|_{L^{4}(0,T;W^{1,4})},\ \text{if }v\in L^{4}(0,T;W^{1,4}),\\ &\infty,\ \text{if }v\in L^{2}(0,T;L^{2})\text{ and }v\notin L^{4}(0,T;W^{1,4}).\end{cases}\] The above defined map is lower semicontinuous. Therefore the following holds \(\mathbb{P}^{\prime}\)-a.s. \[\left|m^{\prime}\right|_{L^{4}(0,T;W^{1,4})}\leq\liminf_{n\to\infty}\left|m_{n}^{\prime}\right|_{L^{4}(0,T;W^{1,4})}.\] Hence by the Fatou Lemma, \[\mathbb{E}^{\prime}\left|m^{\prime}\right|_{L^{4}(0,T;W^{1,4})}^{4}\leq\liminf_{n\to\infty}\mathbb{E}^{\prime}\left|m_{n}^{\prime}\right|_{L^{4}(0,T;W^{1,4})}^{4}<\infty. \tag{9.42}\] This concludes the proof of Lemma 9.15. This concludes the auxiliary results. We now use them to prove Theorem 3.7. Continuation of the proof of Theorem 3.7.: We now show that the obtained limit is a strong martingale solution to the problem (9.1). For this aim, we first show that it is a weak martingale solution, for which we need to show that the process \(m^{\prime}\) satisfies (9.1) with the corresponding probability space. The main difference between this proof and the proof of Theorem 3.3 is that now the solutions no longer take values in finite dimensional spaces. Previously, we employed results for the finite dimensional spaces \(H_{n}\). Now the solutions are in infinite dimensional spaces. The core of the arguments and the overall structure remains the same. Moreover, for the convergence arguments, the projection operators and the cut-off are absent. But this is more of a simplification. We have pointwise convergence of \(m^{\prime}_{n}\) to \(m^{\prime}\) from Proposition 9.10. We now show that the convergence holds in a stronger sense. The bound (9.32) in Lemma 9.13 gives us that the sequence \(\{m^{\prime}_{n}\}_{n\in\mathbb{N}}\) is uniformly integrable. Hence by the Vitali convergence theorem, see Theorem 4.5.4 in [9], we have the following convergence as \(n\) goes to infinity. \[m^{\prime}_{n}\to m^{\prime}\text{ in }L^{2}(\Omega^{\prime};L^{2}(0,T;H^{1})). \tag{9.43}\] We give here some details of the convergence arguments for the terms containing \(u^{\prime}_{n}\). The arguments for other terms are similar to the previous arguments, see Section 5.
From (9.30), we get the \(\mathbb{P}^{\prime}\)-a.s. convergence of \(u^{\prime}_{n}\) to \(u^{\prime}\) in \(L^{2}_{w}(0,T;L^{2})\). To show that \(m^{\prime}\) satisfies the required equation, it suffices now to show the following assertions for each \(t\in[0,T]\) and \(\phi\in L^{2}(\Omega^{\prime}:H^{1})\). \[\lim_{n\to\infty}\mathbb{E}^{\prime}\left[\int_{0}^{t}\left\langle\left(m^{\prime}_{n}(s)\times u^{\prime}_{n}(s)-m^{\prime}(s)\times u^{\prime}(s)\right),\phi\right\rangle_{L^{2}}\,ds\right]=0 \tag{9.44}\] and \[\lim_{n\to\infty}\mathbb{E}^{\prime}\left[\int_{0}^{t}\left\langle\left(m^{\prime}_{n}(s)\times\left(m^{\prime}_{n}(s)\times u^{\prime}_{n}(s)\right)-m^{\prime}(s)\times\left(m^{\prime}(s)\times u^{\prime}(s)\right)\right),\phi\right\rangle_{L^{2}}\,ds\right]=0. \tag{9.45}\] We will give details of the proof of (9.44). The proof of (9.45) follows suit. First, we have the convergence given in (9.30) in Proposition 9.10. Fix \(t\in[0,T]\) and \(\psi\in L^{2}\left(\Omega^{\prime};L^{2}\left(0,T;L^{2}\right)\right)\). For \(n\in\mathbb{N}\), let us define auxiliary functions \(f_{n}:\Omega^{\prime}\to\mathbb{R}\) as follows. \[f_{n}(\omega^{\prime}):=\int_{0}^{t}\left\langle u^{\prime}_{n}(s,\omega^{\prime})-u^{\prime}(s,\omega^{\prime}),\psi(s,\omega^{\prime})\right\rangle_{L^{2}}\,ds,\ \omega^{\prime}\in\Omega^{\prime}. \tag{9.46}\] By Assumption 9.3 and (9.52) (established in the subsequent calculations), each \(f_{n}\) is well defined. Moreover by (9.30), we have \[f_{n}\to 0,\ \mathbb{P}^{\prime}-a.s. \tag{9.47}\] Note that (9.30) gives weak convergence for the processes on the space \(L^{2}(0,T;L^{2})\). This result can be used for \(t\in[0,T]\) instead of \(T\) by considering \(\chi_{[0,t]}\psi\) instead of \(\psi\). The idea now is to use the Vitali convergence theorem [9] to show the convergence (of \(\{f_{n}\}_{n\in\mathbb{N}}\)) in \(L^{1}(\Omega^{\prime}:\mathbb{R})\). To use the Vitali convergence theorem, it suffices to show that the sequence \(\{f_{n}\}_{n\in\mathbb{N}}\) is uniformly bounded in \(L^{\frac{4}{3}}(\Omega^{\prime}:\mathbb{R})\). \[\mathbb{E}^{\prime}\left|f_{n}(\omega^{\prime})\right|^{\frac{4}{3}}=\mathbb{E}^{\prime}\left|\int_{0}^{t}\left\langle u^{\prime}_{n}(s,\omega^{\prime})-u^{\prime}(s,\omega^{\prime}),\psi(s,\omega^{\prime})\right\rangle_{L^{2}}\,ds\right|^{\frac{4}{3}}\] \[\leq\mathbb{E}^{\prime}\left(\int_{0}^{t}\left|\left\langle u^{\prime}_{n}(s,\omega^{\prime})-u^{\prime}(s,\omega^{\prime}),\psi(s,\omega^{\prime})\right\rangle_{L^{2}}\right|\,ds\right)^{\frac{4}{3}}\] \[\leq\mathbb{E}^{\prime}\left(\int_{0}^{t}\left|u^{\prime}_{n}(s,\omega^{\prime})-u^{\prime}(s,\omega^{\prime})\right|_{L^{2}}^{2}\,ds\right)^{\frac{2}{3}}\left(\int_{0}^{t}\left|\psi(s,\omega^{\prime})\right|_{L^{2}}^{2}\,ds\right)^{\frac{2}{3}}\] \[\leq\left[\mathbb{E}^{\prime}\Big{[}\bigg{(}\int_{0}^{t}\left|u^{\prime}_{n}(s,\omega^{\prime})-u^{\prime}(s,\omega^{\prime})\right|_{L^{2}}^{2}\,ds\Big{)}^{2}\Big{]}\right]^{\frac{1}{3}}\left[\mathbb{E}^{\prime}\left(\int_{0}^{t}\left|\psi(s,\omega^{\prime})\right|_{L^{2}}^{2}\,ds\right)\right]^{\frac{2}{3}}\leq C_{p=2},\] for some constant \(C_{p=2}\) independent of \(n\in\mathbb{N}\). The existence of such a constant \(C_{p=2}\) is guaranteed by Assumption 9.3 and (9.52). Therefore the sequence of processes \(\{f_{n}\}_{n\in\mathbb{N}}\) is uniformly bounded in \(L^{\frac{4}{3}}(\Omega^{\prime}:\mathbb{R})\), and hence uniformly integrable in \(L^{1}(\Omega^{\prime}:\mathbb{R})\).
Therefore, by the Vitali convergence theorem, we have \[\lim_{n\to\infty}\mathbb{E}^{\prime}\left|f_{n}\right|=\lim_{n\to\infty}\mathbb{E}^{\prime}\left|\int_{0}^{t}\left\langle u^{\prime}_{n}(s,\omega^{\prime})-u^{\prime}(s,\omega^{\prime}),\psi(s,\omega^{\prime})\right\rangle_{L^{2}}\,ds\right|=0, \tag{9.48}\] thereby giving \[u^{\prime}_{n}\to u^{\prime}\text{ weakly in }L^{2}\left(\Omega^{\prime}:L^{2}\left(0,T;L^{2}\right)\right). \tag{9.49}\] Let us choose and fix \(\phi\in L^{2}(\Omega^{\prime}:L^{4}(0,T;H^{1}))\). Then we have the following equality \[\mathbb{E}^{\prime}\int_{0}^{t}\left\langle m^{\prime}_{n}(s)\times u^{\prime}_{n}(s)-m^{\prime}(s)\times u^{\prime}(s),\phi\right\rangle_{L^{2}}\,ds=\mathbb{E}^{\prime}\int_{0}^{t}\left\langle\left(m^{\prime}_{n}(s)-m^{\prime}(s)\right)\times u^{\prime}_{n}(s),\phi\right\rangle_{L^{2}}\,ds+\mathbb{E}^{\prime}\int_{0}^{t}\left\langle m^{\prime}(s)\times\left(u^{\prime}_{n}(s)-u^{\prime}(s)\right),\phi\right\rangle_{L^{2}}\,ds. \tag{9.50}\] In what follows we are going to prove that the first term on the right hand side of equality (9.50) converges to \(0\). For this aim we have the following sequence of inequalities. \[\left|\mathbb{E}^{\prime}\int_{0}^{t}\left\langle(m^{\prime}_{n}(s)-m^{\prime}(s))\times u^{\prime}_{n}(s),\phi\right\rangle_{L^{2}}\,ds\right|\leq\mathbb{E}^{\prime}\int_{0}^{t}\left|\left\langle(m^{\prime}_{n}(s)-m^{\prime}(s))\times u^{\prime}_{n}(s),\phi(s)\right\rangle_{L^{2}}\right|\,ds\] \[\leq\mathbb{E}^{\prime}\int_{0}^{t}\left|m^{\prime}_{n}(s)-m^{\prime}(s)\right|_{L^{4}}\left|u^{\prime}_{n}(s)\right|_{L^{2}}\left|\phi(s)\right|_{L^{4}}\,ds\] \[\leq C\left[\mathbb{E}^{\prime}\int_{0}^{t}\left|m^{\prime}_{n}(s)-m^{\prime}(s)\right|_{L^{4}}^{4}\,ds\right]^{\frac{1}{4}}\left[\mathbb{E}^{\prime}\left(\int_{0}^{t}\left|u^{\prime}_{n}(s)\right|_{L^{2}}^{2}\,ds\right)\right]^{\frac{1}{4}}\left[\mathbb{E}^{\prime}\int_{0}^{t}\left|\phi(s)\right|_{H^{1}}^{4}\,ds\right]^{\frac{1}{2}}\] \[\leq C\left[\mathbb{E}^{\prime}\int_{0}^{t}\left|m^{\prime}_{n}(s)-m^{\prime}(s)\right|_{L^{4}}^{4}\,ds\right]^{\frac{1}{4}}.\] The last inequality is a consequence of Assumption 9.3 and the assumption on \(\phi\). By (9.43), the right hand side, and hence the left hand side, of the above inequality goes to \(0\) as \(n\) goes to infinity, and hence our claim follows. For the second term in (9.50), \[\mathbb{E}^{\prime}\int_{0}^{t}\left\langle m^{\prime}(s)\times\left(u^{\prime}_{n}(s)-u^{\prime}(s)\right),\phi\right\rangle_{L^{2}}\,ds=\mathbb{E}^{\prime}\int_{0}^{t}\left\langle\left(u^{\prime}_{n}(s)-u^{\prime}(s)\right),m^{\prime}(s)\times\phi\right\rangle_{L^{2}}\,ds.\] By the constraint condition (3.9), \[m^{\prime}\times\phi\in L^{2}(\Omega^{\prime};L^{2}(0,T;L^{2})).\] Hence by (9.49) we infer that the right hand side above converges to \(0\) as \(n\) goes to infinity. In particular, for \(\psi\in L^{2}(\Omega^{\prime}:H^{1})\), \[\lim_{n\to\infty}\mathbb{E}^{\prime}\left[\int_{0}^{t}\left\langle m^{\prime}_{n}(s)\times u^{\prime}_{n}(s)-m^{\prime}(s)\times u^{\prime}(s),\psi\right\rangle_{L^{2}}\,ds\right]=0.\] Following the arguments in Section 5, we can show that the process \(m^{\prime}\) is a weak martingale solution to the problem (9.1). A couple of key differences are as follows. First of all, the solutions are no longer taking values in finite dimensional spaces. But the arguments can still be repeated with appropriate infinite dimensional spaces replacing the corresponding finite dimensional spaces. Secondly, the projection operator and the cut-off function are absent. But this is actually a simplification.
That \(m^{\prime}\) is a weak martingale solution to the problem (9.1), along with Lemma 9.15, implies that the process \(m^{\prime}\) is a strong martingale solution to the problem (9.1) on the probability space \((\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{P}^{\prime})\) corresponding to the control process \(u^{\prime}\) and Wiener process \(W^{\prime}\). We also need to show that the process \(u^{\prime}\) satisfies Assumption 9.3. To show this, let \(p\geq 1\). The sequence \(u^{\prime}_{n}\) converges to \(u^{\prime}\) in \(L^{2}_{w}(0,T;L^{2})\) \(\mathbb{P}^{\prime}\)-a.s. Hence \(\mathbb{P}^{\prime}\)-a.s. \[\left|u^{\prime}\right|_{L^{2}(0,T;L^{2})}\leq\liminf_{n\to\infty}\left|u^{\prime}_{n}\right|_{L^{2}(0,T;L^{2})}. \tag{9.51}\] Therefore \[\left|u^{\prime}\right|_{L^{2}(0,T;L^{2})}^{2p}\leq\left(\liminf_{n\to\infty}\left|u^{\prime}_{n}\right|_{L^{2}(0,T;L^{2})}\right)^{2p}\] \[\leq\liminf_{n\to\infty}\left|u^{\prime}_{n}\right|_{L^{2}(0,T;L^{2})}^{2p}.\] Taking the expectation of both sides gives \[\mathbb{E}^{\prime}\left|u^{\prime}\right|_{L^{2}(0,T;L^{2})}^{2p}\leq\mathbb{E}^{\prime}\left[\liminf_{n\to\infty}\left|u^{\prime}_{n}\right|_{L^{2}(0,T;L^{2})}^{2p}\right].\] Hence by the Fatou Lemma, \[\mathbb{E}^{\prime}\left|u^{\prime}\right|_{L^{2}(0,T;L^{2})}^{2p}\leq\liminf_{n\to\infty}\mathbb{E}^{\prime}\left|u^{\prime}_{n}\right|_{L^{2}(0,T;L^{2})}^{2p}\leq K_{p}. \tag{9.52}\] Therefore the control process \(u^{\prime}\) satisfies Assumption 9.3. What remains to show is that this solution minimizes the cost functional. We show that in the following steps. Using the strong convergence in (9.43), we can show that \[\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}(t)-\bar{m}(t)\right|_{H^{1}}^{2}\,dt=\lim_{n\to\infty}\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(t)-\bar{m}(t)\right|_{H^{1}}^{2}\,dt. \tag{9.53}\] Similarly, (9.52) with \(p=1\) implies that \[\mathbb{E}^{\prime}\int_{0}^{T}\left|u^{\prime}(t)\right|_{L^{2}}^{2}\,dt\leq\liminf_{n\to\infty}\mathbb{E}^{\prime}\int_{0}^{T}\left|u^{\prime}_{n}(t)\right|_{L^{2}}^{2}\,dt. \tag{9.54}\] Recall that the terminal cost function \(\Psi\) is assumed to be continuous on \(L^{2}\). We have, by the convergence in (9.29) in Proposition 9.10, that \(m^{\prime}_{n}\to m^{\prime}\) in \(C([0,T];L^{2})\) \(\mathbb{P}^{\prime}\)-a.s. Therefore we have \(\Psi(m^{\prime}_{n}(T))\to\Psi(m^{\prime}(T))\) \(\mathbb{P}^{\prime}\)-a.s. as \(n\) goes to infinity. In particular, we have \[\mathbb{E}^{\prime}\Psi(m^{\prime}(T))\leq\liminf_{n\to\infty}\mathbb{E}^{\prime}\Psi(m^{\prime}_{n}(T)). \tag{9.55}\] Combining (9.53), (9.54) and (9.55), we have \[J(\pi^{\prime})\leq\liminf_{n\to\infty}J(\pi^{\prime}_{n}). \tag{9.56}\] Recall that by Proposition 9.10, the laws of \(m_{n}\) and \(m^{\prime}_{n}\) are equal on the space \(L^{2}(0,T;H^{1})\cap C([0,T];L^{2})\). Similarly, the laws of \(u_{n}\) and \(u^{\prime}_{n}\) are equal on the space \(L^{2}(0,T;L^{2})\). Hence \[\liminf_{n\to\infty}J(\pi^{\prime}_{n})\leq\liminf_{n\to\infty}J(\pi_{n}). \tag{9.57}\] Therefore combining (9.56) with (9.57), we have \[J(\pi^{\prime})\leq\liminf_{n\to\infty}J(\pi^{\prime}_{n})\leq\liminf_{n\to\infty}J(\pi_{n})=\Lambda. \tag{9.58}\] Since \(\Lambda\) is the infimum, \(J(\pi^{\prime})=\Lambda\). Hence \(\pi^{\prime}\) is an optimal control as defined in Definition 3.6. This concludes the proof of Theorem 3.7. **Acknowledgement**: The authors would like to thank Dr. Manil T.
Mohan, Indian Institute of Technology, Roorkee, for his rigorous reading of the manuscript and useful comments and suggestions. ## Appendix A Proof of Lemma 8.1 In this section we give a proof of Lemma 8.1. Proof of Lemma 8.1.: The idea of the proof is as follows. We show that there exists a sequence of functions in \(C\left([0,T];H^{1}\right)\) that converges uniformly in \(C\left([0,T];H^{1}\right)\) to the process \(m\), hence showing that \(m\in C\left([0,T];H^{1}\right)\). Let \(P_{n}:L^{2}\to H_{n}\) be the projection operator defined in Section 4. Let us fix the following notation for this lemma. For \(n\in\mathbb{N}\), \(m^{n}:=P_{n}(m)\). Fix two numbers \(n,k\in\mathbb{N}\) such that \(n\geq k\). The following equation is satisfied \(\mathbb{P}\)-a.s. by the projections \(m^{i}\) for \(i=n,k\). \[m^{i}(t)= P_{i}(m_{0})+\int_{0}^{t}P_{i}\big{(}m(s)\times\Delta m(s)\big{)}\,ds-\alpha\,\int_{0}^{t}P_{i}\big{[}m(s)\times\big{(}m(s)\times\Delta m(s)\big{)}\big{]}\,ds\] \[+\int_{0}^{t}P_{i}\big{(}m(s)\times u(s)\big{)}\,ds-\alpha\,\int_{0}^{t}P_{i}\big{[}m(s)\times\big{(}m(s)\times u(s)\big{)}\big{]}\,ds\] \[+\frac{1}{2}\int_{0}^{t}P_{i}\left[DG\big{(}m(s)\big{)}\right]\big{[}G\big{(}m\left(s\right)\big{)}\big{]}\,\,ds+\int_{0}^{t}P_{i}G\big{(}m(s)\big{)}\,dW(s).\] (A.1) Also, \[m^{n}-m^{k}=\left(P_{n}-P_{k}\right)m.\] Hence, the difference \(m^{n}-m^{k}\) satisfies the following equation \(\mathbb{P}\)-a.s. \[m^{n}(t)-m^{k}(t)= \left(P_{n}-P_{k}\right)m_{0}+\int_{0}^{t}\left[P_{n}-P_{k}\right]\left[m(s)\times\Delta m(s)\right]\,ds\] \[-\alpha\,\int_{0}^{t}\left(P_{n}-P_{k}\right)\big{[}m(s)\times\big{(}m(s)\times\Delta m(s)\big{)}\big{]}\,ds\] \[+\int_{0}^{t}\left(P_{n}-P_{k}\right)\big{(}m(s)\times u(s)\big{)}\,ds-\alpha\,\int_{0}^{t}\left(P_{n}-P_{k}\right)\big{[}m(s)\times\big{(}m(s)\times u(s)\big{)}\big{]}\,\,ds\] \[+\frac{1}{2}\int_{0}^{t}\left(P_{n}-P_{k}\right)\big{[}DG\big{(}m(s)\big{)}\big{]}\left[G\big{(}m\left(s\right)\big{)}\right]\,\,ds+\int_{0}^{t}\left(P_{n}-P_{k}\right)G\big{(}m(s)\big{)}\,dW(s).\] (A.2) Consider a function \(\phi_{5}:H^{1}\to\mathbb{R}\) defined by \[\phi_{5}(v)=\frac{1}{2}\left|v\right|_{H^{1}}^{2},\text{ for }v\in H^{1}.\] (A.3) Let \(v,w_{1},w_{2}\in H^{1}.\) We observe that \(\phi_{5}(v)=\frac{1}{2}\left|v\right|_{L^{2}}^{2}+\frac{1}{2}\left|\nabla v\right|_{L^{2}}^{2}\). We also recall that \(A_{1}=I_{L^{2}}+A\). Hence \[\phi_{5}^{\prime}(v)(w_{1})=\left\langle A_{1}v,w_{1}\right\rangle_{L^{2}}.\] Indeed, let \(v,w\in H^{1}\). Then \[\phi_{5}^{\prime}(v)(w) =\left\langle v,w\right\rangle_{L^{2}}+\left\langle\nabla v,\nabla w\right\rangle_{L^{2}}\] \[=\left\langle v,w\right\rangle_{L^{2}}+\left\langle-\Delta v,w\right\rangle_{L^{2}}\] \[=\left\langle v+(-\Delta)v,w\right\rangle_{L^{2}}\] \[=\left\langle A_{1}v,w\right\rangle_{L^{2}}.\] Moreover, \[\phi_{5}^{\prime\prime}(v)(w_{1},w_{2})=\left\langle w_{1},w_{2}\right\rangle_{L^{2}}+\left\langle\nabla w_{1},\nabla w_{2}\right\rangle_{L^{2}}.\] We apply the Ito formula to \(\phi_{5}\) for the process \(m^{n}-m^{k}\). Thus we get the following.
For \(t\in[0,T]\) \[\frac{1}{2}\left|m^{n}(t)-m^{k}(t)\right|_{H^{1}}^{2}= \frac{1}{2}\left|A_{1}\left(P_{n}-P_{k}\right)m_{0}\right|_{L^{2}}^{2}\] \[+\int_{0}^{t}\left\langle A_{1}\left(P_{n}-P_{k}\right)m(s),\big{(}m(s)\times\Delta m(s)\big{)}\right\rangle_{L^{2}}\,ds\] \[-\alpha\,\int_{0}^{t}\left\langle A_{1}\left(P_{n}-P_{k}\right)m(s),\left[m(s)\times\left(m(s)\times\Delta m(s)\right)\right]\right\rangle_{L^{2}}\,ds\] \[+\int_{0}^{t}\left\langle A_{1}\left(P_{n}-P_{k}\right)m(s),\left(m(s)\times u(s)\right)\right\rangle_{L^{2}}\,ds\] \[-\alpha\,\int_{0}^{t}\left\langle A_{1}\left(P_{n}-P_{k}\right)m(s),\left[m(s)\times\left(m(s)\times u(s)\right)\right]\right\rangle_{L^{2}}\,ds\] \[+\frac{1}{2}\int_{0}^{t}\left\langle A_{1}\left(P_{n}-P_{k}\right)m(s),\left[DG\left(m(s)\right)\right]\left(G\big{(}m\left(s\right)\big{)}\right)\right\rangle_{L^{2}}\,ds\] \[+\frac{1}{2}\int_{0}^{t}\left|\left(P_{n}-P_{k}\right)G\big{(}m(s)\big{)}\right|_{H^{1}}^{2}\,ds\] \[+\int_{0}^{t}\left\langle A_{1}\left(P_{n}-P_{k}\right)m(s),G\big{(}m(s)\big{)}\right\rangle_{L^{2}}\,dW(s),\ \mathbb{P}-a.s.\] (A.4) We take \(\sup_{t\in[0,T]}\) of both sides of the above equality to get \[\sup_{t\in[0,T]}\frac{1}{2}\left|m^{n}(t)-m^{k}(t)\right|_{H^{1}}^{2}\leq\frac{1}{2}\left|A_{1}\left(P_{n}-P_{k}\right)m_{0}\right|_{L^{2}}^{2}\] \[+\int_{0}^{T}\left|\left\langle A_{1}\left(P_{n}-P_{k}\right)m(s),\left(m(s)\times\Delta m(s)\right)\right\rangle_{L^{2}}\right|\,ds\] \[+\alpha\,\int_{0}^{T}\left|\left\langle A_{1}\left(P_{n}-P_{k}\right)m(s),\left[m(s)\times\left(m(s)\times\Delta m(s)\right)\right]\right\rangle_{L^{2}}\right|\,ds\] \[+\int_{0}^{T}\left|\left\langle A_{1}\left(P_{n}-P_{k}\right)m(s),\left(m(s)\times u(s)\right)\right\rangle_{L^{2}}\right|\,ds\] \[+\alpha\,\int_{0}^{T}\left|\left\langle A_{1}\left(P_{n}-P_{k}\right)m(s),\left[m(s)\times\left(m(s)\times u(s)\right)\right]\right\rangle_{L^{2}}\right|\,ds\] \[+\frac{1}{2}\int_{0}^{T}\left|\left\langle A_{1}\left(P_{n}-P_{k}\right)m(s),\left[DG\big{(}m(s)\big{)}\right]\left[G\big{(}m\left(s\right)\big{)}\right]\right\rangle_{L^{2}}\right|\,ds\] \[+\frac{1}{2}\int_{0}^{T}\left|\left(P_{n}-P_{k}\right)G\big{(}m(s)\big{)}\right|_{H^{1}}^{2}\,ds\] \[+\sup_{t\in[0,T]}\int_{0}^{t}\left|\left\langle A_{1}\left(P_{n}-P_{k}\right)m(s),G\big{(}m(s)\big{)}\right\rangle_{L^{2}}\right|\,dW(s)\] \[= \frac{1}{2}\left|A_{1}\left(P_{n}-P_{k}\right)m_{0}\right|_{H^{1}}^{2}+\sum_{i=1}^{7}c_{i}I_{i}(T).\] (A.5) We now take the expectation of both sides of the above inequality. We then fix \(k\) and let \(n\) go to infinity. We claim convergence of the right hand side by the monotone convergence theorem. After that we show that each term on the right hand side of the resulting inequality goes to \(0\) as \(k\) goes to infinity. To simplify the presentation, we write the bounds on each term individually and combine them to get the desired result. We have the following inequalities by the Cauchy-Schwarz inequality.
**Calculation for \(I_{1}\)** \[\mathbb{E}\int_{0}^{T}\left|\left\langle A_{1}\left(P_{n}-P_{k} \right)m(s),\left(m(s)\times\Delta m(s)\right)\right\rangle_{L^{2}}\right|\,ds\] \[\leq\left(\mathbb{E}\int_{0}^{T}\left|A_{1}\left(P_{n}-P_{k} \right)m(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\left(\mathbb{E}\int_{ 0}^{T}\left|m(s)\times\Delta m(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}.\] **Calculation for \(I_{2}\)** \[\mathbb{E}\int_{0}^{T}\left|\left\langle A_{1}\left(P_{n}-P_{k} \right)m(s),m(s)\times\left(m(s)\times\Delta m(s)\right)\right\rangle_{L^{2}} \right|\,ds\] \[\leq\left(\mathbb{E}\int_{0}^{T}\left|A_{1}\left(P_{n}-P_{k} \right)m(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\left(\mathbb{E}\int_{0 }^{T}\left|m(s)\times(m(s)\times\Delta m(s))\right|_{L^{2}}^{2}\,ds\right)^{ \frac{1}{2}}.\] **Calculation for \(I_{3}\)** \[\mathbb{E}\int_{0}^{T}\left|\left\langle A_{1}\left(P_{n}-P_{k} \right)m(s),m(s)\times u(s)\right\rangle_{L^{2}}\right|\,ds\] \[\leq\left(\mathbb{E}\int_{0}^{T}\left|A_{1}\left(P_{n}-P_{k} \right)m(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\left(\mathbb{E}\int_{ 0}^{T}\left|m(s)\times u(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}.\] **Calculation for \(I_{4}\)** \[\mathbb{E}\int_{0}^{T}\left|\left\langle A_{1}\left(P_{n}-P_{k} \right)m(s),m(s)\times\left(m(s)\times u(s)\right)\right\rangle_{L^{2}}\right| \,ds\] \[\leq\left(\mathbb{E}\int_{0}^{T}\left|A_{1}\left(P_{n}-P_{k} \right)m(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\left(\mathbb{E}\int_{ 0}^{T}\left|m(s)\times\left(m(s)\times u(s)\right)\right|_{L^{2}}^{2}\,ds \right)^{\frac{1}{2}}.\] **Calculation for \(I_{5}\)** Similarly, \[\mathbb{E}\int_{0}^{T}\left|\left\langle A_{1}\left(P_{n}-P_{k} \right)m(s),\left[DG\big{(}m(s)\big{)}\right]\left[G\big{(}m\left(s\right) \big{)}\right]\right\rangle_{L^{2}}\right|\,ds\] \[\leq\left(\mathbb{E}\int_{0}^{T}\left|A_{1}\left(P_{n}-P_{k} \right)m(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\left(\mathbb{E}\int_{ 0}^{T}\left|\left[DG\big{(}m(s)\big{)}\right]\left[G\big{(}m\left(s\right) \big{)}\right]\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}.\] **Calculation for \(I_{7}\)** By the Burkholder-Davis-Gundy inequality, followed by the Cauchy-Schwartz inequality, there exists a constant \(C_{1}>0\) such that \[\mathbb{E}\sup_{0\leq t\leq T}\left|\int_{0}^{t}\left|\left\langle A_{1} \left(P_{n}-P_{k}\right)m(s),G\big{(}m(s)\big{)}\right\rangle_{L^{2}}\right| \,dW(s)\right|\] \[\leq C_{1}\mathbb{E}\left[\left(\int_{0}^{T}\left|A_{1}\left(P_{n }-P_{k}\right)m(s)\right|_{L^{2}}^{2}\left|G\big{(}m(s)\big{)}\right|_{L^{2}} ^{2}\,ds\right)^{\frac{1}{2}}\right].\] By the constraint condition and the assumption on the function \(h\), there exists another constant \(C>0\) such that \[\mathbb{E}\sup_{0\leq t\leq T}\left|\int_{0}^{t}\left|\left\langle A_{1}\left( P_{n}-P_{k}\right)m(s),G\big{(}m(s)\big{)}\right\rangle_{L^{2}}\right|\,dW(t) \right|\leq C\left(\mathbb{E}\int_{0}^{T}\left|A_{1}\left(P_{n}-P_{k}\right)m( s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}.\] Thus combining the above mentioned inequalities gives \[\mathbb{E}\sup_{t\in[0,T]}\left|m^{n}(t)-m^{k}(t)\right|_{H^{1}}^ {2}\] \[\leq\frac{1}{2}\left|A_{1}\left(P_{n}-P_{k}\right)m_{0}\right|_{H^ {1}}^{2}\] \[+\left(\mathbb{E}\int_{0}^{T}\left|A_{1}\left(P_{n}-P_{k}\right)m(s) \right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\left[\left(\mathbb{E}\int_{0}^{T} \left|m(s)\times\Delta m(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\right.\] \[+\left(\mathbb{E}\int_{0}^{T}\left|m(s)\times\left(m(s)\times 
\Delta m(s)\right)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}+\left(\mathbb{E}\int_{0}^{T}\left|\left[DG\left(m(s)\right)\right]\left[G\left(m\left(s\right)\right)\right]\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\] \[+\left(\mathbb{E}\int_{0}^{T}\left|m(s)\times u(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}+\left(\mathbb{E}\int_{0}^{T}\left|m(s)\times\left(m(s)\times u(s)\right)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\Bigg{]}\] \[+\mathbb{E}\int_{0}^{T}\left|\left(P_{n}-P_{k}\right)G(m(s))\right|_{H^{1}}^{2}\,ds+C\left(\mathbb{E}\int_{0}^{T}\left|A_{1}\left(P_{n}-P_{k}\right)m(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}.\] (A.6) In the above inequality, we fix \(k\) and let \(n\) go to infinity. For any \(v\in L^{2}\), \[P_{n}v\to v\] as \(n\) goes to infinity. Recall that \(n,k\) were chosen such that \(n\geq k\). Since \(P_{n}\) is an orthogonal projection for each \(n\in\mathbb{N}\), \[\left|\left(P_{n_{2}}-P_{k}\right)v\right|_{L^{2}}\leq\left|\left(P_{n_{1}}-P_{k}\right)v\right|_{L^{2}}\] for any \(n_{1},n_{2}\in\mathbb{N}\) such that \(n_{1}\geq n_{2}\geq k\). Hence by the monotone convergence theorem, we have the following inequality as \(n\) goes to infinity in (A.6). \[\mathbb{E}\sup_{t\in[0,T]}\left|m(t)-m^{k}(t)\right|_{H^{1}}^{2}\leq\frac{1}{2}\left|A_{1}\left(I_{L^{2}}-P_{k}\right)m_{0}\right|_{H^{1}}^{2}\] \[+\left(\mathbb{E}\int_{0}^{T}\left|A_{1}\left(I_{L^{2}}-P_{k}\right)m(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\Bigg{[}\left(\mathbb{E}\int_{0}^{T}\left|m(s)\times\Delta m(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\] \[+\left(\mathbb{E}\int_{0}^{T}\left|m(s)\times\left(m(s)\times\Delta m(s)\right)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}+\left(\mathbb{E}\int_{0}^{T}\left|\left[DG\big{(}m(s)\big{)}\right]\left[G\big{(}m\left(s\right)\big{)}\right]\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\] \[+\left(\mathbb{E}\int_{0}^{T}\left|m(s)\times u(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}+\left(\mathbb{E}\int_{0}^{T}\left|m(s)\times\left(m(s)\times u(s)\right)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\Bigg{]}\] \[+\mathbb{E}\int_{0}^{T}\left|\left(I_{L^{2}}-P_{k}\right)G\big{(}m(s)\big{)}\right|_{H^{1}}^{2}\,ds+C\left(\mathbb{E}\int_{0}^{T}\left|A_{1}\left(I_{L^{2}}-P_{k}\right)m(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}.\] (A.7) By Lemma 4.9, the terms on the right hand side of the above inequality are bounded. Hence the right hand side of the above inequality goes to \(0\) as \(k\) goes to infinity. Hence \[\lim_{k\to\infty}\mathbb{E}\sup_{t\in[0,T]}\left|m(t)-m^{k}(t)\right|_{H^{1}}^{2}=0.\] (A.8) Thus there exists a subsequence of \(m^{k}\), again denoted by \(m^{k}\), such that \(\mathbb{P}\)-a.s. \[\lim_{k\to\infty}\sup_{t\in[0,T]}\left|m(t)-m^{k}(t)\right|_{H^{1}}^{2}=0.\] (A.9) The paths of the process \(m^{k}\) lie in \(C([0,T];H^{1})\). Hence the above convergence implies that \(\mathbb{P}\)-a.s. the process \(m\) takes values in \(C([0,T];H^{1})\). This concludes the proof of Lemma 8.1. ## Appendix B Proof for (7.21) Consider a real valued stochastic process \(\{X(t)\}_{t\in[0,T]}\) given by \[X(t)=\int_{0}^{t}a(s)\,ds+\int_{0}^{t}b(s)\,dW(s)\ \mathbb{P}\ \text{-a.s.}\] (B.1) Consider the function \(F\) given by \[F(t,x)=xe^{-\int_{0}^{t}\Phi_{C}(s)\,ds}.\] (B.2) Let \(F_{t}^{\prime},F_{x}^{\prime}\) and \(F_{xx}^{\prime\prime}\) denote its partial derivatives.
Then \[F_{t}^{\prime}(t,x)=-\Phi_{C}(t)xe^{-\int_{0}^{t}\Phi_{C}(s)\,ds}\] and \[F_{x}^{\prime}(t,x)=e^{-\int_{0}^{t}\Phi_{C}(s)\,ds}.\] Also, \[F_{xx}^{\prime\prime}(t,x)=0.\] Applying the Ito formula for the function \(F\), we get \[F(t,X(t)) =\int_{0}^{t}\left[F_{t}^{\prime}\big{(}s,X(s)\big{)}+F_{x}^{\prime}\big{(}s,X(s)\big{)}a(s)+\frac{1}{2}F_{xx}^{\prime\prime}\big{(}s,X(s)\big{)}b^{2}(s)\right]\,ds\] \[+\int_{0}^{t}F_{x}^{\prime}\big{(}s,X(s)\big{)}b(s)\,dW(s).\] Therefore, \[X(t)e^{-\int_{0}^{t}\Phi_{C}(s)\,ds} =\int_{0}^{t}\left[-\Phi_{C}(s)X(s)e^{-\int_{0}^{s}\Phi_{C}(r)\,dr}+e^{-\int_{0}^{s}\Phi_{C}(r)\,dr}a(s)\right]\,ds\] \[+\int_{0}^{t}e^{-\int_{0}^{s}\Phi_{C}(r)\,dr}b(s)\,dW(s).\] Going back to Section 7, the application of the Ito Lemma for the function \[v\mapsto\frac{1}{2}\left|v\right|_{L^{2}}^{2},\] gives a representation of the process \(X(t)=\left|m(t)\right|_{L^{2}}^{2}\). Applying the Ito formula for the function \(F\) as done above gives \[\left|m(t)\right|_{L^{2}}^{2}e^{-\int_{0}^{t}\Phi_{C}(s)\,ds} =\int_{0}^{t}\left[-\Phi_{C}(s)\left|m(s)\right|_{L^{2}}^{2}e^{-\int_{0}^{s}\Phi_{C}(r)\,dr}+e^{-\int_{0}^{s}\Phi_{C}(r)\,dr}a(s)\right]\,ds\] \[+\int_{0}^{t}e^{-\int_{0}^{s}\Phi_{C}(r)\,dr}b(s)\,dW(s).\] The non-negative function \(\Phi_{C}\) is chosen such that for each \(t\in[0,T]\), \[\int_{0}^{t}a(s)\,ds\leq\int_{0}^{t}\left|m(s)\right|_{L^{2}}^{2}\Phi_{C}(s)\,ds.\] Moreover, if \(\Phi_{C}\) is \(\mathbb{P}\)-a.s. integrable over \([0,T]\), then \[0\leq e^{-\int_{0}^{t}\Phi_{C}(s)\,ds}\leq 1,\] for each \(t\in[0,T]\). Hence \[\int_{0}^{t}e^{-\int_{0}^{s}\Phi_{C}(r)\,dr}\left[a(s)-\Phi_{C}(s)\left|m(s)\right|_{L^{2}}^{2}\right]\,ds\leq 0.\] Therefore, \[\left|m(t)\right|_{L^{2}}^{2}e^{-\int_{0}^{t}\Phi_{C}(s)\,ds}\leq\int_{0}^{t}e^{-\int_{0}^{s}\Phi_{C}(r)\,dr}b(s)\,dW(s).\] This concludes the proof of the inequality (7.21). ## Appendix C Some embeddings **Lemma C.1** (Result 1, Appendix A, [18]).: _We have the following continuous embedding for \(\delta\in\left(\frac{5}{8},\frac{3}{4}\right)\)._ \[W^{2\delta,2}\hookrightarrow W^{1,4}.\] (C.1) **Lemma C.2** (Lemma A.1 [13]).: _Let \(E\) be a separable Hilbert space. Let \(2\leq p<\infty\) and \(0<\alpha<\frac{1}{2}\)._ _Let \(\zeta\) be a process \(\zeta:[0,T]\times\Omega\to E\) such that_ \[\mathbb{E}\int_{0}^{T}|\zeta(t)|_{E}^{p}\ dt<\infty.\] (C.2) _Define a process \(I(\zeta)\) by_ \[I(\zeta)=\int_{0}^{t}\zeta(s)\,dW(s),\quad t\geq 0.\] (C.3) _Then for all such processes \(\zeta\), there exists a constant \(C\) that depends on \(T,\alpha\) such that_ \[\mathbb{E}\left|I(\zeta)\right|_{W^{\alpha,p}(0,T;E)}^{p}\leq C\mathbb{E}\int_{0}^{T}|\zeta(t)|_{E}^{p}\ dt.\] (C.4) _In particular, \(\mathbb{P}\)-a.s. the paths of \(I(\zeta)\) belong to the space \(W^{\alpha,2}(0,T;E)\)._ We now state the Young's convolution inequality. **Lemma C.3** (Young's convolution inequality, Theorem 3.9.4 in [9]).: _Let \(f\in L^{p},g\in L^{q}\). Let \(p,q,r\geq 1\) be such that_ \[\frac{1}{p}+\frac{1}{q}=\frac{1}{r}+1.\] _Then_ \[\left|f*g\right|_{L^{r}}\leq\left|f\right|_{L^{p}}\left|g\right|_{L^{q}}.\] **Lemma C.4** (Theorem 17.7, [41]).: _Let \(p\in(0,\infty)\). Then there exists a constant \(K_{p}>0\) with the following property. Let \(T\in(0,\infty)\) and \((\Omega,\mathcal{F},\mathbb{F},\mathbb{P})\) be a filtered probability space satisfying the usual conditions. Let \(\{W(t),t\geq 0\}\) be a Wiener process on this space.
For any progressively measurable function_ \[F:[0,T]\times\Omega\to\mathbb{R}\] _such that \(\mathbb{P}\left(\int_{0}^{T}F^{2}(s)\,ds<\infty\right)=1\), the following holds_ \[\mathbb{E}\left[\sup_{t\in[0,T]}\left|\int_{0}^{t}F(s)\,dW(s)\right|^{2p}\right]\leq K_{p}\mathbb{E}\left[\left(\int_{0}^{T}F^{2}(s)\,ds\right)^{p}\right].\] **Lemma C.5** (Theorem 1.4.8, [38]).: _For \(\gamma_{1}>\gamma_{2}\), \(X^{\gamma_{1}}\) is compactly embedded into the space \(X^{\gamma_{2}}\)._ **Lemma C.6** (Theorem 2.2, [31]).: _Let \(B\subset\tilde{B}\) be two Banach spaces with compact embedding, and let the real numbers \(\gamma\in(0,1)\), \(p>1\) satisfy_ \[\gamma p>1.\] _Then the space \(W^{\gamma,p}(0,T;B)\) is compactly embedded into the space \(C([0,T];\tilde{B})\)._ **Lemma C.7** (Theorem 2.1, [31]).: _Let \(B_{0}\subset B\subset B_{1}\) be Banach spaces. Assume that \(B_{0}\) and \(B_{1}\) are reflexive. Further assume that the embedding \(B_{0}\subset B\) is compact, the embedding of \(B\hookrightarrow B_{1}\) being continuous. Let \(p,q\in(1,\infty)\) and \(\gamma\in(0,1)\). Then we have the following compact embedding_ \[L^{p}(0,T;B_{0})\cap W^{\gamma,q}(0,T;B_{1})\hookrightarrow L^{p}(0,T;B).\] **Lemma C.8** (See Lemma 5, [61]).: _Let \(f\in W^{\sigma,r}(0,T;B)\), \(0<\sigma<1\), \(1\leq r\leq\infty\) and \(p\) be such that_ 1. \(p\leq\infty\) _if_ \(\sigma>\frac{1}{r}\)_,_ 2. \(p<\infty\) _if_ \(\sigma=\frac{1}{r}\)_,_ 3. \(p\leq r_{*}=\frac{r}{1-\sigma r}\) _if_ \(\sigma<\frac{1}{r}\)_._ _Then \(f\in L^{p}(0,T;B)\) and there exists a constant \(C\) independent of \(f\) such that for all \(h>0\),_ \[\left|\tau_{h}f-f\right|_{L^{p}(0,T-h;B)}\leq\left\{\begin{array}{rl}&ch^{\sigma+\frac{1}{p}-\frac{1}{r}}\left|f\right|_{\dot{W}^{\sigma,r}(0,T;B)},\text{ if }r\leq p<\infty\\ &ch^{\sigma}T^{\frac{1}{p}-\frac{1}{r}}\left|f\right|_{\dot{W}^{\sigma,r}(0,T;B)},\text{ if }1\leq p\leq r.\end{array}\right.\] **Lemma C.9** (Theorem 3, [61]).: _Let \(X,B\) be Banach spaces with \(X\) compactly embedded in \(B\). Let \(F\subset L^{p}(0,T;B)\), where \(1\leq p\leq\infty\). Assume that_ 1. \(F\) _is bounded in_ \(L^{1}_{\text{loc}}(0,T;X)\)_._ 2. \(\left\|\tau_{h}f-f\right\|_{L^{p}(0,T-h;B)}\to 0\) _as_ \(h\to 0\) _uniformly for_ \(f\in F\)_._ _Then \(F\) is relatively compact in \(L^{p}(0,T;B)\) (and in \(C([0,T];B)\) if \(p=\infty\))._ **Lemma C.10** (Kuratowski, Theorem 1.1, [64]).: _Let \(X\) be a Polish space, \(Y\) be a separable metric space and let \(f:X\to Y\) be an injective Borel mapping. Then \(f(B)\in\mathcal{B}(Y)\) for any \(B\in\mathcal{B}(X)\)._ **Lemma C.11** (Theorem 2.1, [63]).: _Let \(X_{0},X,X_{1}\) be Banach spaces such that_ \[X_{0}\hookrightarrow X\hookrightarrow X_{1}\] _where the injection is continuous. Further assume that \(X_{0},X_{1}\) are reflexive spaces and the injection_ \[X_{0}\hookrightarrow X\] _is compact. Let \(\alpha_{0},\alpha_{1}>0\). Consider the space_ \[\mathcal{Y}=\left\{v\in L^{\alpha_{0}}\left(0,T;X_{0}\right):\frac{dv}{dt}\in L^{\alpha_{1}}\left(0,T;X_{1}\right)\right\}\] _The space \(\mathcal{Y}\) is provided with the norm_ \[\left|v\right|_{\mathcal{Y}}=\left|v\right|_{L^{\alpha_{0}}\left(0,T;X_{0}\right)}+\left|\frac{dv}{dt}\right|_{L^{\alpha_{1}}\left(0,T;X_{1}\right)}.\] _Then the space \(\mathcal{Y}\) is compactly embedded in the space \(L^{\alpha_{0}}\left(0,T;X\right)\)._
2309.15318
A Global 3-D Simulation of Magnetospheric Accretion: I. Magnetically Disrupted Disks and Surface Accretion
We present a 3-D ideal MHD simulation of magnetospheric accretion onto a non-rotating star. The accretion process unfolds with intricate 3-D structures driven by various mechanisms. First, the disc develops filaments at the magnetospheric truncation radius ($R_T$) due to magnetic interchange instability. These filaments penetrate deep into the magnetosphere, form multiple accretion columns, and eventually impact the star at $\sim$30$^o$ from the poles at nearly the free-fall speed. Over 50% (90%) of accretion occurs on just 5% (20%) of the stellar surface. Second, the disc region outside $R_T$ develops large-scale magnetically dominated bubbles, again due to magnetic interchange instability. These bubbles orbit at a sub-Keplerian speed, persisting for a few orbits while leading to asymmetric mass ejection. The disc outflow is overall weak because of mostly closed field lines. Third, magnetically-supported surface accretion regions appear above the disc, resembling a magnetized disc threaded by net vertical fields, a departure from traditional magnetospheric accretion models. Stellar fields are efficiently transported into the disc region due to above instabilities, contrasting with the ``X-wind'' model. The accretion rate onto the star remains relatively steady with a 23% standard deviation. The periodogram reveals variability occurring at around 0.2 times the Keplerian frequency at $R_T$, linked to the large-scale magnetic bubbles. The ratio of the spin-up torque to $\dot{M}(GM_*R_T)^{1/2}$ is around 0.8. Finally, after scaling the simulation, we investigate planet migration in the inner protoplanetary disc. The disc driven migration is slow in the MHD turbulent disc beyond $R_T$, while aerodynamic drag plays a significant role in migration within $R_T$.
Zhaohuan Zhu, James M. Stone, Nuria Calvet
2023-09-26T23:56:21Z
http://arxiv.org/abs/2309.15318v2
A Global 3-D Simulation of Magnetospheric Accretion: I. Magnetically Disrupted Disks and Surface Accretion ###### Abstract We present a 3-D ideal MHD simulation of magnetospheric accretion onto a non-rotating star. The accretion process unfolds with intricate 3-D structures driven by various mechanisms. First, the disk develops filaments at the magnetospheric truncation radius (\(R_{T}\)) due to magnetic interchange instability. These filaments penetrate deep into the magnetosphere, form multiple accretion columns, and eventually impact the star at \(\sim\)30\({}^{o}\) from the poles at nearly the free-fall speed. Over 50% (90%) of accretion occurs on just 5% (20%) of the stellar surface. Second, the disk region outside \(R_{T}\) develops large-scale magnetically dominated bubbles, again due to magnetic interchange instability. These bubbles orbit at a sub-Keplerian speed, persisting for a few orbits while leading to asymmetric mass ejection. Despite this, the disk outflow is weak. Third, magnetically-supported surface accretion regions appear above the disk, resembling a magnetized disk threaded by net vertical fields, a departure from traditional magnetospheric accretion models. Stellar fields are efficiently transported into the disk region, contrasting with the "X-wind" model. The accretion rate onto the star remains relatively steady with a 23% standard deviation. The periodogram reveals variability occurring at around 0.2 times the Keplerian frequency at \(R_{T}\), linked to the large-scale magnetic bubbles. The ratio of the spin-up torque to \(\dot{M}(GM_{*}R_{T})^{1/2}\) is around 0.8, with 70% of the torque exerted within \(R_{T}\). Finally, after scaling the simulation, we investigate planet migration in the inner protoplanetary disk. The disk driven migration is slow in the inner MHD turbulent disk beyond \(R_{T}\), while aerodynamic drag plays a significant role in migration within \(R_{T}\). accretion, accretion disks - dynamo - magnetohydrodynamics (MHD) - instabilities - X-rays: binaries - protoplanetary discs ## 1 Introduction Magnetospheric accretion plays a key role in many astrophysical systems, from neutron stars (e.g. Lewin & van der Klis, 2006) to T-Tauri stars (Hartmann et al., 2016) and even young planets (Wagner et al., 2018; Haffert et al., 2019; Zhou et al., 2021)1. For neutron stars, it could be related to the spin-up/spin-down of accreting X-ray pulsars (Bildsten et al., 1997), QPOs in low-mass X-ray binaries (van der Klis, 2006), ultraluminous X-ray sources (King et al., 2023), and relativistic jets/outflows (Fender et al., 2004), the latter of which are crucial for studying Gamma-ray bursts and neutron star mergers. For T-Tauri stars, magnetospheric accretion is directly detected via accretion shocks at the surface of the star (Calvet & Gullbring, 1998; Ingleby et al., 2011) and atomic lines produced within the magnetosphere (Hartmann et al., 1994; Muzerolle et al., 1998, 2001), which serve as the main ways to constrain the disk accretion rates (Rigliaco et al., 2012; Manara et al., 2013; Espaillat et al., 2022). Magnetospheric accretion also affects the structure of protoplanetary disks within 1 au, which is crucial for studying the formation of close-in exoplanets (Lee & Chiang, 2017; Liu et al., 2017). For young planets, magnetospheric accretion may be responsible for the detected \(H_{\alpha}\) lines around young giant planets (e.g.
Zhu, 2015; Thanathibodee et al., 2019; Marleau et al., 2022, but see Aoyama et al., 2018; Szulagyi & Ercolano, 2020), and emission from magnetospheric accretion could reveal young planets in protoplanetary disks. Footnote 1: The problem generator and input file for the simulation presented in this work can be found at [https://github.com/zhuzh1983/magnetospheric2023](https://github.com/zhuzh1983/magnetospheric2023) Magnetospheric accretion is a complex process primarily driven by the influence of the stellar magnetic fields. The magnetic field strength decreases sharply as distance from the star increases. In close proximity to the star, stellar fields are so strong that the flow is in the force-free regime. Moving outwards to the magnetospheric truncation radius, the flow and magnetic fields are dynamically balanced and the flow is in the strong field regime. Further out into the disk, the disk thermal pressure is far higher than the magnetic pressure so that the disk enters the weak field regime, where disk instabilities (e.g. magneto-rotational instability, MRI) could play an essential role in disk accretion. Several outstanding theoretical questions persist regarding magnetospheric accretion, as reviewed by Lai (2014): 1) How do the stellar magnetic fields connect with the disk? 2) Can accretion occur steadily throughout the region? 3) What controls the stellar spin? 4) How are outflows launched? and many others. To address these questions, numerous models have been proposed. In a seminal paper, Ghosh & Lamb (1979a) supposed that stellar fields can permeate a substantial radial region of the disk, reaching a steady state when field dragging is balanced by dissipation. This model suggests the existence of a broad transition zone (a detailed description of this model is given in §2.1). However, alternative models propose that the disk is a good conductor so that the field lines are pinched inwards at the boundary (Arons & Barnard, 1986, "X-wind" model from Shu et al., 1994). Furthermore, some models suggest that the accretion is not steady (e.g. Aly & Kuijpers, 1990; Lovelace et al., 1995; Uzdensky et al., 2002). In these scenarios, magnetic fields connecting the star and the disk undergo winding, causing the flux tube to expand with building magnetic pressure - so-called "field inflation". Subsequently, reconnection events take place, leading to the expulsion of outer field lines, resulting in mass ejection. Meanwhile, the inner reconnected field lines continue to wind up, perpetuating this cyclic process. Magnetospheric accretion could also be linked to the launching of jets and winds, expanding the scope beyond the conventional extended disk winds (Blandford & Payne, 1982; Wardle & Koenigl, 1993; Ferreira & Pelletier, 1995; Casse & Ferreira, 2000) and stellar winds (Sauty & Tsinganos, 1994; Hartmann & MacGregor, 1980). Various models have been suggested, including accretion-powered stellar winds (Matt & Pudritz, 2005), a wide-angle "X-wind" (Shu et al., 1994), or even unsteady magnetospheric winds due to reconnection (Ferreira et al., 2000; Hayashi et al., 1996; Matt et al., 2002). More detailed insights have been unveiled through direct numerical simulations (see review by Romanova & Owocki, 2015). Earlier simulations employed axisymmetric 2-D configurations (Goodson et al., 1997; Miller & Stone, 1997; Fendt & Elstner, 2000). These simulations have already revealed that the accretion structure depends sensitively on the initial stellar and disk field configurations (e.g.
parallel or anti-parallel). Both accretion and stellar winds (Zanni & Ferreira, 2013) could also vary strongly with time in these simulations. Later, 3-D simulations have been carried out (Romanova et al., 2003b, 2004b). Given the difficulty in simulating the polar region using the conventional spherical-polar coordinate system in 3-D, Romanova et al. (2003b) adopt the "cubed sphere" grid. To simulate the accretion disk, an \(\alpha\) viscosity is adopted in the disk region. Tilted dipole fields (Romanova et al., 2003b), fast rotators (Romanova et al., 2004a), tilted rotating stars with tilted dipole fields (Romanova et al., 2020) have all been studied. These works show that the accretion structure depends on the dipole tilt angle. Furthermore, warping of the disk, interchange instability, and Rayleigh-Taylor instability (Kulkarni & Romanova, 2008) can all play important roles for accretion with tilted dipole fields. However, due to the use of an \(\alpha\) viscosity, the self-consistent treatment of disk accretion driven by the MRI is not achieved in these models. MHD simulations including both the magnetosphere and MRI turbulence have been carried out with both axisymmetric 2-D and fully 3-D simulations (Romanova et al., 2011, 2012). The 3-D simulations again use the "cubed sphere" configuration. Both boundary layer accretion with weak fields and magnetospheric accretion with strong fields have been explored. These simulations reveal significant accretion variability. They show that the stress in the accretion disk is consistent with local MRI simulations. On the other hand, the global flow structure has not been thoroughly examined. Recent global MHD simulations reveal that, for disks threaded by external vertical magnetic fields (without including the stellar fields), the flow structure is dramatically different from that in local MRI simulations. The disk accretion structure sensitively depends on the strength of net vertical magnetic fields. When the net field is relatively weak (initial plasma \(\beta\sim 10^{3}-10^{4}\) at the disk midplane), most accretion occurs in the magnetically supported disk surface region that can extend vertically to \(z\sim R\)(Beckwith et al., 2009; Zhu & Stone, 2018; Takasao et al., 2018; Mishra et al., 2020; Jacquemin-Ide et al., 2021). A quasi-static global field geometry is established when the flux transport by the fast inflow at the surface is balanced by the slow vertical turbulent diffusion. When strong vertical fields thread the disk around black holes (BH), the disk flow can enter the regime of magnetically arrested disks (MAD, Narayan et al., 2003; Igumenshchev et al., 2003), where the accumulated poloidal fields disrupt the accretion flow to become discrete blobs/streams and the blobs/streams fight their way towards the BH through magnetic interchanges and reconnections. With poloidal magnetic fields acting like a wire lowering material down to the BH (Bekenstein, 1972), the MAD state leads to the efficient release of the rest mass energy and strong quasi-periodic outflows (Tchekhovskoy et al., 2011). The variability may be related to low-frequency QPOs, variability in AGN, and GRB outflows (Proga & Zhang, 2006). Considering that the disk-threading stellar dipole fields weaken sharply with distance to the star (from force-free, strong fields, to weak fields), some aspects of the MAD state and the newly discovered surface accretion mode can be applied to magnetospheric accretion.
Furthermore, owing to the high computational cost in 3-D simulations, most previous 3-D magnetospheric accretion simulations cannot follow the disk evolution over the viscous timescale, and thus focused on studying the magnetosphere itself. However, magnetospheric accretion plays a pivotal role in shaping the disk's long-term evolution. An unresolved challenge within accretion disk theory is the uncertainty of the inner boundary conditions, except for accretion onto black holes. For example, zero torque or zero mass flux inner conditions lead to different disk evolution paths (Lynden-Bell & Pringle, 1974). The proper inner boundary condition can only be understood if we incorporate the central object in the disk simulation. To study the interplay between the magnetosphere and the disk, encompassing all three different MHD regimes, we carry out high-resolution long-timescale global simulations that incorporate both the magnetized star and the disk. As a first step, we adopt the simplest setup which only includes a non-rotating star with a dipole field surrounded by an accretion disk. The disk is threaded by the stellar field only (without the external field) to avoid the complex interaction between the stellar and external fields. Since the star is non-rotating, the corotation radius is at infinity. Such a simple setup allows us to model a relatively clean problem as the foundation for future more realistic simulations. Nevertheless, this setup can be directly applied to slow rotators in many astrophysical systems. On the other hand, this setup does not allow us to study accretion around fast rotators, which could launch powerful jets and winds (Miller & Stone, 1997; Lovelace et al., 1999; Romanova et al., 2018) that make the star spin down (Matt & Pudritz, 2005). While we were preparing this manuscript, Takasao et al. (2022) published the results of 3-D ideal MHD simulations studying magnetospheric accretion onto stars with different spin rates. Our work and Takasao et al. (2022) share some similarities, but also bear significant differences. By exploring stars with different spin rates, Takasao et al. (2022) can address how wind launching and spin-up/down torque is affected by the stellar spin. On the other hand, our simulation adopts a Cartesian grid with mesh-refinement, which significantly reduces the computational cost. This allows our simulation to include a relatively large magnetosphere generated from the 1 kG stellar dipole (compared with 100 G in Takasao et al., 2022) and study the disk evolution on much longer timescales (2000 vs 400 innermost orbits). Thus, besides the processes within the magnetosphere, we can study the dynamics of the disk over a large dynamical range and explore how the disk and the magnetosphere interact with each other in a quasi-steady accretion stage. In §2, we lay out the theoretical framework for magnetospheric accretion, and introduce the key physical quantities in the classical Ghosh & Lamb (1979a) model. Our numerical model is presented in §3. We present results in §4 from the inner magnetosphere to the outer disk. After discussions in §5, we conclude the paper in §6. ## 2 Theoretical framework ### The Classical Model The magnetospheric accretion model by Ghosh & Lamb (1979a) is summarized in the left panel of Figure 1. In close proximity to the star, magnetic fields are so strong that the plasma is forced to corotate with the star.
For an axisymmetric and steady flow, the plasma structure there can be solved via four conserved quantities which are constant along magnetic field lines (Ghosh et al., 1977). These four constants (Mestel, 1961; Weber & Davis, 1967) are also widely used in disk wind studies (Blandford & Payne, 1982). When \(\mathbf{v}\) and \(\mathbf{B}\) are separated into the poloidal and toroidal components (\(\mathbf{v}=\mathbf{v_{p}}+\Omega R\mathbf{e_{\phi}}\) and \(\mathbf{B}=\mathbf{B_{p}}+B_{\phi}\mathbf{e_{\phi}}\)), the induction equation implies that \(\mathbf{v_{p}}\) and \(\mathbf{B_{p}}\) are in the same direction, and the first constant is \[k=\frac{\rho\mathbf{v_{p}}}{\mathbf{B_{p}}}\,, \tag{1}\] which is the mass loading parameter. In the azimuthal direction, we have the second constant \[\omega_{s}=\Omega-\frac{kB_{\phi}}{\rho R}\,. \tag{2}\] Note that \(\omega_{s}\) should be equal to the angular frequency of the star, denoted as \(\Omega_{s}\). Otherwise, there is a strong shear at the stellar surface where \(v_{p}\) decreases to 0. Equations 1 and 2 imply that, in the rotating frame with the angular frequency \(\omega_{s}\), the flow's velocity is in the same direction as the magnetic field lines, or in simpler terms, the material flows along the field lines. We can define the pitch angle of the magnetic field lines as tan \(\psi\equiv B_{p}/B_{\phi}\). Using the angular momentum equation, we have the third constant \[l=R(v_{\phi}-\frac{B_{\phi}}{k})\,, \tag{3}\] which is the specific angular momentum of the flow. In a barotropic fluid, Bernoulli's equation can be used to derive the fourth constant \[e =\frac{1}{2}v^{2}+\Phi+h+\frac{B^{2}}{\rho}-\frac{\mathbf{B}\cdot \mathbf{v}}{k}\] \[=\frac{1}{2}v^{2}+\Phi+h+\frac{B_{\phi}B_{\phi}}{\rho}-\frac{B_{ \phi}v_{\phi}}{k}\,, \tag{4}\] where \(h\) is \(\int_{0}^{\rho}dp/\rho\). For a strongly magnetized plasma, \(h\) is much smaller than other terms. Within the magnetosphere, the poloidal velocity and gravitational term dominate in Equation 4, so that the poloidal velocity is nearly equal to the free-fall velocity. With both \(B_{\rm p}\) (dipole fields) and \(v_{p}\) known, the flow and magnetic field structure within the magnetosphere can be derived using these constants with given \(\omega_{s}\) and \(l\). One main result regarding this inner magnetosphere (Ghosh et al., 1977) is that, in the case of slow rotators, matter inside the Alfven surface (\(r_{A}\)) rotates in the opposite direction from \(l\). The reason is that, when \(\omega_{s}\) has the same sign but is much smaller than \(l/r_{A}^{2}\), the spiral field lines have a forward pitch since field lines are dragged forward by the fast disk rotation. Matter that falls inwards along these spiral fields has a backward azimuthal velocity. These magnetic fields could also spin up the star through the magnetic stress. Meanwhile, the values of \(l\) along different field lines depend on the condition in the transition zone (inside the screening radius \(R_{s}\) in Figure 1) where the corotating flow (with \(\bf{v_{p}}\) following \(\bf{B_{p}}\)) changes to the Keplerian disk flow. Since the flow in the transition zone is neither steady nor axisymmetric, the full angular momentum equation is needed to describe this region: \[\frac{\partial R\rho v_{\phi}}{\partial t}+\nabla\cdot\left(R\left(\rho{\bf v }v_{\phi}-{\bf B}B_{\phi}+P{\bf e_{\phi}}\right)\right)=0\,, \tag{5}\] where \(P\) is the total pressure. 
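In practice, how well these constants are conserved along individual field lines is a useful diagnostic of whether the magnetospheric flow is close to steady and axisymmetric. Below is a minimal sketch (not the analysis pipeline of this paper) that evaluates \(k\), \(\omega_{s}\), and \(l\) from Equations 1-3, assuming azimuthally averaged quantities have already been sampled along one poloidal field line; the array names are hypothetical and code units with \(G=M_{*}=1\) are assumed.

```python
import numpy as np

def field_line_constants(R, rho, vR, vz, vphi, BR, Bz, Bphi):
    """Evaluate the constants of Equations 1-3 at points along one poloidal
    field line (1-D arrays ordered along the line). The sign convention
    assumes the poloidal velocity is parallel to the poloidal field, as
    required for a steady, axisymmetric flow."""
    vp = np.hypot(vR, vz)                      # poloidal speed |v_p|
    Bp = np.hypot(BR, Bz)                      # poloidal field |B_p|
    k = rho * vp / Bp                          # Eq. 1: mass-loading parameter
    omega_s = vphi / R - k * Bphi / (rho * R)  # Eq. 2: field-line rotation rate
    l = R * (vphi - Bphi / k)                  # Eq. 3: specific angular momentum
    return k, omega_s, l

# For a flow that is close to steady and axisymmetric, the relative spread of
# each constant along the line, e.g. np.ptp(k) / np.abs(k).mean(), is small.
```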
If using spherical-polar coordinates, \(R\) is replaced with \(r\sin\theta\). Ghosh & Lamb (1979a) built a simple disk model for the transition zone, assuming the accretion can reach a steady state with an effective electrical conductivity (\(\sigma_{eff}\)) in this region. In a steady state, we have \[\nabla\times{\bf B}=\frac{4\pi{\bf J}}{c}\,, \tag{6}\] with Ohm's law \({\bf J}=\sigma_{eff}({\bf E}+({\bf v}\times{\bf B})/c)\). If \(B_{\phi}\) changes over the disk scale height \(H\), we have \[\frac{B_{\phi}}{H}=\frac{(\Omega-\Omega_{s})rB_{z}}{\eta}\,, \tag{7}\] where \(H\equiv c_{s}/\Omega\) is the disk scale height and \(\eta\equiv c^{2}/(4\pi\sigma_{eff})\) is the resistivity. The resistivity has to satisfy Equation 7 for a steady state, meaning that the slippage of field lines balances the azimuthal shear. The azimuthal current screens the background stellar fields. Ghosh & Lamb (1979a) did not specify the source of the resistivity. But if we consider turbulent resistivity \(\eta\sim\nu\sim\alpha Hc_{s}\) and \(\Omega_{s}=0\), we have \(B_{\phi}/B_{z}=R/(\alpha H)\). If \(\alpha<R/H\), \(B_{\phi}\) is then larger than \(B_{z}\), and the magnetic pressure from the toroidal fields can drive field inflation until the fields open up (Aly & Kuijpers, 1990; Lovelace et al., 1995; Uzdensky et al., 2002). These later works suggest that a steady state may not be possible with the turbulent resistivity.

Figure 1: The flow and magnetic field structure in the magnetospheric accretion model of Ghosh & Lamb (1979a) (the left panel) and in our 3-D simulations (the right panel). The solid curves represent the magnetic field lines. The red and blue colors represent \(B\) fields with positive and negative \(B_{\phi}\) components. The yellow and purple arrows label the flow with positive and negative \(v_{\phi}\) components, respectively. The positive \(\phi\) component indicates the same direction as the disk rotation. The screening current in the traditional model has a positive \(\phi\) component, which leads to the vanishing stellar \(B_{z}\) beyond \(R_{s}\). \(R_{s}\) is the screening radius where \(B_{z}=0\). The most noticeable differences between the two models include the highly magnetized surface layer which inflows at supersonic speeds. Keplerian shear in this layer drags the radial magnetic field lines to develop significant \(B_{\phi}\) components inside the layer. Other differences include the filaments and density voids in the magnetically disrupted disk (MDD), and the non-steady outflow, which is also illustrated in the right panel.

The transition zone is separated into the inner transition zone (also called the boundary layer), where the azimuthal velocity changes dramatically due to the magnetic stress, and the outer transition zone, where the disk is Keplerian but the residual stellar magnetic fields affect the disk accretion. The boundary between these two transition zones is denoted as \(R_{b}\) and is a sizable fraction of the Alfven radius for spherical accretion \[r_{A}=r_{*}\left(\frac{B_{*}^{4}r_{*}^{5}}{2GM_{*}\dot{M}^{2}}\right)^{1/7}\ \ \ \mathrm{in\ \ C.G.S.}\,, \tag{8}\] where \(r_{A}\) is sometimes called the magnetospheric truncation radius \(R_{T}\) (Hartmann et al., 2016). In this paper, we define \(R_{T}\) as the radius where the averaged azimuthal velocity drops to half of the local Keplerian velocity (\(v_{K}\)). As will be shown later, the azimuthal velocity changes dramatically around the magnetospheric truncation radius. 
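To get a sense of the scales implied by Equation 8, it can be evaluated directly in c.g.s. units; the sketch below does this for illustrative T Tauri-like parameters, which are assumptions for the example and are not taken from the simulation presented here.

```python
import numpy as np

# Illustrative T Tauri-like parameters (assumptions for this example only).
G = 6.674e-8                      # cm^3 g^-1 s^-2
Msun, Rsun, yr, au = 1.989e33, 6.957e10, 3.156e7, 1.496e13

B_star = 1.0e3                    # surface dipole field strength [G]
r_star = 2.0 * Rsun               # stellar radius [cm]
M_star = 0.5 * Msun               # stellar mass [g]
Mdot   = 1.0e-8 * Msun / yr       # accretion rate [g s^-1]

# Equation 8: spherical Alfven radius, often used as the truncation radius.
r_A = r_star * (B_star**4 * r_star**5 / (2.0 * G * M_star * Mdot**2))**(1.0 / 7.0)
print(f"r_A = {r_A / r_star:.1f} r_* = {r_A / au:.3f} au")
```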
Thus, choosing other velocities between 0 and \(v_{K}\) for the definition of \(R_{T}\) barely affects \(R_{T}\). In Section 4.2, we will compare our measured \(R_{T}\) in the simulation with various definitions of the magnetospheric truncation radius, and show that the measured \(R_{T}\) is quite close to \(r_{A}\) in Equation 8. The outer transition zone ends at \(R_{s}\), where the screening current reduces the background stellar magnetic fields to zero. In the Ghosh & Lamb (1979a) model, \(R_{s}\) could be tens to hundreds of times larger than \(R_{b}\). The broad outer transition zone in their model plays a key role in the coupling between the disk and the star, especially for fast rotators. If the torque on the star is written as \[T_{*}=n(GM_{*}R_{b})^{1/2}\dot{M}\,, \tag{9}\] Ghosh & Lamb (1979b) derived n\(\approx\)1.4 for a non-rotator. The \(\alpha\) accretion disk (Shakura & Sunyaev, 1973) exists beyond the transition zone, and continuously provides mass inwards. Since we focus on the simplest setup with a non-rotator, we will not discuss the rich physical processes associated with moderate and fast rotators (Ghosh & Lamb, 1979b; Lovelace et al., 1999; Romanova et al., 2003, 2004). ### Angular Momentum Transport and Disk Evolution Our first-principle 3-D simulation reveals that the flow structure is more complicated than that assumed in the traditional Ghosh & Lamb model. To understand the flow structure in the 3-D simulation, we need to study how angular momentum is transported among different regions. For the disk region that reaches a steady state, the second term in Equation 5 becomes zero. In other words, if we define \(S\) as a surface surrounding the star which is also along the \(\phi\) direction (so \(S\) could be cylinders or spheres surrounding the star), we have \[\int R\left(\rho\mathbf{v}v_{\phi}-\mathbf{B}B_{\phi}\right)\cdot d\mathbf{S}=const \tag{10}\] along the radial direction. If we assume \(S\) is the cylinder surface around the star, we have \[R\langle v_{\phi}\rangle\dot{M}+R\int(\rho v_{R}(v_{\phi}-\langle v_{\phi} \rangle)-B_{R}B_{\phi})dS=const \tag{11}\] at different \(R\). The symbol \(\langle\rangle\) denotes averaging over the \(\phi\) direction, and we have assumed that \(\langle v_{\phi}\rangle\) is constant along the cylinder. The first and second terms within the integral are Reynolds and Maxwell stresses. Thus, for the steady state, the accretion rate is directly determined by the total stress and the constant. The constant represents the torque between the star and the disk. Considering that \(\langle v_{\phi}\rangle\sim v_{K}\sim R^{-1/2}\) and \(\dot{M}\) is constant with \(R\), the first term on the left-hand side increases with \(R\). Thus, at large distances, the constant on the right hand side becomes negligible, and the \(\dot{M}\) term is balanced by the stress term. The accretion structure there is solely determined by the stress values. On the other hand, the constant is crucial for the disk's structure at the inner disk edge. We will discuss the constant in detail in SS5.2 We can also derive the disk's accretion rate even if the disk has not reached the steady state. 
We can average the angular momentum equation (Equation 5) in the azimuthal direction to derive \[\frac{\partial R\langle\rho v_{\phi}\rangle}{\partial t}=-\frac{1}{R}\frac{\partial}{\partial R}\left(R^{2}\langle\rho v_{R}v_{\phi}-B_{R}B_{\phi}\rangle\right)-\frac{\partial}{\partial z}\left(R\langle\rho v_{z}v_{\phi}-B_{z}B_{\phi}\rangle\right)\,. \tag{12}\] If we integrate Equation 12 vertically in the disk region and use the mass conservation equation, we have \[\frac{\partial\int R\langle\rho\delta v_{\phi}\rangle dz}{\partial t}=-\frac{1}{R}\frac{\partial}{\partial R}\left(R^{2}\int\left(\langle\rho v_{R}\delta v_{\phi}\rangle-\langle B_{R}B_{\phi}\rangle\right)dz\right)-\frac{\dot{M}_{acc}}{2\pi R}\frac{\partial Rv_{K}}{\partial R}-R\left(\langle\rho v_{z}\delta v_{\phi}\rangle-\langle B_{z}B_{\phi}\rangle\right)\bigg|_{z_{min}}^{z_{max}}\,, \tag{13}\] where \(\delta v_{\phi}\equiv v_{\phi}-v_{K}\). The equation connects the disk's radial mass accretion rate (\(\dot{M}_{acc}=2\pi R\int\rho v_{R}dz\)) to the \(R\phi\) stress within the disk and the \(z\phi\) stress at the disk surface. Equation 13 is widely used in accretion disk studies (e.g., Turner et al., 2014). However, it can also be used to study flows in the magnetosphere, where the \(v_{z}\) term describes the vertical flow lifted out of the disk plane. The terms \(\langle\rho v_{R}\delta v_{\phi}\rangle\) and \(-\langle B_{R}B_{\phi}\rangle\) are the radial Reynolds and Maxwell stresses, and the corresponding \(\alpha\) parameters can be defined as \[\alpha_{Rey}=\langle\rho v_{R}\delta v_{\phi}\rangle/\langle p\rangle\quad\text{and}\quad\alpha_{Max}=-\langle B_{R}B_{\phi}\rangle/\langle p\rangle\,. \tag{14}\] Stresses and \(\alpha\) parameters in spherical-polar coordinates can be defined in similar ways. If we define the vertically integrated \(\alpha\) parameter as \[\alpha_{int}=\frac{\int T_{R\phi}dz}{\Sigma c_{s}^{2}}\,, \tag{15}\] where \(T_{R\phi}\) is the sum of the radial Reynolds and Maxwell stresses, Equation 13 can be written as \[\dot{M}_{acc}=-\frac{2\pi}{\partial Rv_{\phi}/\partial R}\left(\frac{\partial}{\partial R}\left(R^{2}\alpha_{int}\Sigma c_{s}^{2}\right)+R^{2}\left(\langle\rho v_{z}\delta v_{\phi}\rangle-\langle B_{z}B_{\phi}\rangle\right)\Big|_{z_{min}}^{z_{max}}\right)\,, \tag{16}\] for a steady state. It is the differential form of Equation 11. Equation 16 suggests that both the internal stress (the radial gradient of the \(r\)-\(\phi\) stress) and the surface stress can lead to accretion. To understand the flow structure, we will measure these stresses and \(\alpha\) values directly from our simulations.

## 3 Method

We solve the magnetohydrodynamic (MHD) equations in the ideal MHD limit using Athena++ (Stone et al., 2020). Athena++ is a grid-based code using a higher-order Godunov scheme for MHD and constrained transport (CT) to preserve the divergence-free property of the magnetic fields. Compared with its predecessor Athena (Gardiner and Stone, 2008; Stone et al., 2008), Athena++ is highly optimized for speed and uses a flexible grid structure that enables mesh refinement, allowing global numerical simulations spanning a large radial range. We adopt a Cartesian coordinate system (\(x\), \(y\), \(z\)) with mesh-refinement to include both the central magnetized star and the accretion disk. Although the spherical-polar coordinate system is normally used for accretion disk studies, its grid cells become highly distorted at the grid poles in this case. 
Although Athena++ includes a special polar boundary condition to treat these grid cells (Zhu and Stone, 2018), they still introduce large numerical errors and significantly limit the evolution timestep. Considering that most accretion onto the star occurs close to the star's magnetic poles, accurately simulating the flow in these regions is crucial. Thus, the Cartesian coordinate system is better suited for magnetospheric accretion studies. After the simulation is completed, we transform all quantities into spherical-polar coordinates and cylindrical coordinates to simplify the data analysis. In this paper, we use (\(R\), \(\phi\), \(z\)) to denote positions in cylindrical coordinates and (\(r\), \(\theta\), \(\phi\)) to denote positions in spherical polar coordinates. In both coordinate systems, \(\phi\) represents the azimuthal direction (the direction of disk rotation). Our simulation domain expands from -64\(R_{0}\) to 64\(R_{0}\) in each \(x\), \(y\), and \(z\) direction, with 320 grid cells in each direction at the root level. \(R_{0}\) is the code length unit, which is quite close to the magnetospheric truncation radius \(R_{T}\) at the end of the simulation. The stellar radius, denoted as \(r_{in}\), is chosen as 0.1 \(R_{0}\), so that the magnetospheric truncation radius is roughly 10 times larger than the stellar radius. Static mesh-refinement has been adopted with the fourth level (cells with 2\({}^{4}\) times shorter length) at [-8, 8]\(R_{0}\times\)[-8, 8]\(R_{0}\times\)[-1, 1]\(R_{0}\) for \(x\times y\times z\). Within this fourth level domain, one additional higher level is used for every factor of 2 smaller domain until the seventh level at [-\(R_{0}\), \(R_{0}\)]\(\times\)[-\(R_{0}\), \(R_{0}\)]\(\times\)[-0.125\(R_{0}\), 0.125\(R_{0}\)]. Color contours of density and the grid structure for the innermost several levels are shown in Figure 2. If the disk's aspect ratio is 0.1, the disk scale height is resolved by 16 to 32 grid cells at the disk region between \(R=0.5R_{0}\) to \(R=8R_{0}\). At the finest level, the cell size is 0.003125 \(R_{0}\). Piecewise linear method is used in the spatial reconstruction, while Van Leer integrator is adopted for the time integration. ### Disk Setup The initial density profile at the disk midplane (the \(x-y\) plane) is \[\rho_{d}(R,z=0)=\rho_{0}\left(\frac{R}{R_{0}}\right)^{p}\,. \tag{17}\] In our code unit, we set \(\rho_{0}\equiv\rho_{d}(R=R_{0},z=0)\)=1, and the time unit \(1/\Omega(R=R_{0})=T_{0}/2\pi\). \(T_{0}\) is the orbital period \(T(R)\) at \(R_{0}\). Thus, \(T_{0}\) is also close to the orbital period at the magnetospheric truncation radius. The temperature is assumed to be constant on cylinders \[c_{s}^{2}(R,z)=c_{s}^{2}(R=R_{0},z=0)\left(\frac{R}{R_{0}}\right)^{q}\,. \tag{18}\] where \(c_{s}=\sqrt{p/\rho}\) is the isothermal sound speed. We choose \(p=-2.25\) and \(q=-1/2\) so that the disk surface density \(\Sigma\propto R^{-1}\). For the disk temperature, we set \((H/R)_{R=R_{0}}\)=0.1, where \(H=c_{s}/\Omega_{K}\). We note that this disk is thicker than a typical protoplanetary disk at the magnetospheric truncation radius. Simulations for thinner disks are more computational expensive and will be presented in the future. We have used the adiabatic equation of state with \(\gamma\)=1.4, while an almost instantaneous cooling is applied for each grid cell at every timestep (the thermal relaxation time is \(10^{-5}\) local orbital time using the cooling treatment in Zhu et al., 2015). 
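As a quick consistency check on the resolution quoted above, the cell sizes implied by the root grid and the static refinement levels can be tabulated directly. In the sketch below, the mapping from radius to finest covering level is our reading of the refinement layout described above and should be treated as an assumption.

```python
# Cell sizes implied by the root grid (320^3 cells over [-64, 64] R0) and
# the static refinement levels described above.
root_dx = 128.0 / 320              # root-level cell size in units of R0
for level in range(8):             # levels 0..7
    print(f"level {level}: dx = {root_dx / 2**level:.6f} R0")

# Cells per disk scale height (H/R = 0.1) at the finest level covering each
# radius; the radius-to-level map is an assumed reading of the layout above.
H_over_R = 0.1
for R, level in [(0.5, 7), (1.0, 7), (2.0, 6), (4.0, 5), (8.0, 4)]:
    dx = root_dx / 2**level
    print(f"R = {R:4.1f} R0: H/dx ~ {H_over_R * R / dx:.0f} cells")
```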
The density and velocity in the \(R\)-\(z\) plane are set to be \[\rho_{d}(R,z)=\rho_{d}(R,z=0)\,{\rm exp}\left[\frac{GM_{*}}{c_{s}^{2}}\left(\frac{1}{\sqrt{R^{2}+z^{2}}}-\frac{1}{R}\right)\right], \tag{19}\] and \[v_{\phi,d}(R,z)=v_{K}\left[(p+q)\left(\frac{c_{s}}{v_{\phi,K}}\right)^{2}+1+q-\frac{qR}{\sqrt{R^{2}+z^{2}}}\right]^{1/2}, \tag{20}\] with \(v_{K}=\sqrt{GM_{*}/R}\) (e.g. Nelson et al., 2013). To avoid the density and velocity becoming infinite at \(R=0\), we use \(R=\max(R,r_{in})\) on the right-hand side of Equations 17-20.

### Star Setup

The gravitational acceleration from the central star is set as \[a_{r}=\begin{cases}-\frac{GM_{*}}{r^{2}}\frac{(r-r_{in})^{2}}{(r-r_{in})^{2}+r_{sm}^{2}}&\text{at }r>r_{in}\\ 0&\text{at }r\leq r_{in}\end{cases}\,, \tag{21}\] where \(r_{sm}\) is a smoothing length. The stellar magnetic field is initialized as a dipole; the large-scale fields, and thus the magnetospheric truncation radius, are mainly determined by the dipole component even when higher-order components are present (Johnstone et al., 2014). To maintain \(\nabla\cdot{\bf B}={\bf 0}\), we use the vector potential \({\bf A}\) to initialize the magnetic fields (\({\bf B}=\nabla\times{\bf A}\)): \[{\bf A}=\frac{\overline{\bf m}\times{\bf r}}{r_{c}^{3}}\,, \tag{24}\] where \(r_{c}=\max(r,r_{in})\) to avoid the singularity at \(r=0\). The magnetic moment \({\bf m}\) is thus \(4\pi\overline{\bf m}\). Note that the vacuum permeability constant is assumed to be 1 in Athena++, and thus the magnetic pressure is simply \(B^{2}/2\) in code units. We choose \(\overline{\bf m}=-0.0447\,{\bf e_{z}}\) so that the initial plasma \(\beta=2P/B^{2}\) at \(R=R_{0}\) is 10. The fields within the star evolve through numerical diffusion, but at the stellar surface \(r_{in}\) the midplane field strength only decreases by 5% by the end of the simulation. To avoid small timesteps within the highly magnetized magnetosphere, we employ a density floor that varies with position \[\rho_{fl}=\rho_{fl,0}\left(\frac{r}{R_{0}}\right)^{p}+\rho_{flm,0}\left(\frac{r}{R_{0}}\right)^{pm}\,, \tag{25}\] where \(\rho_{fl,0}=10^{-5}\rho_{0}\), \(\rho_{flm,0}=1.33\times 10^{-5}\rho_{0}\), and \(pm=-5.5\). When \(\rho_{fl}\) gets smaller than \(10^{-9}\rho_{0}\), we choose \(10^{-9}\rho_{0}\) as the density floor. Since the smoothing length (0.1 \(R_{0}\)) is resolved by 32 cells at the finest level, the star maintains hydrostatic equilibrium quite well in the absence of magnetic fields. However, the adoption of the high density floor due to the strong stellar magnetic fields leads to inflow onto the star. The density floor is chosen low enough that this inflow is much weaker than the magnetospheric accretion from the disk. At both the \(x\) and \(y\) boundaries, the flow and magnetic fields are fixed at the initial values during the whole simulation. In the \(z\) direction, we adopt outflow boundary conditions. In the rest of the paper, we drop the code units (e.g. \(\rho_{0}\), \(R_{0}\), \(T_{0}\)) after the physical quantities for simplicity.

## 4 Results

We run the simulation for 65 orbits at \(R_{0}\), and subsequently we continue the simulation for another 3.2 orbits, albeit with a 10 times smaller \(\rho_{flm,0}\) for the density floor. The total time is equivalent to 2157 Keplerian orbits at the stellar surface \(r_{in}=0.1\). The disk has settled to a quasi-steady state, as shown in the right panels of Figure 3, where the disk's mass accretion rate and one-sided outflow rate are plotted against time. The lower left panel of Figure 3 also shows that the disk accretion has reached a steady state within \(R\sim 6\) at the end of the simulation. 
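The accretion and outflow rates shown in Figure 3 are surface integrals of the mass flux over spheres. A minimal sketch of such a measurement is given below, assuming the snapshot has already been interpolated onto a uniform spherical-polar grid; the function and array names are hypothetical.

```python
import numpy as np

def mass_flux_rates(r, theta, phi, rho, v_r, theta_cone=0.65):
    """Net accretion rate and one-sided polar outflow rate through spheres.
    rho and v_r have shape (nr, ntheta, nphi) on a uniform cell-centered
    spherical-polar grid; r, theta, phi are the 1-D coordinate arrays."""
    dtheta, dphi = theta[1] - theta[0], phi[1] - phi[0]
    dA = r[:, None, None]**2 * np.sin(theta)[None, :, None] * dtheta * dphi
    flux = rho * v_r * dA                       # mass flux through each patch
    mdot_acc = -np.sum(flux, axis=(1, 2))       # net inflow rate (positive inward)
    cone = theta[None, :, None] < theta_cone    # polar cone, 0 < theta < 0.65
    mdot_out = np.sum(np.where(cone & (flux > 0.0), flux, 0.0), axis=(1, 2))
    return mdot_acc, mdot_out                   # both as functions of r
```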
The upper left panel of Figure 3 shows a sharp drop in density within the magnetospheric truncation radius, \(R_{T}\sim\)1. Beyond this radius, the density at the disk midplane remains relatively constant. In the following subsections, we will present our findings for different regions in the order of the regions' proximity to the star, starting from the magnetosphere region. ### Magnetosphere and Instabilities The flow within the magnetospheric truncation radius is highly dynamic. Figure 4 shows the contours for the density and magnetic fields at the end of the simulation. In the upper panels of Figure 4, the poloidal plane reveals a distinct contrast in flow structure within and beyond \(R_{0}\sim R_{T}\). The disk region outside of \(R_{0}\) is turbulent, with certain denser regions exhibiting \(\beta\) values \(\sim 100\). In contrast, the region within \(R_{0}\) appears to be laminar in the poloidal plane and \(\beta\) is less than \(10^{-2}\). It seems that the material cannot penetrate into the magnetosphere and has to follow the field lines falling onto the star close to the polar directions. However, a different picture emerges from the midplane slices at the lower panels of Figure 4. At the edge of the magnetosphere around \(R_{0}\), the disk material becomes filamentary and develops "fingers" that penetrate into the magnetosphere. Due to their higher density, these filaments have higher \(\beta\) values than the rest of the magnetosphere. As they move in, they are lifted and move along the dipole magnetic field lines. In the upper panels of Figure 4, we can see the poloidal cut of these intruding filaments deep inside the magnetosphere. Closer to the star, fewer filaments persist at the midplane (lower panels), as some filaments have been lifted and subsequently accrete onto the star. To show the development of the filaments, we plot the midplane gas pressure and magnetic pressure at different radii along the azimuthal direction in Figure 5. At the outer edge (e.g. \(r=6\)), the gas pressure is within the same order of magnitude as the magnetic pressure. Both pressures fluctuate within one order of Figure 3: Left panels: the disk midplane density and mass accretion rate along the R direction at three different times. Quantities are averaged over both the azimuthal direction and time (from each time span at 18 to 20 \(T_{0}\), 38 to 40 \(T_{0}\), and 66.2 to 68.2 \(T_{0}\), we average over 21 snapshots). Right panels: the mass accretion rate at r=0.4, 0.8, 1.5 with time (upper panel), and the mass outflow rate at the disk atmosphere (integrated within \(0<\theta<0.65\)) at r=3, 6, 12 with time (lower panel). magnitude with slight anti-correlation (higher \(P_{gas}\) corresponds to lower \(P_{B}\)). As \(r\) becomes smaller, magnetic fields become stronger while the gas pressure decreases due to the magnetospheric truncation. This results in a smoother profile of the magnetic pressure but a significant increase in density fluctuations. At \(r=0.8\), the density can fluctuate more than 3 orders of magnitude and the gas pressure is near the magnetic pressure only at the highest density peaks. Since the lowest density region has reached the density floor, the real pressure fluctuations are more significant than what is depicted in the plot. Very close to the star (e.g. \(r=0.4\)), the magnetic pressure substantially exceeds the gas pressure, even within the densest filaments. 
With a higher density floor imposed in this region, the density fluctuations within the filaments are less accurately captured. The density fluctuation is also shown in the upper left panel of Figure 6. At the disk midplane and along \(\theta=1\), the amplitude of the density fluctuation (the shaded region) increases towards the inner disk where the disk magnetic field is stronger. The lower density boundary at \(r\lesssim 1\) is the density floor. Although the density at the midplane has a sharp drop at \(r\sim\)1, the density profiles at \(\theta=1\) and 0.57 are relatively smooth. This suggests that material high above the disk midplane smoothly accretes into the magnetosphere and onto the star, more similar to the spherical accretion onto a magnetized star. Such smooth transition from the disk to the magnetosphere at higher altitudes is also apparent in Figure 4. Magnetospheric accretion in our simulation seems to be a mixture of the traditional thin disk magnetospheric accretion at the midplane and the spherical accretion above the midplane (more discussion in SS4.4). The "fingers" penetrating into the magnetosphere are due to a type of Rayleigh-Taylor (RT) instability that involves magnetically supported material, called the "interchange instability" (Kruskal & Schwarzschild, 1954; Newcomb, 1961). Arons & Lea (1976) suggested that this Figure 4: Poloidal and midplane slices (upper and lower panels) for density and plasma \(\beta\) (left and right panels) at the end of the simulation. The \(\beta=1\) contour is also shown as white contours in the upper right panel. Vectors of the momentum and magnetic field are overplotted on the upper \(\rho\) and \(\beta\) panels respectively. The magnetic streamlines are overplotted in the lower right panel. instability can occur at the disk-magnetosphere boundary, which is confirmed later by numerical simulations (Kulkarni and Romanova, 2008; Romanova et al., 2008; Blinnova et al., 2016). Spruit et al. (1995) have derived the general condition for the instability taking the velocity shear into account, which agrees well with numerical simulations (Romanova et al., 2008; Takasao et al., 2022). For our simulation with a non-rotator, the instability is expected at \(R_{T}\). Within the magnetosphere, the flow couples strongly with stellar magnetic fields and its azimuthal velocity reduces to zero (the lower left panel of Figure 6). With such a small azimuthal velocity, the magnetosphere can be considered a hydrostatic fluid supported by magnetic pressure against gravity. This magnetized fluid is unstable when \(-d\rho/dz<\rho^{2}g/\gamma P\)(Newcomb, 1961), where the gravity is towards negative \(z\). This condition is identical to the condition of the Rayleigh-Taylor (RT) instability and independent of the field strength. Since the density always increases with \(r\) (opposite to the direction of the gravity) at the boundary between the magnetosphere and the disk due to the magnetospheric truncation, the instability condition is satisfied and the instability develops. In the nonlinear regime of the instability, the material becomes filamentary and sinks to the star (Stone and Gardiner, 2007). We caution that, although the instability grows fast in our simulation with a non-rotating star, it will be suppressed for a rotating star (Blinova et al., 2016). Slightly different from previous simulations, our high-resolution simulation reveals that the filaments have substructures. 
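The Newcomb condition quoted above can be checked directly from azimuthally averaged profiles. The sketch below evaluates it along the spherical radius at the disk midplane, where the outward direction plays the role of the coordinate opposite to gravity; the profile arrays are hypothetical inputs in code units with \(G=M_{*}=1\), and \(\gamma=1.4\) as in the simulation.

```python
import numpy as np

def newcomb_unstable(r, rho, P, gamma=1.4):
    """Newcomb (1961) criterion for a magnetically supported, static fluid,
    applied along spherical radius r at the disk midplane. Gravity points
    toward the star, so increasing r plays the role of "z" in the criterion
    -d(rho)/dz < rho^2 g / (gamma P). Inputs are azimuthally averaged 1-D
    profiles in code units (G = M_* = 1)."""
    g = 1.0 / r**2                              # inward gravitational acceleration
    drho_dr = np.gradient(rho, r)
    # unstable where d(rho)/dr > -rho^2 g / (gamma P); a density that rises
    # outward (as at the truncation radius) always satisfies this.
    return drho_dr > -rho**2 * g / (gamma * P)
```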
A larger filament can split into multiple smaller sub-filaments when it moves in, implying a highly dynamic magnetosphere. When the filaments move closer to the star, the magnetic pressure and stress keep increasing and more filamentary material starts to climb vertically along the dipole fields and onto the star. Eventually, most material accretes onto the star at high latitudes. Once the material begins to follow the field lines, it undergoes a free-fall motion driven by the stellar gravity (Equation 4). To verify the free-fall motion, we plot the averaged radial velocity at different \(\theta\) angles in the upper right panel of Figure 6. The velocity is calculated by dividing the azimuthally averaged radial momentum by the azimuthally averaged density. The thick black solid curve uses the spherically integrated mass flux divided by the spherically integrated density. The thin solid curve is the free-fall velocity starting from \(R_{T}\sim R_{0}\): \[v_{ff}=\left(\frac{2GM_{*}}{r}\right)^{1/2}\left(1-\frac{r}{R_{0}}\right)^{1/2 }\,, \tag{26}\] while the dotted line is the free-fall velocity starting from infinity. Within the magnetosphere, the accretion generally follows the free-fall velocity except for the disk midplane region. At the disk midplane, magnetic fields are in the vertical direction so that the radial inflow is only possible via the interchange instability. ### Transition between the Magnetosphere and the Disk In the classical model, the magnetosphere and the disk are sharply separated at the magnetospheric truncation radius. Although our simulation supports this sharp transition at the disk midplane, the transition is more gradual and less obvious in the disk atmosphere, especially based on the \(\rho\) and \(v_{r}\) structure. The upper right \(\overline{v_{r}}\) panel of Figure 6 shows that, at \(\theta=1\), the inflow velocity is still 10% of the free-fall velocity even at \(r\sim\)8, and the accretion process smoothly transitions from the disk surface accretion to the magnetospheric infall at smaller \(r\) (more discussion on the disk surface accretion in Section 4.4). At \(\theta=0.57\), we see mass outflows far away at large \(r\) (Section 4.3). The distinction between the two regions becomes more pronounced when examining the structure of \(v_{\phi}\) and \(B_{\phi}\) in the R-z plane (the lower left two columns of Figure 7). \(v_{\phi}\) changes from negative values at small \(R\) to positive values in the disk, which is also shown in the lower left panel of Figure 6. The deviation from the Keplerian rotation suggests that the region is more magnetically and less rotationally supported. \(B_{\phi}\) also changes sign where \(v_{\phi}\) changes sign (the reason will be discussed in Section 4.4). Furthermore, magnetic fields are mostly poloidal around the star while toroidal in the disk. The different field geometries in these two regions are also shown in the lower right panel of Figure 6 (the \(1/\beta\) panel). Within the disk region (\(R\gtrsim\)1), the dominance of azimuthal Figure 5: The midplane gas and magnetic pressure at different radii along the azimuthal direction at the end of the simulation. fields is evident from the substantial difference between \(\langle B^{2}/2P\rangle\) and \(\langle B_{\theta}\rangle^{2}/2\langle P\rangle\). Conversely, at \(R\lesssim\)1, these two quantities closely align, signifying the dominance of poloidal fields. 
Magnetic fields are mostly axisymmetric within the magnetosphere (\(\langle B\rangle^{2}/\langle B^{2}\rangle\sim\)1 in the upper right panel of Figure 7). Using azimuthally averaged quantities, we calculate the conserved constants along magnetic field lines (\(k\), \(l\), \(\omega_{s}\) from Equations 1 to 3), which are plotted in Figure 7. These quantities are still roughly constant along the streamlines within the magnetosphere, although the density is highly filamentary as shown above. We will discuss the conserved constants in detail in Appendix A. We have tried various methods to quantitatively define the transition radius between the magnetosphere and the disk. At the disk midplane, we define the transition radius as where \(\Omega=0.5\Omega_{K}\), and consider it as the magnetospheric truncation radius. We measure this radius as 1.02 \(R_{0}\) at the end of the simulation. The magnetospheric truncation radius at the disk midplane has been defined in various ways in the literature. But, for a slow-rotator, we find that they all provide similar values. The mostly widely used \(R_{T}\) definition (Equation 8) is derived from \(v_{r}^{2}/v_{A,p}^{2}=1\), or more specifically \[\rho(R_{T})v_{r}^{2}(R_{T})=\frac{B_{p}^{2}(R_{T})}{4\pi}\ \ \ \mbox{in}\ \ \mbox{C.G.S.}\,, \tag{27}\] where \(v_{r}=\sqrt{2GM_{*}/r}\) and \(B_{p}\) follows a dipole stellar field. This equation can be interpreted in several ways, including ram pressure balancing magnetic pressure, free-fall radial speed equal to the Alfven speed, or magnetic energy density equal to radial kinetic energy density. In our unit system, we have \[R_{T}=\left(\frac{\overrightarrow{m}^{4}\left(4\pi\right)^{2}}{2GM_{*} \dot{M}^{2}}\right)^{1/7}\,. \tag{28}\] With our adopted initial dipole fields (\(\bf{B_{0}}\)) and the measured \(\dot{M}=0.005\) at the end of the simulation, we can calculate \(R_{T}=1.44\). Considering that the stellar dipole field is \(\sim\)0.6 \(B_{0}\) around \(R_{0}\) at the end of the simulation (Section 5.3), \(R_{T}\) calculated with 0.6 \(B_{0}\) is 1.07 \(R_{0}\), which is remarkably close to our measured 1.02\(R_{0}\). Such a good agreement is due to: 1) the surface radial velocity is indeed close to the free-fall velocity (Figure 6); 2) the dipole field has a very strong radial dependence Figure 6: The azimuthally averaged density, radial and azimuthal velocities, and \(1/\beta\) along different \(\theta\) directions (different colors) at the end of the simulation. The shaded area shows the quantities within 10% and 90% of all the data along the azimuthal direction. In the \(\overline{v_{r}}\) panel, the thick black solid curve uses the spherically integrated mass flux divided by the spherically integrated density. The dashed curves in this panel represent negative numbers. The thin solid curve is the free-fall velocity starting at \(R_{0}\), and the dotted line is the free-fall velocity starting from infinity. In the lower right panel, the dashed curves are calculated with the azimuthally averaged \(B_{\theta}\) and pressure, while the solid curves are the azimuthally averaged value of \(1/\beta\) from each cell. (\(P_{B}\propto r^{-6}\)) so that \(R_{T}\) has a weak dependence on parameters (Equation 28). Although our derivation is limited to non-rotators, Takasao et al. (2022) find that Equation 28 could also apply to fast rotators. 
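Both Equation 28 and the \(\Omega=0.5\Omega_{K}\) measurement are simple enough to write down in a few lines; the sketch below reproduces the numbers quoted above in code units (\(G=M_{*}=R_{0}=1\)), with the azimuthally averaged rotation profile as a hypothetical input.

```python
import numpy as np

def R_T_eq28(m_bar, Mdot):
    """Equation 28 in code units (G = M_* = 1)."""
    return (m_bar**4 * (4.0 * np.pi)**2 / (2.0 * Mdot**2))**(1.0 / 7.0)

print(R_T_eq28(0.0447, 0.005))        # initial dipole moment: ~1.44 R0
print(R_T_eq28(0.6 * 0.0447, 0.005))  # ~0.6 B0 near R0 at late times: ~1.07 R0

def R_T_measured(R, Omega_mean):
    """Innermost midplane radius where the azimuthally averaged rotation
    recovers to half Keplerian (Omega_K = R^-1.5 in code units); R and
    Omega_mean are hypothetical 1-D midplane profiles ordered outward."""
    return R[np.argmax(Omega_mean * R**1.5 >= 0.5)]
```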
The other ways to define \(R_{T}\) include the radius where \(\beta\)=1 (Pringle & Rees, 1972; Bessolaz et al., 2008; Kulkarni & Romanova, 2013), and the radius where the total kinetic energy (\(E_{k}\)) equals the magnetic energy (\(E_{B}\))(Lamb et al., 1973). In the \(\beta\), \(v_{p}^{2}/v_{A,p}^{2}\), \(E_{k}/E_{B}\) panels of Figure 7, we see that all three definitions (black dotted curves) give similar \(R_{T}\) at the disk midplane. However, when considering regions above the disk midplane, the transition radius between the magnetosphere and the disk exhibits significant variation depending on the method employed. The \(\beta\) panel shows that the magnetic pressure dominates over the thermal pressure in most regions except for the disk midplane and high above the surface where \(B_{\phi}\) changes sign. The \(v_{p}^{2}/v_{A,p}^{2}\) panel shows that the disk surface (with a high Figure 7: Various quantities related to magnetic fields at the end of the simulation. The streamlines of poloidal velocity are shown in the \(\rho\) and \(v_{\phi}\) panels, while the streamlines of poloidal magnetic field are shown in \(B_{\phi}\) and \(k\), \(\omega_{s}\), \(l\) panels. The streamlines of electric current are shown in the j panel. The dotted curves in the upper middle 3 panels indicate the regions where the quantity equals zero. Plotted quantities have been averaged over the azimuthal direction except the ones with \(\langle\rangle\) which indicates the azimuthal averaging for specific quantities. For streamlines and \(k\), \(\omega_{s}\), \(l\) constants, the primitive variables have been averaged over the azimuthal direction.). infall velocity and weak fields) has super-Alfvenic speed but with some sub-Alfvenic patches. Among these three diagnostics, \(E_{k}/E_{B}=1\) provides the clearest separation between the disk and highly magnetized regions. This \(E_{k}/E_{B}=1\) boundary also corresponds to the boundary where \(B_{\phi}\) and \(v_{\phi}\) change sign in the disk's atmosphere. Therefore, we consider \(E_{k}/E_{B}=1\) as the boundary that effectively separates the magnetosphere region from the disk region. ### Magnetically Disrupted Disk and Outflow Although the disk is pressure supported, the strong magnetization makes this region highly dynamic, occasionally leading to magnetic disruptions in some regions. Magnetic reconnection and interchange instability could sometimes reorganize magnetic fields around the truncation radius, leading to a large-scale density void (the middle panels of Figure 8). Before the density void forms, \(B_{\phi}\) changes sign 3 times when transitioning from one side of the disk to the other (bottom panels). While the density void is forming, the magnetic fields on both sides of the disk directly connect with each other (49 \(T_{0}\) panel in the bottom row). The void starts at the magnetospheric truncation radius and expands outwards. The outward motion slows down around 2-3 \(R_{T}\) and the void orbits around the central star at sub-Keplerian speed. As shown in Figure 8, it takes 6 \(T_{0}\) for the density void to finish one orbit, while the Keplerian orbital timescale at \(R=2\) is only 2.8 \(T_{0}\). This suggests that the asymmetric density structure is magnetically connected to either the region at larger scales with a slower Keplerian speed or the region within the magnetosphere which also rotates slowly. 
The magnetic field lines are shown in the bottom rows of Figure 8, where the density void is connected with the strong azimuthal magnetic fields at the disk surface and these magnetic fields rise up and outwards with time. These density voids and magnetic islands are remarkably similar to those in models of magnetically arrested disks (MAD, Tchekhovskoy et al., 2011). The density voids in MAD disks are similarly associated with flux tubes which move outwards until the circularization radius, and eventually dissipate after several orbits (Porth et al., 2021). These voids appear quasi-periodically, regulating the spin-up/down of the central blackhole and outflow rates, which may be associated with the flares of Sgr A*. Similarly, these voids due to the strong disk fields have also been invoked to explain the giant flares in protostars (Takasao et al., 2019). On the other hand, it is essential to highlight that magnetospheric accretion differs from the MAD state around BHs or disks threaded by external vertical fields. In magnetospheric accretion, mass in the disk accretes to the central star following the stellar field lines after mass is loaded into these field lines through turbulence or reconnection processes. In contrast, in the MAD state or disks with net vertical fields, there is no field line connecting the disk and the central object. Hence, we choose to designate our observed disk state, characterized by the presence of magnetically buoyant bubbles during the magnetospheric accretion, as the magnetically disrupted disk (MDD) rather than categorizing it as the MAD state. In our simulations, it is also evident that the appearance of magnetic islands is related to subsequent disk mass ejection/outflow events. As shown in Figure 8, the magnetic islands are magnetically connected to the disk surface, generating strong negative \(B_{\phi}\) at \(z>0\). The magnetized "bubble" at the disk surface moves out (bottom panels), and pushes material outwards (top panels), leading to asymmetric non-steady outflow. Such an outflow event is also shown in the \(\dot{M}(0<\theta<0.65)\) panel of Figure 3 as the bumps on the blue and red curves. At \(r=6\), the outflow rate starts to rise at t=55 \(T_{0}\) and drops down at t=63 \(T_{0}\), which corresponds to the time interval in Figure 8. It takes time for the ejection to move to larger distances. At \(r=12\), the outflow rate rises at t=65 \(T_{0}\). On the other hand, in our simulation that includes a non-rotating central star, the outflow rate is small compared with the accretion rate even during these mass ejection events. Even including outflow from both sides of the disk, the outflow rate during these mass ejection events (\(\dot{M}\sim 0.0002\) based on Figure 3) is less than 5% of the disk accretion rate (\(\dot{M}\sim 0.005\)). Such a small outflow rate is due to the magnetic field structure in the low density region surrounding the non-rotating star (Figure 7). At one side of the disk (e.g. \(z>0\) at \(R=2\)), all magnetic field lines are pointing towards the star. The material at the disk surface is channeled towards the star directly, and cannot move across the field lines to be loaded to the low density region high above the disk. Furthermore, even if the material can slip into the low density region, the material will simply fall to the star following the field lines. This occurs due to the magnetic fields being connected to the non-rotating star, and as a result, the flow lacks any rotation and centrifugal support. 
Due to this latter reason, it is not even clear if the mass ejection seen in Figure 8 can eventually escape the system. In contrast, we expect significantly higher outflow rates for rotating stars. Mass loaded into the stellar open fields, either by some steady diffusion or by the magnetic "bubbles", could be accelerated by twisted stellar magnetic fields (Matt and Pudritz, 2005). However, the outflow in this case depends on the density structure around the star (e.g. from the stellar wind) which is highly uncertain. ### Surface Accretion and Disk Evolution Our grid setup enables us to evolve the disk for a long timescale (2157 Keplerian orbits at the stellar radius) with a reasonable computational cost. The disk outside the magnetosphere has reached a steady state over a large dynamical range (\(R\lesssim 6R_{T}\)). To study the steady disk accretion, we plot the radial profiles of surface den sity, stress, \(\alpha\), and magnetic fields in Figure 9. The \(r\)-\(\phi\) stress changes sign within the magnetospheric truncation radius, mainly due to the reversal of \(B_{\phi}\). Beyond \(R_{0}\sim R_{T}\), the midplane \(r\)-\(\phi\) stress decreases as \(R^{-2}\), so that the midplane \(\alpha_{r\phi}\) changes as \(R^{-1.5}\). The vertically integrated \(\alpha_{int}\) (Equation 15) changes as \(R^{-2}\) and \(\Sigma\) changes as \(R\). With these profiles, \(\dot{M}\) is constant with \(R\) (Equation 16) if we ignore the stresses at the disk surface (which will be justified in Appendix B). We note that these profiles are significantly different from those in Zhu and Stone (2018) where the disk is threaded by net vertical magnetic fields with a constant initial \(\beta\). However, the disk accretion rates in both cases are constant radially. The disk in Zhu and Stone (2018) has \(\Sigma\propto R^{-0.6}\) and \(\alpha_{int}\propto R^{-0.4}\), which also leads to a constant \(\dot{M}\) along \(R\). This suggests that an MHD turbulent disk with net vertical magnetic fields (either from the star or molecular cloud) can reach different steady states depending on the initial magnetic field distribution and field transport within the disk. The disk evolution is inconveniently affected by the global magnetic field structure that is difficult to be constrained for real astrophysical systems. The magnetic fields adjust themselves quickly and affect the disk structure at a very short timescale, as indicated by Zhu and Stone (2018). While we lack a comprehensive theory to explain the observed differences in the smaller \(\alpha\) slope and the higher \(\Sigma\) slope in this study compared to those in Zhu and Stone (2018), we can offer a preliminary explanation based on stress considerations. To reach a steady state, the vertically integrated stress needs to follow \(R^{-1.5}\), independent from the external field configurations, which means \(\alpha_{int}\Sigma\propto R^{-1}\) with our temperature profile. Since the net vertical fields in this work decrease much faster outwards compared with those in Zhu and Stone (2018), the midplane \(\beta\) increases faster outwards even with the same surface density profile, which leads to a faster decrease of \(\alpha_{int}\) moving outwards (indicating a smaller slope or a more negative slope). Since \(\alpha_{int}\Sigma\) needs to maintain the same \(R^{-1}\) slope, \(\Sigma\) needs to increase faster outwards (indicating a higher slope), which drives an even steeper \(\beta\) and a smaller \(\alpha\) slope. 
Eventually, a balance is achieved with a small \(\alpha_{int}\) slope and a high \(\Sigma\) slope. To derive the exact slope values, we need to understand how field is transported and amplified in the disk. Such an analytical model has not been constructed. We hope that our simulations here could shed light on how to construct such a model in future. Figure 8: The density (upper and middle panels) and \(B_{\phi}\) (lower panels) structure at different times, highlighting the density void in the disk. The vectors in the upper and lower panels are the velocity and magnetic field vectors respectively. While the upper and lower panels show the poloidal cut, the middle panels show the midplane cut. The movie can be downloaded at [https://figshare.com/articles/media/Magnetospheric_Accretion/24103623](https://figshare.com/articles/media/Magnetospheric_Accretion/24103623). We could also estimate the \(\alpha\) value by equating the viscous timescale at \(R\sim\)6 to the simulation time. The derived \(\alpha\) value is on the order of unity, similar to \(\alpha_{int}\) derived above. This high \(\alpha\) value has important implications for planet formation theory (Section 5.4). The most surprising feature in our simulation is the vertically extended surface accretion region at \(R\gtrsim R_{T}\), as shown in the middle panel of Figure 10. This region is magnetically supported (Figure 11), and is remarkably similar to the surface accretion region in the MRI turbulent disks with net vertical magnetic fields (e.g., Zhu and Stone, 2018; Mishra et al., 2020; Jacquemin-Ide et al., 2021). The region extends to \(z\sim r\) and the flow in the region moves inwards supersonically. It is supported by magnetic pressure and located beyond the \(\langle\beta\rangle=1\) surface. The strong magnetic fields are generated by the azimuthal stretching of the radial magnetic fields from the Keplerian shear. The resulting large \(B_{r}B_{\phi}\) stress from net magnetic fields drives the surface accretion, while, at the disk midplane where \(\langle\beta\rangle\gtrsim 1\), the disk is turbulent due to MRI. The magnetically dominated surface is similar to that in the magnetic elevated disk model (Begelman and Armitage, 2023) and the failed disk wind in accretion disks around weakly magnetized stars (Takasao et al., 2018). To see the similarities and differences between accretion disks threaded by dipole versus net vertical magnetic fields, Figure 12 compares this magnetospheric accretion simulation against the simulation in Zhu and Stone (2018) where the disk is threaded by net vertical fields with an initial \(\beta_{0}=10^{3}\). The magnetically-dominated accreting surface is evident in both simulations. Significant radial inflow at the disk surface can be seen in the \(\langle r^{2}\rho v_{r}\rangle\) panels. It is driven by the high \(r\)-\(\phi\) stress Figure 11: Similar to Figure 10 but for \(B_{\phi}\), and the green curves are the magnetic streamlines. The range of the colorbar is [-0.01,0.01], [-0.1,0.1], [-0.1,0.1] in the left, middle, and right panels. Figure 10: Azimuthally averaged density at different scales at \(t=68.2T_{0}\). The green curves are the velocity streamlines calculated with azimuthally averaged velocities. The white curves label where the azimuthally averaged \(\beta=1\). The red curves label where the azimuthally averaged density is only larger than the density floor by 10%, indicating that the majority of grids have reached the density floor there. 
Only the pole region has reached the density floor. The range of the colorbar is [-8,1], [-5,1],[-3.5,-1] in the left, middle, and right panels. Figure 9: The disk surface density, \(\langle B_{\theta,mid}\rangle^{2}/2\langle P\rangle\), \(B_{mid}^{2}/2P\) (upper panels), midplane r-\(\phi\) stress, midplane \(\alpha_{r\phi}\), vertically integrated \(\alpha_{int}\) (lower panels) along the R direction at different times. The dashed curves in the lower panels represent negative values. In the upper right panel, the solid curves are \(\langle B_{mid}\rangle^{2}/2\langle P\rangle=(\langle B_{r,mid}\rangle^{2}+ \langle B_{\phi,mid}\rangle^{2}+\langle B_{\phi,mid}\rangle^{2})/2\langle P\rangle\), while the dashed curves are \(\langle B_{mid}^{2}/2P\rangle\). All plotted quantities are averaged over both the azimuthal direction and time (from each time span from 18 to 20 \(T_{0}\), 38 to 40 \(T_{0}\), and 66.2 to 68.2 \(T_{0}\), 21 snapshots are averaged). The quantities with \(\langle\rangle\) are azimuthally averaged first, and then the plotted quantities are averaged over time. (the \(\langle B_{r}\rangle\langle B_{\phi}\rangle\) panel) that is produced by stretching the radial fields azimuthally. The radial fields in magnetospheric accretion come from the stellar dipole fields after reconnection events (the \(\langle B_{\phi}\rangle\) panel). On the other hand, the radial fields in the net vertical field simulations come from the surface accretion itself, which drags the fields at the surface inwards tilting the vertical fields into the horizontal direction. The magnetic fields also connect the midplane with the disk atmosphere vertically in both cases. The \(B_{\theta}B_{\phi}\) stress acts like magnetic braking, which removes angular momentum from the surface to the midplane (the \(\langle B_{\theta}B_{\phi}\rangle\) panels). Thus, the \(B_{\theta}B_{\phi}\) stress increases surface accretion further while it slows down the midplane's radial inflow (or even makes it move outwards). On the other hand, the \(B_{\theta}B_{\phi}\) stress within the disk can only redistribute angular momentum vertically within the disk, and it cannot lead to the overall disk accretion if we integrate the disk accretion rate vertically throughout the disk. The overall accretion is led by the \(r\)-\(\phi\) stress integrated vertically within the disk and the \(\theta\)-\(\phi\) stress at the disk surface (Equation 16). The simulations in Zhu and Stone (2018) show that the \(r\)-\(\phi\) stresses play a more important role than the \(\theta\)-\(\phi\) stresses at the disk surface, which is also the case for magnetospheric accretion presented here (more details in Appendix B). Jacquemin-Ide et al. (2021) discover that, for disks threaded by net vertical fields, the surface accretion region consists of two parts: the laminar region at lower \(z\) where the net fields (\(\langle B_{r}\rangle\langle B_{\phi}\rangle\)) dominate the angular momentum transport and the turbulent region at higher \(z\) where the turbulent fields (\(\delta B_{r}\delta B_{\phi}\)) dominate the transport. This is also shown in our Figure 12. They identify that the turbulent region at \(z\sim R\) is due to MRI when the net azimuthal fields become weaker than the net vertical fields. For our magnetospheric accretion simulation, we also detect the turbulent surface accretion region above the laminar region. 
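The laminar and turbulent contributions can be separated from a snapshot by splitting each field into its azimuthal mean and fluctuation. A minimal sketch is given below; the array names and the \((R,z,\phi)\) grid layout are assumptions, and code units with \(G=M_{*}=1\) are used.

```python
import numpy as np

def stress_decomposition(R, rho, v_R, v_phi, B_R, B_phi, P):
    """Arrays shaped (nR, nz, nphi); R is the 1-D cylindrical radius.
    Returns azimuthally averaged alpha parameters on the (R, z) plane."""
    mean = lambda q: q.mean(axis=-1)                     # azimuthal average
    v_K = (R**-0.5)[:, None]                             # Keplerian speed (code units)
    dv_phi = v_phi - v_K[..., None]
    dB_R = B_R - mean(B_R)[..., None]                    # fluctuating fields
    dB_phi = B_phi - mean(B_phi)[..., None]
    p_mean = mean(P)                                     # mean gas pressure
    return {
        "alpha_Rey": mean(rho * v_R * dv_phi) / p_mean,        # Eq. 14
        "alpha_Max": -mean(B_R * B_phi) / p_mean,              # Eq. 14
        "alpha_Max_mean": -mean(B_R) * mean(B_phi) / p_mean,   # laminar part
        "alpha_Max_turb": -mean(dB_R * dB_phi) / p_mean,       # turbulent part
    }

# The two Maxwell pieces add up to alpha_Max, so their ratio shows where the
# net ("laminar") fields or the fluctuations dominate the transport.
```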
Turbulent stress is also observed in the vicinity of the magnetosphere, which could be attributed to the interchange instability. However, it is worth noting that the turbulent stress is significantly weaker than the laminar stress in that region.

Figure 12: Disk structure from this work (upper panels), compared to the structure of a disk threaded by net vertical magnetic fields (lower panels, from Zhu and Stone, 2018). Both simulations are shown at the final output, and all plotted quantities are azimuthally averaged. The white contours in the leftmost panels (the density panels) represent where \(\langle\beta\rangle=1\). The streamlines in the \(\langle v_{\phi}\rangle\) panels show the poloidal velocity, while the streamlines in the \(\langle B_{\phi}\rangle\) panels show the poloidal magnetic fields.

The most noticeable differences between these two models are mainly at the magnetosphere and the lowest density region at \(z>r\). In the upper panels of Figure 12, the stellar dipole fields in the low-density region are pointing toward the star. Since the star is non-rotating, the stellar field lines can magnetically brake the low-density region, leading to accretion onto the star instead of launching outflows. As discussed in Section 2.1, for an axisymmetric steady flow around a non-rotating star, the gas velocity and the magnetic field lines are along the same direction (\(B_{\phi}/B_{r}\sim v_{\phi}/v_{r}\)). High above the disk at \(z>0\), \(B_{\phi}\), \(B_{r}\), \(v_{\phi}\), and \(v_{r}\) are all negative (the \(\langle v_{\phi}\rangle\) and \(\langle B_{\phi}\rangle\) panels). Thus, the flow falling onto the star is in the opposite azimuthal direction from the direction of the Keplerian rotation, similar to the traditional picture in Section 2.1. In contrast, for disks threaded by net vertical magnetic fields shown in the lower panels, the net vertical field lines at the disk surface are tilted away from the star. The material at the disk surface connects magnetically to the higher and outer low-density region. Since the disk rotates faster than that region, the magneto-centrifugal wind is launched. Such a quasi-steady structure is not established instantaneously, so it is important to understand how the disk evolves to such a state from the initial condition. Such evolution may have implications for outbursting disks. Figure 13 shows the density, velocity, and magnetic structure at different times. In the initial condition, we set up a disk that is in hydrostatic equilibrium with the stellar gravity and threaded by a dipole magnetic field. Due to the azimuthal shear, the poloidal fields are quickly stretched to produce toroidal components. The second panel in the third row shows that, inside the matter-dominated disk region (\(\overline{\beta}\geq 1\)), the faster rotation at the inner disk drags the magnetic fields with negative \(B_{R}\) at \(z>0\) to develop a positive \(B_{\phi}\) component. On the other hand, in the magnetically dominated region (\(\overline{\beta}<1\)) close to the star, the flow's rotation is slowed down by the magnetic fields from the non-rotating star (the second panel in the second row). Since the Keplerian rotating disk with \(\overline{\beta}\geq 1\) is outside the magnetosphere with \(\overline{\beta}<1\), the magnetic fields with negative \(B_{R}\) in the magnetosphere are stretched to develop a negative \(B_{\phi}\) component. The strong shear quickly amplifies \(B_{\phi}\), especially at the \(\overline{\beta}\sim 1\) region. 
The increasing magnetic pressure starts to push material outwards (the third and fourth panels in the second row), forming an outwardly moving magnetic bubble that stretches the magnetic fields in the radial direction (the third row). Such strong magnetic fields in the atmosphere push the \(\overline{\beta}\sim 1\) curve further into the disk region (the fourth panel in the first row). After the magnetic bubble moves outwards, magnetic field lines at the base reconnect so that once again they connect the disk region to the star (the fifth and sixth panels in the third row). This is very similar to the unsteady field inflation model in Lynden-Bell & Boily (1994); Lovelace et al. (1995); Uzdensky et al. (2002). However, unlike these previous studies, after this initial relaxation stage, the disk generates a magnetically supported surface region, which expands with time and allows the disk to accrete quasi-steadily. The \(B_{\phi}\) in the disk atmosphere increases steadily due to the Keplerian shear until the field growth is balanced by turbulent dissipation. This whole process is self-similar at different radii, and eventually the poloidal field structure looks similar to the initial dipole structure except with a very strong azimuthal component. Even at the end of the simulation, the largest scale (e.g. the leftmost panels in Figures 10 and 11) is still far from steady state. Instead, the flow and magnetic structure there look like the structure at \(R\sim\)1 when \(t=5T_{0}\) (Figure 13), demonstrating that the flow and magnetic structure expand self-similarly with time. ## 5 Discussion ### Comparison with Observations: Filling Factor and Variability Observable accretion signatures of classical T-Tauri stars are produced within the magnetosphere (Hartmann et al., 2016). The accretion shock at the surface of the star produces the excess emission at ultraviolet, which is the most robust tracer for estimating the disk's accretion rate. Atomic lines are produced over a large volume (probably covering the whole magnetosphere) and the line shapes can be used to constrain the flow structure within the magnetosphere. Detailed radiative transfer modeling reveals that: 1) the maximum infall velocity onto the star is roughly consistent with free-fall velocity (Hartmann et al., 1994; Muzerolle et al., 1998, 2001; Kurosawa et al., 2011); 2) the infall is at moderate latitudes from the disk plane but not at the poles (Bonnell et al., 1998; Muzerolle et al., 1998); 3) the covering factor of the accretion columns at the stellar surface (called "filling factor") is 0.001 to 0.1 (Calvet & Gullbring, 1998); 4) the outflow rate (with 100 km/s velocity) is correlated with disk accretion rate (Hartigan et al., 1995; Rigliaco et al., 2013; Natta et al., 2014). We can compare the flow structure in our simulated magnetosphere with above observed properties. Physical quantities on the \(\theta\)-\(\phi\) plane at different radii are shown in Figure 14. The top two rows clearly demonstrate that the material is lifted from the disk (the \(r=1.6\) panel) to higher altitudes within the magnetosphere (\(r\)=0.4 and 0.8 panels). The \(r\)=0.4 panel suggests that most material will eventually fall onto the star at \(\theta\sim 30^{o}\) and \(150^{o}\). We caution that the position of the hot spot also depends on the tilt of the dipole fields and/or the multipole field components (Long et al., 2008). Our discussion here is based on our simulation with the aligned dipole fields. 
Figure 14 also shows that the filamentary features stretch from north to south, and fewer filaments penetrate into \(r=0.4\) at the midplane compared with filaments at \(r=0.8\). The infalling material also accelerates as it falls, reaching free-fall speed at \(r\sim\)0.4. Figure 13: Top three rows: the disk density, velocity, and magnetic structure at different times during the simulation (left to right panels). The white contours in the top panels label where \(\overline{\beta}\equiv 2\langle P\rangle/(\langle B_{r}\rangle^{2}+\langle B_{\theta}\rangle^{2}+\langle B_{\phi}\rangle^{2})=1\). The streamlines in the panels of the second row represent the poloidal velocity, while the streamlines in the panels of the third row represent the poloidal magnetic field. The bottom row: the schematic diagrams showing how magnetic fields and flow structure change with time. The red and blue curves are magnetic field lines with positive and negative \(B_{\phi}\) components. The arrows show the flow in the poloidal direction, and their colors (yellow or purple) represent positive and negative \(v_{\phi}\) in the region. Figure 14: Various quantities at \(r=0.4\), 0.8, and 1.6 (left, middle, and right columns). The upper two rows show \(\log_{10}\rho\) mapped to a sphere (the top row) and in the \(\phi\)-\(\theta\) plane (the second row). The third row shows the radial velocity normalized to the free-fall velocity at those three different radii. The colorbar extends from -1 to 1, and the black contour represents the value of 0.8. The fourth row shows the fraction of the area on the sphere with density (\(\rho\), blue curves) or radial mass flux (\(\rho v_{r}\), black curves) larger than the given value on the x-axis. The vertical black and blue lines label where the radial mass flux and density should be for spherically symmetric accretion with the measured accretion rate (0.005) and the free-fall velocity. The bottom panels show the fraction of the sphere (\(f\)) where the integrated accretion rate is \(\dot{M}_{>}\). To derive \(\dot{M}_{>}\), we integrate the mass flux from the highest flux region to the lowest flux region. As discussed in Section 4.1, Figure 6 shows that the infall speed at moderate latitudes is almost the free-fall speed. We notice that there is more than one accretion hot spot and column, also shown in Figure 2. There could be many layers of magnetosphere with an onion-like structure. Recent observations by Thanathibodee et al. (2019) find that some systems do have multiple geometrically isolated accretion flows, which seems to be consistent with our simulation. Considering that accretion is concentrated at high altitudes and within several accretion columns, we calculate the fraction of the sphere where most accretion occurs (the "filling" factor). The fourth row in Figure 14 shows the fraction of the area with density (\(\rho\)) or radial mass flux (\(\rho v_{r}\)) higher than a given value. The vertical dotted lines label the density and radial mass flux for spherically symmetric accretion (labeled as \(\rho_{sph}\) and \(\rho_{sph}v_{ff}\)) that is calculated using the measured accretion rate (0.005) and the free-fall velocity. At all three radii, roughly 30% of the area has a mass flux higher than \(\rho_{sph}v_{ff}\). On the other hand, the density of the accretion columns in the outer disk is significantly higher than \(\rho_{sph}\), since the radial velocity there is significantly lower than the free-fall velocity. 
At \(r=1.6\), 70% of the area has a density higher than \(\rho_{sph}\). One important parameter in the magnetospheric accretion model is the filling factor, \(f\)(Hartmann et al., 2016) defined as the surface covering fraction of the accretion columns. With a small \(f\), most accretion occurs within a small patch on the stellar surface. Since all accretion energy is released in such a small region, a small \(f\) produces a high temperature hot spot. For young stars, this produces UV excess emission over the photospheric SED, a distinct feature indicating magnetospheric accretion. On the other hand, \(f\) is less well defined in our simulations, since accretion occurs across a wide range of densities and mass fluxes over the sphere, rather than in discrete patches. Thus, we define a filling factor function, \(f(\dot{M})\), as the surface covering fraction for regions where the integrated mass flux is \(\dot{M}\). We integrate the mass flux from the patch with the highest mass flux to the patch with the lowest mass flux. More specifically, to calculate this function, we first calculate the distribution function for \(M_{r}\equiv\rho v_{r}\) as the probability of finding accreting columns within a certain range of \(M_{r}\) values across all \(4\pi\) direction at \(r\), \[P(M_{r,1}<M_{r}<M_{r,2})=\int_{M_{r,1}}^{M_{r,2}}P(M_{r})dM_{r}\,, \tag{29}\] where \(P\) is calculated by dividing the solid angle corresponding to the selected \(M_{r}\) range by the total solid angle of \(4\pi\) steradians. Essentially, we rearrange all patches on the stellar surface according to its mass flux. Then, we integrate \(P\) from the patch with the highest mass flux \(M_{r,max}\) all the way down to \(M_{r,\dot{M}}\) to derive \(f(\dot{M}_{>})\): \[f(\dot{M}_{>})=-\int_{M_{r,max}}^{M_{r,\dot{M}}}P(M_{r})dM_{r}\,, \tag{30}\] where \(\dot{M}_{>}\) is the integrated flux for top accreting patches \[\dot{M}_{>}=\int_{M_{r,max}}^{M_{r,\dot{M}}}pv_{r}P(M_{r})4\pi r^{2}dM_{r}\,. \tag{31}\] The filling factor \(f(\dot{M}_{>})\) is shown in the bottom panels of Figure 14. From the right to left panels, the filling factor is smaller at the inner radius. At \(r=0.8\), 90% of the accretion occurs within 30% of the area. At \(r=0.4\), 90% of accretion occurs within 20% of the area, and 50% of accretion is within 5% of the area. At smaller radii, the filling factor could be even smaller. On the other hand, considering that most protostars have stellar surfaces around 0.2 \(R_{T}\) which is only a factor of 2 smaller than \(r=\)0.4, the filling factor derived at \(r=\)0.4 should be similar to the filling factor at the stellar surface. Thus, we estimate that the filling factor is \(\sim\)5%-20%. Figure 15: Top panel: evolution of the mass accretion rate at \(r=0.4\), 0.8, and 1.5 with time. The insert zooms into the last 3 orbits. The bottom three panels are space time diagram for \(\rho v_{r}\) along \(r\), \(\theta\), and \(\phi\) directions. The \(\dot{M}\) and \(\rho v_{r}\) in the top two panels are integrated and averaged over the sphere respectively. The \(\rho v_{r}\) in the \(t\)-\(\theta\) and \(t\)-\(\phi\) panels are averaged along the \(\phi\) and \(\theta\) directions at \(r=0.4\). The red line in the bottom panel shows the azimuthal movement for a hot spot if its orbital frequency is 20% of the Keplerian frequency at the magnetospheric truncation radius. 
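To make the filling-factor construction above concrete, the short Python sketch below ranks the patches of a sphere by their inward mass flux and accumulates area and accretion rate, following the logic of Equations 29-31. The lognormal field standing in for \(\rho v_{r}\) is purely illustrative mock data, not simulation output.

```python
# Sketch of the filling-factor calculation (Equations 29-31): patches on a
# sphere are ranked by inward mass flux, and f(Mdot_>) is the surface
# covering fraction of the top-accreting patches that together carry the
# integrated accretion rate Mdot_>.  Synthetic data stand in for the output.
import numpy as np

rng = np.random.default_rng(0)

ntheta, nphi = 128, 256
theta = np.linspace(0.0, np.pi, ntheta)
phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
dtheta, dphi = theta[1] - theta[0], phi[1] - phi[0]

# Solid angle of each (theta, phi) patch.
domega = np.outer(np.sin(theta) * dtheta, np.full(nphi, dphi))

# Placeholder for rho*v_r at radius r (negative = inflow); a lognormal field
# concentrated at mid-latitudes mimics filamentary accretion.
mass_flux = -rng.lognormal(mean=0.0, sigma=1.5, size=(ntheta, nphi))
mass_flux *= (np.exp(-((theta[:, None] - np.pi / 3) / 0.3) ** 2)
              + np.exp(-((theta[:, None] - 2 * np.pi / 3) / 0.3) ** 2))

r = 0.4  # radius of the sampling sphere in code units

# Accretion rate carried by each patch (inflow counted as positive).
mdot_patch = np.clip(-mass_flux, 0.0, None) * domega * r**2

# Rank patches from highest to lowest mass flux and accumulate.
order = np.argsort(mdot_patch.ravel())[::-1]
mdot_cum = np.cumsum(mdot_patch.ravel()[order])
area_cum = np.cumsum(domega.ravel()[order]) / (4.0 * np.pi)

mdot_total = mdot_cum[-1]
for frac in (0.5, 0.9):
    f = area_cum[np.searchsorted(mdot_cum, frac * mdot_total)]
    print(f"{frac:.0%} of the accretion is carried by {f:.1%} of the sphere")
```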
Since most accretion is concentrated in a few accretion columns and these columns appear and disappear dynamically due to the interchange instability, it is natural to ask if such an accretion is steady. We integrate the total accretion rate over the sphere at \(r=0.4\), 0.8, and 1.5 respectively, and plot the accretion rates with respect to time in the top panel of Figure 15. The accretion rates at all three radii are almost the same, indicating a constant accretion rate from the disk to the star. There is little time lag among the accretion rates at all three radii, even when the accretion rate changes by a factor of 2 over \(\sim\)10 orbits at the beginning of the simulation. This simultaneous change of the accretion rates at all three radii is due to the fast radial inflow from the disk surface accretion and magnetospheric accretion. After 30 orbits, the accretion rates become almost constant. From the insert of Figure 15, it is evident that the accretion rate in the disk (e.g. \(r=1.5\)) is smoother than those in the magnetosphere (e.g. \(r=0.4\) and \(r=0.8\)). These short time-scale fluctuations within the magnetosphere are probably due to the filaments produced by the interchange instability. On the other hand, the amplitudes of the \(\dot{M}\) variability are similar at these three radii, as shown in Figure 16. The averaged rates are within 15% of each other and the ratio between the standard deviation of the accretion rates and the mean accretion rates is also close to each other (\(\delta\dot{M}/\langle\dot{M}\rangle\sim 23\%\)). The \(\dot{M}\) distribution along the \(r\), \(\theta\), and \(\phi\) directions are shown in the spacetime plots in Figure 15. In the radial direction, there is a region of steady accretion which grows with time, beyond which there is a transition region that also moves outwards (with positive \(\rho v_{r}\)). In the \(\theta\) direction, most accretion onto the star occurs around 30 and 150 degrees. The \(\phi\) panel shows that the accretion is not axisymmetric. This is expected due to the filaments within the magnetosphere. However, most interestingly, the accretion hot spot rotates around the central star at 20% of the Keplerian frequency at the magnetospheric truncation radius. This \(5T_{0}\) periodicity is also shown in the periodogram for the accretion rate at one particular \(\phi\) angle. The bottom panel of Figure 17 shows that the periodogram for \(\phi=0\) has a peak around \(5T_{0}\). On the other hand, there are many other peaks due to the statistical noise. Thus, we averaged all the periodograms in different \(\phi\) directions to lower the noise, and the averaged curve is the red curve. We still see a bump around \(5T_{0}\), confirming the trend in the space-time diagram at the bottom panel of Figure 15. This \(5T_{0}\) modulation is linked to the orbital motion of the magnetic bubble in Section 4.3. The middle panels in Figure 8 show that the magnetic bubble develops around \(R_{T}\) but it extends all the way to the central star. A dense filament can be seen at the edge of the bubble, connecting to the star. The filament is most apparent in the middle panels from 54 to 61 \(T_{0}\) and this accreting filament can also be seen in the bottom panel of Figure 15 during the same period of time (the bubble and filament actually start around 50 \(T_{0}\) shown in the movie after Figure 8). 
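The variability statistics and the periodograms discussed here reduce to a few array operations. The sketch below, on mock data with 341 uniformly spaced snapshots (as in our analysis window), shows one way to compute \(\delta\dot{M}/\langle\dot{M}\rangle\), an FFT-based periodogram, and the \(\phi\)-averaged periodogram used to suppress noise; the synthetic series and its injected 5 \(T_{0}\) modulation are assumptions for illustration only.

```python
# Sketch of the accretion-rate time-series diagnostics: variability
# statistics and a simple FFT periodogram searched for the ~5 T0 bump.
# The mock series below only stands in for the measured Mdot(t).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(34.1, 68.2, 341)                  # snapshot times in T0
dt = t[1] - t[0]
mdot = 0.005 * (1.0 + 0.1 * np.sin(2 * np.pi * t / 5.0)
                + 0.2 * rng.standard_normal(t.size))

mean, std = mdot.mean(), mdot.std()
print(f"<Mdot> = {mean:.4f}, sigma/<Mdot> = {std / mean:.0%}")

def periodogram(series, dt):
    """Power spectrum of a uniformly sampled, mean-subtracted series."""
    power = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freq = np.fft.rfftfreq(series.size, d=dt)
    return 1.0 / freq[1:], power[1:]              # drop the zero-frequency bin

period, power = periodogram(mdot, dt)
print(f"strongest period: {period[np.argmax(power)]:.1f} T0")

# Averaging the periodograms over many phi directions (256 in the text)
# suppresses statistical noise before looking for the ~5 T0 bump.
mdot_phi = mdot[:, None] * (1.0 + 0.2 * rng.standard_normal((t.size, 256)))
mean_power = np.mean([periodogram(mdot_phi[:, i], dt)[1]
                      for i in range(256)], axis=0)
print(f"phi-averaged peak: {period[np.argmax(mean_power)]:.1f} T0")
```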
In terms of astronomical observations, we anticipate that such accretion modula Figure 16: The probability distribution function of the accretion rates at r=0.4, 0.8, and 1.5 (black, blue, and red curves). They are derived using 341 snapshots uniformly spanned from half of the total time (\(t\)=34.1) to the end of the simulation. The integration of the probability over all the \(\dot{M}\) is 1. The vertical lines are the averaged accretion rates. The standard deviation of the accretion rates is also given in the upper left corner. Figure 17: Top panel: the periodogram for the integrated mass accretion rate at \(r=0.4\), 0.8, and 1.5. Bottom panel: the periodogram for the accretion rate at \(r=0.4\) and \(\phi=0\) (black curve) and the averaged curve of all the periodograms for the accretion rate at \(r=0.4\) in 256 \(\phi\) directions (red curve). The mass accretion rate data are from the top and bottom panels of Figure 15, and we use 341 snapshots from 34.1 to 68.2 \(T_{0}\).The red curve in the bottom panel has been stretched vertically by a factor of 1.5 to highlight its features. tion would be observable in inclined systems when the accretion hot spot undergoes orbital motion around the star at this particular frequency. We have also calculated the periodogram for the integrated accretion rate over the sphere at \(r=0.4\), 0.8, and 1.5 (top panel of Figure 17). At both \(r=0.4\) and 0.8, there is also a peak at \(\sim\)5.5 \(T_{0}\) which could also be related to the modulation and duration of the magnetic bubble. On the other hand, the power in the periodogram increases with \(\Delta T\), and there could be more peaks at larger \(\Delta T\) which can only be studied with long timescale simulations. ### The Spin-up Torque The star is spun up by its coupling with the disk through magnetic fields. The total angular momentum of the star and the disk is conserved. If we integrate Equation 5 over the whole volume from \(r=r_{in}\) to \(r=r_{out}\), we can derive the region's angular momentum change \[\frac{\partial\int_{r_{in}}^{r_{out}}R\rho v_{\phi}dv}{\partial t}= \int_{r_{in}}R_{in}(\rho v_{r}v_{\phi}-B_{r}B_{\phi})dS\] \[- \int_{r_{out}}R_{out}(\rho v_{r}v_{\phi}-B_{r}B_{\phi})dS\,, \tag{32}\] where the first and second terms on the right-hand side are the integrals over the sphere at \(r_{in}\) and \(r_{out}\). When the integrated stress at \(r\) is positive, the spherical region within it loses angular momentum. If the disk region extends from \(r_{in}\) to a very large \(r_{out}\) where the density and velocity are close to zero, the second integral on the right-hand side becomes zero. Since angular momentum can be changed by torque, the first term on the right-hand side can be considered as the torque between the disk and the star. The \(B_{r}B_{\phi}\) term is the magnetic stress/torque and the \(\rho v_{r}v_{\phi}\) term is the hydrodynamical stress/torque that includes both the turbulent stress/torque (\(\rho v_{r}\delta v_{\phi}\)) and angular momentum carried by the accreting material (\(\rho v_{r}(v_{\phi})\)). We integrate the total torque over the sphere at \(r=0.4\), 0.8, and 1.5, and plot them with time in Figure 18. All three curves overlap with each other, suggesting that the disk has reached a constant angular momentum flow within \(r=1.5\). If we divide the torque by \(\dot{M}(GM_{*}R_{T})^{1/2}\), we derive the \(n\) parameter in Equation 9. 
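A minimal sketch of this diagnostic is given below: the \(r\)-\(\phi\) stress \(\rho v_{r}v_{\phi}-B_{r}B_{\phi}\) is integrated over a sphere and normalized by \(\dot{M}(GM_{*}R_{T})^{1/2}\) to give the \(n\) parameter. The array shapes and the randomly filled fields are placeholders for the azimuthally resolved simulation output in code units (\(G=M_{*}=R_{T}=1\)), and the sign of the result depends on the adopted orientation conventions.

```python
# Sketch of the spin-up torque diagnostic: integrate the r-phi stress over a
# sphere of radius r and normalize by Mdot * sqrt(G M_* R_T).
import numpy as np

def spin_up_n(r, theta, phi, rho, v_r, v_phi, B_r, B_phi, mdot,
              GM_star=1.0, R_T=1.0):
    """n = (torque integrated over the sphere at r) / (mdot sqrt(GM_* R_T))."""
    dtheta, dphi = theta[1] - theta[0], phi[1] - phi[0]
    dS = r**2 * np.sin(theta)[:, None] * dtheta * dphi   # surface element
    lever = r * np.sin(theta)[:, None]                   # cylindrical radius R
    stress = rho * v_r * v_phi - B_r * B_phi             # r-phi momentum flux
    return np.sum(lever * stress * dS) / (mdot * np.sqrt(GM_star * R_T))

# Mock call with random fields, only to show the expected array shapes.
theta = np.linspace(0.0, np.pi, 64)
phi = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
rng = np.random.default_rng(0)
fields = [rng.standard_normal((theta.size, phi.size)) for _ in range(5)]
n = spin_up_n(0.4, theta, phi, *fields, mdot=0.005)
```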
The measured mean value of \(n\) is \(\sim\)0.8 with the standard deviation of \(\sim\)0.1, shown in the second panel of Figure 18. The distribution of the torque along the \(\theta\) direction at \(r=0.6\) is shown in the third panel of Figure 18. We can see that most of the torque is exerted at \(\theta\sim 45^{o}\) and \(135^{o}\), corresponding to regions where the highest accretion rates are observed at \(r=0.6\). The torque between the star and the disk sets the constant in Equation 10, and this constant is essential for the disk evolution. To understand how different stress/torque terms contribute to the total torque, we plot the Maxwell stress/torque and the hydrodynamical stress/torque in Figure 20. Close to the stellar surface (e.g. \(r\)=0.4), the magnetic stress dominates and is exerted at high latitudes where most of accretion occurs. This indicates that the accretion disk twists the magnetic fields that connect to the star, and these field lines torque the star while channeling the accretion flow. While the magnetic stress spins up the star (bottom panels), the small hydrodynamical stress actually tries to spin down the star (middle panels). This is because the infalling gas rotates in the opposite direction from the disk (SS4.4) and the star thus accretes gas Figure 18: Top two panels: total torque and the \(n\) parameter at \(r\)=0.4, 0.8, and 1.5 with time. The average and the standard deviation of \(n\) after 10 orbits are given in the legend of the second panel. Third panel: the torque (\(\langle M_{r\phi}\rangle 2\pi r^{3}\mathrm{sin}^{2}\theta\)) at \(r=0.6\) along the \(\theta\) direction. Fourth and fifth panels: \(\langle M_{r\phi}\rangle 2\pi r^{2}\mathrm{sin}^{2}\theta\Delta r\) along the \(r\) direction at \(\theta\)=1 and \(\pi\)-1 (\(S_{2}\) and \(S_{3}\) areas in Figure 19). Bottom panel: integrated torques along \(S_{2}\) (black curve) and \(S_{3}\) (green curve) averaged from \(t=50\)\(T_{0}\) to the end of the simulation. with negative angular momentum. Around the magnetospheric truncation radius (e.g. \(r\)=0.8), the hydrodynamical stress from the penetrating filaments at the disk midplane becomes more apparent, while the total stress remains the same as the stress at \(r\)=0.4. At \(r=1.5\), the magnetic stress is \(\sim 0\), and the positive magnetic stress in the disk region is balanced by the negative magnetic stress higher above. This means that the hydrodynamical stress equals the constant. For the disk region (e.g. \(r=3\)), both hydrodynamical and magnetic stresses become larger than the constant (see the discussion after Equation 10), and the magnetic stress drives the accretion inwards. Overall, the magnetic fields within the magnetosphere transfer angular momentum to the star, while the magnetic fields in the disk transfers angular momentum outwards. The magnetic stress changes sign at \(r\sim 1.5\), slightly outside the magnetospheric truncation radius. The fact that the \(n\) parameter is \(\sim\)1 suggests that most of the coupling between the magnetosphere and the disk occurs around \(R\sim R_{T}\). To confirm this and understand how different disk regions contribute to the star's spin-up, we calculate the stress at the interface between the disk region and the magnetosphere, shown as \(S_{1}\), \(S_{2}\), and \(S_{3}\) in Figure 19. 
The total torque in the disk region can be separated into \[\frac{\partial\int R\rho v_{\phi}dv}{\partial t}= \int_{S_{1}}R_{in}(\rho v_{r}v_{\phi}-B_{r}B_{\phi})dS\] \[+ \int_{S_{2}}R(\rho v_{\theta}v_{\phi}-B_{\theta}B_{\phi})dS\] \[- \int_{S_{3}}R(\rho v_{\theta}v_{\phi}-B_{\theta}B_{\phi})dS\,, \tag{33}\] where \(S_{1}\) is the shell from \(\theta=1\) to \(\pi-1\) at \(r_{in}=0.6\), while \(S_{2}\) and \(S_{3}\) are surfaces of a cone at \(\theta=1\) and \(\theta=\pi-1\) from \(r_{in}=0.6\) outwards. \(S_{2}\) and \(S_{3}\) are chosen to enclose the disk region, including the surface accretion region. The space-time diagrams for these three terms are shown in the third to fifth panels of Figure 18. Since most of the torque at \(r=0.6\) is exerted beyond the \(\theta\)=[1, \(\pi\)-1] region (the third panel), the integrated torque at \(S_{1}\) is small compared with the torque at \(S_{2}\) and \(S_{3}\). The fourth and fifth panels show that most of the torque at \(S_{2}\) and \(S_{3}\) is from the region within \(r\sim 1\). The white band extending from \(log_{10}r\sim\)0 to 0.4 suggests that the torque density is \(\sim\)0 beyond \(r\sim\)1. This is also confirmed in the bottom panel showing the integrated torques at \(S_{2}\) and \(S_{3}\). The total torque averaged over the last 50 \(T_{0}\) is -0.0065, and the torque at \(S_{1}\) during the same period of time is -0.0020. The bottom panel shows that the torques at \(S_{2}\) and \(S_{3}\) are also \(\sim\)-0.0020. At \(r=1\), the integrated torques at \(S_{2}\) and \(S_{3}\) are both -0.0012. Thus, 70% of the total torque is exerted at the disk surface within \(r=1\), and 90% of the torque is exerted within \(r=3\). Our torque results are noticeably different from recent work by Takasao et al. (2022) who finds significant hydrodynamical stress contribution close to the star and the \(n\) parameter is significantly smaller than 1. Especially for their model C, whose large corotation radius is similar to our non-rotator setup, its \(n\) value is \(\sim\)0. Two factors could contribute to the difference. The first is that our truncation radius is 10 times the stellar radius while their truncation radius is 2 times the stellar radius due to their much weaker stellar field. As shown in Figure 20, magnetic stress is more important deeper into the magnetosphere. The second difference is the strong outflow in Takasao et al. (2022), which is absent in our simulations. The difference in the outflow rate could be due to the different adopted coronal density/density floor around the star. The impact of the coronal density/density floor is an important issue which needs to be thoroughly examined in future. ### Field Transport The transport of net magnetic flux in disks directly controls the disk's long-term evolution. However, studying magnetic field transport in disks can be challenging, often influenced by inner boundary conditions of the simulation. Fortunately, our simulation setup has two advantages allowing us to study field transport. First, our setup incorporates the central star within the simulation domain, and the stellar magnetic fields are the only source of magnetic fields in the problem. Thus, there is no need for special boundary conditions at the stellar surface. Second, Athena++ conserves the total magnetic flux in the whole domain to machine precision, except for flux losses at the boundaries. To examine flux transport, we monitor the evolution of magnetic flux integrated outward from the central star. 
Given our Cartesian grid setup, we integrate the flux over the area of a circle at the disk midplane. The Figure 19: The azimuthally averaged density and poloidal magnetic streamlines at the end of the simulation. The white lines label the surfaces where the torques are calculated in Figure 18. integrated flux within different sized circles around the central star is shown at the bottom panel of Figure 21. Shortly after the simulation starts, the dipole magnetic fields begin moving outward, reducing the dipole field strength within \(r\sim 1\). This decrease of dipole fields is likely due to field inflation and the reconnection at the beginning of the simulation (Figure 13). The outward moving fields are piled up at the disk region (the high \(B_{z}\) values that are above the initial field strengths in the upper panel of Figure 21). With outer regions becoming MRI active, the fields are transported further outwards. At the end of our simulation, the whole region within \(R\sim 5\) has weaker fields than the initial condition. The disk region beyond \(R\sim\)1 seems to have the most significant field reduction compared with the magnetosphere region within \(R\sim\)1. Overall MRI turbulence in the disk seems to be efficient at transporting the fields outwards. The outward moving fields may accelerate the operation of MRI at the outer disks. Such outward field transport is different from field transport in simulations with net vertical fields. Previous net vertical field simulations find that either the fields are transported to the star (Zhu and Stone, 2018; Jacquemin-Ide et al., 2021) at the mass accretion timescale (Jacquemin-Ide et al., 2021) or maintains a quasi-steady state (Mishra et al., 2020). Such difference indicates that field transport also depends on the initial field distribution besides the accretion disk properties. To achieve a long-term equilibrium field configuration, it is necessary to conduct simulations that run for significantly longer timescales. ### Implications for Planet Formation Our simulation can be scaled to realistic astronomical systems that undergo magnetospheric accretion. When considering a disk that is threaded by the stellar dipole fields and it has the same temperature slope (\(q=-1/2\) in Equation 18) as in our simulation, there are only two dimensionless free parameters to define the sys Figure 21: The azimuthally averaged vertical magnetic fields (upper panel) and the radially integrated magnetic flux (lower panel) at the midplane. Dark to light colored curves show \(t=\)0, 1, 5, 20, 68.2 \(T_{0}\) respectively. Figure 20: Similar to Figure 18 but for different stress components. Top panels: the integrated torque over spheres at different radii (left to right panels) with time. The middle and bottom panels: the hydrodynamical and magnetic components of the torque along the \(\theta\) direction with time. The normalization and colorbar are the same as those in the third panel in Figure 18. tem: the disk aspect ratio at the magnetospheric truncation radius (\(h(R_{T})\)), and the ratio between \(R_{T}\) and \(R_{*}\). Since material undergoes free fall toward the central star within the magnetosphere, the region immediately surrounding the star is unlikely to have a significant impact on the dynamics within the disk, except through thermal feedback. Thus, the only important free parameter is \(h(R_{T})\). 
Although we have only studied the thick disk case with \(h(R_{T})=0.1\) here, we will explore thinner disks, which will be more applicable to protoplanetary disks, in future works. Nevertheless, we still use our current simulation results to study protoplanetary disks and will justify some parameter choices later. We consider a typical protoplanetary disk with an accretion rate of \(\dot{M}=10^{-8}\mathrm{M_{\odot}\,yr^{-1}}\) around a 2 \(r_{\odot}\), 0.5 \(M_{\odot}\) star having a 1kG dipole magnetic field. The magnetic truncation radius (Equation 8) is thus \[R_{T}= 14.4\left(\frac{B_{*}}{1kG}\right)^{4/7}\left(\frac{M_{*}}{0.5M_{ \odot}}\right)^{-1/7}\] \[\left(\frac{\dot{M}}{10^{-8}\mathrm{M_{\odot}\,yr^{-1}}}\right)^ {-2/7}\left(\frac{r_{*}}{2r_{\odot}}\right)^{12/7}r_{\odot}\,. \tag{34}\] \(R_{T}/r_{*}=7.2\) which is quite close to our \(R_{T}/r_{*}=10\) in the simulation 3. To represent this fiducial system, the length unit in our simulation \(R_{0}\) is 14.4 \(r_{\odot}\) or 0.067 au since \(R_{0}\sim R_{T}\). The time unit (\(1/\Omega_{0}\)) is thus 0.0039 years. If we equate \(\dot{M}=-0.005\) in our code unit with \(\dot{M}=10^{-8}\mathrm{M_{\odot}\,yr^{-1}}\), the mass unit is \(7.8\times 10^{-9}\mathrm{M_{\odot}}\). The surface density unit is then 15.5 g/cm\({}^{2}\). Thus, the disk surface density \(\sim 0.003\)\(R\) from \(R=1\) to \(R=10\) (Figure 9) is equivalent to a protoplanetary disk with the surface density of Footnote 3: For a star with weaker magnetic fields, \(r_{*}/R_{T}\) is larger. To scale our simulation for such a star, we could assume that the stellar surface is at the given \(r_{*}/R_{T}\) in the current simulation. \[\Sigma=0.7\times(R/\mathrm{au})\,\mathrm{g/cm^{2}}\quad at\quad r<0.7\;au\,. \tag{35}\] The increase of \(\Sigma\) with R is due to the fast decrease of \(\alpha\) with R. With this surface density, the total gas mass within R is \[M(R)=\int_{0}^{R}2\pi R\Sigma dR=0.055(R/au)^{3}M_{\oplus}\,. \tag{36}\] Assuming the dust-to-gas mass ratio is 1 to 100, the total dust mass is 100 times smaller. This low value of \(\Sigma\) is caused by the large \(\alpha\) value within the disk resulting from surface accretion. Figure 9 shows that \(\alpha_{int}\sim 1\) at \(R\sim\)3, which is scaled to \[\alpha_{int}\sim 0.1\times(R/au)^{-1.5}\,, \tag{37}\] in the MRI active region of the protoplanetary disk. Using \(\dot{M}=3\pi\nu\Sigma\), we can estimate that \(\Sigma\sim\)2 g/cm\({}^{2}\) at \(R\sim\)0.2 \(au\) assuming a more realistic \(h/r\sim\)0.05. This surface density is \(\sim\)10 times larger than Equation 35. But even so, the mass is orders of magnitude smaller than what would be required to explain the discovered exoplanets within 1 au. Furthermore, dust may evaporate in this region. Thus, exoplanets may not be able to form in the MRI active inner disk. On the other hand, they could form at the inner edge of the dead-zone and later migrate inwards. To estimate the location of the inner edge of the dead zone, it's necessary to calculate the temperature distribution within the disk. MRI becomes active when the disk temperature exceeds \(\sim\) 1000 K. An accurate estimate requires us to know how accretion energy is dissipated in the disk. Since we have little knowledge on this, we simply estimate the lower limit of the disk temperature using the irradiation equilibrium temperature. 
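The unit conversions quoted above follow from identifying the code length unit \(R_{0}\) with \(R_{T}\) and matching \(\dot{M}=0.005\) in code units to \(10^{-8}\,\rm M_{\odot}\,yr^{-1}\). The following short script is only a numerical cross-check of that arithmetic (with rounded cgs constants), not part of the simulation code.

```python
# Cross-check of the unit scaling: length, time, mass, and surface-density
# units derived from R_0 ~ R_T = 14.4 r_sun and Mdot_code = 0.005.
import numpy as np

G     = 6.674e-8          # cgs gravitational constant
M_sun = 1.989e33          # g
R_sun = 6.957e10          # cm
au    = 1.496e13          # cm
yr    = 3.156e7           # s

M_star = 0.5 * M_sun
R_0 = 14.4 * R_sun                       # length unit, identified with R_T
Omega_0 = np.sqrt(G * M_star / R_0**3)   # Keplerian angular frequency at R_0
t_unit = 1.0 / Omega_0                   # time unit

mdot_phys = 1e-8 * M_sun / yr            # 1e-8 M_sun/yr in g/s
mdot_code = 0.005                        # measured accretion rate, code units
m_unit = mdot_phys * t_unit / mdot_code  # mass unit in g
sigma_unit = m_unit / R_0**2             # surface-density unit in g/cm^2

print(f"R_0        = {R_0 / au:.3f} au")           # ~0.067 au
print(f"1/Omega_0  = {t_unit / yr:.4f} yr")        # ~0.0039 yr
print(f"mass unit  = {m_unit / M_sun:.2e} M_sun")  # ~7.8e-9 M_sun
print(f"Sigma unit = {sigma_unit:.1f} g/cm^2")     # ~15.5 g/cm^2
```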
Assuming that the disk absorbs a fraction (\(\epsilon\)) of the total stellar luminosity (\(L_{*}\)), the equilibrium temperature is then \[T_{irr}=394\left(\frac{R}{au}\right)^{-1/2}\left(\frac{\epsilon L_{*}}{L_{ \odot}}\right)^{1/4}K\,. \tag{38}\] Since MRI becomes active when \(T\gtrsim 1000\)K, the inner edge of the deadzone is 0.07 au if \(\epsilon L_{*}=0.2L_{\odot}\). On the other hand, if viscous heating is included, the inner deadzone edge can be 0.2 au (D'Alessio et al., 1998). For Herbig Ae-Be stars, this radius can be even larger, reaching to 1 au (Dullemond & Monnier, 2010). Within this radius where the disk couples efficiently with the stellar magnetic fields and maintains a low surface density, the formation of the exoplanets within 0.1 au (10 day period) through in-situ formation is challenging. Instead, these exoplanets are likely to form at the outer disks and migrate inwards. We can estimate the planet migration timescale in the MRI active disk using the derived disk surface density in Equation 35. The type I migration timescale (Baruteau et al., 2014) for a planet around a 0.5 \(M_{\odot}\) star is then \[t_{I,mig} =\Omega^{-1}h^{2}q^{-1}\left(\frac{\Sigma R^{2}}{M_{*}}\right)^{-1}\] \[=1.1\times 10^{10}\left(\frac{R}{0.1\;au}\right)^{-1.5}\left(\frac{q} {10^{-5}}\right)^{-1}\left(\frac{h}{0.05}\right)^{2}yr\,, \tag{39}\] where \(q\equiv M_{p}/M_{*}\). When this mass ratio is higher than \(\alpha^{1/2}h^{5/2}\)(Zhu et al., 2013), a gap will be induced in the disk and the planet undergoes Type II migration. The Type II migration rate is (Ivanov et al., 1999; Dempsey et al., 2020) \[t_{II,mig}=\tau_{visc}\frac{M_{p}}{\Sigma R^{2}}=\Omega^{-1}h^{-2}\alpha^{-1} q\left(\frac{\Sigma R^{2}}{M_{*}}\right)^{-1}\,. \tag{40}\] We could combine both Type I and Type II migration rates (Equations 39, 40), and incorporate the effects of the gap-opening planet mass into a single equation that represents the overall planet migration rate \[t_{mig}=\Omega^{-1}h^{2}q^{-1}\left(\frac{\Sigma R^{2}}{M_{*}}\right)^{-1}\left(1 +hK\right), \tag{41}\] where \(K\equiv q^{2}/(\alpha h^{5})\). When \(K\lesssim 1/h\), it reduces to the Type I rate, as shown in Figure 12 of Dempsey et al. (2020). Unlike Type I migration, the Type II migration timescale is longer for a more massive planet. Thus, a planet that can marginally induce gaps (\(hK\sim\)1) migrates fastest in the disk. With the high \(\alpha\) value in our simulations, Jupiter mass planets marginally induce gaps in the disk. Figure 22 shows the planet's migration timescale calculated with our disk structure from Equation 35. For this calculation, we ignore the fact that the planet will undergo stochastic migration due to MRI turbulence. The planet's stochastic migration in a disk that undergoes magnetospheric accretion will be presented in a future publication. Nevertheless, if we only consider Type I and Type II migration, the planet migration timescale is much longer than the disk's lifetime, except for giant planets at \(\sim\)1 au. Most planets cannot undergo disk migration within the inner MRI active disk, except early times when the disk's accretion rate and surface density is a lot higher. Planets are likely to stall or become trapped at the inner edge of the deadzone. Thus, it is the inner deadzone edge, instead of the magnetospheric truncation radius, that determines the planet's final position before the protoplanetary disk dissipates. This could have important implications for the distribution of exoplanets. 
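The combined migration timescale of Equation 41, evaluated with the scaled disk model of Equations 35 and 37, can be reproduced with a few lines; the sketch below assumes \(h=0.05\) and a 0.5 \(M_{\odot}\) star and reduces to the Type I rate of Equation 39 when \(hK\lesssim 1\). The example masses are illustrative only.

```python
# Sketch of the combined Type I / Type II migration timescale (Equation 41)
# with Sigma = 0.7 (R/au) g/cm^2 (Eq. 35) and alpha = 0.1 (R/au)^-1.5 (Eq. 37).
import numpy as np

G, M_sun, au, yr = 6.674e-8, 1.989e33, 1.496e13, 3.156e7
M_earth, M_jup = 5.972e27, 1.898e30
M_star, h = 0.5 * M_sun, 0.05

def t_mig_years(R_au, q):
    """Combined Type I/II migration timescale (yr) for mass ratio q = M_p/M_*."""
    R = R_au * au
    Omega = np.sqrt(G * M_star / R**3)
    Sigma = 0.7 * R_au                  # g/cm^2, Equation 35
    alpha = 0.1 * R_au**-1.5            # Equation 37
    K = q**2 / (alpha * h**5)           # gap-opening parameter
    t = (1.0 / Omega) * h**2 / q / (Sigma * R**2 / M_star) * (1.0 + h * K)
    return t / yr

# Illustrative masses: an Earth-mass and a Jupiter-mass planet at 0.5 au.
for M_p, label in [(M_earth, "Earth-mass"), (M_jup, "Jupiter-mass")]:
    q = M_p / M_star
    print(f"{label}: t_mig(0.5 au) ~ {t_mig_years(0.5, q):.1e} yr")
```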
On the other hand, if a planet manages to migrate through the inner MRI turbulent disk (via either disk migration or planet-planet scattering) and gets into the magnetosphere, the planet will be subject to strong aerodynamic drag which accelerates its migration to the central star. For a Keplerian orbiting object in a Keplerian rotating disk, the relative motion between the planet and the local disk flow is small, and the interactions between the planet and the disk are mostly through resonance interactions (e.g. Lindblad and Corotation resonances). However, if the relative motion between the planet and background flow becomes significant, dynamical friction and aerodynamic drag start to play a more important role. One example where these effects manifest is the interaction between an inclined planet and a Keplerian rotating disk (Rein, 2012; Arzamasskiy et al., 2018). As the material in the magnetosphere corotates with the star, it rotates significantly slower than the Keplerian speed. In our simulation having a non-rotating star, the material inside the magnetosphere has nearly zero azimuthal velocity (Figure 6). Thus, the relative speed between the planet and the magnetosphere is the local Keplerian speed. As a result, the planet within the magnetosphere experiences a strong headwind and migrates inwards. Although both aerodynamic drag and dynamical friction could be important when the relative motion between the object and the background flow is nonzero, aerodynamic drag plays a more important role for a planet in the magnetosphere. The ratio between the aerodynamic drag force and the dynamical friction force is roughly the square of the ratio between the object's size and its Bondi radius (\(R_{Bondi}=GM_{p}/v_{rel}^{2}\)) (e.g., Rein, 2012; Wang et al., 2023). The Bondi radius, for a planet that is within 10 solar radii of the star, is at least one order of magnitude smaller than the planet size assuming \(v_{rel}\sim v_{K}\). If we only consider the aerodynamic drag force \[\mathbf{f_{aero}}=-\pi s_{p}^{2}\rho v_{rel}\mathbf{v_{rel}}\,, \tag{42}\] where \(s_{p}\) is the planet's radius, we can estimate the migration timescale \[t_{mig}=\frac{M_{p}v_{rel}}{f_{aero}}=\frac{4s_{p}\rho_{p}}{3v_{rel}\rho}\,, \tag{43}\] where \(\rho_{p}\) is the material density of the planet. The background density within the magnetosphere can be estimated by assuming that the accretion is from the spherical infall at the free-fall speed \[\rho=\frac{\dot{M}}{4\pi R^{2}v_{ff}}\,. \tag{44}\] We verify that, at a distance of 5 stellar radii, this density is only a factor of 2 smaller than the midplane density found in our simulation. The presence of intruding filaments resulting from interchange instability ensures that the gas density at the midplane within the magnetosphere remains non-negligible and approaches values estimated from the spherical infall. Figure 22: The migration timescale for a planet at the inner disk. We assume that the magnetospheric truncation radius is at 0.1 au. Outside this radius, disk-driven migration (Equation 41) is important. We adopt the disk’s surface density from Equation 35, \(h=0.05\), and \(\alpha\) from Equation 37. Within 0.1 au, aerodynamic drag is important (Equation 43). The solid and dashed white contours label the migration timescale of \(10^{7}\) and \(10^{8}\) years. The central star is a 0.5 \(M_{\odot}\) star. 
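For the magnetosphere, the aerodynamic-drag estimate of Equations 42-44 is similarly simple to evaluate. The sketch below takes \(v_{rel}\sim v_{K}\) and the spherical free-fall density; the planet radii and bulk densities used in the example call are assumed values for illustration.

```python
# Sketch of the aerodynamic-drag migration timescale inside the magnetosphere
# (Equations 42-44), with rho from spherical free-fall and v_rel ~ v_K.
import numpy as np

G, M_sun, au, yr = 6.674e-8, 1.989e33, 1.496e13, 3.156e7
M_star = 0.5 * M_sun
mdot = 1e-8 * M_sun / yr                  # disk accretion rate in g/s

def t_drag_years(R_cm, s_p_cm, rho_p):
    """Aerodynamic-drag migration timescale (yr) at distance R from the star."""
    v_ff = np.sqrt(2.0 * G * M_star / R_cm)       # free-fall speed
    v_rel = np.sqrt(G * M_star / R_cm)            # ~ local Keplerian speed
    rho = mdot / (4.0 * np.pi * R_cm**2 * v_ff)   # Equation 44
    return 4.0 * s_p_cm * rho_p / (3.0 * v_rel * rho) / yr

# Illustrative call: Earth- and Jupiter-sized planets at 0.05 au, with
# assumed bulk densities of 5.5 and 1.3 g/cm^3 respectively.
for s_p, rho_p, label in [(6.4e8, 5.5, "Earth-sized"),
                          (7.1e9, 1.3, "Jupiter-sized")]:
    t = t_drag_years(0.05 * au, s_p, rho_p)
    print(f"{label}: t_mig(0.05 au) ~ {t:.1e} yr")
```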
Again, using the typical stellar parameters and the disk accretion rate as before, we calculate this timescale within 0.1 au, shown in Figure 22. The dashed contours label the parameter space for planets with a migration timescale of \(10^{8}\) years. Since both the disk-driven and aerodynamic-driven migration timescales are inversely proportional to the disk's surface density, these contours can also be interpreted as the parameter space of planets with a migration timescale of \(10^{7}\) years in a \(M=10^{-7}M_{\odot}\)yr\({}^{-1}\) disk. Although the aerodynamic drag seems to accelerate the planet's migration within the magnetosphere, the migration timescale is still longer than the disk's lifetime for most parameter spaces. Thus, we would expect that any planet that ends in this region will stay in this region. However, their orbital configurations (e.g. eccentricity and inclination) might evolve due to the planet-disk or planet-magnetosphere interactions. Finally, we caution that we have ignored any electromagnetic effect (including dipole-dipole and dipole-conductor interactions, Bromley & Kenyon, 2022) on the planet migration. ## 6 Conclusion We have carried out high-resolution long-timescale MHD simulations to study magnetospheric accretion onto a non-rotating star. Adopting a Cartesian grid with mesh-refinement allows us to resolve both the disk and the polar accretion region equally well and reduces the computational cost significantly. We run the simulation for 68 orbits at the magnetospheric truncation radius (\(R_{T}\)), which is equivalent to 2157 Keplerian orbits at the stellar surface. A steady accretion is reached within \(R\sim\)6 \(R_{T}\). Figure 23 summarizes some of our key results. Surrounding the star, the flow within the magnetosphere is highly dynamic and filamentary. Within the magnetospheric truncation radius \(R_{T}\), the filamentary flow is in the force-free limit, moving along the magnetic field lines. The formation of these filamentary structures is driven by the interchange instability at \(R_{T}\) where the density increases with radial distance. The developed filaments ("fingers") could penetrate deep into the magnetosphere. The density within the filaments could be more than 3 orders of magnitude higher than the background density. As these filaments move in, they are lifted from the midplane and move along the dipole magnetic field lines. Eventually, most material accretes at 30\({}^{\circ}\) from the magnetic poles and falls to the star at close to the free-fall speed. More than 50% (90%) of accretion occurs within accretion columns covering 5% (20%) of the stellar surface area. Thus, we consider the filling factor to be \(\sim\)5-20%. Multiple accretion columns could develop simultaneously, forming an onion like structure with multiple isolated layers. Despite the filamentary structures, the total accretion rate onto the star is relatively steady with 23% standard deviation. Material falling onto the star has negative azimuthal velocity, since it follows the stellar magnetic field lines that are pitched forward by disk dragging. The stress from the magnetic fields spins up the star. The ratio between the spin-up torque and \(\dot{M}(GM_{*}R_{T})^{1/2}\) is \(\sim\) 0.8, independent of the long-term accretion rate change. This constant torque will affect the disk's long term evolution. 
Many properties of the simulated flow within the magnetosphere are consistent with observations, including hot spots at high altitudes, free-fall velocities, low filling factors, and multiple accretion layers. On the other hand, recent observations by Thanathibodee et al. (2023) find that low accretors with \(\dot{M}<2\times 10^{-10}\)M\({}_{\odot}\) yr\({}^{-1}\) have magnetospheres with sizes \(\sim\) 2 \(-\) 5 \(R_{*}\), which is significantly smaller than the theoretical prediction (\(\sim\)7 \(R_{*}\) from Equation 34). Possible explanations include a weaker dipole magnetic field (Long et al., 2008), a quadrupole/multipole magnetic field, or a more complicated thermal structure of the magnetosphere. Future theoretical and observational studies in these directions are desired. Outside the magnetosphere, we have the highly magnetized disk. Although, at the disk midplane, the transition radius between the magnetosphere and the disk agrees well with the traditional magnetospheric truncation radius, these two regions are less distinct above the midplane. The disk surface accretion smoothly joins the magnetospheric accretion. If we use the Alfven surface or the \(E_{k}/E_{m}\sim 1\) surface to separate these two regions, the transition radius becomes larger when it is higher up in the disk. The azimuthal velocity and azimuthal magnetic field also reverse their signs at the transition radius. The disk region outside \(R_{T}\) is also highly variable due to the strong net vertical magnetic fields. Magnetic reconnection and interchange instability could occasionally reorganize magnetic fields around the truncation radius, leading to a large-scale density void that orbits around the central star at sub-Keplerian speed, similar to the structures in MADs around black holes. It takes \(\sim\)5 \(T_{0}\) for the density void to finish one orbit. The density void extends all the way to the stellar surface, leading to hot spots that orbit at the same frequency as the void. The periodogram of disk accretion also shows a peak at \(\sim\)20% of the Keplerian frequency at \(R_{T}\), which corresponds to the orbital motion of the hot spot and the lifetime of the bubble. We have also observed outflows that originate from the bubble, but the mass loss rate is quite low. Overall, both smaller-scale filaments and larger-scale magnetic bubbles are characteristics of a disk that is magnetically disrupted by strong fields. Further away into the disk, a magnetically supported surface region plays a crucial role in disk accretion, which is distinctly different from the traditional model. Keplerian differential rotation stretches the radial component of the dipole magnetic fields to generate strong azimuthal fields, which lead to a low-density region up to \(z\sim\)R. The resulting strong \(R-\phi\) stress makes this surface region accrete inwards at supersonic speeds, which is similar to the surface accretion of an accretion disk with net vertical magnetic fields. However, little disk wind is launched above this region, since the magnetic fields there are connected to the non-rotating star instead of the Keplerian rotating disk. Both the net vertical magnetic fields and the disk \(\alpha\) decrease sharply with radii, which leads to a disk with a surface density proportional to \(R\) within \(R\sim 10R_{T}\). 
After scaling our simulations to protostars with \(\dot{M}=10^{-8}\rm M_{\odot}\,yr^{-1}\), we find that the inner MRI active disk has a very low surface density (\(<\)1 g cm\({}^{-2}\)) due to the efficient surface accretion. The timescale for Type-I/II planet migration is longer than the disk lifetime, suggesting that planets are not able to migrate in the inner MRI active region and are likely to be stalled at the inner edge of the deadzone. If the planets could move into the magnetosphere, aerodynamic drag can accelerate the planet's migration, although the migration timescale is still long. Finally, stellar fields are efficiently transported into the disk region, different from the "X-wind" type model. All simulations are carried out using the NASA Pleiades supercomputer. Z. Z. acknowledges support from NASA award 80NSSC22K1413. J.M.S. acknowledges support from the Schmidt Futures Fund to the IAS. ZZ thanks Bart Ripperda, Catherine Dougados, Dong Lai, Yihan Wang, and Douglas N.C. Lin for discussions and suggestions. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2309.14760
Program Repair with Minimal Edits Using CodeT5
Programmers often struggle to identify and fix bugs in their programs. In recent years, many language models (LMs) have been proposed to fix erroneous programs and support error recovery. However, the LMs tend to generate solutions that differ from the original input programs. This leads to potential comprehension difficulties for users. In this paper, we propose an approach to suggest a correct program with minimal repair edits using CodeT5. We fine-tune a pre-trained CodeT5 on code pairs of wrong and correct programs and evaluate its performance with several baseline models. The experimental results show that the fine-tuned CodeT5 achieves a pass@100 of 91.95% and an average edit distance of the most similar correct program of 6.84, which indicates that at least one correct program can be suggested by generating 100 candidate programs. We demonstrate the effectiveness of LMs in suggesting program repair with minimal edits for solving introductory programming problems.
Atsushi Shirafuji, Md. Mostafizer Rahman, Md Faizul Ibne Amin, Yutaka Watanobe
2023-09-26T08:45:05Z
http://arxiv.org/abs/2309.14760v1
# Program Repair with Minimal Edits Using CodeT5 ###### Abstract Programmers often struggle to identify and fix bugs in their programs. In recent years, many language models (LMs) have been proposed to fix erroneous programs and support error recovery. However, the LMs tend to generate solutions that differ from the original input programs. This leads to potential comprehension difficulties for users. In this paper, we propose an approach to suggest a correct program with minimal repair edits using CodeT5. We fine-tune a pre-trained CodeT5 on code pairs of wrong and correct programs and evaluate its performance with several baseline models. The experimental results show that the fine-tuned CodeT5 achieves a pass@100 of 91.95% and an average edit distance of the most similar correct program of 6.84, which indicates that at least one correct program can be suggested by generating 100 candidate programs. We demonstrate the effectiveness of LMs in suggesting program repair with minimal edits for solving introductory programming problems. program repair, programming problems, learning support, computer science education. This work was supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Number JP23H03508. ## I Introduction Most of the time, programmers edit source code to fix bugs, add new features, or change existing features. Recent studies have shown that these edits are repetitive [1, 2], and manually repeating the edits can be error-prone and time-consuming [3]. Many language models (LMs) have been proposed to assist debugging by identifying errors and suggesting fixes in source code. Suggesting correct code for wrong code using LMs can significantly reduce these repetitive edits and reduce the effort programmers spend to correct the wrong code. Several deep learning models, such as recurrent neural network (RNN), long short-term memory (LSTM), bidirectional LSTM (BiLSTM), LSTM with an attention mechanism (LSTMAttn), and Transformer, are used to correct erroneous source code and provide various learning supports, such as predicting the next tokens [4, 5], fixing bugs in wrong code [6, 7, 8, 9], and improving the performance and readability [10, 11]. In particular, LMs based on the Transformer architecture [12] have demonstrated exceptional performance across various tasks, not only in texts [13, 14] but also in code [15, 16, 17, 18, 19, 20, 21], images [23], and videos [24]. However, the code generated by LMs often deviates from user expectations, necessitating additional editing effort to rectify the code. As illustrated in Figure 1, although the top-right program is a more popular solution than the bottom-right program, it requires larger edits from the wrong program. Since this can confuse the user in understanding the suggested repair, suggesting a program with minimal edits is desirable. Despite the importance of whether the generated programs are helpful for learners, to the best of our knowledge, no previous work has been undertaken to investigate syntactic similarity and functional correctness simultaneously. In this work, to mitigate the learners' burden of finding the bugs and support more effective learning of programming, we propose using an LM trained on source code, CodeT5 [17], to suggest a correct program with minimal edits to repair the given wrong program. By suggesting the correct program that is more aligned with the user-written program, we can avoid confusing users due to the large difference in the suggested program from the original program. 
We fine-tune a pre-trained CodeT5 on a dataset of code pairs consisting of wrong and correct programs collected from Aizu Online Judge (AOJ) [25]. We evaluate the performance of the fine-tuned CodeT5 compared with the following baseline models. * Naive copy model to copy the input program as the output program. * Naive retrieval model to retrieve the most similar program from the training data. * Sequence-to-sequence (Seq2Seq) model consisting of a BiLSTM and an LSTMAttn. Our experimental results demonstrate that the fine-tuned CodeT5 achieves a 91.95% on pass@100. It indicates that at least one correct program can be suggested by generating 100 candidate programs for 91.95% of the wrong programs. Moreover, the edit distance of the most similar correctly generated program is 6.84 on average, whereas the human-crafted repair Fig. 1: **Motivating example of program repair.** has 10.76 on average. We show that the fine-tuned CodeT5 can generate correct programs for wrong programs with minimal edits to assist in solving introductory programming problems. The contributions of this work are as follows. * We demonstrate that fine-tuning CodeT5 on code pairs of (wrong, correct) programs performs well on program repair. * We show that the fine-tuned CodeT5 can generate repaired programs with minimal edits compared to human-crafted repair. ## II Related Work Automated program repair (APR) has been a subject of increasing interest with the growth of software systems, aiming to reduce the time and effort spent on debugging by programmers. Many studies have utilized deep learning techniques, such as RNNs and Transformers, considering the task of converting the wrong program into a correct one, similar to the neural machine translation task. As the earlier works leveraged RNNs for program repair, using Seq2Seq based on RNN [26, 27, 28], LSTM [8, 29], BiLSTM [7, 30], and LSTMAttn [31], has been proposed. For other attempts, Hoppity[32] used the graph neural network to capture the graph structure, and CoCoNuT[33] used the convolutional neural network instead of an LSTM to model source code at different granularity levels. As one of the works considering the edit distance of generated programs, Gulwani et al. [34] proposed Clara using the syntactic difference (tree-edit-distance) as the cost function to find the program repair from the existing correct student solutions for introductory programming education. Similarly, Lu et al. [35] proposed a fast and accurate program repair tool, FAPR, that outperformed Clara in suggesting the correct and smaller programs according to the qualitative evaluation. As an application, Parihar et al. [36] applied program repair, enabling automatic grading of incorrect submissions that contain syntax errors using test cases and awarding partial marks for them. In recent years, Transformer-based approaches have performed remarkably well and are now a dominant model. Especially, Transformer-based [37, 38], BERT [13]-based [38, 39], and T5 [40]-based [6, 17] models are proposed. CodeT5 [17] has demonstrated the capability in program repair on the CodeXGLUE benchmark [38]. More recently, using large language models (LLMs) trained on source code has shown the capability in APR. Several works [41, 42, 43, 9] proposed APR systems leveraging LLMs such as Codex [15]. Not only fixing syntactic or semantic bugs, but Codex has also shown the ability to fix security bugs [44], improving time performance [10] and code readability [11]. 
Our approach shares similarities with the works mentioned above. However, it differs in focusing on minimal edits for program repair to better align with the user-written programs. ## III Experiments ### _Proposed Approach_ The proposed approach is illustrated in Figure 2. A user inputs a wrong program, failing to solve a programming problem. After an LM generates multiple candidate programs, a judge system validates the functional correctness of the generated programs. For the correctly generated programs, the most similar program, i.e., the program with the smallest edit distance with the input program, is suggested to the user. This process allows the user to obtain _a repaired program with minimal edits_. Note that, in this work, we also use naive models instead of the LM for comparison. In addition, in the experiments, we generate 100 candidate programs. Increasing the number of candidate programs enhances the accuracy of suggesting correct programs as it can search for better programs. However, the system response time must be increased as the number of candidate programs increases since LMs require much computational time for inference. ### _Dataset_ We use Python 3 programs submitted on AOJ [45, 25] for the dataset to train and evaluate the models. We target a set of programming problems named _Introduction to Programming I_ (ITP1)1, an introductory course with 44 programming problems. The course is designed for introduction to programming and ranges from requiring standard input and output to class and method definitions. Footnote 1: [https://onlinejudge.u-aizn.ac.jp/courses/lesson/2/ITP1/all](https://onlinejudge.u-aizn.ac.jp/courses/lesson/2/ITP1/all). For the program repair task, we collect code pairs of wrong and correct programs from AOJ, as shown in Figure 3. If a wrong program is submitted before the correct program, we consider it _an attempt_ and make a code pair (wrong, correct) with the correct program. For program consistency, each code pair consists of the submissions from the same user. Fig. 3: **Illustration of collecting code pairs consisting of wrong and correct programs from the same user.**_AC_ indicates correct programs, and _WA_, _RE_, _TLE_, and _MLE_ indicate wrong programs, such as wrong answer, runtime error, time limit exceeded, and memory limit exceeded, respectively. Fig. 2: **Illustration of the proposed approach.** As preprocessing, we only use the code whose token-based length is more than 0 and less than 256. In addition, we remove duplicated code pairs to avoid overfitting and cheating in training and evaluating models. After shuffling the collected code pairs, we split the code pairs into 90% for training or fine-tuning, 5% for validation, and 5% for testing. Table I shows the number of code pairs and their average edit distance between wrong and correct programs. ### _Evaluation Metrics_ #### Iii-C1 Pass Rate Pass rate (also known as success rate) is a metric showing the percentage of problems solved by generating \(k\) programs for each problem [15]. A problem is considered solved if any generated program solves the problem (i.e., passes all test cases). However, this work uses this metric to show _the percentage of wrong programs that are repaired by generating \(k\) programs for each wrong program._ Therefore, a wrong program is considered repaired if any generated program solves the problem. In this work, we use an unbiased estimator of pass rate for programs, pass@\(k\), inspired by Chen et al. [15]. 
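The suggestion step of the pipeline in Figure 2 can be summarized in a few lines of Python. In the sketch below, `generate_candidates` and `passes_all_tests` are placeholders for the fine-tuned model and the judge system (they are not real APIs), and the Levenshtein routine is a plain dynamic-programming implementation of the character-level edit distance used throughout the paper.

```python
# Minimal sketch of the suggestion pipeline in Figure 2: sample candidate
# programs for the user's wrong program, keep those that the judge accepts,
# and return the accepted candidate with the smallest edit distance.
from typing import Callable, List, Optional

def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def suggest_repair(wrong_program: str,
                   generate_candidates: Callable[[str, int], List[str]],
                   passes_all_tests: Callable[[str], bool],
                   n_candidates: int = 100) -> Optional[str]:
    """Suggest the correct candidate with minimal edits, or None if none pass."""
    candidates = generate_candidates(wrong_program, n_candidates)
    accepted = [c for c in candidates if passes_all_tests(c)]
    if not accepted:
        return None
    return min(accepted, key=lambda c: levenshtein(wrong_program, c))
```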
Pass@\(k\) is denoted as Formula 1, where \(n\geq k\) is the number of samples and \(c\leq n\) is the number of correct samples. \[\text{Pass@}k:=\underset{\text{Programs}}{\mathbb{E}}\left[1-\frac{\binom{n-c}{ k}}{\binom{n}{k}}\right] \tag{1}\] Whether the generated program solved the given programming problem (i.e., functional correctness) is validated by executing it using hidden test cases. More detail about the evaluation of generated programs is described in Section III-E. We report pass@\(k\) at \(k\in\{1,10,100\}\) where \(n=100\) samples are generated for each wrong program. #### Iii-C2 Compilability Compilability is the syntactic correctness of programs, showing whether the program passes the compilation. It does not account for semantic correctness. Therefore, a compilable program can be a wrong program. Inspired by the pass@\(k\), we define compilable@\(k\) as Formula 2, where \(n\geq k\) is the number of samples and \(c\leq n\) is the number of compilable samples. \[\text{Compilable@}k:=\underset{\text{Programs}}{\mathbb{E}}\left[1-\frac{\binom{ n-c}{k}}{\binom{n}{k}}\right] \tag{2}\] We report compilable@\(k\) at \(k\in\{1,10,100\}\) where \(n=100\). #### Iii-C3 Bleu To evaluate the syntactic similarity of generated programs against the expected correct programs, we report smoothed BLEU-4 scores [46, 47]. Although several works reported that BLEU is not a good metric in code-related tasks, as it does not account for functional correctness [15, 48, 49], we use this metric to show _how much the generated programs syntactically match the expected programs_ for reference. In this work, BLEU scores are computed based on tokens, which are tokenized using the pre-trained byte-pair encoding (BPE) tokenizer of CodeT5, codet5-base2, from the tokenizers3 library. To compute the BLEU scores, we employ the evaluate4 library. Footnote 2: [https://huggingface.co/Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) Footnote 3: [https://github.com/huggingface/tokenizers](https://github.com/huggingface/tokenizers). Footnote 4: [https://github.com/huggingface/evaluate](https://github.com/huggingface/evaluate). #### Iii-C4 Exact Match Exact Match is a metric that measures the percentage of generated programs that exactly match the expected target programs. This metric provides insights into the model's ability to replicate the exact solution, which can be particularly useful in certain use cases where the exact replication of the solution is required. However, it is noteworthy that a low Exact Match score does not necessarily imply poor performance, as the model might generate functionally correct but syntactically different programs. Therefore, while Exact Match provides valuable information, it should be interpreted with other metrics that measure functional correctness and syntactic similarity, such as pass@\(k\), compilable@\(k\), and BLEU. We report the ratio of the Exact Match. #### Iii-C5 Edit Distance Edit distance (also known as Levenshtein distance [50]) is used to measure the dissimilarity between the source and the generated programs. It quantifies the minimum number of character-level changes (insertions, deletions, and substitutions) required to transform the source program into the generated one. This metric indicates the magnitude of alterations the model makes to obtain the correct program, as well as the number of edits required to repair the wrong program. 
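Returning to Formulas 1 and 2, both can be computed with the numerically stable product form used by Chen et al. [15], which avoids evaluating large binomial coefficients. The sketch below is one possible implementation; the example counts at the end are made up for illustration.

```python
# Unbiased pass@k / compilable@k estimator (Formulas 1 and 2) in the
# numerically stable product form.
import numpy as np

def estimate_at_k(n: int, c: int, k: int) -> float:
    """Per-program estimate: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

def pass_at_k(correct_counts, n: int = 100, k: int = 10) -> float:
    """Average the per-program estimates over all wrong programs."""
    return float(np.mean([estimate_at_k(n, c, k) for c in correct_counts]))

# Illustrative counts of correct samples (out of n = 100) per wrong program.
print(pass_at_k([0, 1, 3, 50, 100], n=100, k=10))
```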
In addition to the character-based edit distance, we also report two specific edit distances: correct and top-1. The correct edit distance is computed between the source and correctly generated programs, indicating the alternations required to correct the wrong program, whereas the default edit distance includes edits from wrong to wrong programs. The top-1 edit distance, on the other hand, is calculated between the source and the most syntactically similar correct program, providing insight into the best-case scenario of successful program repair. Unlike other metrics, the edit distance does not involve the target program; the calculation is performed solely using the source and generated programs. This makes it a helpful metric in understanding the model's strategy in problem-solving - whether it heavily modifies the source program or makes minimal edits. **This work primarily focuses on minimizing the edit distance metrics** while keeping the functional correctness of the generated programs. ### _Models_ #### Iii-D1 Naive For the simplest baseline models to compare with the fine-tuned LM, we employ two types of naive models: Naive Copy and Naive Retrieval. \begin{table} \begin{tabular}{l c c} \hline \hline & \#Code Pairs & Avg. Edit Distance (Std) \\ \hline Train & 52,526 (90\%) & 10.72 (13.68) \\ Valid & 2,918 (5\%) & 10.95 (14.39) \\ Test & 2,918 (5\%) & 11.34 (17.14) \\ \hline Total & 58,362 & 10.76 (13.91) \\ \hline \hline \end{tabular} \end{table} TABLE I: The number of code pairs in each set. Naive CopyWe refer to the model Naive Copy as the model to copy the input program as the output program. Since the input program is always wrong, the output program is always wrong (i.e., pass@\(k\) is always 0% at any \(k\)). However, note that the output program can be compilable (i.e., compilable@\(k\) can be greater than 0%) since the wrong program includes the wrong answer, which passed the compilation, but the output is wrong. Although Naive Copy does not repair programs, it can be a base comparison from the perspective of BLEU scores, as it constantly generates a similar program to the target program. Naive RetrievalWe refer to the model Naive Retrieval as the model to retrieve the most similar program from the training data in each programming problem using linear search. The program with the shortest edit distance in the training data is considered the most similar program. Since Naive Retrieval retrieves a correct program from the training data, programs generated by this model are ensured to be correct (i.e., pass@\(k\) and compilable@\(k\) are always 100% at any \(k\)). The key aspect of comparing with this model is the edit distance. This model can result in 100% correctness but does not necessarily generate the most helpful program. #### Iii-B2 Seq2Seq For the baseline model based on LMs, we also use an LSTM-based Seq2Seq model. A Seq2Seq model is composed of an encoder and a decoder parts. The encoder utilizes a BiLSTM to parse the input sequences and extract their features. The decoder, on the other hand, employs an LSTMAttn. The attention mechanism allows the decoder access to all parts of the input sequences. The total number of model parameters is 12.4M. We employ fairseq5[51] for model training. Footnote 5: [https://github.com/facebookresearch/fairseq](https://github.com/facebookresearch/fairseq). #### Iii-B3 CodeT5 CodeT5[17] is a Transformer-based [12] LM specifically tailored for source code. 
It is a variant of the T5 [40] model, which is designed to handle any text-to-text conversion task. CodeT5 has demonstrated strong performance on the code refinement (program repair) task from the CodeXGLUE benchmark [38] after fine-tuning. We use the codet5-base6 model, which has 220M parameters, and fine-tune it on our dataset. We use the transformers7 library to load the model and conduct the fine-tuning. Footnote 6: [https://huggingface.co/Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) Footnote 7: [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers). ### _Environment_ For the evaluation of program correctness, generated programs are executed on an isolated judge system to validate their functional correctness. The judge system uses the hidden test cases provided by AOJ for each programming problem. A program is judged correct if it passes all hidden test cases and incorrect otherwise. Experiments for the Seq2Seq and CodeT5 models (especially training and inference) are conducted in a GPU environment with one NVIDIA A100 40GB GPU. Experiments for the naive models and evaluations by the judge system are conducted in a CPU environment. ## IV Results ### _Training Results_ For training and fine-tuning, we adopt an early-stopping strategy to avoid overfitting, with the patience for BLEU set to 5: training or fine-tuning stops when the validation BLEU does not improve for 5 epochs. Figure 4 shows the BLEU scores on the validation set throughout training or fine-tuning. Training takes 21.0 hours for 21 epochs for the Seq2Seq model, and fine-tuning takes 11.5 hours for 11 epochs for the CodeT5 model. The best validation BLEU is 88.84 at epoch 16 for Seq2Seq and 94.68 at epoch 6 for CodeT5. Fig. 4: **Valid BLEU throughout training/fine-tuning.** The best BLEU score for each model is marked with a red point. ### _Evaluation Results_ Table II shows the evaluation results on the test set. For sampling with the Seq2Seq and CodeT5 models, we generate \(n=100\) samples for each code pair, with the sampling temperature set to \(\mathcal{T}=0.7\) and the maximum number of tokens set to 256. Note that pass@\(k\) of Naive Copy is guaranteed to be 0% since it copies the wrong program, whereas its compilable@\(k\) can be greater than 0% because the input may be _wrong but compilable_, e.g., a program that compiles but produces an incorrect output. Similarly, pass@\(k\) and compilable@\(k\) of Naive Retrieval are guaranteed to be 100% since Naive Retrieval retrieves a correct program from the training data. Since the naive models copy or retrieve only one program, we report their pass@\(k\) and compilable@\(k\) only at \(k=1\). In addition, since Naive Copy cannot generate any correct programs, the correct and top-1 edit distances are undefined and the corresponding cells are marked as not available. The fine-tuned CodeT5 model performs best on pass@\(k\) and compilable@\(k\) at every \(k\), compared with the Seq2Seq model. The pass@100 of 91.95% indicates that the fine-tuned CodeT5 can generate at least one correct program for 91.95% of the wrong programs when generating 100 candidates, whereas Seq2Seq can do so for only 62.58% of the programs. In addition, CodeT5 performs best on BLEU and Exact Match compared with all baseline models.
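As a concrete reference for the sampling setup described above, the following is a minimal sketch using the transformers library. Loading `Salesforce/codet5-base` here is only a stand-in; in practice one would load the fine-tuned checkpoint, and the 100 samples may need to be drawn in smaller batches to fit GPU memory.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Stand-in for the fine-tuned checkpoint path (assumption: base model loaded here).
model_name = "Salesforce/codet5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).eval()

def sample_candidates(wrong_program: str, n: int = 100) -> list:
    """Sample n candidate repairs with temperature 0.7 and up to 256 tokens."""
    inputs = tokenizer(wrong_program, return_tensors="pt",
                       truncation=True, max_length=256)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,
            temperature=0.7,
            max_length=256,
            num_return_sequences=n,
        )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)
```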
Although the higher BLEU and Exact Match scores do not necessarily indicate better generation of correct programs, they do show that CodeT5 reproduces the target programs more faithfully. For the edit distance, Seq2Seq and CodeT5 achieve 13.36 (\(\pm\) 25.27) and 8.54 (\(\pm\) 11.70), respectively. Since the edit distance of the collected code pairs is 10.76 (\(\pm\) 13.91), as shown in Table I, CodeT5 repairs wrong programs with fewer edits than the users' own corrections, whereas Seq2Seq requires more. ## V Discussion **Naive Copy achieves a high BLEU score:** Naive Copy achieves a BLEU score of 90.80, outperforming the 88.67 of Seq2Seq, although its Exact Match is 0.00%. This is because, as the edit distance of the collected code pairs is small (Table I), the program repair task does not require many edits, i.e., there is already much token overlap between wrong and correct programs. Therefore, the BLEU score, which is computed from the degree of token matching between wrong and correct programs, tends to be high. Although Naive Copy cannot repair programs at all, it achieves a high BLEU score due to the nature of the BLEU computation. However, the fact that Seq2Seq scores much lower than Naive Copy on BLEU indicates that Seq2Seq makes unnecessary edits that move the source program further away from the target program. **Naive Retrieval achieves 100% in pass@1:** As mentioned above, pass@1 and compilable@1 of Naive Retrieval are guaranteed to be 100% since it retrieves a correct program from the training data. However, its edit distance is 37.50 (\(\pm\) 51.78), more than three times the 10.76 (\(\pm\) 13.91) of the collected code pairs. This exposes a problem of Naive Retrieval: it makes extensive edits, e.g., some input programs are converted into completely different programs when no similar program exists in the training data. Nevertheless, even though the edit distance can be large, Naive Retrieval is still helpful for suggesting a correct program when LMs, such as CodeT5, fail to generate one. Therefore, Naive Retrieval can be used in a hybrid with machine learning models. **Edit distance of correct programs is shorter in Seq2Seq than in CodeT5:** Seq2Seq achieves 7.10 (\(\pm\) 8.16) on the edit distance of correctly generated programs, outperforming CodeT5 at 8.31 (\(\pm\) 10.28). From this result, Seq2Seq seems better at generating correct programs with shorter edits than CodeT5. However, this result is strongly affected by a selection bias: Seq2Seq fails to generate correct programs for wrong programs that require longer edits, so it does not indicate that Seq2Seq is more capable than CodeT5. As shown in Figure 5, the edit distance of the programs generated by Seq2Seq (Figure 5(a)) is much larger than that of CodeT5 (Figure 5(b)). In the figure, points further to the right correspond to programs that require longer edits, and points higher up correspond to generated programs with longer edits. Therefore, Figure 5 indicates that (1) incorrectly generated programs have longer edits in Seq2Seq, and (2) Seq2Seq produces more incorrect programs for inputs that require longer edits. In addition, Figure 6 shows that CodeT5 can generate correct programs for inputs requiring longer edits, whereas Seq2Seq fails.
\begin{table} \begin{tabular}{l|c c c|c c c|c|c c|c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Pass@_k_} & \multicolumn{3}{c|}{Compilable@_k_} & \multirow{2}{*}{BLEU} & Exact & \multicolumn{3}{c}{Edit Distance (Std)} \\ & \(k=1\) & \(k=10\) & \(k=100\) & \(k=1\) & \(k=10\) & \(k=100\) & & Match & All & Correct & Top-1 \\ \hline \multicolumn{10}{l}{**Naive Models**} \\ Copy & 0.00\% & — & — & 3.32\% & — & — & 90.80 & 0.00\% & 0.00 (0.00) & — & — \\ Retrieval & 100.00\% & — & — & 100.00\% & — & — & 66.25 & 8.46\% & 37.50 (51.78) & 37.50 (51.78) \\ \hline \multicolumn{10}{l}{**Language Models**} \\ Seq2Seq & 38.36\% & 52.43\% & 62.58\% & 74.51\% & 89.42\% & 95.24\% & 88.67 & 24.35\% & 13.36 (25.27) & **7.10 (8.16)** & 7.22 (9.32) \\ CodeT5 & **72.52\%** & **88.34\%** & **91.95\%** & **97.78\%** & **99.49\%** & **99.76\%** & **99.64** & **44.48\%** & **8.54 (11.70)** & 8.31 (10.28) & **6.84 (8.69)** \\ \hline \hline \end{tabular} \end{table} TABLE II: **Results evaluated on the test set.** Naive Copy copies the input data as the output data, which is ensured to be incorrect but can be compilable. Naive Retrieval retrieves the most similar program from the training data, which is ensured to be correct. Seq2Seq and CodeT5 models generate 100 samples for each code pair. _Correct_ indicates the average edit distance between input and correctly generated programs, and _Top-1_ indicates the average edit distance between input and the most similar correctly generated programs. The best score in LM-based models for each metric is represented in bold. Fig. 5: **Edit distance of original vs. generated programs.** The code pair is considered _correct_ if at least one generated program is correct. ## VI Conclusion In this paper, we propose using a language model trained on source code, CodeT5, to suggest correct programs with minimal edits for program repair. We fine-tune the pre-trained CodeT5 model on code pairs consisting of wrong and correct programs and evaluate its performance compared to several baseline models. Our experimental results show that the fine-tuned CodeT5 model outperforms the baseline models in generating correct programs with shorter edit distances from the input programs. While the naive retrieval model achieved 100% correctness in suggesting code repairs, the average edit distance between the suggested programs and the input programs was 36.37, which is much longer than the average edit distance of 11.34 for the test data. On the other hand, the CodeT5 model achieves a pass@100 of 91.95%, which indicates that at least one correct program can be suggested by generating 100 candidate programs, and an average edit distance of 8.54 between the input and the generated programs, demonstrating its effectiveness in suggesting concise program repairs. While the Seq2Seq model seems to generate programs with a shorter average edit distance than the CodeT5 model when filtering only the correct programs, it fundamentally fails to generate correct programs for incorrect programs that require longer edits. In conclusion, the proposed method using CodeT5 shows promise in program repair by suggesting accurate and concise program repairs with minimal edits for solving introductory programming problems. Future work includes further improvements of the correctness and edit distance and exploring other LM-based approaches for program repair.
2309.04573
Mask2Anomaly: Mask Transformer for Universal Open-set Segmentation
Segmenting unknown or anomalous object instances is a critical task in autonomous driving applications, and it is approached traditionally as a per-pixel classification problem. However, reasoning individually about each pixel without considering their contextual semantics results in high uncertainty around the objects' boundaries and numerous false positives. We propose a paradigm change by shifting from a per-pixel classification to a mask classification. Our mask-based method, Mask2Anomaly, demonstrates the feasibility of integrating a mask-classification architecture to jointly address anomaly segmentation, open-set semantic segmentation, and open-set panoptic segmentation. Mask2Anomaly includes several technical novelties that are designed to improve the detection of anomalies/unknown objects: i) a global masked attention module to focus individually on the foreground and background regions; ii) a mask contrastive learning that maximizes the margin between an anomaly and known classes; iii) a mask refinement solution to reduce false positives; and iv) a novel approach to mine unknown instances based on the mask-architecture properties. By comprehensive qualitative and quantitative evaluation, we show Mask2Anomaly achieves new state-of-the-art results across the benchmarks of anomaly segmentation, open-set semantic segmentation, and open-set panoptic segmentation.
Shyam Nandan Rai, Fabio Cermelli, Barbara Caputo, Carlo Masone
2023-09-08T20:07:18Z
http://arxiv.org/abs/2309.04573v2
# Mask2Anomaly: Mask Transformer for Universal Open-set Segmentation ###### Abstract Segmenting unknown or anomalous object instances is a critical task in autonomous driving applications, and it is approached traditionally as a per-pixel classification problem. However, reasoning individually about each pixel without considering their contextual semantics results in high uncertainty around the objects' boundaries and numerous false positives. We propose a paradigm change by shifting from a per-pixel classification to a mask classification. Our mask-based method, Mask2Anomaly, demonstrates the feasibility of integrating a mask-classification architecture to jointly address anomaly segmentation, open-set semantic segmentation, and open-set panoptic segmentation. Mask2Anomaly includes several technical novelties that are designed to improve the detection of anomalies/unknown objects: i) a global masked attention module to focus individually on the foreground and background regions; ii) a mask contrastive learning that maximizes the margin between an anomaly and known classes; iii) a mask refinement solution to reduce false positives; and iv) a novel approach to mine unknown instances based on the mask- architecture properties. By comprehensive qualitative and qualitative evaluation, we show Mask2Anomaly achieves new state-of-the-art results across the benchmarks of anomaly segmentation, open-set semantic segmentation, and open-set panoptic segmentation. Anomaly Segmentation, Open-set Semantic Segmentation, Open-set Panoptic Segmentation, Mask Transformers. ## 1 Introduction Image segmentation [15, 53, 58, 60] plays a significant role in self-driving cars, being instrumental in achieving a detailed understanding of the vehicle's surroundings. Generally, segmentation models are trained to recognize a pre-defined set of semantic classes (e.g., car, pedestrian, road, etc.); however, in real-world applications, they may encounter objects not belonging to such categories (e.g., animals or cargo dropped on the road). Therefore, it is essential for these models to identify objects in a scene that are not present during training i.e., _anomalies_, both to avoid potential dangers and to enable continual learning [8, 9, 44, 19] and open-world solutions [7]. The segmentation of unseen object categories can be performed at three levels of increasing semantic output information (see Fig. 1): * _Anomaly segmentation_ (AS) [5, 57, 22, 33] focuses on segmenting objects from classes that were absent during training, generating an output map that identifies the anomalous image pixels. * _Open-set semantic segmentation_ (OSS) [26] evaluates a segmentation model's performance on both anomalies and known classes. OSS ensures that when training an anomaly segmentation model, its performance on known classes remains unaffected. * _Open-set panoptic segmentation_ (OPS) [32] simultaneously segments distinct instances of unknown objects and performs panoptic segmentation [34] for the known classes. In the literature, AS, OSS and OPS are typically addressed separately using specialized networks for each task. These networks rely on per-pixel classification architectures that individually classify the pixels and assign to each of them an anomaly score. However, reasoning on the pixels individually without any spatial correlation produces noisy anomaly scores, thus leading to a high number of false positives and poorly localized anomalies or unknown objects (see Fig. 2). 
In this paper, we propose to jointly address AS, OSS, and OPS with a single architecture (with minor changes during inference) by casting them as a mask classification task rather than a pixel classification task (see Fig. 1). The idea of employing a mask-based architecture stems from the recent advances in mask-transformer architectures [13, 14], which demonstrated that it is possible to achieve remarkable performance across various segmentation tasks by classifying masks rather than pixels. We hypothesize that mask-transformer architectures are better suited to detect anomalies than per-pixel architectures because masks encourage objectness and thus can capture anomalies as whole entities, leading to more congruent anomaly scores and reduced false positives. Fig. 1: **Mask2Anomaly**: We present a mask-based architecture that can jointly perform open-set semantic segmentation, open-set panoptic segmentation, and anomaly segmentation. In the figure, the objects enclosed in red boxes are anomalous/unknown. However, the effectiveness of mask-transformer architectures hinges on their capability to output masks that capture anomalies well. Hence, we propose several technical contributions to improve the capability of mask-transformer architectures to capture anomalies or unknown objects and to minimize false positives: * At the **architectural** level, we propose a global masked-attention mechanism that allows the model to focus on both the foreground objects and the background while retaining the efficiency of the original masked-attention [13]. * At the **training** level, we develop a mask contrastive learning framework that utilizes outlier masks from additional out-of-distribution data to maximize the separation between anomalies and known classes. * At the **inference** level, for anomaly segmentation, we propose a mask-based refinement solution that reduces false positives by filtering masks based on the panoptic distinction between "things" and "stuff"; for open-set panoptic segmentation, we develop an approach to mine unknown instances based on mask-architecture properties. We integrate these contributions on top of the mask architecture [13] and term this solution **Mask2Anomaly**. To the best of our knowledge, Mask2Anomaly is the first universal architecture that jointly addresses the AS, OSS, and OPS tasks and segments anomalies or unknown objects at the mask level. We tested Mask2Anomaly on standard anomaly segmentation benchmarks (Road Anomaly [40], Fishyscapes [5], Segment Me If You Can [10], Lost&Found [48]), an open-set semantic segmentation benchmark (StreetHazards [29]), and the open-set panoptic MS-COCO [32] dataset, achieving the best results among all methods for all tasks by a significant margin. In particular, Mask2Anomaly reduces the false-positive rate by more than half on average and improves the open-set metric performance by one-third w.r.t. the previous state-of-the-art. Code and pre-trained models will be made publicly available upon acceptance. This work is an extension of our previous paper [49], which was accepted to ICCV 2023 (Oral), with the following contributions: * We extend Mask2Anomaly to open-set segmentation tasks, namely open-set semantic segmentation and open-set panoptic segmentation. * For the open-set panoptic segmentation task, we develop a novel approach to mine unknown instances based on the properties of the mask architecture and provide related ablation studies to show its efficacy.
* Extensive qualitative and quantitative experiments demonstrate that Mask2Anomaly is an effective approach to address open-set segmentation tasks. Notably, Mask2Anomaly gives a significant gain of 30% on the Open-IoU metric w.r.t. the best existing method. * We extend the Mask2Anomaly experimentation for the anomaly segmentation task by showing results on the Lost&Found dataset. Also, we show that global mask attention can positively impact semantic segmentation by investigating its generalizability to other datasets. Fig. 2: **Per-pixel vs per-mask architecture:** We show a significant shortcoming in the performance of state-of-the-art methods employing per-pixel architectures for anomaly segmentation or open-set segmentation tasks: their predictions have significant false positives and noisy outcomes. Mask2Anomaly (ours), an architecture based on mask-transformer properties, effectively addresses both anomaly segmentation and open-set segmentation tasks, leading to a substantial reduction in false positives and enhanced overall prediction quality. ## 2 Related Work **Mask-based semantic segmentation.** Traditionally, semantic segmentation methods [12, 38, 42, 61, 62] have adopted fully-convolutional encoder-decoder architectures [1, 42] and addressed the task as a dense classification problem. However, transformer architectures have recently caused us to question this paradigm due to their outstanding performance in closely related tasks such as object detection [6] and instance segmentation [27]. In particular, [14] proposed a mask-transformer architecture that addresses segmentation as a mask classification problem. It adopts a transformer and a per-pixel decoder on top of the feature extraction. The generated per-pixel and mask embeddings are combined to produce the segmentation output. Building upon [14], [13] introduced a new transformer decoder adopting a novel masked-attention module and feeding the transformer decoder with one pixel-decoder high-resolution feature at a time. So far, all these mask-transformers have been considered exclusively in a closed-set setting, i.e., there are no unknown categories at test time. To the best of our knowledge, Mask2Anomaly is the first method that performs AS directly with mask-transformers, thus empowering these approaches with the capability to recognize anomalies in real-world settings. **Anomaly segmentation** methods can be broadly divided into three categories: (a) Discriminative, (b) Generative, and (c) Uncertainty-based methods. _Discriminative Methods_ are based on the classification of the model outputs. Hendrycks and Gimpel [30] established the initial AS discriminative baseline by applying a threshold over the maximum softmax probability (MSP) that distinguishes between in-distribution and out-of-distribution data. Other approaches use auxiliary datasets to improve performance [33, 37, 54] by calibrating the model's over-confident outputs. Alternatively, [36] learns a confidence score by using the Mahalanobis distance, and [11] introduces an entropy-based classifier to discover out-of-distribution classes. Recently, discriminative methods tailored for semantic segmentation [5] directly segment anomalies in embedding space. _Generative Methods_ provide an alternative paradigm to segment anomalies based on generative models [17, 56, 57, 40]. These approaches train generative networks to reconstruct anomaly-free training data and then use the generation discrepancy to detect an anomaly at test time.
All the generative-based methods heavily rely on the generation quality and thus experience performance degradation due to image artifacts [22]. Finally, _Uncertainty based_ methods segment anomalies by leveraging uncertainty estimates via Bayesian neural networks [46]. **Open-set segmentation** is the task of segmenting both the the anomalies and in-distribution classes for a given image. Anomaly segmentation methods [31, 57] can be adapted to perform open-set semantic segmentation by fusing the in-distribution segmentation results. However, these methods show poor performance in open-set metrics because their in-distribution class segmentation capabilities degrade after training for anomaly segmentation. [2] formally introduces the problem of open-set semantic segmentation that uses multi-task model segment anomaly and predicts semantic segmentation maps. Later, [3] improved the prior method using noisy outlier labels. Recently, [26] proposed a hybrid approach that combines the known class posterior, dataset posterior, and an un-normalized data likelihood to estimate anomalies and in-distribution classes simultaneously. Another challenging problem in the space of open-set segmentation is open-set panoptic segmentation [32]. In open-set panoptic segmentation, the goal is to simultaneously segment distinct instances of unknown objects and perform panoptic segmentation for in-distribution classes. Hwang _et.al._[32] proposed an exemplar-based open-set panoptic segmentation network (EOPSN) that is based on exemplar theory and utilizes Panoptic FPN [34] which is a per-pixel architecture to perform open-set panoptic segmentation. All the methods discussed so far for anomaly and open-set segmentation rely on per-pixel classification and evaluate individual pixels without considering local semantics. This approach often leads to noisy anomaly predictions, resulting in significant false positives and reduced in-distribution class segmentation performance. Mask2Anomaly overcomes this limitation by segmenting anomalies and in-distribution classes as semantically clustered masks, encouraging the objectness of the predictions. To the best of our knowledge, this is the first work to use masks both to segment anomalies and for open-set segmentation. ## 3 Preliminaries **Notations**: Let us denote \(\mathcal{X}\subset\mathbb{R}^{3\times H\times W}\) the space of RGB images, where \(H\) and \(W\) are the height and width, respectively, and with \(\mathcal{Y}\subset\mathbb{N}^{2\times H\times W}\) the space of semantic labels that associate each pixel in an image to a semantic category from a predefined set \(\mathcal{Z}\), with \(|\mathcal{Z}|=Z\). At training time we assume to have a dataset \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{D}\), where \(x_{i}\in\mathcal{X}\) is an image and \(y_{i}\in\mathcal{Y}\) is its ground truth having pixel-wise semantic class labels. Alternatively, \(\mathcal{Y}\) can also be described as the semantic partition of the image into \(Z\) regions that are represented as a set of binary masks \(M^{gt}\), where the ground-truth labels of \(x_{i}\) can be represented as \(M^{gt}=\{m_{i}|m_{i}\in[0,1]^{H\times W}\}_{i=1}^{Z}\). 
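As a small illustration of the set-of-masks view of the ground truth introduced above, a per-pixel label map can be converted into the binary masks \(M^{gt}\) with a few lines of code. This is a minimal sketch; the 19-class, Cityscapes-like label map is only an example.

```python
import torch

def labels_to_binary_masks(y: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Convert an (H, W) semantic label map into num_classes binary masks,
    i.e., the set M^gt = {m_i in [0,1]^{HxW}} described above."""
    return torch.stack([(y == z).float() for z in range(num_classes)])

y = torch.randint(0, 19, (512, 1024))        # example label map with 19 classes
masks = labels_to_binary_masks(y, 19)        # shape: (19, 512, 1024)
# Every pixel belongs to exactly one class, so the masks sum to one per pixel.
assert torch.allclose(masks.sum(0), torch.ones_like(y, dtype=torch.float))
```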
**Mask architectures:** The prototypical mask architecture consists of three meta parts: a) a _backbone_ that acts as a feature extractor, b) a _pixel-decoder_ that upsamples the low-resolution features extracted from the backbone to produce high-resolution _per-pixel embeddings_, and c) a _transformer decoder_, made of \(L\) transformer layers, that takes the image features to output a fixed number of object queries consisting of _mask embeddings_ and their associated _class scores_ \(C\in\mathbb{R}^{N\times Z}\). The final _class masks_ \(M\in\mathbb{R}^{N\times(H\times W)}\) are obtained by multiplying the mask embeddings with the per-pixel embeddings obtained from the pixel-decoder. During training we use the Hungarian algorithm to match ground truth masks \(M^{gt}\) with the predicted masks \(M\). Since the Hungarian algorithm requires one-to-one correspondences and the number of predicted masks exceeds the number of ground-truth masks (\(|M|\geq|M^{gt}|\)), we pad the ground truth masks \(M^{gt}\) with "no object" masks, which we indicate as \(\phi\). The cost function for matching \(M\) and \(M^{gt}\) is given by \[L_{masks}=\lambda_{bce}L_{bce}+\lambda_{dice}L_{dice} \tag{1}\] where \(L_{bce}\) and \(L_{dice}\) are, respectively, the binary cross entropy loss and the dice loss calculated between the matched masks. The weights \(\lambda_{bce}\) and \(\lambda_{dice}\) are both set to \(5.0\). Additionally, we also train the model with a cross-entropy loss \(L_{ce}\) to learn the semantic class of each mask, denoted by \(C\). The total training loss is given by: \[L=L_{masks}+\lambda_{ce}L_{ce} \tag{2}\] with \(\lambda_{ce}\) set to 2.0 for predictions that are matched with the ground truth and 0.1 for \(\phi\), i.e., for no object. At inference time, the segmentation output is inferred by marginalization over the softmax of \(C\) and the sigmoid of \(M\), given as: \[g(x)=\max^{Z}\Big{(}\text{softmax}(C)^{T}\cdot\text{sigmoid}(M)\Big{)} \tag{3}\] In the subsequent sections we will address the tasks of anomaly segmentation (Sec. 4), open-set semantic segmentation (Sec. 5), and open-set panoptic segmentation (Sec. 6) using our proposed Mask2Anomaly architecture and delve into its novel elements. Fig. 3: **Mask2Anomaly Overview.** The Mask2Anomaly meta-architecture consists of an encoder, a pixel decoder, and a transformer decoder. We propose GMA: Global Mask Attention, which is discussed in Sec. 4.2 and Fig. 4. \(\phi\) denotes image features. \(\phi^{i},\phi^{i+1},\phi^{i+2}\) are upsampled image features at multiple scales. The mask contrastive loss \(L_{CL}\) (Sec. 4.3) utilizes outlier masks to maximize the separation between anomalies and known classes. During anomaly inference, we utilize the refinement mask \(R_{M}\) (Sec. 4.4) to minimize false positives. ## 4 Anomaly Segmentation ### _Problem Setting_ Anomaly segmentation can be achieved in per-pixel semantic segmentation architectures [12] by applying the _Maximum Softmax Probability_ (MSP) [30] on top of the per-pixel classifier. Formally, given the pixel-wise class scores \(S(x)\in[0,1]^{Z\times H\times W}\) obtained by segmenting the image \(x\) with a per-pixel architecture, we can compute the anomaly score \(f(x)\) as: \[f(x)=1-\max^{Z}(S(x)). \tag{4}\] In this paper, we propose to adapt this MSP-based framework to mask-transformer segmentation architectures. Given such a mask-transformer architecture, we calculate the anomaly scores for an input \(x\) as \[f(x)=1-\max^{Z}\left(\text{softmax}(C)^{T}\cdot\text{sigmoid}(M)\right).
\tag{5}\] Here, \(f(x)\) utilizes the same marginalization strategy of class and mask pairs as [14] to get anomaly scores. Without loss of generality, we implement the anomaly scoring (Eq. (5)) on top of the Mask2Former [13] architecture. However, this strategy hinges on the fact that the masks predicted by the segmentation architecture can capture anomalies well. We found that simply applying the MSP on top of Mask2Former as in Eq. (5) does not yield good results (see Fig. 1 and the results in Sec. 7.5). To overcome this problem, we introduce improvements in the architecture, training procedure, and anomaly inference mechanism. We name our method as Mask2Anomaly, and its overview is shown in Fig. 3 (left). Now, we will discuss the proposed novel components in Mask2Anomaly. ### _Global Masked Attention_ One of the key ingredients to Mask2Former [13] state-of-the-art segmentation results is the replacement of the _cross-attention_ (CA) layer in the transformer decoder with a _masked-attention_ (MA). The masked-attention attends only to pixels within the foreground region of the predicted mask for each query, under the hypothesis that local features are enough to update the query object features. The output of the \(l\)-th masked-attention layer can be formulated as \[\text{softmax}(\mathcal{M}_{l}^{F}+QK^{T})V+X_{in} \tag{6}\] where \(X_{in}\in\mathbb{R}^{N\times C}\) are the \(N\)\(C\)-dimensional query features from the previous decoder layer. The queries \(Q\in\mathbb{R}^{N\times C}\) are obtained by linearly transforming the query features with a learnable transformation whereas the keys and values \(K,V\) are the image features under learnable linear transformations \(f_{k}(.)\) and \(f_{v}()\). Finally, \(\mathcal{M}_{l}^{F}\) is the predicted foreground attention mask that at each pixel location \((i,j)\) is defined as \[\mathcal{M}_{l}^{F}(i,j)=\begin{cases}0&\text{if }M_{l-1}(i,j)\geq 0.5\\ -\infty&\text{otherwise},\end{cases} \tag{7}\] where \(M_{l-1}\) is the output mask of the previous layer. By focusing only on the foreground objects, masked attention grants faster convergence and better semantic segmentation performance than cross-attention. However, focusing only on the foreground region constitutes a problem for anomaly segmentation because anomalies may also appear in the background regions. Removing background information leads to failure cases in which the anomalies in the background are entirely missed, as shown in the example in Fig. 5. To ameliorate the detection of anomalies in these corner cases, we extend the masked attention with an additional term focusing on the background region (see Fig. 4, right). We call this a _global masked-attention_ (GMA) formally expressed as \[\begin{split} X_{out}=&\text{softmax}(\mathcal{M}_{l }^{F}+QK^{T})V\\ +&\text{softmax}(\mathcal{M}_{l}^{B}+QK^{T})V+X_{in} \end{split} \tag{8}\] where \(\mathcal{M}_{l}^{B}\) is the additional background attention mask that complements the foreground mask \(\mathcal{M}_{l}^{F}\), and it is defined at the pixel coordinates \((i,j)\) as \[\mathcal{M}_{l}^{B}(i,j)=\begin{cases}0&\text{if }M_{l-1}(i,j)<0.5\\ -\infty&\text{otherwise}.\end{cases} \tag{9}\] The global masked-attention in Eq. (8) differs from the masked-attention by additionally attending to the background mask region, yet it retains the benefits of faster convergence w.r.t. the cross-attention. 
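The global masked-attention update of Eq. (8) can be sketched in a few lines. This is a simplified, single-head sketch rather than the authors' implementation: it assumes the previous layer's masks are given as logits, and it omits the learned projections, the multi-head split, and the normalization layers of the full decoder.

```python
import torch
import torch.nn.functional as F

def global_masked_attention(Q, K, V, prev_mask_logits, X_in):
    """Sketch of Eqs. (6)-(9): separate softmax attention over the foreground
    and background regions defined by the previous layer's mask predictions.

    Q: (N, C) query features; K, V: (P, C) image features over P pixels;
    prev_mask_logits: (N, P) mask logits M_{l-1}; X_in: (N, C) residual input.
    """
    logits = Q @ K.t()                       # (N, P); sqrt(C) scaling omitted as in Eq. (6)
    fg = prev_mask_logits.sigmoid() >= 0.5   # foreground region of each query's mask
    neg = torch.finfo(logits.dtype).min      # large negative value; avoids NaNs if a region is empty
    m_f = torch.where(fg, torch.zeros_like(logits), torch.full_like(logits, neg))   # Eq. (7)
    m_b = torch.where(~fg, torch.zeros_like(logits), torch.full_like(logits, neg))  # Eq. (9)
    out = F.softmax(logits + m_f, dim=-1) @ V        # foreground attention term
    out = out + F.softmax(logits + m_b, dim=-1) @ V  # background attention term, Eq. (8)
    return out + X_in
```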
Fig. 4: **Global Mask Attention:** the attention is distributed independently between the foreground and the background. V, K, and Q denote Value, Key, and Query. Fig. 5: **Limitation of Mask-Attention:** Masked-attention [13] selectively attends to foreground regions, resulting in low attention scores (dark regions) for anomalies. Anomalies are in red. Best viewed with zoom. ### _Mask Contrastive Learning_ The ideal characteristic of an anomaly segmentation model is to predict high anomaly scores for out-of-distribution (OOD) objects and low anomaly scores for in-distribution (ID) regions. Namely, we would like to have a significant margin between the likelihood of known classes being predicted at anomalous regions and vice-versa. A common strategy used to improve this separation is to fine-tune the model with auxiliary out-of-distribution (anomalous) data as supervision [25, 26, 5]. Here we propose a contrastive learning approach that encourages the model to maintain a significant margin between the anomaly scores of in-distribution and out-of-distribution classes. Our mask-based framework allows us to straightforwardly implement this contrastive strategy by using as supervision outlier images generated by cutting anomalous objects from the auxiliary OOD data and pasting them on top of the training data. For each outlier image, we can then generate a binary outlier mask \(M_{OOD}\) that is \(1\) for out-of-distribution pixels and \(0\) for in-distribution class pixels. With this setting, we first calculate the negative likelihood of in-distribution classes using the class scores \(C\) and class masks \(M\) as: \[l_{N}=-\max^{Z}\left(\text{softmax}(C)^{T}\cdot\text{sigmoid}(M)\right) \tag{10}\] Ideally, for pixels corresponding to in-distribution classes, \(l_{N}\) should be \(-1\), since the values of \(\text{softmax}(C)^{T}\) and \(\text{sigmoid}(M)\) would be close to \(1\). On the other hand, for anomalous pixels, \(l_{N}\) should be \(0\), as the likelihood of these pixels belonging to any in-distribution class is \(0\), resulting in \(\text{softmax}(C)^{T}\) being close to \(0\). Using \(l_{N}\), we define our contrastive loss as: \[L_{CL} =\frac{1}{2}(l_{CL}^{2}), \tag{11}\] \[l_{CL} =\begin{cases}l_{N}&\text{if }M_{OOD}=0\\ \max(0,m-l_{N})&\text{otherwise,}\end{cases}\] where the margin \(m\) is a hyperparameter that decides the minimum distance between the out-of-distribution and in-distribution classes. During mask contrastive training, we also preserve the in-distribution accuracy by training on \(L_{masks}\) and \(L_{ce}\), which gives the total training loss: \[L_{ood}=L_{CL}+L_{masks}+\lambda_{ce}L_{ce} \tag{12}\]
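In code, the contrastive term of Eqs. (10)-(11) amounts to a per-pixel hinge on the negative in-distribution likelihood. The sketch below assumes class and mask logits as inputs, a mean reduction over pixels (not specified in the text), and the margin value used in the experiments.

```python
import torch

def mask_contrastive_loss(C, M, m_ood, margin: float = 0.75):
    """Sketch of Eqs. (10)-(11).

    C: (N, Z) class logits; M: (N, H, W) mask logits;
    m_ood: (H, W) binary outlier mask (1 on pasted OOD pixels).
    """
    # Per-pixel in-distribution scores: softmax(C)^T . sigmoid(M), shape (Z, H, W).
    scores = torch.einsum("nz,nhw->zhw", C.softmax(dim=-1), M.sigmoid())
    l_n = -scores.max(dim=0).values                # Eq. (10): negative ID likelihood per pixel
    hinge = torch.clamp(margin - l_n, min=0.0)     # hinge term used on OOD pixels
    l_cl = torch.where(m_ood.bool(), hinge, l_n)   # Eq. (11): case split on M_OOD
    return 0.5 * (l_cl ** 2).mean()                # L_CL, averaged over pixels (assumption)
```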
Thus, we can proceed to remove most false positives by filtering out all the masks corresponding to "stuff", except the "road" category. We implement this removal mechanism in the form of a binary refinement mask \(R_{M}\in[0,1]^{H\times W}\), which contains zeros in the segments corresponding to the unwanted "stuff" masks and one otherwise. Thus, by multiplying \(R_{M}\) with the predicted anomaly scores \(f\) we filter out all the unwanted "stuff" masks and eliminate a large portion of the false positives (see Fig. 6). Formally, for an image \(x\) the refined anomaly scores \(f^{r}\) is computed as: \[f^{r}(x)=R_{M}\odot f(x), \tag{13}\] where \(\odot\) is the Hadamard product. \(R_{M}\) is the dot product between the binarized output mask \(\bar{M}\in\{0,1\}^{N\times(H\times W)}\) and the class filter \(\bar{C}\in\{0,1\}^{1\times N}\), i.e. \(R_{M}=\bar{C}\cdot\bar{M}\). We define \(\bar{M}=\text{sigmoid}(M)>0.5\) and the class filter \(\bar{C}\) is equal to \(1\) only where the highest class score of \(\text{softmax}(C)\) belongs to "things" or "road" classes and is greater than \(0.95\). **Inference:** During inference, we pass the input image through Mask2Anomaly to get anomaly scores Eq. (5). Then, we refine the anomaly scores via refinement mask Eq. (13). ## 5 Open-Set Semantic Segmentation ### _Problem Setting_ Anomaly segmentation methods solely focus on segmenting road scene anomalies. However, a strong performance for in-distribution classes is equally important. For instance, an anomaly segmentation model deployed in an autonomous vehicle that fails to identify a person crossing the road can result in a fatal accident. Hence, it is crucial that while recognizing anomalies, the performance of the model on in-distribution classes remains preserved. Open-set semantic segmentation addresses this problem by jointly accessing the model's performance on in-distribution and out-of-distribution classes. We utilize the mask properties of Mask2Anomaly to perform open-set semantic segmentation by only modifying its inference process with respect to the Anomaly Segmentation task. **Inference:** Our open-set semantic segmentation network has an identical mask architecture as anomaly segmentation that contains global mask attention. During the inference, we first threshold the anomaly scores obtained from Eq. (5) at a true positive rate of 95%, similar to [26]. We denote the thresholded anomaly scores by \(f(x)\). Next, we calculate the in-distribution class performance \(g(x)\) by Eq. (3). Finally, we formulate the open-set semantic segmentation \(f_{oss}\) prediction of an image \(x\) as: \[f_{oss}(x)=\arg\max(\text{concat}(g(x),f(\hat{x}))) \tag{14}\] ## 6 Open Set Panoptic Segmentation ### _Problem Overview_ Panoptic segmentation [34] jointly addresses the dense prediction task of semantic segmentation and instance segmentation. In this task, we divide an image into two broad categories: i) _stuff_, i.e., amorphous areas of an image that have homogeneous texture, such as grass and sky, and ii) _things_, i.e., countable objects such as pedestrians. Every pixel belonging to a _things_ category is assigned a semantic label and a unique instance id, whereas, for _stuff_ regions, only semantic labels are given, and the instance id is ignored. However, constructing and annotating large-scale panoptic segmentation datasets is expensive and requires significant human Fig. 
6: **Mask Refinement Illustration: To obtain the refined prediction, we multiply the prediction map with a refinement mask that is built by assigning zero anomaly scores for pixels that are categorized as “stuff”, except for the “road”. The refinement eliminates many false positives at the boundary of objects and in the background. The region to be masked is white in the refinement mask.** effort. Hwang [32] addresses this problem by formulating it as open-set panoptic segmentation (OPS) problem where a model can perform panoptic segmentation on a pre-defined set of classes and identify unknown objects. This ability of the OPS model could accelerate the process of constructing large-scale panoptic segmentation datasets from existing ones. ### _Problem Setting_ The key difference between panoptic and open-set panoptic segmentation is the presence of unknown objects while testing. However, handling the classification of unknown object is OPS is quite challenging. Firstly, in comparison to open-set image classification, OPS requires the classification of unknown objects at the pixel level. Secondly, the absence of semantic information about unknown objects means that they are generically labeled as background during training. In order to make the problem tractable, we follow [32] and make three assumptions: 1. we categorize all the unknowns into things categories (i.e., the unknowns are countable objects); 2. elements of known categories cannot be classified as unknown classes; 3. the unknown objects are always found in the background/void regions. This avoids confusion between known and unknown class regions. We address open-set panoptic segmentation by utilizing the mask properties of Mask2Anomaly and leveraging its global mask attention. We first mask out the known _stuff_ and _things_ regions of an image, and then within the remaining background area, we mine the instances of the unknown objects. We will now formally discuss the method in more detail. For an input image \(x\), Mask2Anomaly outputs a set of masks \(M\) and its corresponding class scores \(C\). Among these, we denote the joint set of known _stuff_ and _things_ class masks as \(M_{k}\in[0,1]^{N_{k}\times H\times W}\) and its corresponding class scores as \(C_{k}\in\mathbb{R}^{N_{k}\times Z}\). Finally, we denote the number of known class masks as \(N_{k}\). We obtain the background region \(\mathcal{B}\) of \(x\) by using the weighted combination of \(M_{k}\) and \(C_{k}\) given by: \[\mathcal{B}=1-\underset{\text{max}}{\text{max}}(\underset{\text{max}}{\text{ max}}(C_{k}))\cdot\text{sigmoid}(M_{k})). \tag{15}\] In light of our assumptions, \(\mathcal{B}\) consists of background _stuff_ classes and unknown _things_ classes. ### _Mining Unknown Instances_ Generally in panoptic segmentation datasets such as MS-COCO [39] the background class consists of only background _stuff_ classes. However, in open-set panoptic segmentation, the background class consists of background _stuff_ classes and unknown _things_ classes. So, we mine the unknown instances from background \(\mathcal{B}\) obtained from Eq. (15) using the following steps: 1. In the first step, we employ the connected component algorithm [4] to cluster and identify unique segments in \(\mathcal{B}\). 2. Next, we calculate each connected component's overlap with the individual masks of \(M\). Intersection over union is used for calculating the overlap. 3. 
If there is a significant overlap between a connected component and a mask \(M^{i}\in M\), we calculate the average stuff class entropy \(\mathcal{E}_{S}\) and average things class entropy \(\mathcal{E}_{T}\) using the corresponding class scores \(C^{i}\in C\). 4. Finally, if \(\mathcal{E}_{S}>\mathcal{E}_{T}\) we can conclude that the connected component is more likely to belong to the _things_ class. Hence, we classify the connected component to be an unknown instance. **Inference:** During the inference, we first calculate \(\mathcal{B}\) from Eq. (15). Then, we identify the unknown instances in \(\mathcal{B}\) by following the above described steps of mining unknown instances. ## 7 Experimentation ### _Datasets_ **Anomaly Segmentation**: We train Mask2Anomaly on the Cityscapes [15] dataset, which consists of 2975 training and 500 validation images. To evaluate anomaly segmentation, we use Road Anomaly [40], Lost & Found [48], Fishsycapes [5], and Segment Me If You Can (SMIYC) benchmarks [10]. _Road Anomaly:_ is a collection of 60 web images with anomalous objects on or near the road. _Lost & Found:_ has 1068 test images with small obstacles for road scenes. _Fishyscapes (FS):_ consists of two datasets, Fishsycape static (FS static) and Fishsycapes lost & found (FS lost & found). Fishsycapes static is built by blending Pascal VOC [21] objects on Cityscapes images containing 30 validation and 1000 test images. Fishsycapes lost & found is based on a subset of the Lost and Found dataset [48], with 100 validation and 275 test images. _SMIYC:_ consists of two datasets, RoadAnomaly21 (SMIYC-RA21) and RoadObstacle21 (SMIYC-RO21). The SMIYC-RA21 contains 10 validation and 100 test images with diverse anomalies. The SMIYC-RO21 is collected to segment road anomalies and has 30 validation and 327 test images. **Open-set panoptic segmentation:** We perform all the open-set panoptic segmentation experiments on the panoptic segmentation dataset of MS-COCO [39]. The dataset consists of 118 thousand training images and 5 thousand validation images having 80 _thing_ classes and 53 _stuff_ classes. We construct open-set panoptic segmentation dataset by removing the labels of a small set of known _things_ classes from the train set of panoptic segmentation dataset. The removed set of _things_ classes are treated as unknown classes. We construct three different training dataset split with increasing order of difficulty with (5%, 10%, 20%) of unknown classes. The removed classes in each split that are removed cumulatively is given as: 5%: {car, cow, pizza, toilet}, 10%: {boat, tie, zebra, stop sign }, 20%: {dining table, banana, bicycle, cake, sink, cat, keyboard, bear}. **Open-set semantic segmentation:** We use StreetHazards [29], a synthetic dataset for open-set semantic segmentation. StreetHazards dataset is created with the CARLA simulator [18], leveraging the Unreal Engine to render realistic road scene images in which diverse anomalous objects are inserted. The dataset consists of 5125 training images and 1031 validation images having 12 classes. The test set has 1500 images along with an additional anomaly class. ### _Evaluation Metrics_ **Anomaly Segmentation:** We evaluate all the anomaly segmentation methods at pixel and component levels that are described next. _Pixel-Level:_ For pixel-wise evaluation, \(Y\in\{Y_{a},Y_{na}\}\) is the pixel level annotated ground truth labels for an image \(\chi\) containing anomalies. 
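Returning briefly to the unknown-instance mining steps of Sec. 6.3, one plausible reading of the procedure can be sketched as follows. The IoU threshold and the restriction of the entropy computation to the stuff/things subsets of the class scores are assumptions, since the text does not fix them.

```python
import numpy as np
from scipy import ndimage

def mine_unknown_instances(background, masks, class_probs, stuff_ids, thing_ids,
                           iou_thr: float = 0.5):
    """Rough sketch of the four mining steps listed in Sec. 6.3.

    background: (H, W) bool map B from Eq. (15); masks: (N, H, W) bool predicted
    masks; class_probs: (N, Z) softmax class scores; stuff_ids / thing_ids index
    the known stuff and thing classes; iou_thr is a hypothetical threshold.
    """
    def entropy(p):
        p = p / (p.sum() + 1e-12)
        return -(p * np.log(p + 1e-12)).sum()

    labeled, num = ndimage.label(background)            # step 1: connected components
    unknowns = []
    for comp_id in range(1, num + 1):
        comp = labeled == comp_id
        for mask, probs in zip(masks, class_probs):
            inter = np.logical_and(comp, mask).sum()
            union = np.logical_or(comp, mask).sum()
            if union and inter / union > iou_thr:       # step 2: significant overlap
                e_stuff = entropy(probs[stuff_ids])     # step 3: stuff-class entropy
                e_thing = entropy(probs[thing_ids])     # step 3: things-class entropy
                if e_stuff > e_thing:                   # step 4: more "thing"-like
                    unknowns.append(comp)
                break                                   # only the first overlapping mask is used
    return unknowns
```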
\(Y_{a}\) and \(Y_{na}\) represents the anomalous and non-anomalous labels in the ground-truth, respectively. Assume that \(\hat{Y}(\gamma)\) is the model prediction obtained by thresholding at \(\gamma\). Then, we can write the precision and recall equations as \[\text{precision}(\gamma)=\frac{|Y_{a}\cap\hat{Y}_{a}(\gamma)|}{|\hat{Y}_{a}( \gamma)|} \tag{16}\] \[\text{recall}(\gamma)=\frac{|Y_{a}\cap\hat{Y}_{a}(\gamma)|}{|Y_{a}|} \tag{17}\] and the AuPRC can be approximated as \[\text{AuPRC}=\int_{\gamma}\text{precision}(\gamma)\text{recall}(\gamma) \tag{18}\] The AuPRC works well for unbalanced datasets making it particularly suitable for anomaly segmentation since all the datasets are significantly skewed. Next, we consider the False Positive Rate at a true positive rate of 95% (FPR\({}_{95}\)), an important criterion for safety-critical applications that is calculated as: \[\text{FPR}_{95}=\frac{|\hat{Y}_{a}(\gamma^{*})\cap Y_{na}|}{|Y_{na}|} \tag{19}\] where \(\gamma^{*}\) is a threshold when the true positive rate is 95%. _Component-Level:_ SMIYC [10] introduced component-level evaluation metrics that solely focus on detecting anomalous objects regardless of their size. These metrics are important to be considered because pixel-level metrics may not penalize a model for missing a small anomaly, even though such a small anomaly may be important to be detected. In order to have a component-level assessment of the detected anomalies, the quantities to be considered are the component-wise true-positives (\(TP\)), false-negatives (\(FN\)), and false-positives (\(FP\)). These component-wise quantities can be measured by considering the anomalies as the positive class. From these quantities, we can use three metrics to evaluate the component-wise segmentation of anomalies: sIoU, PPV, and F1\({}^{*}\). Here we provide the details of how these metrics are computed, using the notation \(\mathcal{K}\) to denote the set of ground truth components, and \(\hat{\mathcal{K}}\) to denote the set of predicted components. The _sIoU_ metric used in SMIYC [10] is a modified version of the component-wise intersection over union proposed in [50], which considers the ground-truth components in the computation of the \(TP\) and \(FN\). Namely, it is computed as \[\text{sIoU}(k)=\frac{|k\cap\hat{K}(k)|}{|k\cap\hat{K}(k)\backslash\mathcal{A}( k)|},\qquad\hat{K}(k)=\bigcup_{\hat{k}\in\hat{\mathcal{K}},\hat{k}\cap k \neq\emptyset}\hat{k} \tag{20}\] where \(\mathcal{A}(k)\) is an adjustment term that excludes from the union those pixels that correctly intersect with another ground-truth component different from \(k\). Given a threshold \(\tau\in[0,1]\), a target \(k\in\mathcal{K}\) is considered a \(TP\) if _sIoU_(\(k\)) > \(\tau\), and a \(FN\) otherwise. The positive predictive value (_PPV_) is a metric that measures the \(FP\) for a predicted component \(\hat{k}\in\hat{\mathcal{K}}\), and it is computed as \[\text{PPV}(\hat{k})=\frac{|\hat{k}\cap\hat{K}(k)|}{|\hat{k}|} \tag{21}\] A predicted component \(\hat{k}\in\hat{\mathcal{K}}\) is considered a \(FP\) if \(PPV(\hat{k})\leq\tau\). Finally, the \(F1^{*}\) summarizes all the component-wise \(TP\), \(FN\), and \(FP\) quantities by the following formula: \[F1^{*}(\tau)=\frac{2TP(\tau)}{2TP(\tau)+FN(\tau)+FP(\tau)} \tag{22}\] **Open-set semantic segmentation:** We use open-IoU [26] to evaluate open-set semantic segmentation. 
Unlike, IoU, open-IoU takes into account the false positives (\(FP^{OOD}\)) and false negatives (\(FN^{OOD}\)) of an anomaly segmentation model. To measure open-IoU, we first threshold the output of the anomaly segmentation model at a true positive rate of 95% and then re-calculate the classification scores of in-distribution classes according to the anomaly threshold. Now, \(FP^{OOD}\) and \(FN^{OOD}\) for a class \(\alpha\) can be calculated as: \[FP^{OOD}_{\alpha}=\sum_{i=1,i\neq\alpha}^{Z+1}FP^{i}_{\alpha},FN^{OOD}_{\alpha }=\sum_{i=1,i\neq\alpha}^{Z+1}FN^{i}_{\alpha} \tag{23}\] Using \(FP^{OOD}_{\alpha}\) and \(FN^{OOD}_{\alpha}\), we can calculate the open-IoU for class \(\alpha\) as: \[\text{open-IoU}_{\alpha}=\frac{TP_{\alpha}}{TP_{\alpha}+FP^{OOD}_{\alpha}+ FN^{OOD}_{\alpha}} \tag{24}\] \(TP_{\alpha}\) denotes the true-positive of class \(\alpha\). An ideal open-set model will have open-IoU to be equal to IoU. **Open-set panoptic segmentation:** We measure the panoptic segmentation quality of known and unknown classes by using the panoptic quality (\(PQ\)) metric [34]. For each class, \(PQ\) is calculated individually and averaged over all the classes making \(PQ\) independent of class imbalance. Every class has predicted segments \(p\) and its corresponding ground truths \(g\) that is divided into three parts: true positives (\(TP\)): matched pair of segments, false positives (\(FP\)): unmatched predicted segments, and false negatives (\(FN\)): unmatched ground truth segments. Given the three sets, \(PQ\) can be formulated as: \[PQ=\underbrace{\sum_{(p,g)\in TP}IoU(p,g)}_{\text{segmentation quality (SQ)}}\times\underbrace{\frac{|TP|}{|TP|+\frac{1}{2}|FP|+\frac{1}{2}|FN|}}_{ \text{recognition quality (RQ)}} \tag{25}\] From the above equation, we can see \(PQ\) as the product of a segmentation quality (\(SQ\)) and a recognition quality (\(RQ\)). \(RQ\) can be inferred as an F1 score that gives the estimation of segmentation quality. \(SQ\) is the average IoU of matched segments. ### _Implementation Details_ **Anomaly Segmentation:** Our implementation is derived from [13, 14]. We use a ResNet-50 [28] encoder, and its weights are initialized from a model that is pre-trained with barlow-twins [59] self-supervision on ImageNet [16]. We freeze the encoder weights during training, saving memory and training time. We use a multi-scale deformable attention Transformer (MSDeformAttn) [64] as the pixel decoder. The MSDeformAttn gives features maps at \(1/8,1/16\), and \(1/32\) resolution, providing image features to the transformer decoder layers. Our transformer decoder is adopted from [13] and consists of 9 layers with 100 queries. We train Mask2Anomaly using a combination of binary cross-entropy loss and the dice loss [45] for class masks and cross-entropy loss for class scores. The network is trained with an initial learning rate of 1e-4 and batch size of 16 for 90 thousand iterations on AdamW [43] with a weight decay of 0.05. We use an image crop of \(380\times 760\) with large-scale jittering [20] along with a random scale ranging from 0.1 to 2.0. Next, we train the Mask2Anomaly in a contrastive setting. We generate the outlier image using AnomalyMix [54] where we cut an object from MS-COCO [39] dataset image and paste them on the Cityscapes image. The corresponding binary mask for an outlier image is created by assigning \(1\) to the MS-COCO image area and 0 to the Cityscapes image area. We randomly sample 300 images from the MS-COCO dataset during training to generate outliers. 
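The outlier images and masks used for the contrastive fine-tuning require very little code. The sketch below follows the AnomalyMix-style cut-and-paste described above, assuming the MS-COCO object has already been cropped and resized to the Cityscapes image resolution; random placement and scaling are omitted.

```python
import numpy as np

def anomaly_mix(city_img, coco_img, coco_obj_mask):
    """Paste an MS-COCO object onto a Cityscapes image and return the mixed
    image together with the binary outlier mask M_OOD used by L_CL.

    city_img, coco_img: (H, W, 3) uint8 arrays of the same size (assumption);
    coco_obj_mask: (H, W) bool mask of the pasted object.
    """
    mixed = city_img.copy()
    mixed[coco_obj_mask] = coco_img[coco_obj_mask]   # paste the cut-out object
    m_ood = coco_obj_mask.astype(np.uint8)           # 1 on OOD pixels, 0 on ID pixels
    return mixed, m_ood
```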
We train the network for 4000 iterations with \(m\) set to 0.75, a learning rate of 1e-5, and batch size 8, keeping all the other hyper-parameters the same as above. The probability of choosing an outlier in a training batch is kept at 0.2. **Open-set semantic segmentation:** We use the StreetHazards [29] dataset to train Mask2Anomaly with a Swin-Base backbone. The model was trained for 50 thousand iterations, keeping all the parameters the same as for anomaly segmentation. Next, we train Mask2Anomaly on outlier images in a contrastive setting. The outlier images were created by AnomalyMix [54] using MS-COCO [39] and StreetHazards images. We train the network for 5000 iterations keeping the Swin-Base backbone frozen. The image crop size was kept at \(380\times 760\), and all the other hyper-parameters were the same as for anomaly segmentation. **Open-set panoptic segmentation:** We train Mask2Anomaly with a ResNet-50 backbone for 370 thousand iterations. Our training approach employs a batch size of 8, incorporating cropped input images sized at 640\(\times\)640. We keep the remaining hyper-parameters the same as specified for anomaly segmentation. Across the three training datasets, which contain 5%, 10%, and 15% of unknown classes, the number of connected components was set to 2, 2, and 3, respectively. The number of iterations for the connected-component algorithm was kept at 500 for each training dataset. ### _Main Results_ **Anomaly Segmentation:** Table I shows the pixel-level anomaly segmentation results achieved by Mask2Anomaly and recent SOTA methods on the Fishyscapes, SMIYC, and Road Anomaly datasets. We can observe that Mask2Anomaly significantly improves the average AuPRC by 20% and the FPR\({}_{95}\) by 60% compared to the second-best method. Another observation is that anomaly segmentation methods based on per-pixel architectures, such as JSRNet, perform exceptionally well on the Road Anomaly dataset. However, JSRNet does not generalize well to the other datasets. On the other hand, Mask2Anomaly yields excellent results on all the datasets. Moreover, the property of our mask architecture to encourage objectness rather than individual pixel anomalies not only reduces false positives but also improves the localization of whole anomalies. Indeed, Tab. II demonstrates that Mask2Anomaly outperforms all the baselined methods on component-level evaluation metrics. To conclude, Mask2Anomaly yields state-of-the-art anomaly segmentation performance both in pixel and component metrics. To get a better understanding of the visual results, in Fig. 8 we visually compare the anomaly scores predicted by Mask2Anomaly and its closest competitors: Dense Hybrid [26] and Maximized Entropy [11]. The results from both Dense Hybrid and Maximized Entropy exhibit a strong presence of false positives across the scene, particularly on the boundaries of objects ("things") and regions ("stuff"). On the other hand, Mask2Anomaly demonstrates precise segmentation of anomalies while at the same time having minimal false positives.
Another critical characteristic of any anomaly segmentation method is that it should not disturb the in-distribution classification performance, or else it would make the semantic segmentation model unusable. We show that Tab. 5(c) Mask2Anomaly \begin{table} \begin{tabular}{c c c c c c c c c c c c} & \multicolumn{4}{c}{Anomaly Segmentation} & \multicolumn{4}{c}{Close Set Performance} & \multicolumn{4}{c}{Open Set Performance} \\ \hline Methods & AuPRC \(\uparrow\) & FPR\({}_{95}\downarrow\) & mIoU\(\uparrow\) & Open-IoU\({}^{\text{m}}\uparrow\) & Open-IoU\({}^{\text{m}}\uparrow\) & Open-IoU\({}^{\text{m}}\uparrow\) & Open-IoU\({}^{\text{m}}\uparrow\) & Open-IoU\({}^{\text{m}}\uparrow\) \\ \hline MSP [30] (CLR’17) & 7.5 & 27.9 & 65.0 & 32.7 & 40.2 & 35.1 \\ ODIN [37] (CLR’18) & 7.0 & 28.7 & 65.0 & 26.4 & 33.9 & 28.8 \\ Outlier Exposure [31] (CLR’19) & 14.6 & 17.7 & 61.7 & 43.7 & 44.1 & 43.8 \\ OOD-Head [2] (GCPR’19) & 19.7 & 56.2 & 66.6 & 33.7 & 34.3 & 33.9 \\ MC Dropout [46] (CVPR’20) & 7.5 & 79.4 & - & - & - & - & - \\ SynthD [57] (CICCV’20) & 9.3 & 28.4 & - & - & - & - & - \\ TRADI [24] (ECCV’20) & 7.2 & 25.3 & - & - & - & - & - \\ OVNNI [23] (CoRR’20) & 12.6 & 22.2 & 54.6 & - & - & - & - \\ Energy [41] (NuIP’S’20) & 12.9 & 18.2 & 63.3 & 41.7 & 44.9 & 42.7 \\ PARS [21] (CVPRW’21) & 8.8 & 23.2 & - & - & - & - & - \\ SO+H [25] (VISIGRAPP’21) & 12.7 & 22.2 & 59.7 & - & - & - & - \\ DML [7] (ICCV’21) & 14.7 & 17.3 & - & - & - & - & - \\ ReAct [52] (NuIPS’21) & 10.9 & 21.2 & 62.7 & 33.0 & 36.2 & 34.0 \\ OH+MSP [2] (CoRR’21) & 18.8 & 30.9 & 66.6 & 43.3 & 44.2 & 43.6 \\ ML [29] (CML’22) & 11.6 & 22.5 & 65.0 & 39.6 & 44.5 & 41.2 \\ DenseHybrid [26] (ECCV’22) & 30.2 & **13.0** & 63.0 & 46.1 & 45.3 & 45.8 \\ \hline Mask2Anomaly & **58.1** & 14.9 & **72.3** & **59.9** & **59.7** & **59.8** \\ \end{tabular} \end{table} TABLE 2: **Anomaly segmentation component level evaluation: Mask2Anomaly achieves large improvement on component level evaluation metrics among the baselined methods. Higher values of sIoU, PPV, and \(F1^{*}\) are better. 
The best and second best results are bold and underlined, respectively.** \begin{table} \begin{tabular}{c c c c c c c c c c c c c} & \multicolumn{4}{c}{SMYC RA-21} & \multicolumn{4}{c}{SMYYC RO-21} & \multicolumn{4}{c}{Lost \& Found} & \multicolumn{4}{c}{Average} \\ \hline Methods & sIoU \(\uparrow\) & FPV \(\uparrow\) & \(F1^{*}\)+ & sIoU \(\uparrow\) & FPV \(\uparrow\) & \(F1^{*}\)+ & sIoU \(\uparrow\) & FPV \(\uparrow\) & \(F1^{*}\)+ & sIoU \(\uparrow\) & FPV \(\uparrow\) & \(F1^{*}\)+ \\ \hline Max Softmax [30](ICLR’17) & 15.48 & 15.29 & 5.37 & 19.72 & 15.93 & 6.25 & 14.20 & 62.23 & 10.32 & 16.47 & 31.15 & 7.31 \\ Ensemble [35](NurIPS’17) & 16.44 & 20.77 & 3.39 & 8.63 & 4.71 & 1.28 & 6.66 & 7.64 & 2.68 & 10.58 & 11.04 & 2.45 \\ Mahalanobis [36](NeurIPS’18) & 14.82 & 10.22 & 2.68 & 13.52 & 21.79 & 4.70 & 33.83 & 31.71 & 22.09 & 20.72 & 21.24 & 9.82 \\ Image Newshtiels [40](ICCV’19) & 39.68 & 10.95 & 12.51 & 16.61 & 20.48 & 8.38 & 27.16 & 30.69 & 19.17 & 27.82 & 20.71 & 13.35 \\ MC Dropout [46](CVPR’20) & 20.49 & 17.26 & 4.26 & 5.49 & 5.77 & 1.05 & 17.35 & 34.71 & 12.99 & 14.44 & 19.25 & 6.10 \\ Learning Embedding [31](ICV’21) & 33.86 & 20.54 & 7.90 & 35.64 & 2.87 & 2.31 & 27.16 & 30.69 & 19.17 & 32.22 & 18.03 & 9.79 \\ SML [33](ICCV’21) & 26.00 & 24.70 & 12.20 & 5.10 & 13.30 & 3.00 & 32.14 & 27.57 & 26.93 & 21.08 & 21.86 & 14.04 \\ SynBoost [17](CVPR’21) & 34.68 & 17.81 & 9.99 & 44.28 & 41.75 & 37.57 & 36.83 & **72.32** & 48.72 & 38.60 & 43.96 & 32.09 \\ Maximized Entropy [11](ICCV’21) & 49.21 & 39.51 & 28.72 & 47.87 & 62.64 & 48.51 & 45.90 & 63.06 & 49.92 & 47.66 & 55.07 & 42.38 \\ ISRNet [65](ICCV’21) & 20.20 & 29.27 & 13.66 & 18.55 & 24.46 & 11.02 & 34.28 & 45.89 & 35.97 & 24.34 & 33.21 & 20.22 \\ Void Classifier [5](UC’21) & 21.14 & 22.13 & 6.49 & 6.34 & 20.27 & 5.41 & 1.76 & 35.08 & 1.87 & 9.75 & 25.83 & 4.59 \\ Dense Hybrid [26](ECCV’22) & 54.17 & 24.13 & 31.08 & 45.74 & 50.10 & 50.72 & 46.90 & 52.14 & 52.33 & 48.94 & 42.12 & 44.71 \\ PEBEL [54](ECCV’22) & 38.88 & 27.20 & 14.48 & 29.91 & 7.55 & 5.54 & 33.47 & 35.92 & 27.11 & 34.09 & 23.56 & 15.71 \\ \hline Mask2Former [13] & 25.20 & 18.20 & 15.30 & 5.00 & 21.90 & 4.80 & 17.88 & 18.09 & 9.77 & 16.03 & 19.40 & 9.96 \\ **Mask2Anomaly (Ours)** & **60.40** & **45.70** & **48.60** & **61.40** & **70.30** & **69.80** & **56.07** & 63.41 & **62.78** & **59.29** & **59.80** & **60.39** \\ \end{tabular} \end{table} TABLE 2: **Anomaly segmentation component level evaluation: Mask2Anomaly achieves large improvement on component level evaluation metrics among the baselined methods. Higher values of sIoU, PPV, and \(F1^{*}\) are better. The best and second best results are bold and underlined, respectively.** \begin{table} \begin{tabular}{c c c c c c c c c} & \multicolumn{4}{c}{Known Classes} achieves mIoU of 80.45, consisting of only GMA as a novel component. However, after mask contrastive training, we find that Mask2Anomaly maintains an in-distribution accuracy of 78.88 mIoU on the Cityscapes validation dataset, which is still 1.46 points higher than the vanilla Mask2Former. Moreover, it is important to note that both Mask2Anomaly and Mask2Former are trained for 90k iterations, indicating that, although Mask2Anomaly additionally attends to the background mask region, it shows convergence similar to Mask2Former. Fig. 7 qualitatively shows Mask2Anomaly semantic segmentation results are almost identical to Mask2Former. 
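As a complement to the component-level numbers reported above (Tab. II), the following is a simplified sketch of the sIoU/PPV/F1\({}^{*}\) bookkeeping defined in Eqs. (20)-(22). It ignores the adjustment term \(\mathcal{A}(k)\), assumes components are already given as binary masks, and uses a single illustrative threshold, so it only illustrates the protocol and does not reproduce the benchmark code.

```python
import numpy as np

def component_f1_star(gt_components, pred_components, tau=0.25):
    """gt_components, pred_components: lists of HxW boolean masks, one per component.
    Simplified component-level TP/FN/FP counting (adjustment term omitted)."""
    tp = fn = fp = 0
    # sIoU of each ground-truth component against the union of overlapping predictions.
    for k in gt_components:
        k_hat = np.zeros_like(k)
        for p in pred_components:
            if np.logical_and(k, p).any():
                k_hat |= p
        union = np.logical_or(k, k_hat).sum()
        siou = np.logical_and(k, k_hat).sum() / union if union else 0.0
        tp, fn = (tp + 1, fn) if siou > tau else (tp, fn + 1)
    # PPV of each predicted component against all ground-truth anomaly pixels.
    gt_any = np.any(np.stack(gt_components), axis=0) if gt_components else None
    for p in pred_components:
        ppv = np.logical_and(p, gt_any).sum() / p.sum() if gt_any is not None else 0.0
        fp += ppv <= tau
    denom = 2 * tp + fn + fp
    return 2 * tp / denom if denom else 0.0
```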
**Open-set semantic segmentation**: Table III illustrates the open-set semantic segmentation performance of Mask2Anomaly on the StreetHazards test set. In terms of anomaly segmentation performance, we observe that Mask2Anomaly gives a significant gain of 90% in AuPRC compared to DenseHybrid, with a minimal increase in false positives. Notably, Mask2Anomaly also gives the best closed-set performance, indicating its ability to improve in-distribution accuracy while giving state-of-the-art anomaly segmentation results. Furthermore, we measure open-set semantic segmentation using Open-IoU metrics, which allow us to measure anomalous and in-distribution class performance jointly. The StreetHazards test dataset consists of two sets: t5 and t6. So, to calculate Open-IoU on t5, denoted Open-IoU\({}^{t5}\), we select the anomaly threshold from t6 at a true positive rate of 95% and then recalculate the classification scores of the in-distribution classes on t5. We repeat the same steps to get Open-IoU\({}^{t6}\). To get the overall Open-IoU on the StreetHazards test set, we calculate the weighted average of Open-IoU on t5 and t6 according to the number of images in each set. In Table III, we can observe that Mask2Anomaly outperforms the other baselined methods by a significant margin of 30% on the Open-IoU metrics. It is also important to note that methods such as OOD-Head achieve good closed-set performance but show low Open-IoU. On the other hand, Outlier Exposure has a relatively better Open-IoU but loses closed-set performance. Mask2Anomaly does not suffer such shortcomings and gives the best open- and closed-set performances. Qualitatively, from Fig. 9 we can visually infer that Mask2Anomaly is able to precisely segment the anomalous/open-set objects as compared to the best per-pixel architecture, i.e., Dense Hybrid. Fig. 9: **Qualitative results of open-set semantic segmentation**: We can observe that Mask2Anomaly gives precise boundaries for open-set objects compared to the best-performing per-pixel architecture, i.e., Dense Hybrid. **Open-set panoptic segmentation**: Table IV summarises the open-set panoptic segmentation performance of all the methods. Void-train is a baseline method in which we train on the void regions of an image by treating them as a new class. We can observe that Mask2Anomaly shows the best open-set panoptic segmentation results among all the baselined methods on different proportions of unknown classes. Additionally, it also shows strong results on the in-distribution classes, as indicated by the various panoptic evaluation metrics. Figure 10 illustrates the qualitative comparison of Mask2Anomaly with the baselined methods on the most challenging dataset split, having 20% unknown classes. In Figure 10 (Rows 1-3), we can see that Mask2Anomaly performs better panoptic segmentation on unknown instances than the baselined methods. Figure 10 (Row 4) shows the panoptic segmentation of known classes, where we can observe that the Mask2Anomaly outputs are precise with minimal false positives. ### _Ablations_ All the results reported in this section are based on the FS L&F validation dataset. **Mask2Anomaly:** Table V(a) presents the results of a component-wise ablation of the technical novelties included in Mask2Anomaly. We use Mask2Former as the baseline. As shown in the table, removing any individual component from Mask2Anomaly drastically reduces the results, thus proving that their individual benefits are complementary.
In particular, we observe that the global masked attention has a big impact on the AuPRC, and contrastive learning is very important for the FPR\({}_{95}\). The mask refinement brings further improvements to both. Figure 11 visually demonstrates the positive effect of all the components. **Global Mask Attention:** To better understand the effect of the global masked attention (GMA), in Tab. V(c), we compare it to the masked attention (MA) [13] and cross-attention (CA) [55]. We can observe that although the MA increases the mIoU w.r.t. the CA, it degrades all the metrics for anomaly segmentation, thus confirming our preliminary experiment shown in Fig. 5. On the other hand, the GMA provides improvements across all the metrics. This is confirmed visually in Fig. 12, where we show the negative attention maps for the three methods at different resolutions. The negative attention is calculated by averaging all the queries (since there is no reference known object) and then subtracting one. Note that the GMA has a high response on the anomaly (the giraffe) across all resolutions. Fig. 11: **Mask2Anomaly Qualitative Ablation:** demonstrates the performance gain obtained by progressively adding (left to right) the proposed components. Regions masked out by the refinement mask are shown in white. Anomalies are represented in red. Fig. 12: **Visualization of negative attention maps and results:** Global mask attention gives high attention scores to anomalous regions across all resolutions, showing the best anomaly segmentation results among the compared attention mechanisms. Cross-attention performs better than mask attention but has high false positives and low-confidence predictions for the anomalous region. Darker regions represent low attention values. Details on how to calculate the negative attention are given in Section 7.5. Fig. 10: **Open-set panoptic segmentation qualitative results:** Rows 1-3: We can observe that Mask2Anomaly is better able to segment the different instances of unknown objects compared with the baselined methods. Row 4: Shows that Mask2Anomaly gives better panoptic segmentation with precise boundaries on known classes. **Refinement Mask:** Table V(d) shows the performance gains due to the refinement mask. We observe that filtering out the {"stuff" \(\backslash\) "road"} regions of the prediction map improves the FPR\({}_{95}\) by \(14.61\) along with a marginal improvement in AuPRC. On the other hand, removing the {"things" \(\backslash\) "road"} regions degrades the results, confirming our hypothesis that anomalies are likely to belong to the "things" category. Figure 11 qualitatively shows the improvement achieved with the refinement mask; a simplified sketch of this filtering step is given below. **Mask Contrastive Learning:** We tested the effect of the margin in the contrastive loss \(L_{CL}\), and we report these results in Tab. V(b). We find that the best results are achieved by setting \(m\) to 0.75, but the performance is competitive for any value of \(m\) in the table. Similarly, we tested the effect of the batch outlier probability, which is the likelihood of selecting an outlier image in a batch. The results shown in Tab. V(e) indicate that the best performance is achieved at \(0.2\), but the results remain stable for higher values of the batch outlier probability. **Mining Unknown Instances:** We quantitatively summarise the impact of mining unknown instances on the panoptic segmentation of unknown instances in Tab. VIII.
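Returning to the refinement mask ablated above, a minimal sketch of the filtering step could look as follows. The class grouping is a hypothetical Cityscapes-style choice and suppression-by-zeroing is one possible reading of "filtering out the {"stuff" \ "road"} regions"; it is not the exact implementation.

```python
import numpy as np

# Hypothetical class ids: "road" plus all "things" classes are kept,
# every other "stuff" class is filtered out of the anomaly map.
KEEP_IDS = {0, 11, 12, 13, 14, 15, 16, 17, 18}

def refine_anomaly_map(anomaly_scores, semantic_pred):
    """anomaly_scores: HxW float map; semantic_pred: HxW map of predicted class ids.
    Suppress anomaly scores inside predicted {"stuff" \\ "road"} regions."""
    keep = np.isin(semantic_pred, list(KEEP_IDS))
    return np.where(keep, anomaly_scores, 0.0)
```

The ablation in Tab. V(d) then corresponds to toggling which groups of classes populate such a keep-set.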
We can clearly observe that removing the mining of unknown instances from Mask2Anomaly drastically reduces the performance across all the metrics. Also, the absence of global mask attention further degrades performance. **Connected Components:** Tabs. 6 and 7 shows the impact of connected components hyperparameters on open-set panoptic segmentation of unknown classes. In both tables, we train the model on dataset split having 20 % of unknown classes. In Tab. VI, we can observe that Mask2Anomaly shows the best performance at 500 iterations. Whereas, in Tab. VII, we achieve the best performance when the number of connected components is set to 3. **Architectural Efficacy of Mask2Anomaly:** We demonstrate the efficacy of Mask2Anomaly by comparing it to the vanilla Mask2Former but using larger backbones. The results in Tab. VI show that despite the disadvantage, Mask2Anomaly with a ResNet-50 still performs better than Mask2Former using large transformer-based backbones like Swin-S. It is also important to note that the number of training parameters for Mask2Anomaly can be reduced to \(23M\) as we use a frozen self-supervised pretrained encoder during the entire training, which is significantly less than all the Mask2Former variations. ## VIII Discussion **Performance stability:** Employing an outlier set to train an anomaly segmentation model presents a challenge because the model's performance can vary significantly across different sets of outliers. Here, we show that Mask2Anomaly performs similarly when trained on different outlier sets. We randomly chose two subsets of 300 MS-COCO images (S1, S2) as our outlier dataset for training Mask2Anomaly and DenseHybrid. Table IX shows the \begin{table} \begin{tabular}{c c c c} Number of Iterations & PQ \(\uparrow\) & SQ \(\uparrow\) & RQ \(\uparrow\) \\ \hline 100 & 11.4 & 77.2 & 14.8 \\ 200 & 12.5 & **77.8** & 16.0 \\ 500 & **14.6** & 76.2 & **19.1** \\ 1000 & 10.9 & 76.1 & 14.9 \\ \end{tabular} \end{table} TABLE VI: **Connected component training iteration:** We show the panoptic segmentation performance of unknown classes with the increasing number of iterations. We find the best performance at 500 iterations. Best results are shown in bold. \begin{table} \begin{tabular}{c c c c} Number of Connected Components & PQ \(\uparrow\) & SQ \(\uparrow\) & RQ \(\uparrow\) \\ \hline 1 & 10.9 & 76.1 & 14.9 \\ 2 & 12.4 & **76.5** & 16.3 \\ 3 & **14.6** & 76.2 & **19.1** \\ 5 & 14.0 & 78.2 & 17.9 \\ \end{tabular} \end{table} TABLE VII: **Number of connected components:** Shows the panoptic segmentation performance of unknown classes with the increasing number of connected components. We find the best performance at 3. Best results are shown in bold. performance of Mask2Anomaly and Dense Hybrid trained on S1 and S2 outlier sets, along with the standard deviation(\(\sigma\)) in the performance. We can observe that the variation in performance for the dense hybrid is significantly higher than Mask2Anomaly. Specifically, in dense hybrid, the average deviation in AuPRC is greater than 300%, and the average variation in FPR\({}_{95}\) is more than 200% compared to Mask2Anomaly. **Reducing the supervision gap:** In our previous discussion, we show models that are trained with outlier supervision have varying performance across different sets of outliers. So, we extend the previous discussion by demonstrating the performance of Mask2Anomaly without reliance on outlier supervision. 
We evaluate the performance of all the baselined method average over the validation dataset of FS static, FS L&F, SMIYC-RA21 and SMIYC-RO21. Fig. 13 shows the performance of Mask2Anomaly with or without outlier supervision names as Mask2Anomaly (_w OS_) and Mask2Anomaly (_w/o OS_), respectively. In the plot, we can see unequivocally that Mask2Anomaly (_w/o OS_) significantly reduces the anomaly segmentation performance gap between the methods with outlier supervision and notably outperforms methods that do not use outlier supervision. **Outlier Loss:** In this discussion, we will examine the efficacy of mask contrastive loss in anomaly segmentation. We empirically demonstrate why mask contrastive loss, a margin-based loss, performs better at anomaly segmentation by comparing it with binary cross-entropy loss as an outlier loss. So, we train Mask2Anomaly with \(M_{OOD}\) using binary-cross entropy which equates the outlier loss as: \[L_{BCE}=M_{OOD}\log(l_{N})+(1-M_{OOD})\log(1-l_{N}) \tag{26}\] and, the new total loss at the outlier learning stage becomes: \[L_{ood}=L_{BCE}+L_{masks}+\lambda_{ce}L_{ce} \tag{27}\] \(l_{N}\) is the negative likelihood of in-distribution classes calculated using the class scores \(C\) and class masks \(M\). Figure 14 illustrates the anomaly segmentation performance comparison on FS L&F validation dataset between the Mask2Anomaly when trained with the binary cross entropy loss and mask contrastive loss, respectively. We can observe that the mask contrastive loss achieves a wider margin between out-of-distribution(anomaly) and in-distribution prediction while maintaining significantly lower false positives. **Global Mask Attention:** The application of global mask attention in semantic segmentation has shown a positive impact on performance, as demonstrated in Tab. 5(c). So, we further investigate to assess the generalizability of this positive effect on Ade20K [63] and Vistas [47]. To evaluate the possible benefits of global mask attention, we trained the Mask2Former architecture using both masked attention and global masked attention for 40 thousand \begin{table} \begin{tabular}{l l l l l l} \multicolumn{1}{c}{Method} & \multicolumn{1}{c}{Backbone} & \multicolumn{1}{c}{AuPRC\(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{FLOPs\(\downarrow\)} & \multicolumn{1}{c}{Training \(\downarrow\)} \\ & & & & & & Parameters \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{ResNet-50} & 10.60 & 89.35 & **226G** & 44M \\ & ResNet-101 & 9.11 & 45.83 & 293G & 63M \\ & Swin-T & 24.54 & 37.98 & 232G & 42M \\ & Swin-S & 30.96 & 36.78 & 313G & 69M \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline Mask2Anomaly\({}^{\ddagger}\) & ResNet-50 & **32.35** & **25.95** & 258G & **23M** \\ \end{tabular} \end{table} TABLE X: **Architectural Efficiency of Mask2Anomaly: Mask2Anomaly outperforms the best performing Mask2Former architecture with Swin-S as backbone by using almost 30% trainable parameters. Mask2Anomaly\({}^{\ddagger}\) only uses global mask attention.** Fig. 14: **Outlier Loss Comparision: During the training Mask2Anomaly, on the outlier set, we find that incorporating a mask contrastive loss, which is a margin-based loss function, resulted in better performance compared to the conventional binary cross-entropy loss. These experiments were conducted on the FS L&F validation set.** Fig. 
13: **Bridging the supervision gap: In this figure, we represent methods that utilize outlier supervision in red, and those without outlier supervision are in blue. We can observe Mask2Anomaly _(w/o OS):_ Mask2Anomaly without using outlier supervision, shows significant performance gain among anomaly segmentation methods that do not use any extra supervision. Also, displays a similar performance to PEBEL, which is the best per-pixel method that utilizes additional supervision).** \begin{table} \begin{tabular}{l c c c c c c c c c c} \multicolumn{1}{c}{} & \multicolumn{1}{c}{SMIYC-RA21} & \multicolumn{1}{c}{SMIYC-RO21} & \multicolumn{1}{c}{FS L\&F} & \multicolumn{1}{c}{FS Static} & \multicolumn{1}{c}{Average \(\sigma\)} \\ \cline{2-13} Methods & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{c}{AuPRC \(\uparrow\)} & \multicolumn{1}{c}{FPR\({}_{95}\)\(\downarrow\)} & \multicolumn{1}{ iterations. Mask2Former performed mIoU scores of 43.20 and 38.17 on masked attention, while global mask attention yields better mIoU scores of 43.80 (+0.6) and 38.92 (+0.75) on Ade20K and Vistas, respectively. **Failure Cases:** Fig. 15 illustrates the failure cases predicted by Mask2Anomaly. It is apparent that Mask2Anomaly faces difficulties when anomalies exhibit a resemblance to in-distribution classes like cars or buses, as shown in in Fig. 15 (a, b). In Fig. 15 (c) shows increased false positives around anomalies when illumination conditions are poor. Weather conditions adversely effects Mask2Anomaly performance as seen in Fig. 15 (d). We think that improving anomaly segmentation in such scenarios would be a promising avenue for future research. ## 9 Conclusion In this work, we introduce Mask2Anomaly, a universal architecture that is designed to jointly address anomaly and open-set segmentation utilizing a mask transformer. Mask2Anomaly incorporates a global mask attention mechanism specifically to improve the attention mechanism for anomaly or open-set segmentation tasks. For the anomaly segmentation task, we propose a mask contrastive learning framework that leverages outlier masks to maximize the distance between anomalies and known classes. Furthermore, we introduce a mask refinement technique aimed at reducing false positives and improving overall performance. 
For the open-set segmentation task, we develop a novel approach to mine unknown instances based on mask-architecture properties. Through extensive qualitative and quantitative analysis, we demonstrate the effectiveness of Mask2Anomaly and its components. Our results highlight the promising performance and potential of Mask2Anomaly in the field of anomaly and open-set segmentation. We believe this work will open the door to the development of novel anomaly and open-set segmentation approaches based on mask architectures, stimulating further advancements in the field.
2306.00190
Contextualizing Problems to Student Interests at Scale in Intelligent Tutoring System Using Large Language Models
Contextualizing problems to align with student interests can significantly improve learning outcomes. However, this task often presents scalability challenges due to resource and time constraints. Recent advancements in Large Language Models (LLMs) like GPT-4 offer potential solutions to these issues. This study explores the ability of GPT-4 in the contextualization of problems within CTAT, an intelligent tutoring system, aiming to increase student engagement and enhance learning outcomes. Through iterative prompt engineering, we achieved meaningful contextualization that preserved the difficulty and original intent of the problem, thereby not altering values or overcomplicating the questions. While our research highlights the potential of LLMs in educational settings, we acknowledge current limitations, particularly with geometry problems, and emphasize the need for ongoing evaluation and research. Future work includes systematic studies to measure the impact of this tool on students' learning outcomes and enhancements to handle a broader range of problems.
Gautam Yadav, Ying-Jui Tseng, Xiaolin Ni
2023-05-31T21:11:38Z
http://arxiv.org/abs/2306.00190v1
Contextualizing Problems to Student Interests at Scale in Intelligent Tutoring System Using Large Language Models ###### Abstract Contextualizing problems to align with student interests can significantly improve learning outcomes. However, this task often presents scalability challenges due to resource and time constraints. Recent advancements in Large Language Models (LLMs) like GPT-4 [1]offer potential solutions to these issues. This study explores the ability of GPT-4 in the contextualization of problems within CTAT [2], an intelligent tutoring system, aiming to increase student engagement and enhance learning outcomes. Through iterative prompt engineering, we achieved meaningful contextualization that preserved the difficulty and original intent of the problem, thereby not altering values or overcomplicating the questions. While our research highlights the potential of LLMs in educational settings, we acknowledge current limitations, particularly with geometry problems, and emphasize the need for ongoing evaluation and research. Future work includes systematic studies to measure the impact of this tool on students' learning outcomes and enhancements to handle a broader range of problems. L 1Carnegie Mellon University, 5000 Forbes Ave Pittsburgh PA 15213, United States Large Language Models, Mass Production, Student Interests, Intelligent Tutoring System ## 1 Introduction Research has demonstrated that integrating problem contextualization with student interests can significantly enhance learning outcomes in algebra, resulting in increased proficiency in problem-solving, improved accuracy, and the ability to transfer to future learning [3]. Teachers, who intimately comprehend their students' interests, often find the task of contextualizing problems according to these interests challenging, since the scalability of such task is often met with resource and time constraints. However, recent developments in Large Language Models (LLMs) may provide an opportunity to lessen the strains associated with the personalization of learning context for students. This research aims to explore the capability of LLMs in contextualizing problems to align with student interests at a large scale within CTAT [2], an intelligent tutoring system. In this study, we perform experiments using one of the most advanced LLMs currently accessible, the GPT-4 [1], obtained via the OpenAI API. Our hypothesis suggests that the application of LLMs for problem contextualization, based on student interests, could result in increased student engagement and enhanced learning outcomes. ## 2 Prior Work ### Context Personalization The groundbreaking works of Walkington [3] introduced the concept of contextualizing algebraic questions based on students' interests. This innovative methodology, featuring student-created "algebra stories," aimed to boost engagement, cultivate ownership, and enhance understanding of algebraic principles. Interest has been identified as a pivotal factor in learning, impacting attention, persistence, and motivation. Personalized learning that mirrors individual interests has demonstrated a capacity to elicit positive emotional responses, enhance appreciation for instructional content, and leverage existing knowledge. [4, 5] The efficacy of context personalization was investigated using both qualitative and quantitative research methods, demonstrating a positive association between these 'algebra stories' and improved student engagement and performance. 
Despite possible implementation obstacles due to the diversity of learners' interests, the use of digital tools has been proposed as a facilitative means for this personalization process. In its totality, contextual personalization has the potential to enhance learning effectiveness and accuracy, decrease the practice required for mastery, and foster transferable skills applicable to various scenarios. ### Mass Production in Intelligent Tutoring Systems Mass Production in Intelligent Tutoring Systems (ITS) is a technique that enables authors to parameterize previously authored problem-specific content, which can then be instantiated to suit a multitude of different problems. This technique essentially permits authors to manually generalize Example-Tracing expert models (known as behavior graphs) to accommodate all problems that share isomorphic solution structures. The application of mass production in ITS offers significant value, as it facilitates the creation of a vast array of distinct problems using the same underlying structure. This contributes to mastery learning, allowing learners to practice similar problems in various contexts, ultimately strengthening their grasp of the subject [6]. In our approach, we utilized this principle, where only the contextual 'cover stories' were varied for the problem within the CTAT platform. This delivers similar problem-solving opportunities to students, yet personalizes these scenarios to align with their individual interests. The implications of this mass production approach based on interests are manifold; it can potentially increase student engagement, improve problem-solving abilities, and promote a better understanding of the subject matter. ### Instruction Generation with Large Language Models Previous research involving large language models has explored their application in educational settings, such as the use of models like GPT for generating questions or providing hints/ex planations to students [7]. Empirical evaluations of these applications and their impact on student outcomes suggested that students perform better on content generated by these models compared to human-generated content [8, 9]. It is evident that large language models hold great potential in enhancing learning experiences, making them a promising tool for future educational endeavors. However, as our work proposes, a step further in personalized learning can be taken by leveraging these models in a more context-aware manner, which could further improve student engagement and outcomes. ## 3 System Design ### Iterative Prompt Engineering in GPT-4 Our iterative prompt engineering was executed over four different problem sets in Tutor-Shop [10], with the objective of achieving meaningful contextualization aligned with diverse interests. A few-shot learning approach was used, drawing on examples from a range of contexts and interests to foster coverage and generalization. Throughout this process, we gradually refined and added rules based on testing until our output could aptly accommodate novel interests such as TikTok and NBA, as demonstrated in Tables 1 and 2. #### 3.1.1 Prompt We used the following prompt: * Your task is to change context based on interest for a problem, for example: * Input Problem 1: Chaz and Nikki are standing in a long line to buy rock concert tickets. Nikki is 8 feet ahead of Chaz in the line. Let's compare Chaz's distance to Nikki's distance from the front of the line. When Nikki is 20 feet from the front of the line, how far away is Chaz? 
When Nikki is 16 feet from the front of the line, how far away is Chaz? In the row labeled "Expression", define a variable for Nikki's distance and use that variable to write an expression that will allow you to calculate Chaz's distance. Output Problem 1 based on interest "Video Games": In a video game, two players, Mario and Luigi, are standing at different points in a level. Luigi is 8 units ahead of Mario in the game. Let's compare Mario's distance to Luigi's distance from the level's end. When Luigi is 20 units from the end of the level, how far away is Mario? When Luigi is 16 units from the end of the level, how far away is Mario? In the row labeled "Expression", define a variable for Mario's distance and use that variable to write an expression that will allow you to calculate Luigi's distance. Output Problem 1 based on interest "basketball": During a basketball game, two players, Jordan and Kobe, are standing at different positions on the court. Jordan is 12 feet ahead of Kobe on the court. Let's compare Jordan's distance to Kobe's distance from the basket. When Kobe is 20 feet away from the basket, how far away is Jordan from the basket? When Kobe is 16 feet away from the basket, how far away is Jordan from the basket? In the row labeled "Expression", define a variable for Kobe's distance and use that variable to write an expression that will allow you to calculate Jordan's distance. * Input Problem 2: You are a product inspector for a company that produces light bulbs. You find that two out of every 300 bulbs are defective: they don't work properly. Output Problem 2 based on interest "World of Warcraft": You enjoy playing World of Warcraft on your computer. You notice that two out of every 300 times you defeat a monster, the monster has an epic item: a treasure that you want to collect. * 6x If x = 10, what is y? If x = 7, what is y? If y = 8, what is x? Write a story that could go along with the equation y = 80 - 6x. Output Problem 3 based on interest "Video Games": You are playing your favorite war game on the Xbox 360. When you started playing today, there were 80 enemies left in the locust horde. You kill an average of 6 enemies every minute. (a) How many enemies are left after 10 minutes? (b) How many enemies are left after 7 minutes? (c) Write an algebra rule that represents this situation using symbols. (d) If there are only 8 enemies left, how long have you been playing today? Now give output for * input problem: 2x+3=15 * Interest: [The interest that the problem needs to be contextualized for.] Some rules to follow: 1. don't change values 2. we want to have deeper contextualization not surface details based on Using Adaptive Learning Technologies to Personalize Instruction to Student Interests: The Impact of Relevant Contexts on Performance and Learning Outcomes 3. output question should ask same thing as input question, don't ask any additional question or complicate the info by adding unnecessary details This strict adherence to rules ensures that we maintain consistency in problem difficulty and preserve the problem's original intent. This methodology respects the principle of not altering values or over-complicating the question by adding unnecessary details as observed in our earlier iterations. ### CTAT Implementation In this section, we propose a novel interaction design for contextualizing problems in Intelligent Tutoring Systems using CTAT and GPT-4 that emphasizes problem-authoring control. 
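To illustrate how a prompt like the one above could be issued programmatically from an authoring tool, the sketch below uses the OpenAI Python client (pre-1.0 interface). The helper name, generation parameters, and example inputs are illustrative assumptions, not the exact settings used in this work.

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

FEW_SHOT_PROMPT = "..."  # the few-shot examples and rules from Section 3.1.1

def contextualize(problem_text: str, interest: str) -> str:
    """Ask GPT-4 to rewrite one problem for a given student interest."""
    user_message = (
        f"{FEW_SHOT_PROMPT}\n\n"
        f"Now give output for\n* input problem: {problem_text}\n"
        f"* Interest: {interest}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": user_message}],
        temperature=0.7,  # illustrative value
    )
    return response["choices"][0]["message"]["content"]

# Example: one variation per interest entered by the teacher.
for interest in ["TikTok", "NBA"]:
    print(contextualize("2x+3=15", interest))
```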
Teachers or instructional designers could contextualize existing problems simply by adding interests in the "Contextualize by Interest" tab of the Mass Production feature (Figure 1). After the user clicks the contextualize-problem button, the system uses GPT-4 and the prompt described in the prompt engineering section to generate variations of the problem for each interest. They can also preview and edit the contextualized result in the student-facing interface on the right panel to check whether they are satisfied with the generated result (Figure 2). Figure 1: User can enter or add interests in the "Contextualize By Interest" tab. ## 4 Future Work and Limitations While this work is firmly grounded in existing pedagogical and technological research, it is imperative that ongoing evaluation continues to ensure its effective application within real-world educational environments. We plan to conduct systematic studies to measure the impact of this tool on students' learning outcomes. This encompasses improvements in the initial accuracy of responses, an enhancement in learning efficiency, and an accelerated pace toward proficiency. However, certain limitations in the current model require attention. While it excels in solving algebraic equations, it struggles with geometric problems, especially those involving graphs, tables, or diagrammatic components. The existing capabilities of GPT limit its ability to create images that align with the problem text and accurately represent variable relationships. Specifically, the system fails to produce suitable diagrams for linear algebra questions requiring visual components, which are essential for testing students' comprehension of the underlying concepts. ## Acknowledgments We extend our sincere gratitude to Prof. Vincent Aleven, whose expert guidance was indispensable to the success of this research. His profound wisdom and unwavering support enriched this work immeasurably.
2309.17269
Unpaired Optical Coherence Tomography Angiography Image Super-Resolution via Frequency-Aware Inverse-Consistency GAN
For optical coherence tomography angiography (OCTA) images, a limited scanning rate leads to a trade-off between field-of-view (FOV) and imaging resolution. Although larger FOV images may reveal more parafoveal vascular lesions, their application is greatly hampered due to lower resolution. To increase the resolution, previous works only achieved satisfactory performance by using paired data for training, but real-world applications are limited by the challenge of collecting large-scale paired images. Thus, an unpaired approach is highly demanded. Generative Adversarial Network (GAN) has been commonly used in the unpaired setting, but it may struggle to accurately preserve fine-grained capillary details, which are critical biomarkers for OCTA. In this paper, our approach aspires to preserve these details by leveraging the frequency information, which represents details as high-frequencies ($\textbf{hf}$) and coarse-grained backgrounds as low-frequencies ($\textbf{lf}$). In general, we propose a GAN-based unpaired super-resolution method for OCTA images and exceptionally emphasize $\textbf{hf}$ fine capillaries through a dual-path generator. To facilitate a precise spectrum of the reconstructed image, we also propose a frequency-aware adversarial loss for the discriminator and introduce a frequency-aware focal consistency loss for end-to-end optimization. Experiments show that our method outperforms other state-of-the-art unpaired methods both quantitatively and visually.
Weiwen Zhang, Dawei Yang, Haoxuan Che, An Ran Ran, Carol Y. Cheung, Hao Chen
2023-09-29T14:19:51Z
http://arxiv.org/abs/2309.17269v1
Unpaired Optical Coherence Tomography Angiography Image Super-Resolution via Frequency-Aware Inverse-Consistency GAN ###### Abstract For optical coherence tomography angiography (OCTA) images, a limited scanning rate leads to a trade-off between field-of-view (FOV) and imaging resolution. Although larger FOV images may reveal more parafoveal vascular lesions, their application is greatly hampered due to lower resolution. To increase the resolution, previous works only achieved satisfactory performance by using paired data for training, but real-world applications are limited by the challenge of collecting large-scale paired images. Thus, an unpaired approach is highly demanded. Generative Adversarial Network (GAN) has been commonly used in the unpaired setting, but it may struggle to accurately preserve fine-grained capillary details, which are critical biomarkers for OCTA. In this paper, our approach assigns to preserve these details by leveraging the frequency information, which represents details as high-frequencies (_hf_) and coarse-grained backgrounds as low-frequencies (_ff_). In general, we propose a GAN-based unpaired super-resolution method for OCTA images and exceptionally emphasize _ht_ fine capillaries through a dual-path generator. To facilitate a precise spectrum of the reconstructed image, we also propose a frequency-aware adversarial loss for the discriminator and introduce a frequency-aware focal consistency loss for end-to-end optimization. Experiments show that our method outperforms other state-of-the-art unpaired methods both quantitatively and visually. OCT-Angiography, Unpaired Super-Resolution, GAN, Frequency Analysis ## I Introduction Optical coherence tomography angiography (OCTA) is an imaging modality based on the optical coherence tomography (OCT) platform, which generates depth-resolved images of the retina and choroidal microvasculature [1]. OCTA can support the evaluation of multiple retinal diseases, including diabetic retinopathy and age-related macular degeneration [2, 3, 4, 5, 6, 7]. However, due to the limited scanning rate of commercial OCT instruments [8, 9], images with a smaller field-of-view (FOV) have higher axial scan density and thus higher resolution [10]. As a result, among most common FOV, 3mm\(\times\)3mm (Fig. 1. \(A_{1}\)\(\sim\)\(E_{1}\)) is more widely employed in clinical settings to visualize finer capillaries, as compared to the 6mm\(\times\)6mm [7, 11]. Nevertheless, larger FOV (Fig. 1. \(A_{2}\)) is supposed to reveal more parafoveal vascular lesions [12]. Therefore, improving the resolution of 6mm\(\times\)6mm images (Fig. 1. \(A_{2}\)), on par with 3mm\(\times\)3mm images, will further empower ophthalmologists to evaluate capillary losses and develop more personalized treatments [13, 14, 15]. In computer vision, this task refers to super-resolution which upscales images (Fig. 1. \(A_{2}\) to \(B_{2}\sim E_{2}\)) and improves the quality through restoration. For OCTA image super-resolution, only a few works have been proposed and most adopt the paired setting for training to achieve the desired qualitative performance [9, 16, 17]. One approach involves collecting and creating paired high-resolution (\(HR\)) and low-resolution (\(LR\)) images from the same eye of the same patient [9, 16]. However, collecting large-scale real-paired images requires sophisticated image registration, which is laborious and challenging and may Fig. 
1: Illustration for our OCTA images dataset, which is retrospectively collected from the Chinese University of Hong Kong **Sight**-T**H**eatening **D**iabetic Retinopathy (CUHK-STDR) study. Orange boxes indicate the 6mm\(\times\)6mm with subscript 1, while blue boxes indicate the 3mm\(\times\)3mm with subscript 2. The same letter indicates the same regions in low- and high-resolution images. A: Fovea-center. B\(\sim\)E: parafoveal patches. hinder the medical application [18]. Another approach creates pseudo image pairs using bicubic interpolation to synthesize \(LR\) from \(HR\) for training [17]. However, interpolation is an oversimplified presumption since it may not accurately represent real-world degradation. Alternatively, the unpaired setting could mitigate these issues, but there is still a lack of sufficient studies. Therefore, we propose an unpaired approach by formulating a degradation model and jointly optimizing the models [19]. Also regarding the consensual merits of Generative Adversarial Networks (GANs), the models are accordingly formulated and optimized via consistency loss [20, 21, 22, 23, 24]. Moreover, higher resolution of the capillary network will allow more accurate assessments of eye diseases related to microvasculature [25, 26]. Thus the algorithm should exceptionally emphasize the fine-grained vessels. In the frequency domain, these vessels correspond to high-frequency (\(hf\)) information (Fig. 2. B), whereas the general illuminance and backgrounds correspond to low frequencies (\(lf\)) (Fig. 2. C). However, convolutional neural networks (CNNs) inherently exhibit a bias towards \(lf\)[27]. This bias can also be observed in the spectral distribution (Fig. 3), which illustrates a discernibly increasing discrepancy between reconstructed images and \(HR\) ground truths as the bandwidth increases. Though super-resolution aspires to enhance \(hf\) details, such bias may result in inaccurate or deficient details [28, 29, 30]. Consequently, for OCTA images, capillary structures in microvasculature might be altered. To alleviate these issues, our approach directly leverages frequency information and imposes exceptional emphasis on \(hf\), aiming at accurate and sufficient fine-grained details. Specifically in our restoration and degradation GAN, to preserve salient \(hf\) details in generators, we separate frequency components in a dual-path structure for feature extraction and then fuse features for reconstruction. To also facilitate discriminators being sensitive to \(hf\), we introduce the frequency-aware adversarial loss (**FAL**) to consider both frequency and spatial components. To consistently preserve the frequency distribution, we propose a **F**requency-aware Focal Consistency Loss (**FFCL**). In general, by leveraging frequency information and imposing exceptional emphasis on \(hf\), this paper introduces a Frequency-aware Unpaired Super-Resolution for OCTA images (FAUSRA). To surmount \(lf\)-bias of neural networks, we propose a dual-path architecture in generators to separately refine \(hf\) capillary details. To also facilitate discriminators being aware of frequencies, we propose **FAL** by exploiting wavelet space. To preserve accurate spectral distribution, we propose the **FFCL** to penalize spectral errors. Then, our approach exploits both frequency and spatial domains to effectively produce high-resolution images in the unpaired setting. 
This paper contributes in following perspectives: * To resolve the unpaired OCTA super-resolution, we propose a GAN-based approach containing restoration and degradation models, and jointly optimize it using consistency losses in an end-to-end manner. * To mitigate \(lf\)-bias and enhance fine-grained details, we exceptionally emphasize \(hf\) components in a dual-path structure in generators. We also propose **FAL** for adversarial training, and the **FFCL** for spectral consistency. * To quantitatively evaluate the performance, we purposely collect fovea-central and parafroveal paired \(HR\) and \(LR\) images from CUHK-STDR study for different paired metrics, and verify the superiority of our method. ## 2 Related works **Super-Resolution** has been a prominent task in low-level computer vision, aimed at increasing the resolution and restoring imaging quality, including OCTA images [9]. In recent years, with the advancements in deep learning, learning-based techniques have demonstrated remarkable performance, particularly through supervised learning using paired datasets [31]. However, for most imaging modalities in the real world, the challenge of collecting large-scale paired datasets remained a fundamental issue [32]. To overcome this challenge, a common approach was to generate training data by downsampling \(HR\) images to synthetic \(LR\) counterparts using interpolation [31]. Subsequently, algorithms aimed to recover the super-resolution mapping by restoring \(LR\). However, interpolation-based downsampling oversimplified the degradation process, leading to models that may not generalize well to real-world super-resolution. To relieve the issues under the paired setting, GAN has been introduced [33]. Adversarial loss guided the model to produce more visually pleasing results [20], but these methods still rely on paired data for training. **Unpaired Super-Resolution** methods alternatively employed various models to formulate the degradation process. These methods could be categorized into two main approaches: **two-stage** and **one-stage** methods. **Two-Stage** methods primarily formulated the degradation process, followed by a separate optimization of the restoration model. [34] proposed a kernel estimation approach to mimic real-world degradation. [19] introduced a High-to-Low GAN, which generates the \(LR\) from \(HR\). To ensure stable optimization of the degradation, [35] proposed to synthesize \(LR\) and utilized unsupervised learning to bridge the gap between real and synthesized images. However, for two-stage approaches, the restoration might be highly affected by the performance of degradation, causing suboptimal restoration results [36]. **One-Stage** approaches commonly employed GAN and consistency loss to jointly optimize the models end-to-end [24]. For instance, a bi-cycle network was proposed to jointly generate real-world \(LR\) and optimize the super-resolution model [21]. However, GAN has suffered from inability in the training phase and thus is prone to introduce unexpected noises [37, 38]. To this end, several solutions were proposed. Figure 2: Illustration for 6mm\(\times\)6mm OCTA images and frequency components. Lower-right corners are bandwidth spectral filters. A: 6mm\(\times\)6mm OCTA image and its spectrum \(\mathbf{B}\): 1\(\boldsymbol{if}\) and its low-pass filter. C: \(h\)\(\boldsymbol{if}\) and its high-pass filter. D: Middle-frequencies and its middle-pass filter. Some modified the architecture of the pipeline. 
[22] proposed a cycle-in-cycle structure using nested GANs and consistency losses. [23] proposed a pseudo-supervision using corrected-clean and pseudo-clean \(LR\) as intermediates between \(LR\) and \(HR\) images. Another attempt was to introduce frequency information. By exploiting both spatial and wavelet domains, [32] resolved unpaired super-resolution via domain adaptation to tackle the gap between real and synthetic images. **OCTA Super-Resolution** algorithms specifically aimed to upscale 6mm\(\times\)6mm images and improve their quality on a par with 3mm\(\times\)3mm images. However, the preparation of training OCTA images posed a laborious challenge [18]. Existing methods addressed this challenge by either synthesizing \(LR\) or collecting paired images. For example, [17] proposed a method that degrades \(HR\) images through interpolation downsampling. Although this approach intended to mitigate domain gaps between real- and generated images using GAN, the restoration modeling overlooked complex real-world degradation, which could be influenced by different FOV, limited scanning rates, etc. Alternative methods veritably collected paired \(HR\) and \(LR\) images from the same eye of the same patient and employed a supervised approach to enhance the \(LR\) images [9, 16]. However, these approaches suffered from two inherent limitations when applied to OCTA images. Firstly, to prepare pixel-wise paired images for training and evaluation, registration should be used to mitigate structural changes from image capturing. Such preprocessing inevitably altered the original structure within the OCTA image, resulting in unconvincing supervision. Secondly, due to different FOV, each 3mm\(\times\)3mm image could only provide incomplete supervision with only a sub-region of \(HR\) information for each 6mm\(\times\)6mm image. The above limitations motivated our unpaired OCTA super-resolution, which could release the reliance on paired data and implicitly formulate the restoration and degradation using GAN. **Frequency Analysis Studies** showed that deep neural networks tend to fit \(lf\) more precisely than \(hf\)[27]. Especially for GANs that are commonly utilized in unpaired super-resolution, models also suffered from the bias and resulted in missing \(hf\) details or unexpected artifacts [37, 38]. The spectral distribution can also demonstrate this bias, where \(hf\) were not reconstructed as sufficiently as \(lf\) (see Fig. 3). To produce high-fidelity \(HR\) images, \(hf\) capillary details [39] should have been meticulously preserved. To address this inherent gap, frequency information has been included in deep learning frameworks by separating frequency components [40]. [41] leveraged frequency components in unpaired super-resolution algorithm. In addition to incorporating frequency in the frameworks, several frequency-aware losses have been introduced to yield more realistic results. [28] proposed exploiting the wavelet domain to mitigate the domain gap between real and synthetic images. [29] introduced Frequency Consistent Adaptation to ensure frequency domain consistency. The focal frequency loss [42] adaptively focused on frequency components using amplitude and phase information. [39] suggested a Fourier frequency loss to separately preserve high and low-frequency amplitudes. However, studies that use frequency and spatial information for unpaired OCTA super-resolution have not been sufficiently conducted yet. 
Therefore, under the unpaired super-resolution setting, this paper proposes leveraging spatial and different frequency components in the restoration and degradation frameworks to benefit unpaired super-resolution. To reconstruct precise frequency information, we introduce **FAL** for the discriminators and the **FFCL** for our end-to-end framework. ## 3 Methodology To resolve unpaired super-resolution for OCTA images, we propose GAN-based restoration and inverse degradation models, which are optimized in an end-to-end manner through consistency loss [24], as depicted in Fig. 4. To mitigate the frequency bias and thus precisely enhance \(hf\) capillary details, we leverage different frequency components via a dual-path structure within the framework. In the GAN paradigm, we also exploit the frequency domain through **FAL** for the discriminators, and propose an **FFCL** to guarantee an accurate spectrum as an objective for end-to-end learning. ### _Preliminaries_ We define 3mm\(\times\)3mm OCTA images as the \(HR\) and 6mm\(\times\)6mm images as the \(LR\). We denote the restoration process as the mapping from \(LR\) to \(HR\), represented by \(G_{Res}:LR\to HR^{\uparrow}\), and the degradation process as the inverse mapping from \(HR\) to \(LR\), represented by \(G_{Deg}:HR\to LR^{\downarrow}\). Here, \(HR^{\uparrow}\) and \(LR^{\downarrow}\) refer to the generated \(HR\) and \(LR\) images. Without loss of generality, we denote an image \(x\in\mathbb{R}^{\mathrm{M}\times\mathrm{N}}\) which can be either \(HR\) or \(LR\). Then its frequency representation is denoted as \(\mathcal{X}\). The Fast Fourier Transform (FFT) transforms \(x\) to \(\mathcal{X}\) as: \[\mathcal{X}\left(u,v\right)=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}x\left(m,n\right)e^{-i2\pi\left(\frac{mu}{M}+\frac{nv}{N}\right)}=R\left(u,v\right)+iI\left(u,v\right) \tag{1}\] where \(M\) and \(N\) are the image height and width, while \((m,n)\) and \((u,v)\) are the Cartesian coordinates of the image in the spatial and frequency domain, respectively. Since FFT results are complex numbers, \(\mathcal{X}\) can be further separated into real \((R)\) and imaginary \((I)\) parts with respect to Euler's formula \(e^{i\theta}=\cos\theta+i\sin\theta\). Based on the FFT, to visualize the spectral power, the azimuthal integral over the spectrum is defined as [43]: \[A(\omega_{k})=\int_{0}^{2\pi}\left\|\mathcal{X}(\omega_{k}\cdot\cos(\phi),\omega_{k}\cdot\sin(\phi))\right\|^{2}d\phi \tag{2}\] where \((\phi,\omega_{k})\) are the azimuth and the \(k\)-th bandwidth in polar coordinates of the image, and \(k=0,1,\ldots,M/2-1\). Figure 3: Azimuthal integral on the spectrum as specified in Eq. (2). It indicates that \(\boldsymbol{HR}\) contains stronger power in the middle and high bands of the spectrum than \(\boldsymbol{LR}\). While the middle frequencies of the different methods are similar, our approach better fits the \(\boldsymbol{lf}\) information and enhances the \(\boldsymbol{hf}\) information compared to the real \(\boldsymbol{HR}\). To leverage frequency information via high- and low-pass filters, we first define the Gaussian kernel as: \[G\left(u,v\right)=\frac{1}{2\pi\sigma^{2}}e^{-\left(u^{2}+v^{2}\right)/(2\sigma^{2})} \tag{3}\] where \(\sigma\) is the variance of the Gaussian kernel and \((u,v)\) are Cartesian coordinates. Thus, the \(lf\) component is extracted by Gaussian blurring using a convolution operation, \(lf=G*x\). The \(hf\) component is obtained by subtracting the \(lf\) information from the original image, \(hf=x-lf\).
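The following is a minimal numpy/scipy sketch of the Gaussian low/high-frequency decomposition and of the azimuthal spectral integral of Eq. (2). Kernel width, \(\sigma\), and the discrete ring binning are illustrative choices, not the exact settings of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lf_hf_split(x, sigma=3.0):
    """Split an image into low-frequency (Gaussian-blurred) and high-frequency parts."""
    lf = gaussian_filter(x, sigma=sigma)   # lf = G * x
    hf = x - lf                            # hf = x - lf
    return lf, hf

def azimuthal_integral(x):
    """Approximate Eq. (2): spectral power summed over rings of constant radius w_k."""
    X = np.fft.fftshift(np.fft.fft2(x))
    power = np.abs(X) ** 2
    h, w = x.shape
    yy, xx = np.indices((h, w))
    r = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2).astype(int)
    # a[k] collects the power at integer radius k (discrete ring instead of an integral).
    a = np.bincount(r.ravel(), weights=power.ravel(), minlength=h // 2)
    return a[: h // 2]

img = np.random.rand(64, 64).astype(np.float32)
lf, hf = lf_hf_split(img)
spectrum = azimuthal_integral(img)
```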
Gaussian blurry in the spatial domain can be represented as low-pass filtering in the frequency domain according to the convolution theorem (see Fig. 2). ### Frequency-aware Restoration and Degradation In our unpaired setting, we propose a one-stage end-to-end super-resolution pipeline by formulating the restoration model \(G_{Res}\) and a degradation model \(G_{Deg}\). Models are optimized according to adversarial loss and consistency loss [20, 24]. Super-resolution aims to effectively and accurately refine fine-grained details. In the frequency domain, these details are widely recognized as \(hf\), while contours are referred to as \(lf\)[44]. But limited by the inherent bias of neural networks, \(hf\) may not be sufficiently boosted [27]. Therefore, our work proposes a frequency-aware architecture that leverages both spatial and frequency information, as shown in Fig. 4. Specifically, to emphasize \(hf\) and alleviate the bias to \(lf\), we intentionally separate frequency components using Gaussian filtering, as shown in Eq. (3), and extract corresponding features through a dual-path structure (see Fig. 4(a) and (b)). Then, the images are reconstructed by fusing these features. Such that the restoration model, \(G_{Res}\), enhances \(hf\) details while preserving \(lf\) backgrounds. For the degradation model, \(G_{Deg}\), \(hf\) details are filtered out while \(lf\) components are retained. Additionally, to provide \(hf\) for generators, we define an operation dubbed as the high-frequency boosting (\(HFB\)), in the following form: \[hf^{*}=x+\alpha*hf \tag{4}\] where \(hf^{*}\) represents the boosted \(hf\), and \(\alpha\) is the factor determining the extent of enhancement. Since providing pure \(hf\) to the network may break vessel structures and cause incoherence, we utilize the \(HFB\) to obtain \(hf^{*}\), shown in Eq. (4). Then \(lf\) and \(hf^{*}\) are provided as frequency components to the dual-path generators for feature extraction. Subsequently, the features are fused for the reconstruction. To optimize the \(G_{Res}\) using the unpaired dataset, we incorporate consistency loss to preserve the vessel structures, namely inverse consistency since restoration and degradation represent inverse mappings between \(LR\) and \(HR\). We formulate the degradation-restoration inverse-consistency, \(G_{Deg}\cdot G_{Res}:LR\to HR^{\uparrow}\to LR^{\uparrow\downarrow}\), using the \(L_{1}\) norm loss as: \[\mathcal{L}_{inv}^{Res}(G_{Res},G_{Deg},LR)=\mathbb{E}\left[\left\|LR^{\uparrow \downarrow}-LR\right\|_{1}\right] \tag{5}\] The restoration-degradation inverse-consistency, \(G_{Res}\cdot G_{Des}:HR\to LR^{\downarrow}\to HR^{\downarrow\uparrow}\), is also deployed to facilitate more precise training as: \[\mathcal{L}_{inv}^{Deg}(G_{Deg},G_{Res},HR)=\mathbb{E}\left[\left\|HR^{ \downarrow\uparrow}-HR\right\|_{1}\right] \tag{6}\] Furthermore, during the image translation using GAN, there is a lack of pixel-level regularization. Thus, common features shared by both \(LR\) and \(HR\) images will possibly be altered Figure 4: An overview of our methods. An input low-resolution image \(LR\) is restored and then degraded through restoration-degradation consistency. The input \(LR\) is first decomposed into \(lf\) and \(hf^{*}\) (through \(HFB\)) and then fused for reconstruction. Eventually, restoration is taken as OCTA high-resolution model in the inference phase. The inverse degradation-restoration process is represented in simplified conceptual graphs. 
when the generators are over-fitted, such as the morphology of vessels. Discriminators may not be capable of distinguishing these features and consequently overlook the alternations. To alleviate it, an identity loss is introduced to \(G_{Res}\) with the input being \(HR\). This identity loss is formulated as follows: \[\mathcal{L}_{idt}^{Res}\left(G_{Res},HR\right)=\mathbb{E}\left[\left\|G_{Res} \left(HR\right)-HR\right\|_{1}\right] \tag{7}\] To ensure the transitivity of the inverse workflow, we also introduce identity loss to \(G_{Deg}\) using \(LR\) as: \[\mathcal{L}_{idt}^{Deg}\left(G_{Deg},LR\right)=\mathbb{E}\left[\left\|G_{Deg} \left(LR\right)-LR\right\|_{1}\right] \tag{8}\] Identity losses preserve vessel structures and suppress unexpected noises and artifacts in generating \(LR\) and \(HR\) images. ### Frequency-aware Adversarial Loss Within Generative Adversarial Network (GAN), a powerful discriminator could induce a generator to produce high-quality results. As for the super-resolution, GAN aspires to produce \(HR\) images by accurately enhancing \(hf\) details while preserving \(lf\) contours. Therefore, we facilitate the discriminator in distinguishing frequency information and thus propose the frequency-aware adversarial loss (**FAL**). Inspired by the superior performance of wavelets in discriminating frequency information [32], we decompose vertical and horizontal frequency components by applying either high-pass (\(H\)) or low-pass (\(L\)) Haar wavelets filters. Thus decomposed results include four possible combinations: \(LL\), \(LH\), \(HL\), and \(HH\), where \(L\) extracts \(lf\) and \(H\) extracts \(hf\). Then we refer to all (\(LH\), \(HL\), \(HH\)) as \(hf\), \(LL\) as \(lf\), original image as spatial information, denoted as \(W_{hf}\), \(W_{lf}\), \(W_{s}\), respectively (see Fig. 5). They are separately fed to three neural networks to capture frequency and spatial features, denoted as \(D_{hf},D_{lf},D_{s}\). The final discrimination aggregates the outputs of three networks: \[D\left(y\right)=D_{hf}\left(W_{hf}\right)+D_{lf}\left(W_{lf}\right)+D_{s}\left( W_{s}\right) \tag{9}\] We employ two discriminators to distinguish between real and generated \(LR\) and \(HR\) images, denoted as \(D_{LR}\) and \(D_{HR}\), respectively. By formulating \(D_{HR}\) using Eq. (9), the **FAL** for the \(G_{Res}\) is defined as: \[\mathcal{L}_{\textbf{FAL}}^{Res}\left(G_{Res},D_{HR},LR,HR\right) =\mathbb{E}\left[\left\|D_{HR}\left(HR^{\dagger}\right)-1\right\| ^{2}\right]\] \[+\mathbb{E}\left[\left\|D_{HR}\left(HR\right)\right\|^{2}\right] \tag{10}\] where the labeling scheme considers real \(HR\) image as 1 and restored \(HR\)\(\uparrow\) image as 0. The mean square error is used, following the formulation of the least-square GAN [45]. Similarly, for the degradation counterpart, the **FAL** for \(G_{Deg}\) is defined as: \[\mathcal{L}_{\textbf{FAL}}^{Deg}\left(G_{Deg},D_{LR},LR,HR\right) =\mathbb{E}\left[\left\|D_{LR}\left(LR^{\dagger}\right)-1\right\| ^{2}\right]\] \[+\mathbb{E}\left[\left\|D_{LR}\left(LR\right)\right\|^{2}\right] \tag{11}\] According to the essential paradigm of GAN [20], the Eq. (10) and (11) are minimized to optimize generators whereas being maximized to optimize the discriminators for distinguishing between real and generated data as sensitively as possible. 
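To make the training signals introduced so far concrete, the following PyTorch sketch covers the high-frequency boosting of Eq. (4), the inverse-consistency and identity terms of Eqs. (5)-(8), a single-level Haar split feeding the three sub-networks aggregated as in Eq. (9), and the generator side of the least-squares objective of Eqs. (10)-(11). The names `g_res`, `g_deg` and the `make_net` factory are hypothetical stand-ins, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def haar_split(x):
    """Single-level Haar DWT for a 1-channel batch: returns (LL, detail bands)."""
    k = 0.5 * torch.tensor([[[[1., 1.], [1., 1.]]],     # LL
                            [[[1., 1.], [-1., -1.]]],   # detail band
                            [[[1., -1.], [1., -1.]]],   # detail band
                            [[[1., -1.], [-1., 1.]]]],  # detail band
                           device=x.device)
    out = F.conv2d(x, k, stride=2)
    return out[:, :1], out[:, 1:]

def hfb(x, hf, alpha=1.0):
    # Eq. (4): high-frequency boosting fed to the dual-path generators.
    return x + alpha * hf

class FreqAwareDiscriminator(nn.Module):
    """Eq. (9): aggregate spatial, low- and high-frequency sub-discriminators."""
    def __init__(self, make_net):
        super().__init__()
        self.d_s, self.d_lf, self.d_hf = make_net(1), make_net(1), make_net(3)

    def forward(self, y):
        ll, highs = haar_split(y)
        return self.d_s(y) + self.d_lf(ll) + self.d_hf(highs)

def lsgan_generator_term(d_hr, hr_restored):
    # Generator side of Eq. (10): restored images should be scored as real (label 1).
    return ((d_hr(hr_restored) - 1.0) ** 2).mean()

def cycle_and_identity_losses(g_res, g_deg, lr, hr):
    # Eqs. (5)-(6): degradation-restoration / restoration-degradation consistency.
    l_inv = F.l1_loss(g_deg(g_res(lr)), lr) + F.l1_loss(g_res(g_deg(hr)), hr)
    # Eqs. (7)-(8): identity terms keep same-domain inputs unchanged.
    l_idt = F.l1_loss(g_res(hr), hr) + F.l1_loss(g_deg(lr), lr)
    return l_inv, l_idt
```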
### Frequency-aware Focal Consistency Loss To preserve the spectrum distribution in the reconstructed results of \(G_{Res}\) and \(G_{Deg}\), we introduce a Frequency-aware Focal Consistency Loss (**FFCL**). Given the limitations of neural networks and GANs in accurately capturing \(hf\), this term aims to enforce consistency in frequency information and place additional emphasis on \(hf\). Specifically, we first construct a spectrum weighting matrix to penalize the spectral consistency error as: \[w\left(u,v\right)=\left|\mathcal{X}^{\prime}\left(u,v\right)-\mathcal{X}\left(u,v\right)\right|^{\gamma_{1}} \tag{12}\] where \(\mathcal{X}\) and \(\mathcal{X}^{\prime}\) are the frequency representation of the original and reconstructed images \(x\) and \(x^{\prime}\) via Eq. (1) [42]. \(\gamma_{1}\) is the scaling factor. Then we formulate **FFCL** to penalize the error in the restoration and degradation consistency as: \[\mathcal{L}_{\textbf{FFCL}}(x,x^{\prime}) =\frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}w_{hf}\odot\left|\mathcal{X}_{hf}^{\prime}-\mathcal{X}_{hf}\right|^{2}\] \[+\gamma_{2}w_{lf}\odot\left|\mathcal{X}_{lf}^{\prime}-\mathcal{X}_{lf}\right|^{2} \tag{13}\] where \(u\) and \(v\) are coordinates of \(\mathcal{X}\) and \(\mathcal{X}^{\prime}\). \(w\) represents the spectrum weighting matrix using Eq. (12). The subscripts \(hf\) and \(lf\) indicate the \(hf\) and \(lf\) components, respectively. \(\gamma_{2}\) is a scaling factor. The symbol \(\odot\) denotes the Hadamard product, which applies the weighting matrix \(w\) to the mean square error between the spectra \(\mathcal{X}\) and \(\mathcal{X}^{\prime}\). By aggregating all terms of our proposed loss functions, we form the final objective \(\mathcal{L}_{Total}\) as: \[\min_{G}\max_{D}\mathcal{L}_{Total} =\left(\mathcal{L}_{\textbf{FAL}}^{Deg}+\beta_{1}\mathcal{L}_{\textbf{FAL}}^{Res}\right)+\left(\mathcal{L}_{inv}^{Deg}+\beta_{2}\mathcal{L}_{inv}^{Res}\right)\] \[+\left(\mathcal{L}_{idt}^{Deg}+\beta_{3}\mathcal{L}_{idt}^{Res}\right)+\beta_{4}\mathcal{L}_{\textbf{FFCL}} \tag{14}\] where \(G\) comprises both \(G_{Res}\) and \(G_{Deg}\), while \(D\) consists of \(D_{HR}\) and \(D_{LR}\). Then, the total loss \(\mathcal{L}_{Total}\) encompasses the adversarial loss \(\mathcal{L}_{\textbf{FAL}}\), the consistency loss \(\mathcal{L}_{inv}\), the identity loss \(\mathcal{L}_{idt}\), and the frequency-aware focal consistency loss \(\mathcal{L}_{\textbf{FFCL}}\). The parameters \(\beta_{1}\), \(\beta_{2}\), \(\beta_{3}\), and \(\beta_{4}\) in Eq. (14) are empirically set to 1, 10, 5, and 1 respectively. In the training phase, Eq. (14) is minimized to optimize the generators, while it is maximized to optimize the discriminators iteratively. Figure 5: Structure of the discriminator. Our method combines both frequency and spatial information in the discriminating phase. Results are aggregated to formulate the frequency-aware adversarial loss. In summary, we propose a GAN-based unpaired OCTA super-resolution. By leveraging spatial and frequency information, we improve the resolution while preserving \(hf\). Our dual-path generators separately refine \(hf\) while retaining \(lf\) components, and the discriminators incorporate spatial and wavelets information as the **FAL**. We also introduce the **FFCL** to dynamically preserve the entire spectral consistency.
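Before moving to the experiments, the focal spectral term above can be sketched as follows (PyTorch). Separating the bands with a radial mask in the spectrum is a simplification of the Gaussian decomposition used in the paper, and the weighting matrix is detached so it only rescales the error; both choices are editorial assumptions.

```python
import torch

def radial_lf_mask(h, w, radius):
    # Circular low-frequency region around the (fftshifted) spectrum centre.
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    return (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2

def ffcl(x, x_rec, lf_mask, gamma1=1.0, gamma2=0.5):
    """Frequency-aware Focal Consistency Loss, cf. Eqs. (12)-(13)."""
    X = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    Xr = torch.fft.fftshift(torch.fft.fft2(x_rec), dim=(-2, -1))
    err = (Xr - X).abs()
    w = err.detach() ** gamma1                       # Eq. (12): focal spectrum weights
    se = w * err ** 2                                # weighted squared spectral error
    hf_w = (~lf_mask).float()
    lf_w = lf_mask.float()
    # Eq. (13): hf errors penalised at full weight, lf errors scaled by gamma2,
    # averaged over the M x N spectral grid (and the batch).
    return (se * hf_w).mean() + gamma2 * (se * lf_w).mean()
```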
## IV Experimental Evaluation ### _Dataset_ The OCTA images used in this work were retrospectively collected from the Chinese University of Hong Kong Sight-THReatening Diabetic Retinopathy (CUHK-STDR) study. This study was an observational clinical study focused on diabetic retinal disease in subjects with Type 1 or Type 2 Diabetes Mellitus recruited from the CUHK Eye Centre and Hong Kong Eye Hospital [4, 7, 11, 48]. The OCTA imaging was performed using a swept-source optical coherence tomography (DRI OCT Triton; Topcon, Tokyo, Japan). Notably, CUHK-STDR dataset was paired, which could enable the training of unpaired models and evaluation using paired metrics. Specifically, 296 pairs of fovea-central \(HR\) and \(LR\) OCTA images (see Fig. 1) were collected and split in the proportion of 4:1 for training and validation. Otherwise, additional 279 groups of paired \(HR\) and \(LR\) images for the whole area were also purposely collected and used for testing. Specifically, each group consisted of one \(LR\) and five \(HR\). \(HR\) images included one fovea-center and four parafovea, which were combined to generate a whole \(HR\) 6mm\(\times\)6mm montage registered to the original \(LR\). It is also worth noting that most typical OCTA images are fovea-centered in the real world; thus, we only utilized these data for testing. In the preprocessing of the training set, the original \(LR\) images were upsampled using bicubic interpolation (Fig. 1. A). We then cropped 256\(\times\)256 patches from the upsampled \(LR\) images (Fig. 1. B) and corresponding \(HR\) images (Fig. 1. C). During the training phase, these cropped \(LR\) and \(HR\) patches were randomly selected and provided to the network in an unpaired manner. To prepare a pixel-wise aligned testing dataset for quantitative evaluation, each \(LR\) image was paired with an \(HR\) image from the same eye of the same patient. To account for slight structural changes due to the time interval between capturing the images, registration was performed to align the paired images. Thus, to evaluate the performance, the paired images were provided to the model after proper image registration. ### Implementation Details Our model was trained on one NVIDIA RTX 3090 with 24GB memory. The parameters were initialized using the standard normal distribution. The initial learning rate was set to 0.0002, and it decayed linearly to 0 during training. The training phase optimized the parameters for a minimum of 5,000 iterations. In each iteration, an unaligned pair of \(HR\) and \(LR\) images was provided for the network. Specifically, the \(HR\) image was a 3mm\(\times\)3mm patch cropped with 256 pixels \(\times\) 256 pixels, while the \(LR\) image was a 6mm\(\times\)6mm patch upsampled using bicubic interpolation and then randomly cropped with 256 pixels \(\times\) 256 pixels. We evaluated the performance using common pixel-wise paired metrics in super-resolution studies. Specifically, we used peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM) [49], normalized mutual information (NMI), and feature similarity index measure (FSIM) [50]. PSNR measured valid signals compared to noises. SSIM evaluated quality in terms of structure, illuminance, and contrast. NMI evaluated how matched the two images were. FSIM was a feature-level frequency-aware measurement that considered phase congruency and gradient magnitude. ### Experimental Results and Comparison In Table. 
3, we compare our method to several baselines, including CycleGAN [24], Cycle-in-Cycle GAN (CinCGAN) [22], Pseudo-Supervision [23], and Domain Adaptation Unsupervised Super-Resolution (DA-Unsupervised SR) [28]. We also reimplement our previous work [47]. Similar to our approach, these baseline methods are all in the unpaired setting. The quantitative results demonstrate that our method outperforms them in most of the metrics and also surpasses our previous work. Fig. 6 presents the visual results of our method. It showcases how our approach improves the resolution of OCTA images while preserving fine capillary structures. In comparison, CinCGAN introduces noise and loses the original vessel features. Pseudo-Supervision and DA-Unsupervised SR exhibit lower contrast compared to our method. CycleGAN successfully recovers \(hf\) vasculature but it disrupts the vessel coherence of the foveal avascular zone (FAZ). On the other hand, our method refines \(hf\) information while introducing minimal unexpected noise and retaining most of the original information. Notably, CinCGAN visually resembles the \(HR\) ground truth images (Fig. 7) with high SSIM and FSIM. However, the cost is the signal-to-noise ratio, as shown in Table. 3. Therefore, our proposed approach achieves a better balance between structural information and the signal-to-noise ratio, resulting in overall higher fidelity. ### Frequency Decomposition in Generators As previously illustrated in Fig. 4, we decompose \(hf\) and \(lf\) in generators to exceptionally enhance fine-grained details and alleviate \(lf\)-bias. Specifically, \(hf\) is provided for neural networks via \(HFB\) operation. Thus we further investigate the effectiveness of frequency decomposition. First, as shown in Table. 3, we respectively remove \(hf\) and \(lf\) by replacing frequency components with tensors filled with all zeros and freezing the gradients. Results show that either removing \(hf\) or \(lf\) would degrade the performance with respect to all metrics. Comparably, removing \(hf\) would lead to more severe degradation, especially in terms of FSIM. This confirms the \(lf\)-bias of the neural networks and verifies the effectiveness of our method. Aside from evaluation in the spatial domain, we also examine the frequency domain by employing the Log Frequency Distance (LFD) metric [42], which is formed as: \[LFD=\log\left[\frac{1}{MN}\left(\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\left|F_{x}-F_{ y}\right|^{2}\right)+1\right] \tag{15}\] where \(F_{x}\) and \(F_{y}\) are spectrums of generated and real images via Eq. (1). \(m\) and \(n\) are the Cartesian coordinates of the \(F\). Eq. (15) measures the log mean square error over the whole spectrums. Table. 3 show different results as presented in Table. 3. Reconstruction from components \(w/o~{}lf\) leads Figure 6: Visual results of whole 6mm\(\times\)6mm OCTA image. The first and third rows are two exemplary reconstructed results. The second and fourth rows are zoomed in to local details. to higher spectral errors than results from \(w/o\)\(hf\). This may be attributed to the common view that most spectral powers are concentrated in \(lf\) bands [51]. Thus although the spectral errors for \(hf\) are less than that for \(lf\), these comparably slight disturbances for \(hf\) would lead to more severe quality degradation in spatial domain images. Results verify the necessity to exceptionally emphasize \(hf\) while preserving \(lf\). Moreover, we also evaluate the effectiveness of \(HFB\) in Eq. (4). As shown in Fig. 
8, results illustrate the relationship between qualitative performance and the level of boosting, which is \(\alpha\) in Eq. (4). We observe better performances in terms of SSIM and FSIM as more \(hf\) information is provided. This indicates that \(hf\) components play a crucial role in preserving structural information. The above results demonstrate the significance of our \(HFB\) operation in frequency decomposition. ### Frequency-Aware Loss Functions We introduce two frequency-aware loss functions, which are Frequency-aware Focal Consistency Loss (**FFCL**) and Frequency-Aware Adversarial Loss (**FAL**). They were designed to provide more precise supervision over the spectrum. To verify the effectiveness, we present the 3D visualization for errors in the spectrum for different results obtained using **FFCL** and **FAL**, as shown in Fig. 9. It can be observed that without **FFCL** and **FAL**, most of the spectral errors are introduced by the \(hf\) components. This indicates the importance Fig. 8: Plots of the \(HFB\) rates versus different qualitative metrics. Fig. 7: Visual results of whole 6mm\(\times\)6mm OCTA image. The first and third rows are two exemplary reconstructed results. The second and fourth rows are zoomed in to local details. Fig. 9: 3D visualization for spectral errors. The horizontal plane is the spectrum coordinates, while the vertical axis indicates the error intensity. Larger errors indicate less accurately reconstructed frequencies. of addressing the frequency information for accurate spectral reconstruction. By incorporating **FAL**, the discriminators help the model become more aware of the \(hf\) components and reduce the corresponding reconstruction errors. However, there are still inaccuracies in the middle- to high-frequency components. Otherwise, **FFCL** retains more precise middle-frequency information but lacks the capability to reproduce \(hf\) accurately. Consequently, by leveraging both **FFCL** and **FAL**, our super-resolution model is able to maintain precise results across the entire spectrum. This combined approach effectively addresses the challenges posed by different frequency components and enhances spectral precision. ### Ablation Study To evaluate the effects of different components, we continue to conduct ablation on **FAL** and **FFCL**, as shown in Table. III-C. By removing the **FAL** and the **FFCL**, we observed a decline or similar performance in PSNR, particularly in the fovea-central area. This indicates that the frequency-aware losses applied to the generators and discriminators play a crucial role in controlling the noise intensity. These findings are consistent with the results in Fig. 6. Furthermore, we replaced the proposed **FFCL** with focal frequency loss (**FFL**) [42]. Comparing the results, we found that **FFL** can preserve the structural information related to \(hf\) components, but our proposed **FFCL** achieves higher accuracy in preserving the structural details. These results empirically evidence the effectiveness of each component in our approach, showcasing contributions to reconstructing high-quality images in both the spatial and frequency domains. ## V Discussion This paper proposed a GAN-based approach for unpaired OCTA image super-resolution. We formulated restoration and degradation GANs and optimized the models end-to-end through consistency regularization. 
Meanwhile, since fine-grained features such as capillaries in microvasculature are critical biomarkers for ophthalmology studies, we leveraged frequency and spatial information to mitigate the \(lf\)-bias of neural networks and enhance \(hf\). We verified the general performance and effectiveness of our approach through quantitative results and visualization from various experiments. However, since the unpaired super-resolution setting lacks pixel-wise supervision information, the overall performance might not be comparable to fully supervised approaches [9, 16, 17]. As shown in Fig. 6, although our method could enhance the intensities of \(hf\) capillary details without introducing extra artifacts, it could not precisely infer missing semantic information and might lead to vessel incoherence. Another limitation of current approaches, including our proposed method, was the assumption that the features inside and outside the fovea-central 3mm\(\times\)3mm area are identically distributed because only this region could provide \(HR\) supervision information. However, this assumption is not solid and may impede the generalizability of the model. One possible solution is to formulate a self-supervised learning approach, such as image inpainting, to infer and fill in missing information in peripheral regions [52]. ## VI Conclusion This paper presented a novel approach, Frequency-aware Unpaired Super-Resolution for OCTA images (FAUSRA), for enhancing the resolution using the unpaired setting. We proposed a GAN-based framework to mimic restoration and degradation mappings, optimized end-to-end through consistency loss. Given the importance of fine-grained capillaries in microvasculature as biomarkers for OCTA, we employed frequency decomposition to emphasize \(hf\) by separating and fusing frequency components. We also introduced an **FAL** for the discriminators to better preserve the capillary structure, and an **FFCL** to preserve the spectrum consistency. We conducted experiments and analytical studies to validate the effectiveness of the method and to show the superior performance. To the best of our knowledge, as an extension of our previous work [47], our studies were the first to leverage frequency analysis and to utilize GAN in unpaired OCTA super-resolution. It addressed challenges associated with large-scale data collection and complex data preparation required in conventional supervised super-resolution methods in the perspective of frequency-domain analysis.
2309.05645
CitDet: A Benchmark Dataset for Citrus Fruit Detection
In this letter, we present a new dataset to advance the state of the art in detecting citrus fruit and accurately estimating yield on trees affected by the Huanglongbing (HLB) disease in orchard environments via imaging. Despite the fact that significant progress has been made in solving the fruit detection problem, the lack of publicly available datasets has complicated direct comparison of results. For instance, citrus detection has long been of interest to the agricultural research community, yet there is an absence of work, particularly involving public datasets of citrus affected by HLB. To address this issue, we enhance state-of-the-art object detection methods for use in typical orchard settings. Concretely, we provide high-resolution images of citrus trees located in an area known to be highly affected by HLB, along with high-quality bounding box annotations of citrus fruit. Fruit on both the trees and the ground are labeled to allow for identification of fruit location, which contributes to advancements in yield estimation and offers a potential measure of HLB impact via fruit drop. The dataset consists of over 32,000 bounding box annotations for fruit instances contained in 579 high-resolution images. In summary, our contributions are the following: (i) we introduce a novel dataset along with baseline performance benchmarks on multiple contemporary object detection algorithms, (ii) we show the ability to accurately capture fruit location on tree or on ground, and finally (iii) we present a correlation of our results with yield estimations.
Jordan A. James, Heather K. Manching, Matthew R. Mattia, Kim D. Bowman, Amanda M. Hulse-Kemp, William J. Beksi
2023-09-11T17:37:08Z
http://arxiv.org/abs/2309.05645v3
# CitDet: A Benchmark Dataset for Citrus Fruit Detection ###### Abstract In this letter, we present a new dataset to advance the state of the art in detecting citrus fruit and accurately estimate yield on trees affected by the Huanglongbing (HLB) disease in orchard environments via imaging. Despite the fact that significant progress has been made in solving the fruit detection problem, the lack of publicly available datasets has complicated direct comparison of results. For instance, citrus detection has long been of interest in the agricultural research community, yet there is an absence of work, particularly involving public datasets of citrus affected by HLB. To address this issue, we enhance state-of-the-art object detection methods for use in typical orchard settings. Concretely, we provide high-resolution images of citrus trees located in an area known to be highly affected by HLB, along with high-quality bounding box annotations of citrus fruit. Fruit on both the trees and the ground are labeled to allow for identification of fruit location, which contributes to advancements in yield estimation and potential measure of HLB impact via fruit drop. The dataset consists of over 32,000 bounding box annotations for fruit instances contained in 579 high-resolution images. In summary, our contributions are the following: (i) we introduce a novel dataset along with baseline performance benchmarks on multiple contemporary object detection algorithms, (ii) we show the ability to accurately capture fruit location on tree or on ground, and finally (ii) we present a correlation of our results with yield estimations. Agricultural Automation; Object Detection, Segmentation and Categorization; Robotics and Automation in Agriculture and Forestry ## Multimedia Material The dataset, software, and documentation for fruit detection and counting can be found at [1]. ## I Introduction Fruit detection and counting in orchards are crucial tasks for agricultural automation. They can be used to reduce routine farming and breeding activities as well as provide insightful estimates for harvest and forthcoming growing seasons. Moreover, accurate fruit detection enables the possibility of robotic harvesting, which has the potential to eliminate one of the most labor-intensive processes for growers. Many imaging and sensing technologies have been used for detecting fruit such as hyperspectral [2], laser scanning [3], thermal [4], and red-green-blue depth (RGBD) sensors [5, 6], yet the most common technology is the standard RGB camera. Although conventional RGB cameras are widely accessible, they present several challenges for in-orchard fruit detection such as variation in appearance, irregular lighting, and severe occlusion. Recent works have used deep learning to overcome these challenges [7]. However, due to the lack of standardized benchmark datasets for agricultural automation, it is difficult to compare existing methods with each other. To tackle this problem we establish a new benchmark dataset, **CitDet**, for citrus fruit detection and counting in orchard settings together with a comprehensive analysis of state-of-the-art object detection algorithms, Fig. 1. Computer vision techniques combined with deep learning are appealing in agricultural automation due to their powerful prediction capabilities and non-invasive nature. Nonetheless, such methods require huge amounts of data to perform with high accuracy. 
While there exists large datasets (e.g., COCO [8]) that have allowed for the development of new algorithms, many automation tasks require custom datasets to achieve meaningful results. For example, although COCO contains a class for oranges an orange detector trained solely from the dataset instances will perform poorly in an orchard setting. This is due to the complex background scenes in Fig. 1: The **CitDet** dataset contains precise bounding box object annotations for fruit on tree and fruit on ground (top row). It also has images from multiple different tree rows including a large variety of citrus (bottom row).
2301.01201
Uncertainty in Real-Time Semantic Segmentation on Embedded Systems
Applications of semantic segmentation models in areas such as autonomous vehicles and human-computer interaction require real-time predictive capabilities. The challenge of addressing real-time applications is amplified by the need to operate on resource-constrained hardware. Whilst the development of real-time methods for these platforms has increased, these models are unable to sufficiently reason about the uncertainty present when applied on embedded real-time systems. This paper addresses this by combining deep feature extraction from pre-trained models with Bayesian regression and moment propagation for uncertainty-aware predictions. We demonstrate how the proposed method can yield meaningful epistemic uncertainty on embedded hardware in real time whilst maintaining predictive performance.
Ethan Goan, Clinton Fookes
2022-12-20T07:32:12Z
http://arxiv.org/abs/2301.01201v4
# Uncertainty in Real-Time Semantic Segmentation on Embedded Systems ###### Abstract Application for semantic segmentation models in areas such as autonomous vehicles and human computer interaction require real-time predictive capabilities. The challenges of addressing real-time application is amplified by the need to operate on resource constrained hardware. Whilst development of real-time methods for these platforms has increased, these models are unable to sufficiently reason about uncertainty present. This paper addresses this by combining deep feature extraction from pre-trained models with Bayesian regression and moment propagation for uncertainty aware predictions. We demonstrate how the proposed method can yield meaningful uncertainty on embedded hardware in real-time whilst maintaining predictive performance. Code for the proposed model and experimentation can be found here. Ethan Goan, Clinton Fookes [email protected] semantic segmentation, uncertainty, real-time, Bayesian ## 1 Introduction Development and capabilities of semantic segmentation models has increased dramatically, with models based on deep learning applied to domains such as autonomous vehicles [1, 2, 3] and human computer interaction [4, 2]. Practical application in these areas requires systems to provide real-time performance, and in safety critical domains, meaningful uncertainty information is essential. The challenge of providing this uncertainty information becomes increasingly challenging for real-time operation, as obtaining this uncertainty information can considerably increase compute demands. A natural way to represent uncertainty is through a Bayesian framework, where uncertainty in the model is propagated to predictions. Complete Bayesian inference is intractable for deep learning models, meaning expensive Monte Carlo integration is often used for approximate inference [5, 6, 7]. These sampling based techniques considerably increase the time required for prediction, making them unsuitable for real-time operation. Deterministic methods have been proposed to alleviate the expensive Monte Carlo estimates [8], though requires approximating epistemic uncertainty through the use of an additional Gaussian-Discriminant Analysis model. Other research have proposed utilising uncertainty information during training [9, 10], though are unable to sufficiently reason about epistemic uncertainty present during predictions. Whilst addressing the issues of avoiding expensive sampling approaches, all these methods have yet to demonstrate real-time predictive performance for semantic segmentation on traditional computing platforms, let alone resource constrained embedded hardware. Given the primary practical application of many semantic segmentation models is performed on edge computing devices, it is crucial that we are able to deliver this uncertainty information on these platforms in real time. This work aims to address this issue by building from the work of the Gaussian Process treatments of deep kernel learning and Bayesian optimization [11, 12], where the tasks of feature extraction is computed deterministically with fixed model parameters, which are then used as inputs to a probabilistic classification module. Complexity for Gaussian processes is cubic in the number of training samples, thus making them infeasible for the intended applications. Instead we opt for parameterised probabilistic regression module combined with a moment propegating non-linearity for classification. 
This reduces the inference overhead for high-dimensional data whilst providing analytic results when using conjugate models, avoiding the need for expensive Monte Carlo approximations. We demonstrate how this approach can be applied to existing real-time semantic segmentation models on embedded hardware whilst providing meaningful Figure 1: Example of semantic segmentation results obtainable from proposed real-time semantic segmentation method and the forms of uncertainty it permits in real-time applications. uncertainty measures with minimal compute overhead. An example of the predictions and uncertainty measures for the presented models is shown in Figure 1. The contributions of this paper are as follows, * Propose a light-weight method for uncertainty in semantic segmentation by combining Bayesian methods within moment propagation, * Develop meaningful real-time aleatoric and epistemic uncertainty metrics and investigation into how they can inform end users and decision protocols * Demonstrate how these methods can be easily adapted to pre-trained models, enabling them to provide real time uncertainty quantification on embedded devices. ## 2 Real-Time Uncertainty Quantification We pose the uncertainty quantification within the Bayesian framework, where we propagate uncertainty in the model parameters to predictions. For this we require a posterior of our model latent variables \(\mathbf{\omega}\) such that, \[p(\mathbf{\omega}|\mathbf{X},\mathbf{Y})\propto p(\mathbf{\omega})p(\mathbf{X},\mathbf{ Y}|\mathbf{\omega}) \tag{1}\] where \(\mathbf{X}\) is the set of our training inputs and \(\mathbf{Y}\) is a matrix of our predictive labels. We can then form a predictive distribution for new test data \((\hat{\mathbf{x}},\hat{\mathbf{y}})\) as, \[p(\hat{y}|\hat{\mathbf{x}},\mathbf{X},\mathbf{Y})=\int p(\mathbf{\omega}|\mathbf{X },\mathbf{Y})p(\hat{\mathbf{x}},\hat{\mathbf{y}}|\mathbf{\omega})d\mathbf{\omega} \tag{2}\] For the case of a neural network representing our likelihood, the integrals required for Eqns. (1) (2) when treating all network parameters as random variables is intractable. For the case of (1), the posterior may be approximated by a tractable form \(q_{\theta}(\mathbf{\omega})\), though the predictive posterior in (2) for deep neural networks remains intractable. As previously stated, the commonly used sampling methods to approximate these integrals are computationally prohibitive for real-time models. The goal of this work is to find a simpler representation of model predictions for semantic segmentation that permits for analytic computation, thus avoiding the need for expensive sampling approximations. To achieve this, we propose simplifying the model to that of linear model with respect to latent variables, where the basis function applied to the input is a neural network \(f(\cdot;\theta)\) parameterised by all but the last layer of the neural network \(\theta\). These parameters are treated as fixed and known values. We then replace the final linear layer within a given network with a probabilistic layer parameterised by our latent variables \(\mathbf{\omega}\) for which we will perform inference. This can be seen as separating the model into a feature extraction stage with fixed parameters \(\theta\), and the classification stage with parameters treated as random variables \(\mathbf{\omega}\). The latent parameters in \(\mathbf{\omega}\) will represent the weights and bias of a final convolutional layer applied to the model. 
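The architectural split can be sketched as follows (PyTorch, illustrative rather than the released code): the backbone with parameters \(\theta\) stays frozen and only the final convolution carries a weight mean and a per-weight variance; assuming an independent (factorised) Gaussian over those weights, the logit variance follows from convolving the squared features with the weight variances.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbabilisticSegHead(nn.Module):
    """Frozen feature extractor f(.;theta) followed by a probabilistic 1x1 conv."""

    def __init__(self, backbone, final_conv):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():          # theta is fixed and known
            p.requires_grad_(False)
        self.w_mean = nn.Parameter(final_conv.weight.detach().clone())   # pretrained weights as the mean
        self.w_var = nn.Parameter(torch.full_like(self.w_mean, 1e-4))    # filled in by approximate inference
        self.bias = nn.Parameter(final_conv.bias.detach().clone())

    def forward(self, x):
        phi = self.backbone(x)                         # features f(x; theta)
        mu = F.conv2d(phi, self.w_mean, self.bias)     # mean logits
        # For independent weights, Var[w^T phi] = sum_i phi_i^2 * var(w_i).
        var = F.conv2d(phi ** 2, self.w_var)
        return mu, var
```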
Figure 2 illustrates the proposed model, and can be summarised as, \[\Phi(\mathbf{x}) =f(\mathbf{x};\theta) \tag{3}\] \[\alpha =\Phi(\mathbf{x})\mathbf{\omega}\] (4) \[\mathbf{t} =\text{Softmax}(\alpha), \tag{5}\] where \(\Phi(\mathbf{x})\) represents the design matrix for input \(\mathbf{x}\) generated from a neural network with fixed parameters \(\theta\), \(\alpha\) and \(\mathbf{t}\) represents the corresponding logits and final predictive categorical probability respectively, and \(\mathbf{\omega}\sim p(\mathbf{\omega}|\mathbf{X},\mathbf{Y},\theta)\) is the latent variables we wish to perform inference for. We represent the multiplication between the design matrix and the probabilistic parameters as an inner product, but note that this inner product can be computed with an equivalent convolution operation. Since inference is only required for the final latent variables, we are able to leverage pretrained networks \(f(\mathbf{x};\theta)\) for the generation of design matricies. For the work presented within, we frame inference in the logit space as opposed to the final categorical distribution. This allows us to frame the inference problem as a simple linear regression model, such that our posterior of latent variables \(\mathbf{\omega}\) with \(N\) data points is, \[p(\mathbf{\omega}|\mathbf{X},\mathbf{Y},\theta)\propto\Big{[}\prod_{i=0}^{N-1} \mathcal{N}\Big{(}\mathbf{\omega}|f(\mathbf{x}_{i};\theta)\mathbf{\omega},\sigma^{2} \Big{)}\Big{]}p(\mathbf{\omega}). \tag{6}\] With a conjugate Gaussian prior placed on latent variables \(\mathbf{\omega}\), our posterior will also be Gaussian [13], \[p(\mathbf{\omega}|\mathbf{X},\mathbf{Y},\theta)=\mathcal{N}(\mu_{\pi},\Sigma_{\pi }). \tag{7}\] The advantage of representing our model in this way is that it allows us to represent predictive probabilities over the Figure 2: Summary of the proposed model. The pink box represent the deterministic feature extractor \(f(\mathbf{x};\theta)\) that can be any pretrained semantic segmentation network. The blue convolutional layer represents our probabilistic layer, which allows for analytic predictive inference, and is used to generate a mean and variance representation of the predictive logits \(\alpha\). These are then passed through the ADFSoftmax layer to represent final predictive categories \(\mathbf{t}\) and epistemic uncertainty information. logits for a single test image \(\hat{\mathbf{X}}\) analytically as, \[p(\hat{\alpha}|\hat{\mathbf{x}},\mathbf{X},\mathbf{Y},\theta) =\int\mathcal{N}\Big{(}\mathbf{\omega}|\Phi(\hat{\mathbf{x}})\mathbf{ \omega},\sigma\Big{)}p(\mathbf{\omega}|\mathbf{X},\mathbf{Y},\theta)d\mathbf{\omega} \tag{8}\] \[=\mathcal{N}(\hat{\alpha}|\Phi(\hat{\mathbf{x}})\mu_{\pi},\sigma_ {N}(\hat{\mathbf{x}}))\] (9) \[\sigma_{N}^{2}(\hat{\mathbf{x}}) =\sigma^{2}+\hat{\mathbf{x}}^{T}\Sigma_{\pi}\hat{\mathbf{x}} \tag{10}\] ### Inference of Latent Variables A challenge with this modelling scheme is that since we are performing inference in the logit space, we cannot directly define a likelihood based on the data, as the labels for our data is represented as final categorical probabilites and the softmax function applied to logits to obtain this predictive probability is not bijective. To address this, we instead propose to approximate samples from the conditional posterior of our latent variables \(\mathbf{\omega}\) using the diagonal SWAG method [14]. 
Given that with a conjugate prior, we know that the posterior of our latent variables \(\mathbf{\omega}\) will be a Gaussian, meaning it can be fully described using only the first two moments. To summarise these moments, we apply the diagonal SWAG method to approximately sample from our model parameters using SGD iterates to compute \(\mathbf{\omega}_{\text{SWA}}=\sum_{i=1}^{T}\mathbf{\omega}_{i}\). From this, we can compute an empirical variance estimate for our probabilistic parameters as \(\Sigma_{\text{diag}}=\text{diag}(\bar{\mathbf{\omega}}^{2}-\Sigma_{\text{SWG}}^{2})\). In this work, we differ from the original SWAG method, where instead of computing the empirical mean from the SWAG iterates, we use the pretrained weights for the last convolutional layer in our network \(\bar{\mathbf{\omega}}=\mathbf{\omega}_{\text{pretrained}}\). This reduces the need to record many parameter values during training, and encourages mean predictions from the probabilistic model to be similar to the point estimate network. With this empirical variance parameters, we can represent our approximate posterior as a factorised Gaussian distribution, \[p(\mathbf{\omega}|\mathbf{X},\mathbf{Y},\theta)\approx\mathcal{N}(\mathbf{\omega}_{i} |\bar{\mathbf{\omega}},\Sigma_{\text{diag}}). \tag{11}\] This allows us to perform approximate inference in the logit space by utilising the categorical labels available for our data sets. Furthermore, the use of a factorised approximation accelerates computation of predictive posterior, as the covariance matrix \(\Sigma_{\pi}\) in (10) becomes a diagonal matrix, such that, \[\sigma_{N}^{2}(\hat{\mathbf{x}})\approx\sigma^{2}+\hat{\mathbf{x}}^{T}\Sigma_ {\text{diag}}\hat{\mathbf{x}}. \tag{12}\] Whilst eliminating covariance within our posterior approximation, the use of a diagonal covariance matrix considerably reduces the number of model parameters, allowing for models using this our to be more easily adapted to low memory applications and accelerate prediction on resource constrained. ### Predictive Distributions and Measures of Uncertainty Whilst providing an analytic solution to the predictive probabilities of our logits, we ultimately wish to represent a predictive distribution for final categorical probabilities. An exact analytical solution when using the softmax activation is not known. We address this by building on the methods of moment propegation used in the works of Assumed Density Filtering, where instead of computing exact distributions, we approximate the moments of random variables after applying certain functions, and use these moments to approximate the transformed random variables as Gaussians. Similar to [15], we compute intermediate moments of the softmax using properties of the Log-Normal distribution. For a Gaussian random variable \(\gamma\sim\mathcal{N}(\mu_{\gamma},\sigma_{\gamma}^{2})\), \(\beta=\exp(\gamma)\sim\text{Lognormal}(\mu_{\beta},\sigma_{\beta}^{2})\), where the mean and variance of \(\beta\) is \(\mu_{\beta}=\exp\big{(}\mu_{\gamma}+\sigma_{\gamma}^{2}/2\big{)}\) and \(\sigma_{\beta}^{2}=\big{(}\exp\sigma_{\gamma}^{2}-1\big{)}\exp(2\mu_{\gamma}+ \sigma_{\gamma}^{2})\) respectively. We can use these two moments to approximate the transformed random variables as a Gaussian \(\beta\)\(\sim\)\(\mathcal{N}(\mu_{\beta},\sigma_{\beta}^{2})\). 
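The diagonal-SWAG estimate of the weight variances can be sketched as below (illustrative; the running first and second moments of the SGD iterates give the usual second-moment-minus-squared-mean estimate, while, as in the text, the pretrained weights are kept as the mean).

```python
import torch

class DiagonalSwag:
    """Collects SGD iterates of the final-layer weights and forms Sigma_diag."""

    def __init__(self, pretrained_weight):
        self.mean = pretrained_weight.detach().clone()   # pretrained weights used as the mean
        self.sum_w = torch.zeros_like(self.mean)
        self.sum_w2 = torch.zeros_like(self.mean)
        self.n = 0

    def collect(self, weight):
        # Called periodically during training, e.g. every 200 SGD iterations.
        self.sum_w += weight.detach()
        self.sum_w2 += weight.detach() ** 2
        self.n += 1

    def variance(self, eps=1e-8):
        # Diagonal covariance estimate: E[w^2] - E[w]^2, clamped for stability.
        m1 = self.sum_w / self.n
        m2 = self.sum_w2 / self.n
        return (m2 - m1 ** 2).clamp_min(eps)
```

The resulting variances can be loaded into the probabilistic head sketched earlier (e.g. `head.w_var.data = swag.variance()`), after which the per-pixel logit variance of Eq. (12) is the observation noise \(\sigma^{2}\) plus the convolution of the squared features with these per-weight variances.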
From this we can approximate the distribution of the output for each class as, \[t_{j}=\frac{\exp(\alpha_{j})}{\sum_{i=0}^{N-1}\exp(\alpha_{i})}=\frac{y_{j}}{ \sum_{i=0}^{N-1}y_{i}} \tag{13}\] where \(\alpha_{j}\) is a single logit from our probabilistic layer and \(y_{i}\sim\mathcal{N}(\mu_{y_{i},i},\sigma_{y_{i},i}^{2})\) using the properties of the Log-Normal distribution. To find the mean and variance for our output \(t_{i}\) using moment propagation, we first find the mean and variance of the denominator. Assuming independence in our outputs, the denominator distribution in Equation (13) can be represented as, \[\sum_{i=0}^{N-1}y_{i}\sim\mathcal{N}(\sum\mu_{y,i},\sum\sigma_{y,i}^{2})= \mathcal{N}(\mu_{d},\sigma_{d}^{2}). \tag{14}\] The application of moment propagation used in [15] does not aim to approximate the moments from the ratio distribution encountered within a probabilistic treatment of the softmax, and instead only aim to ensure the resulting mean can be used as parameter to a categorical distribution. We rectify this by following ratio distribution approximation of [16], where a Gaussian approximation is derived from a Taylor expansion. With this we can approximate the mean and variance of each softmax output indexed by \(j\) as, \[t_{j}\tilde{\sim}\mathcal{N}(\frac{\mu_{j}}{\mu_{d}},\frac{\mu_{j}}{\mu_{d}} \sqrt{\sigma_{j}^{2}+\sigma_{d}^{2}}), \tag{15}\] where \(\mu_{j}\) and \(\sigma_{j}\) are found from the Log-Normal properties. We label the use of this moment propegation as the Assumed Density Filtered Softmax (ADFSoftmax). Whilst having the ability to representing model uncertainty in our output, we also wish to form a suitable categorical distribution over outputs for classification using the conventional argmax operator. **Proposition 1**.: _With the probabilistic output in eq. (15), we can create a valid \(k-\)dimensional categorical distribution such that,_ \[C\sim\text{Cat}(k,\mathbb{E}[\mathbf{t}]). \tag{16}\] Proof.: For \(\text{Cat}(k,\mathbb{E}[\mathbf{t}])\) to be a valid categorical distribution, we require that the parameters generated by \(\mathbb{E}[\mathbf{t}]\) satisfy the condition that \(\mathbb{E}[\mathbf{t}]_{j}\leq 1\) for \(0\leq j<k\) and \(\sum_{i=0}^{k-1}\mathbb{E}[\mathbf{t}]_{i}=1\). We first observe that expectation for the \(j^{th}\) component is, \[\mathbb{E}[\mathbf{t}]_{j}=\frac{\mu_{j}}{\mu_{d}}=\frac{\exp\left(\mu_{j}+ \sigma_{j}^{2}/2\right)}{\sum_{i=0}^{k-1}\exp\left(\mu_{i}+\sigma_{i}^{2}/2 \right)}=\frac{\exp\left(\mu_{j}+\sigma_{j}^{2}/2\right)}{\mu_{d}}, \tag{17}\] where \(\mu_{d}\) is that of Equation (14). Similar to the traditional softmax function, the denominator is shared amoungst all components, and the exponential function ensures all components are strictly positive. Following the same arguments of the softmax, this implies that \(\exp\left(\mu_{j}+\sigma_{j}^{2}/2\right)\) for valid index \(j\), and that \(\sum_{i}^{k-1}\mathbb{E}[\mathbf{t}]_{i}=1\) as required. With this categorical representation, we can easily perform final classification similar to point estimate networks, whilst also delivering important uncertainty information in these predictions. In the next section, we discuss the uncertainty metrics available. ### Computing Predictive Uncertainty With our representation of predictive probabilities, we want to be able to reason about the different types of uncertainty present. 
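The ADFSoftmax of Eqs. (13)-(15) can be sketched as follows (PyTorch). The variance here uses the standard first-order ratio (Taylor) approximation with relative variances; the normalisation printed in Eq. (15) differs slightly, so this form is an editorial assumption. The returned mean also serves as the parameters of the categorical distribution in Eq. (16).

```python
import torch

def adf_softmax(mu, var, dim=1):
    """Moment-propagating softmax: Gaussian logits -> approx. Gaussian class scores."""
    mu = mu - mu.amax(dim=dim, keepdim=True)     # numerical stabilisation; the ratios below are shift-invariant
    # Log-Normal moments of y_j = exp(alpha_j) for alpha_j ~ N(mu_j, var_j).
    mu_y = torch.exp(mu + 0.5 * var)
    var_y = (torch.exp(var) - 1.0) * torch.exp(2.0 * mu + var)
    # Moments of the denominator sum_i y_i under independence, Eq. (14).
    mu_d = mu_y.sum(dim=dim, keepdim=True)
    var_d = var_y.sum(dim=dim, keepdim=True)
    # First-order ratio approximation to t_j = y_j / sum_i y_i, cf. Eq. (15).
    mean_t = mu_y / mu_d
    var_t = mean_t ** 2 * (var_y / mu_y ** 2 + var_d / mu_d ** 2)
    return mean_t, var_t
```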
It is common to refer to the epistemic and aleatoric uncertainty [17], where epistemic uncertainty is a representation of reducible uncertainty identified by the model, and aleatoric uncertainty is irreducible uncertainty caused by the data present. With the probabilistic model presented within, we are able to reason about both forms of uncertainty. We propose to capture epistemic uncertainty using the variance information available in our model predictions. An advantage of our approach is that we can measure this uncertainty in both the logit and prediction space. Given we are modelling these spaces as Gaussians, we can succinctly summarise this uncertainty using the entropy of these distributions. The entropy for a D-dimensional multivariate Gaussian is, \[H[x]=\frac{1}{2}\log|\Sigma|+\frac{D}{2}(1+\log 2\pi). \tag{18}\] We see from eq. (18) that the entropy depends only on the covariance information in the distribution. This is advantageous as it provides a succinct and natural form to summarise the uncertainty induced by our latent variables in our predictions. Furthermore, given that we are representing our distributions over our logits and final predictions as independent Gaussian variables, the trace operator can be reduced to a product of the variance components for each component in the relevant distribution. Entropy provides a succinct summary for our uncertainty over all categories of interest, though there are instances where we may wish to view uncertainty information pertaining to a single class of interest. For example, for an autonomous vehicle scenario we may be interested in observing the uncertainty for safety critical classes such as other vehicles or pedestrians. We can summarise the class conditional uncertainty utilising the corresponding variance in our predictive probability in eq. (15) for classes of interest. For aleatoric uncertainty, a common method used in point estimate models is to compute the entropy in the final categorical distribution obtained. We follow this approach in this work, were we use the categorical distribution represented in (16). ## 3 Experiments With our probabilistic model, inference scheme and uncertainty measures defined, we now demonstrate how they can all be combined to deliver uncertainty in real-time compute constrained devices. The NVIDIA Jetson AGX Xavier embedded platform is the used as the computing platform. Given we are targeting lightweight real-time methods, we build upon the BiSeNet model variants [18, 19] as our pre-trained backbone for the deep feature extractors \(f(\mathbf{x};\theta)\), and with pre-trained weights in the final layer serving as the mean for our probabilistic classification layer \(\bar{\mathbf{\omega}}\). Pre-trained backbones for these variants are available for the CityScapes[3], ADE20k [4] and CocoStuff [2] datasets, which we build upon for the following experimentations.1 Footnote 1: Code available at [https://github.com/ethangoan/eu-seg](https://github.com/ethangoan/eu-seg). For our inference procedure, we perform training on the existing models to obtain our diagonal SWAG samples with the with Ohem cross-entropy loss [20]. 
We perform a total \begin{table} \begin{tabular}{l l l l l} Model & Dataset & Input Size & mIOU & fps \\ \hline \hline \multirow{4}{*}{BiSeNetV1} & Cityscapes & 512x1024 & 74.5457 & 65.8129 \\ & CocoStuff & 512x1024 & 30.6171 & 72.415 \\ & ADE20k & 512x1024 & 35.197 & 62.48 \\ \hline \multirow{3}{*}{Bayes} & Cityscapes & 512x1024 & 74.0748 & 57.359 \\ & CocoStuff & 512x512 & 30.786 & 43.334 \\ & ADE20k & 512x512 & 35.1449 & 54.245 \\ \hline \multirow{3}{*}{BiSeNetV2} & Cityscapes & 512x1024 & 74.6672 & 49.5015 \\ & CocoStuff & 512x1024 & 26.7878 & 58.6345 \\ & ADE20k & 512x1024 & 32.3957 & 59.5024 \\ \hline \multirow{3}{*}{Bayes} & Cityscapes & 512x1024 & 74.3651 & 42.6472 \\ & CocoStuff & 512x512 & 28.5445 & 41.2032 \\ \cline{1-1} & ADE20k & 512x512 & 32.1453 & 41.8055 \\ \hline \end{tabular} \end{table} Table 1: Summary of predictive performance of real-time semantic segmentation methods on Jetson Xavier embedded hardware. of 10,000 SGD iterations, with a warmup of 1,000 iterations with a linear increase of the learning rate. After this warmup period, the learning rate remains constant. The parameters are observed every 200 iterations for the remainder of training for inference utilising diagonal SWAG. These recorded parameters are then used to estimate the empirical covariance matrix for the final probabilistic convolutional layer. Our prior over our latent variables is a Gaussian distribution \(\mathcal{N}(0,1^{-4})\), which is implemented using weight decay. After training is completed, the models are then compiled to the TensorRT format to be used on the Jetson board for calculation of inference speed metrics. During compilation, the models are converted to half-precision floating point values to accelerate computation. To measure predictive speed, we perform time the computation required for 1,000 forward passes. Our primary measures for evaluating model performance is the mean intersection over union (mIOU), and predictive frames-per-second (fps). We summarise the predictive performance for these probabilistic models in Table 1, and compare them against the point estimate networks. From the results in Table 1, we see comparable performance amongst the probabilistic models and their point estimate equivalents. Whilst the predictive speed does decrease across the Bayesian models, we see that predictive performance remains suitability for real time application. Furthermore, we note that the predictive performance of the BiSeNetv1 model frequently outperforms in terms of predictive accuracy and inference speed, despite the BiSeNetv2 models having considerably fewer parameters. We attribute this decrease in predictive performance on the Jetson hardware to the depth-wise convolutional operations used in the BiSeNetv2 model not being as thoroughly optimised within the TensorRT framework. We highlight this as an important engineering research topic for future work. With our predictive performance measured quantitatively, we now investigate the uncertainty measures qualitatively. In Section 2.3, we state how the proposed modelling method is capable of producing measures for aleatoric, epistemic and class conditional epistemic uncertainty. Figure 3 illustrates the predictive performance over the datasets examined within this work and the relevant uncertainty measures. We can see from these figures that the measured epistemic uncertainty is concentrated within the objects of interest, whilst the aleatoric uncertainty is larger around the edges of these objects. 
We further see from the class conditional uncertainty measures that when targeting individual classes, the uncertainty is targeted towards the edge of the objects for the class of interest. ## 4 Conclusion Within this research, we have targeted the need for meaningful uncertainty information for challenging semantic segmentation tasks on resource constrained hardware. We propose a combination of a deterministic feature extractor with a probabilistic regression module, that can be combined with a moment propagation softmax module to generate final predictive outputs and uncertainties. We evaluate these models for multiple datasets on embedded hardware, and demonstrate how the proposed probabilistic model can mitigate the expensive compute required for a complete probabilistic treatment Figure 3: Examples of predictions from the proposed model using the BiSeNetv1 backbone. The rows represent samples from the Cityscapes, ADE20k and CocoStuff datasets respectively. From left to right, the columns represent the input, segmentation, epistemic uncertainty, aleatoric uncertainty and class conditional epistemic uncertainty. For class conditional uncertainty, we show the uncertainty for the classes “car”, “tree” and “dog” across the datasets. of deep segmentation models to provide real-time performance. Qualitative evaluation of obtained uncertainty measures demonstrate how they can be computed and used for demanding and high risk real-time scenarios.
2303.18129
Neutrino cooled disk in post-merger system studied via numerical GR MHD simulation with a composition-dependent equation of state
The code HARM\_COOL, a conservative scheme for relativistic magnetohydrodynamics, is being developed in our group and works with a tabulated equation of state of dense matter. This EOS can be chosen and used during the dynamical simulation, instead of the simple ideal gas one. In this case, the inversion scheme between the conserved and primitive variables is not a trivial task. In principle, the code needs to solve numerically five coupled non-linear equations at every time-step. The 5-D recovery schemes were originally implemented in HARM and worked accurately for a simple polytropic EOS, which has an analytic form. Our current simulations support the composition-dependent EOS, formulated in terms of rest-mass density, temperature and electron fraction. In this proceeding, I discuss and compare several recovery schemes that have been included in our code. I also present and discuss their convergence tests. Finally, I show a set of preliminary results of a numerical simulation addressing the post-merger system formed after a binary neutron star (BNS) coalescence.
Agnieszka Janiuk
2023-03-31T15:19:50Z
http://arxiv.org/abs/2303.18129v2
# Neutrino cooled disk in post-merger system studied via numerical GR MHD simulation with a composition-dependent equation of state ###### Abstract The code HARM_COOL, a conservative scheme for relativistic magnetohydrodynamics, is being developed in our group and works with a tabulated equation of state of dense matter. This EOS can be chosen and used during the dynamical simulation, instead of the simple ideal gas one. In this case, the inversion scheme between the conserved and primitive variables is not a trivial task. In principle, the code needs to solve numerically five coupled non-linear equations at every time-step. The 5-D recovery schemes were originally implemented in HARM and worked accurately for a simple polytropic EOS, which has an analytic form. Our current simulations support the composition-dependent EOS, formulated in terms of rest-mass density, temperature and electron fraction. In this proceeding, I discuss and compare several recovery schemes that have been included in our code. I also present and discuss their convergence tests. Finally, I show a set of preliminary results of a numerical simulation addressing the post-merger system formed after a binary neutron star (BNS) coalescence. ## 1 Introduction The first detection of a gravitational wave signal accompanied by an electromagnetic counterpart was made on August 17, 2017, by the LIGO-Virgo team [1]. The signal originated from the coalescence of two neutron stars with masses in the range of 1.17-1.60 \(M_{\odot}\) and a total system mass of 2.74 \(M_{\odot}\). It was followed by a short, weak gamma ray burst, observed 1.7 seconds after the GW signal. Hence, the theoretical prediction that this class of short gamma ray bursts originates from compact binary mergers has been proven by the detection of the source GW 170817. It had also been suggested before that the radioactivities from the dynamical ejecta, released after the first neutron star has been disrupted, can power an electromagnetic signal [7]. Subsequent accretion onto a newly formed black hole can provide bluer emission, if it is not absorbed by the preceding ejecta [12, 6]. In this case, a day-timescale emission comes at optical wavelengths from lanthanide-free components of the ejecta, and is followed by a week-long emission with a spectral peak in the near-infrared (NIR). This two-component model fits well with observations of the kilonova detected in coincidence with the source GRB-GW 170817. In our studies, we investigate the very last stage of the system, namely the post-merger black hole accretion disk. Due to its high density, the accretion disk in the post-merger system is opaque to photons. Neutrinos are produced via \(\beta\)-reactions, electron-positron pair annihilation, and plasmon decay, and provide a cooling mechanism. The plasma is composed of free nucleons, pairs, and Helium. In the outer regions, heavier nuclei can also be synthesized, under conditions of nuclear statistical equilibrium (NSE, see [4] for details). The kilonova signal is produced in the equatorial wind outflows, launched from the disk via magnetic instabilities. These ejecta are dominated by the thermal energy of the dense plasma, and are accelerated to mildly relativistic velocities (of about 0.2-0.3 \(c\), [5]). The matter is highly neutronized there, so due to the r-process nucleosynthesis, copious amounts of unstable heavy isotopes are formed in these winds, and power the Infrared/Optical emission through their radioactive decay.
## 2 Numerical modeling Our study of the winds launched from the accretion disk is done by evolving the general relativistic magnetohydrodynamic (GRMHD) equations in time. We use the HARM (High Accuracy Relativistic Magnetohydrodynamics) code [3], which is a conservative, shock-capturing scheme. The numerical scheme advances the conserved quantities from one time step to the next by solving a set of non-linear hyperbolic equations for continuity, energy-momentum conservation and magnetic induction. In the GRMHD scheme they read: \[\nabla_{\mu}(\rho u^{\mu})=0,\quad\nabla_{\mu}(T^{\mu\nu})=0,\quad\nabla_{\mu}(u^{\nu}b^{\mu}-u^{\mu}b^{\nu})=0 \tag{1}\] where \[T^{\mu\nu}=T^{\mu\nu}_{gas}+T^{\mu\nu}_{EM} \tag{2}\] is contributed by \[\begin{split} T^{\mu\nu}_{gas}=\rho hu^{\mu}u^{\nu}+pg^{\mu\nu}=(\rho+u+p)u^{\mu}u^{\nu}+pg^{\mu\nu},\\ T^{\mu\nu}_{EM}=b^{2}u^{\mu}u^{\nu}+\frac{1}{2}b^{2}g^{\mu\nu}-b^{\mu}b^{\nu},\quad b^{\mu}=u_{\nu}\,{}^{*}F^{\mu\nu}\end{split} \tag{3}\] In the stress-energy tensor composed of the gas and electromagnetic terms, \(u^{\mu}\) is the four-velocity of the gas, \(u\) is the internal energy, \(\rho\) is the density, \(p\) is the pressure, and \(b^{\mu}\) is the magnetic four-vector. \(F\) is the Faraday tensor, and in the force-free approximation we have \(e^{\mu}=u_{\nu}F^{\mu\nu}=0\). The unit convention is adopted such that \(G=c=M=1\). Our initial conditions mimic the configuration after the transient (hypermassive neutron star, HMNS) object has collapsed to a black hole, and a pressure equilibrium torus [2] formed. In this solution, the angular momentum along the radius of the disk is constant. We parameterize our models with the black hole Kerr parameter, \(a\), and the inner radius and the radius of the pressure maximum, \(r_{\rm in}\) and \(r_{\rm max}\), respectively. The current simulations are run in an axisymmetric setup, i.e. 2D, with a resolution of 256x256 points in the \(r\) and \(\theta\) directions. The numerical code works in Kerr-Schild coordinates, which enables the matter to accrete smoothly through the horizon. The torus is embedded in an initially poloidal magnetic field, prescribed with the vector potential of \((0,0,A_{\varphi})\), with \(A_{\varphi}=(\frac{\rho}{\rho_{\rm max}}-\rho_{0})\), where we use an offset of \(\rho_{0}=0.2\). We parameterize the field strength with the plasma \(\beta\), defined as the ratio of the gas to magnetic pressure, \(\beta=p_{gas}/p_{mag}\). Here \(p_{gas}=(\gamma-1)u_{max}\) and \(p_{mag}=b_{max}^{2}/2\), where \(u_{max}\) is the internal energy of the gas at the radius of maximum pressure. ## 3 Chemical composition and structure of the disk One of the significant challenges for numerical simulations is the neutrino treatment. Neutrinos carry away energy and lepton number, so they alter the electron fraction and composition of the ejected material. Dynamical simulations must consider a realistic equation of state (EOS) and the impact of neutrinos in the optically thin and thick regions. Lepton number conservation is expressed as follows: \[\nabla_{\mu}(n_{e}u^{\mu})={\cal R}/m_{b};\ \ m_{b}=\rho/n_{b};\ \ Y_{e}=\frac{n_{e}}{n_{b}}=\frac{m_{b}n_{e}}{\rho} \tag{4}\] where \({\cal R}\) is the net neutrino number emission rate per unit volume in the fluid frame, and \(Y_{e}\) is the electron fraction. Because the baryons dominate the rest-mass density, the baryon number conservation equation turns into the regular continuity equation.
In the energy-momentum conservation equation, we must introduce an additional source term due to heating and cooling by neutrinos: \[\nabla_{\mu}T_{\nu}^{\mu}={\cal Q}u_{\nu} \tag{5}\] where \({\cal Q}\) is the energy change per unit volume due to neutrino emission. ### Conserved and primitive variables The conservation equations solved by the GR MHD code can be expressed in a flux-conservative form [13], and the explicit choice of the conserved variables (which are analytic functions of the primitive ones) is to some extent arbitrary. The recovery schemes for the primitive variables need numerical root finding, and can be broadly divided into two categories. In a 2D scheme, there are two independent variables, e.g., \(v^{2}\) and the enthalpy variable \(W\). Temperature is obtained from the EOS tables by solving \(h=h(\rho,T,Y_{e})\), and the Newton-Raphson iteration is performed for \(W\) and \(v^{2}\). Alternatively, the system of GR MHD equations is reduced to 3 equations with three unknowns. The chosen independent variables can be: \(\gamma\), \(T\), and \(W=h\rho\gamma^{2}\). Pressure is interpolated from the EOS tables, as \(P(\rho,T,Y_{e})\). ### Recovery transformation In the 2D recovery scheme of [8] the dimensionality of the recovery problem is reduced by making use of certain scalar quantities that can be computed from the conservatives. To avoid numerical pathologies of the \(1D_{W}\) scheme near the roots, this 2D scheme solves simultaneously the set of two equations: \[f_{1}:\ \ \tilde{Q}^{2}=v^{2}({\cal B}^{2}+W)^{2}-\frac{(Q_{\mu}{\cal B}^{\mu})^{2}({\cal B}^{2}+2W)}{W^{2}}\] \[f_{2}:\ \ Q_{\mu}n^{\mu}=-\frac{{\cal B}^{2}}{2}(1+v^{2})+\frac{(Q_{\mu}{\cal B}^{\mu})^{2}}{2W^{2}}-W+p(u,\rho)\] The independent variables used in this scheme are defined as: \(Q_{\mu}=-n_{\nu}T^{\nu}_{\ \mu}=\alpha T^{t}_{\ \mu}\), where \(\tilde{Q}^{\nu}=j^{\nu}_{\ \mu}Q^{\mu}\) is the energy-momentum density in the normal observer frame, \(D=-\rho n_{\mu}u^{\mu}=\alpha\rho u^{t}=\gamma\rho\) is the mass density in the observer's frame, and \({\cal B}^{i}=\alpha B^{i}=\alpha\,{}^{*}F^{it}\) is the magnetic 3-vector. Here, \(w=\rho+u+p\) is the enthalpy density and \(W=w\gamma^{2}\). To solve this set of equations by means of the Newton-Raphson method, the Jacobian matrix with \(\frac{\partial f_{1}}{\partial(v^{2})}\), \(\frac{\partial f_{2}}{\partial(v^{2})}\), \(\frac{\partial f_{1}}{\partial W}\), and \(\frac{\partial f_{2}}{\partial W}\) is needed. Note that this scheme does not require an analytic EOS, and derivatives of pressure with respect to \(\rho\), \(v^{2}\), and \(u\) may be computed from the tables using a finite-difference method. In the 3D recovery scheme of [13], the system is extended to solve 3 equations, for \(W\), \(z\), and \(T\), by adding a constraint on the internal energy given by the EOS tables: \[f_{1}:\ \ [\tau+D-z-B^{2}+\frac{B^{i}S_{i}}{2z^{2}}+\rho]W^{2}-\frac{B^{2}}{2}=0\] \[f_{2}:\ \ [(z+B^{2})^{2}-S^{2}-\frac{2z+B^{2}}{z^{2}}(B^{i}S_{i})^{2}]W^{2}-(z+B^{2})^{2}=0\] \[f_{3}:\ \ \epsilon-\epsilon(\rho,T,Y_{e})=0\] Here the temperature is employed directly as an unknown through \(\epsilon(W,z,T)=h-1-\frac{P}{\rho}=\frac{z-DW-\rho W^{2}}{DW}\) and does not require inversion of the EOS. Notice the different notation here: \(S_{i}\) is the energy-momentum density, \(W\) is the Lorentz factor, \(z\) is the enthalpy, and \(\tau=-(n_{\mu}n_{\nu}T^{\mu\nu}+D)\). In the method proposed in [11], the scheme solves a 1D equation for the rescaled variable \(\chi=\frac{\rho h\gamma^{2}}{\rho\gamma}\).
Other quantities are also rescaled accordingly, to give the Lorentz factor, and we give the brackets for \(\chi\): \(2-2\frac{Q_{\mu}n^{\mu}+D}{D}-\frac{\mathcal{B}^{2}}{D}<\chi<1-\frac{Q_{\mu}n^{\mu}+D}{D}-\frac{\mathcal{B}^{2}}{D}\). The equation \[f(\chi)=\chi-\tilde{\gamma}(1+\tilde{\epsilon}+\frac{\tilde{P}}{\tilde{\rho}})=0\] is solved, with \(\tilde{P}=P(\tilde{\rho},\tilde{\epsilon},Y_{e})\) found in the tables. Figure 1: Convergence test results for the 3D method (two versions: in (1) we compute the specific internal energy from the state vector x and the conservatives, as in Eq. (25) in Cerda-Duran et al. (2008), and solve \(f_{3}\); in (2) we compute the pressure from the state vector x and the conservatives and solve \(f_{3}\)), shown in the top panels, the 2D method (bottom panel, left) and the Palenzuela method (bottom panel, right). The last method has proven to be the most robust over our wide parameter space. It works with smaller errors, although its performance speed is also lower. We performed the convergence tests to explore which recovery transformation works best for the wide parameter space. The parameters used for testing the routines were: \(Y_{e}=0.1\), \(\gamma=2\), \(p_{gas}/p_{mag}=10^{5}\). We derived the conserved variables in the Kerr metric, and then computed primitives perturbed by a factor of 1.05. The variables recovered through each scheme were compared to the unperturbed ones, to calculate the total error summed over all primitives, \(Err=\Sigma_{k=0,NPR}(P_{k}-\bar{P_{k}})^{2}\). Figure 1 shows the results for all 3 methods probed. ### Neutrino transport We employ the neutrino leakage scheme that computes a gray optical depth estimate along radial rays for electron neutrinos, electron antineutrinos, and heavy-lepton neutrinos (nux), and then computes local energy and lepton number loss terms. The source code of the scheme is publicly available and was downloaded from [https://stellarcollapse.org](https://stellarcollapse.org). Details are described e.g. in [10]. The initial tests were done within an optically thin regime for neutrinos. \[\tau(r,\theta,\phi)=\int_{r}^{R}\sqrt{\gamma_{rr}}\bar{\kappa}_{\nu_{i}}dr^{\prime}<2/3 \tag{6}\] ## 4 Results The structure of the flow in the evolved state is shown in Fig. 2. We present polar slices, taken at the evolved time of the simulation (\(t=44\) ms, which is equivalent to 3000 geometric time units, \(t_{g}\), for a black hole mass of 3 \(M_{\odot}\)). Figure 2: Maps of density and magnetic field contours (left), electron fraction (middle) and neutrino emissivity (right panel). The model assumes optically thin plasma for neutrinos. The inversion scheme used the Palenzuela method. The model parameters are disk mass = 0.13 \(M_{\odot}\), gas-to-magnetic pressure ratio \(\beta=50\) and black hole spin \(a=0.6\). The color maps are in logarithmic scale and are taken at \(t=0.044\) s. Parameters of this model are black hole spin \(a=0.6\), initial torus radius \(r_{\rm in}=4r_{g}\) and pressure maximum radius of \(r_{\rm max}=11.8r_{g}\). We also assume an initial electron fraction in the torus of \(Y_{e,disk}=0.1\), and in the atmosphere it is \(Y_{e,atm}=0.45\). The specific entropy per baryon in the torus is assumed to be \(S=10k_{B}\). This, with the enthalpy value resulting from the FM torus solution, gives the physical density scaling: units in the code are normalized such that \(\rho_{max}=1\). The torus mass in cgs units is about \(0.03M_{\odot}\). As shown in the Figure, the magnetic turbulence developed in the torus, and helped launch winds from its surface.
The neutronized material is redistributed, and the electron fraction in the winds becomes larger than in the torus, reaching about \(Y_{e}=0.3\). The neutrino emissivity in the winds is a couple of orders of magnitude lower than in the torus. As can be seen in the 2D map, the accreting torus has a very high neutrino luminosity, but here neutrinos are partially trapped. A moderately high neutrino luminosity can be observed from the wind, where the neutrino-antineutrino pairs can contribute to the heating of the plasma. Neutronisation of the plasma in the torus is still significant, and electron fraction values are below \(Y_{e}=0.1-0.3\) in the densest regions (cf. [9]). The neutrinos, which are created in weak interactions, electron-positron pair annihilation, nucleon bremsstrahlung and plasmon decay, provide an efficient cooling mechanism. In the currently developed new numerical scheme, we substituted a simplified description of the neutrino cooling rate given by the two-stream approximation previously used by [5] with a more advanced neutrino leakage scheme. The neutrino emissivity distribution, taken at an evolved state of the torus, is presented in Fig. 2 for the optically thin case. For the optically thick simulation (which assumed different initial parameters for the specific entropy), a quantitative difference can be noticed. The neutrino and antineutrino luminosities, as functions of time, are shown in Figure 3. The velocity of the wind outflows, \(v\sim(0.11-0.23)\ c\), and the mass loss via unbound outflows, of 2-17% of the initial disk mass, were determined previously by [5]. We showed that the details are sensitive to the engine parameters, BH spin and magnetisation of the disk; namely, more magnetized disks produce faster outflows. We also found that the accretion disk ejecta produce heavy elements up to mass number \(A\sim 200\), including platinum and gold isotopes (see details ## 5 Conclusions Numerical GR MHD simulations have been widely used to model post-merger systems and engines of short gamma ray bursts. We implemented a new scheme in our HARM_COOL code for the case study of the post-merger system and kilonova source. The calculations are complex and need to cover the physics of dense nuclear matter. Their performance is sensitive to the chosen recovery schemes. We tested several of them and chose the most robust one for a large parameter space in densities, temperatures, and electron fraction values, to work with the 3-parameter EOS implemented in the hydrodynamical simulation. Proper source terms have been added to the system of equations for the neutrino losses, coupled with composition changes. We also use an advanced neutrino leakage scheme and calculate the neutrino and antineutrino emissivities as functions of time. The unbound outflows, i.e. winds, are powered by both neutrinos and magnetically driven acceleration. Therefore, winds may be denser and more powerful if neutrino heating supports the magnetically driven wind. Figure 3: The electron neutrino and antineutrino (Lnu1 and Lnu2, respectively; Lnu3 denotes the luminosity in other flavors) luminosity as a function of time, for our two models, optically thin and optically thick, calculated with the neutrino leakage scheme. ## Acknowledgment This work was supported by grant 2019/35/B/ST9/04000 from the Polish National Science Center. We used computational resources of the ICM of the Warsaw University, and the PL-Grid, under grant _plggrb5_.
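As a schematic illustration of the 2D \((v^{2},W)\) recovery iteration discussed in the Recovery transformation subsection, the following minimal sketch shows how the residuals \(f_{1}\) and \(f_{2}\) can be driven to zero with a Newton-Raphson loop and a finite-difference Jacobian. It is illustrative only: the EOS is closed here with an ideal-gas law for simplicity (HARM_COOL instead interpolates the pressure from the \((\rho,T,Y_{e})\) tables), and all function and variable names are ours rather than taken from the actual code. A short driver mimicking the convergence test (perturbing the primitives by a factor of 1.05 and summing the squared recovery errors) is appended at the end.

```python
import numpy as np

def recover_2d(Qt_sq, QdotB, Qdotn, Bsq, D, gamma_ad=4.0/3.0,
               v2_guess=0.5, W_guess=1.0, tol=1e-10, max_iter=50):
    """Newton-Raphson solve of f1(v^2, W) = 0, f2(v^2, W) = 0 (2D scheme).
    Ideal-gas closure p = (Gamma - 1) u is used purely for illustration."""
    def pressure(v2, W):
        gam2 = 1.0 / (1.0 - v2)                   # Lorentz factor squared
        rho = D / np.sqrt(gam2)
        u = (W / gam2 - rho) / gamma_ad           # from W = (rho + u + p) gamma^2
        return (gamma_ad - 1.0) * u
    def residuals(x):
        v2, W = x
        p = pressure(v2, W)
        f1 = v2 * (Bsq + W)**2 - QdotB**2 * (Bsq + 2.0 * W) / W**2 - Qt_sq
        f2 = -0.5 * Bsq * (1.0 + v2) + QdotB**2 / (2.0 * W**2) - W + p - Qdotn
        return np.array([f1, f2])
    x = np.array([v2_guess, W_guess])
    for _ in range(max_iter):
        f = residuals(x)
        J = np.empty((2, 2))                      # finite-difference Jacobian,
        for j in range(2):                        # as one would do with EOS tables
            dx = np.zeros(2)
            dx[j] = 1e-8 * max(abs(x[j]), 1.0)
            J[:, j] = (residuals(x + dx) - f) / dx[j]
        x = x + np.linalg.solve(J, -f)
        x[0] = np.clip(x[0], 0.0, 1.0 - 1e-12)    # keep v^2 physical
        if np.linalg.norm(residuals(x)) < tol:
            break
    return x                                      # recovered (v^2, W)

def convergence_test(prims_to_cons, recover, prims_ref, perturb=1.05):
    """Sketch of the test described in the text: build conserved variables from
    reference primitives, start the solver from primitives multiplied by 1.05,
    and accumulate Err = sum_k (P_k - Pbar_k)^2 over the recovered primitives.
    Both callables are placeholders for the scheme-dependent transformations."""
    cons = prims_to_cons(prims_ref)
    prims_rec = recover(cons, perturb * prims_ref)
    return np.sum((prims_rec - prims_ref)**2)
```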
2309.14748
Discrepancy estimates related to the fractional parts of $b^n/n$
We prove a discrepancy estimate related to the sequence of fractional parts of $b^n/n$. This improves an earlier result of Cilleruelo et al.
Martin Lind
2023-09-26T08:16:37Z
http://arxiv.org/abs/2309.14748v2
# Discrepancy estimates related to the fractional parts of \(b^{n}/n\). ###### Abstract We prove a discrepancy estimate related to the sequence of fractional parts of \(b^{n}/n\). This improves an earlier result of Cilleruelo et al. **Keywords:** fractional parts, discrepancy, uniform distribution **MSC Classification:** 11K38, 11B05 ## 1 Introduction Let \(b\in\mathbb{N},b\geq 2\). In 2013, Cilleruelo et al. [1] proved that \[\left\{\frac{b^{n}\pmod{n}}{n}:n\in\mathbb{N}\right\} \tag{1}\] is dense in \([0,1]\). (See also [3] for a number of interesting related results.) For \(A\subset\mathbb{N}\), we set \[\mathcal{S}_{b}(A)=\left\{\frac{b^{n}\pmod{n}}{n}:n\in A\right\}.\] Note in particular that the set (1) is simply \(\mathcal{S}_{b}(\mathbb{N})\). Let \(\mathbb{P}=\{2,3,\ldots\}\) denote the prime numbers and set \[\mathcal{A}=\left\{pq:p,q\in\mathbb{P},p>b^{q}\right\}.\] The main result of [1] is an estimate of the discrepancy of \(\mathcal{S}_{b}(\mathcal{A})\). Denote \(\mathcal{A}_{N}=\mathcal{A}\cap[1,N]\), then \[D(\mathcal{S}_{b}(\mathcal{A}_{N}))=\mathcal{O}\left(\frac{\log\log\log\log(N)} {\log\log\log(N)}\right). \tag{2}\] where \(D(\mathcal{S}_{b}(\mathcal{A}_{N}))\) denotes the _discrepancy_ of \(\mathcal{S}_{b}(\mathcal{A}_{N})\) (see Section 2 below). In particular, it follows from (2) that \(\mathcal{S}_{b}(\mathcal{A})\) is uniformly distributed modulo \(1\) and this implies the density of \(\mathcal{S}_{b}(\mathcal{A})\) in \([0,1]\). Unaware of the work [1], the author studied properties of \(\mathcal{S}_{b}(\mathcal{A})\) from a different point of view in [5]. When informed of the paper [1], we found that some observations from [5] could be used to improve on (2). The main result of this note is the following. **Theorem 1**.: _There holds_ \[D(\mathcal{S}_{b}(\mathcal{A}_{N}))=\mathcal{O}\left(\frac{1}{\log\log\log(N) }\right). \tag{3}\] The improvement (3) is not due to any sharper number-theoretic inequalities. In fact, we use the same estimates as in [1]. Rather, we employ a different strategy to estimate the discrepancy. Instead of using the Erdos-Turan inequality and exponential sums as in [1], we use a sort of triangle inequality (Lemma 3) to decompose \(\mathcal{S}_{b}(\mathcal{A}_{N})\) into well-structured subsequences. By combining a number of basic facts about discrepancy with some observations from [5] (Proposition 5 in particular) and a variant of the Siegel-Walfisz theorem, we obtain in Lemma 7 an estimate of the discrepancy of each subsequence and these estimates allow us to establish Theorem 1. In connection with this, we mention our previous work [4] where a similar strategy based on Lemma 3 was used to find optimal discrepancy decay rates. ## 2 Auxiliary results ### Discrepancy For a finite set \(A\), we denote by \(|A|\) the cardinality of \(A\). Let \(S=\{x_{1},x_{2},\ldots,x_{M}\}\subset[0,1]\) be a finite sequence. The _extreme discrepancy_ of \(S\) is defined by \[D(S)=\sup_{J\subseteq[0,1]}\left|\frac{A_{S}(J)}{M}-\lambda(J)\right|,\] where \(A_{S}(J)=|\{n:x_{n}\in J\}|\) and \(\lambda\) is the linear Lebesgue measure. Similarly, the _star discrepancy_ of \(S\) is defined by \[D^{*}(S)=\sup_{r>0}\left|\frac{A_{S}([0,r])}{M}-r\right|.\] It is well-known (see e.g. 
[2], Chapter 3) that \[D^{*}(S)\leq D(S)\leq 2D^{*}(S),\] hence it is sufficient to only consider \(D^{*}(S).\) **Lemma 2**.: _Let \(R\in\mathbb{N}\) and \(S=\{x_{1},x_{2},\ldots,x_{M}\}\) be a finite sequence such that the elements of \(S\) only attain values in the set \(\{k/R:k=0,1,\ldots,R-1\}.\) Assume that_ \[|\{n:x_{n}=k/R\}|=\alpha_{k}M+\epsilon_{k}\] _where \(\alpha_{k}\geq 0\ \ (k=0,1,\ldots,R-1)\), \(\alpha_{1}=\alpha_{2}=\ldots=\alpha_{R-1}\) and_ \[\sum_{k=0}^{R-1}\alpha_{k}=1.\] _Then there exists an absolute constant \(C>0\) such that_ \[MD^{*}(S)\leq\max\left\{\alpha_{0}M,\frac{M}{R}\right\}+\sum_{k=0}^{R-1}| \epsilon_{k}|.\] Proof.: Take any \(J_{r}=[0,r]\) and let \(j=\lfloor rR\rfloor,\) so that \(M\mu(J_{r})=jM/R+M\delta\) for some \(\delta\in[0,1/R].\) We have \[A_{S}(J_{r})=M\sum_{k=0}^{j}\alpha_{k}+\sum_{k=0}^{j}\epsilon_{k}\] so \[|A_{S}(J_{r})-M\mu(J_{r})|\leq\left|M\alpha_{0}+M\sum_{k=1}^{j}\left(\alpha_{ k}-\frac{1}{R}\right)-M\delta\right|+\sum_{k=1}^{j}|\epsilon_{k}|\] The first term of the expression at the right-hand side above is either increasing or decreasing in \(\delta,\) hence we have \[MD^{*}(S)\leq\max_{j=0,\ldots,R-1}\left|M\alpha_{0}+M\sum_{k=1}^{j}\left( \alpha_{k}-\frac{1}{R}\right)\right|+\sum_{j=0}^{R-1}|\epsilon_{k}|\] where \(j=0\) means that the sum is \(0.\) The maximum of the first term is attained either at \(k=0\) or \(k=R-1,\) since the terms of the sum have the same sign (due to the fact that \(\alpha_{1}=\ldots=\alpha_{R-1}.\) Further, \(\alpha_{0}+(R-1)\alpha_{1}=1\) so \[\max_{j=0,\ldots,R-1}\left|M\alpha_{0}+M\sum_{k=1}^{j}\left( \alpha_{k}-\frac{1}{R}\right)\right| = \max\left\{M\alpha_{0},\left|M\alpha_{0}+M\sum_{k=1}^{R-1} \left(\alpha_{k}-\frac{1}{R}\right)\right|\right\}\] \[= \max\left\{M\alpha_{0},\left|M-\frac{M(R-1)}{R}\right|\right\}\] \[= \max\left\{M\alpha_{0},\frac{M}{R}\right\}\] Consequently, \[MD^{*}(S)\leq\max\left\{M\alpha_{0},\frac{M}{R}\right\}+\sum_{k=0}^{R-1}|\epsilon_ {k}|.\] **Lemma 3** ([2], Chapter 3).: _Assume that \(S=\cup_{j}S_{j}\) where \(S_{i}\cap S_{j}=\emptyset.\) Denote \(M_{j}=|S_{j}|\) and \(M=|S|=\sum_{j}M_{j}.\) Then_ \[MD^{*}(S)\leq\sum_{j=1}^{K}M_{j}D^{*}(S_{j}).\] **Lemma 4** ([6], Chapter 4).: _Let \(S^{\prime}=\{x_{1},x_{2},\ldots,x_{M}\}\) and \(S^{\prime\prime}=\{y_{1},y_{2},\ldots,y_{M}\}\) such that_ \[|x_{j}-y_{j}|\leq\epsilon\] _for \(j=1,2,\ldots,M.\) Then_ \[|D^{*}(S^{\prime})-D^{*}(S^{\prime\prime})|<\epsilon.\] ### Primes in arithmetic progressions Denote \[Z_{k}=\{r\in\mathbb{Z}_{q(q-1)}^{*}:b^{r}\equiv kr+b\pmod{q}\}.\] For any \(p\in\mathbb{P}\) with \(p>b^{q}\) and \(p\equiv r\pmod{q(q-1)}\) for some \(r\in Z_{k},\) there holds \[\left|\frac{b^{pq}\pmod{pq}}{pq}-\frac{k}{q}\right|<\frac{1}{q}, \tag{4}\] see [5]. Let \(\operatorname{ord}_{q}(b)=|\langle b\rangle|\) where \(\langle b\rangle\) is the subgroup of \(\mathbb{Z}_{q}^{*}\) generated by \(b.\) In [5], we proved the following proposition. **Proposition 5**.: _For \(q\in\mathbb{P}\) there holds_ \[|Z_{k}|=\varphi(q-1)-m_{b}(q)\quad(k=1,2,\ldots,q-1),\] _and_ \[|Z_{0}|=(q-1)m_{b}(q)\] _where_ \[m_{b}(q)=|\{r\in\mathbb{Z}_{q-1}^{*}:r\equiv 1\pmod{\operatorname{ord}_{q}(b)} \}|.\] We shall need to estimate the number of primes in certain arithmetic progressions. 
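Before turning to those prime-counting estimates, note that the finite objects just introduced are easy to tabulate. The following short sketch (illustrative code, not part of the original argument) computes \(\mathrm{ord}_{q}(b)\), the sets \(Z_{k}\) and the quantity \(m_{b}(q)\) by brute force, and checks the cardinalities stated in Proposition 5 for small sample values of \(b\) and \(q\).

```python
from math import gcd

def ord_q(b, q):
    """Multiplicative order of b modulo the prime q (assumes q does not divide b)."""
    r, x = 1, b % q
    while x != 1:
        x, r = (x * b) % q, r + 1
    return r

def Z_k(b, q, k):
    """Z_k = { r in Z*_{q(q-1)} : b^r = k*r + b (mod q) }."""
    n = q * (q - 1)
    return [r for r in range(1, n) if gcd(r, n) == 1 and pow(b, r, q) == (k * r + b) % q]

def m_b(b, q):
    """m_b(q) = #{ r in Z*_{q-1} : r = 1 (mod ord_q(b)) }."""
    d = ord_q(b, q)
    return sum(1 for r in range(1, q - 1) if gcd(r, q - 1) == 1 and (r - 1) % d == 0)

# check Proposition 5 for the illustrative values b = 2, q = 7
b, q = 2, 7
phi = sum(1 for r in range(1, q) if gcd(r, q - 1) == 1)   # Euler phi of q - 1
sizes = [len(Z_k(b, q, k)) for k in range(q)]
assert sizes[0] == (q - 1) * m_b(b, q)
assert all(s == phi - m_b(b, q) for s in sizes[1:])
```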
Denote \[\pi(N;q(q-1),r)=|\{p\in\mathbb{P}:p\leq N,p\equiv r\pmod{q(q-1)}\}.\] We use the following consequence of the Siegel-Walfisz theorem (see [1] and the reference given there): \[\pi(N;q(q-1),r)=\frac{\pi(N)}{\varphi(q(q-1))}+\mathcal{O}\left(\frac{N}{( \log(N))^{A}}\right) \tag{5}\] for any \(A>0\) and \(N\geq 2\). (The implied constant in (5) depends on \(A\).) Here, \(\pi(N)\) is the prime counting function. In particular, there is asymptotically the same amount of primes in the progression \(r+nq(q-1)\) for each \(r\in\mathbb{Z}_{q(q-1)}^{*}\). More precisely, (5) and Proposition 5 imply that \[|\{p\in\mathbb{P}:p\leq N,\exists r\in Z_{k}\text{ such that }p \equiv r\pmod{q(q-1)}\}|=\] \[=\frac{|Z_{k}|}{\varphi(q(q-1))}\pi(N)+\mathcal{O}\left(\frac{N|Z _{k}|}{(\log(N))^{A}}\right)\] ## 3 Proof of Theorem 1 Denote \[F_{q,N}=\{p\in\mathbb{P}:b^{q}<p\leq N/q\}\quad\text{and}\quad\mathcal{F}_{q,N }=\{pq:p\in F_{q,N}\},\] then \[\mathcal{A}_{N}=\bigcup_{q\in\mathbb{P}}\mathcal{F}_{q,N}.\] (Note that \(F_{q,N}=\emptyset\) for \(q\) sufficiently large.) Define \[F_{k}=\{p\in F_{q,N}:\exists r\in Z_{k}\text{ such that }p\equiv r\pmod{q(q-1)}\}.\] for \(k=0,1,\ldots,q-1\). **Lemma 6**.: _For each \(k\in\{0,1,\ldots,q-1\}\), there holds_ \[|F_{k}|=\frac{|Z_{k}|}{\varphi(q(q-1))}|F_{q,N}|+\epsilon_{k}\] _where_ \[\sum_{k=0}^{q-1}|\epsilon_{k}|\leq\frac{CN}{q^{2}\log(N)}.\] Proof.: Observe that \[|F_{k}|=|Z_{k}|\left(\pi(N;q(q-1),r)-\pi(b^{q},q(q-1),r)\right).\] Taking \(A=4\) in (5), we obtain \[|F_{k}| = \frac{|Z_{k}|}{\varphi(q(q-1))}\left(\pi(N/q)-\pi(b^{q})\right)+ \epsilon_{k}\] \[= \frac{|Z_{k}|}{\varphi(q(q-1))}|F_{q,N}|+\epsilon_{k},\] where \[|\epsilon_{k}|\,\leq\,C|Z_{k}|\left(\frac{N}{q(\log(N/q))^{4}}+\frac{b^{q}}{q^{4}} \right)\leq 2C|Z_{k}|\frac{N}{q(\log(N/q))^{4}}\] since \(x\mapsto x/(\log(x))^{4}\) is increasing. Note that \[\sum_{k=0}^{q-1}|\epsilon_{k}|\leq\frac{2CN}{q(\log(N/q))^{4}}\sum_{k=0}^{q-1} |Z_{k}|=\frac{2CN}{q(\log(N/q))^{4}}\varphi(q(q-1))\leq\frac{2CNq^{2}}{q(\log( N/q))^{4}}\] Since \(b^{q}<N/q\), we have \(q\leq\log(N/q)\). Furthermore, \(q^{2}<qb^{q}<N\), so \(N/q>\sqrt{N}\). Consequently, \[\frac{1}{\log(N/q)}\leq\frac{1}{q}\quad\text{and}\quad\frac{\log(N)}{2}\leq \log(N/q)\leq\log(N)\] and therefore \[\sum_{k=0}^{q-1}|\epsilon_{k}|\leq\frac{2CNq^{2}}{q(\log(N/q))^{4}}\leq\frac{ 4CN}{q^{2}\log(N)}.\] Denote by \[n(q,N)=|F_{q,N}|\] **Lemma 7**.: _There exists an absolute constant \(C\) such that for any \(q\in\mathbb{P}\), \(N>b^{q}\), there holds_ \[n(q,N)D^{*}(\mathcal{S}_{b}(\mathcal{F}_{q,N}))\leq C\left(\frac{\log\log(q)} {\log(q)}n(q,N)+\frac{N}{q^{2}\log(N)}\right). \tag{6}\] Proof.: By (4), for any \(p>b^{q}\) there is a \(k\in\{0,1,\ldots,q-1\}\) such that \[\left|\frac{b^{pq}\quad(\text{mod }pq)}{pq}-\frac{k}{q}\right|<\frac{1}{q} \tag{7}\] holds. Furthermore, for a specific \(k\) the estimate (7) holds if and only if \(p\equiv r\pmod{q(q-1)}\) where \(r\in Z_{k}\). For \(p\in F_{q,N}\) we define \(a_{p}=k/q\) if \(p\in F_{k}\). Set \(S^{\prime}=\{a_{p}:p\in F_{q,N}\}\). Then \(|S^{\prime}|=|F_{q,N}|\) and \[\left|\frac{b^{pq}\quad(\text{mod }pq)}{pq}-a_{p}\right|<\frac{1}{q}\] for each \(p\in F_{q,N}\). 
By Lemma 4, there holds \[D^{*}(S^{\prime})-\frac{1}{q}<D^{*}(\mathcal{S}_{b}(\mathcal{F}_{q,N})<D^{*}( S^{\prime})+\frac{1}{q} \tag{8}\] We shall now use Lemma 2 compute \(D^{*}(S^{\prime}).\) Set \(\alpha_{k}=|Z_{k}|/\varphi(q(q-1)),\) so \(\sum\alpha_{k}=1\) and this, together with Lemma 6, implies that we may apply Lemma 2 to conclude \[n(q,N)D^{*}(S^{\prime})\leq\max\left\{\frac{|Z_{0}|n(q,N)}{\varphi(q(q-1))}, \frac{n(q,N)}{q}\right\}+\frac{CN}{q^{2}\log(N)}\] Further, we have \[\frac{|Z_{0}|}{\varphi(q(q-1))}=\frac{(q-1)m_{b}(q)}{(q-1)\varphi(q-1)}=\frac{ |\mathcal{N}|}{\varphi(q-1)}.\] Clearly, \[|\mathcal{N}|\leq\frac{q-1}{\mathrm{ord}_{q}(b)}\] and it is well-known that \[\varphi(q-1)\geq\frac{C(q-1)}{\log\log(q-1)}\] Taking into consideration \(\mathrm{ord}_{q}(b)\geq C\log(q),\) we get \[n(q,N)D^{*}(S^{\prime}) \leq \max\left\{\frac{Cn(q,N)\log\log(q)}{\log(q)},\frac{n(q,N)}{q} \right\}+\frac{CN}{q^{2}\log(N)} \tag{9}\] \[= \frac{Cn(q,N)\log\log(q)}{\log(q)}+\frac{CN}{q^{2}\log(N)}\] By (8) and (9), we get \[n(q,N)D^{*}(\mathcal{S}_{b}(\mathcal{F}_{q,N})\leq\frac{Cn(q,N)\log\log(q)}{ \log(q)}+\frac{CN}{q^{2}\log(N)},\] concluding the proof of (6). Proof of Theorem 1.: Fix \(N>N_{0}\) and set \(M=|\mathcal{A}_{N}|,\) it was shown in [1] that \[M\sim\frac{N\log\log\log(N)}{\log(N)}. \tag{10}\] (We write \(A\sim B\) if \(c_{1}A\leq B\leq c_{2}A\) for absolute constants \(c_{1},c_{2}.\)) Using Lemma 3 and Lemma 7, we obtain \[MD^{*}(\mathcal{S}_{b}(\mathcal{A}_{N})) \leq \sum_{q\in\mathbb{P}}n(q,N)D^{*}(\mathcal{S}_{b}(\mathcal{F}_{q,N })) \tag{11}\] \[\leq C\sum_{q\in\mathbb{P}}\left(\frac{\log\log(q)}{\log(q)}n(q,N)+ \frac{N}{q^{2}\log(N)}\right),\] where \(n(q,N)=0\) if \(F_{q,N}=\emptyset.\) By the prime number theorem \[n(q,N)\leq\pi(N/q)=\frac{N}{q\log(N/q)}+\mathcal{O}\left(\frac{N}{q(\log(N/q))^{ 2}}\right) \tag{12}\] Using (11), (12) and the fact that \(\log(N)/2\leq\log(N/q)\leq\log(N)\) for every \(q\in\mathbb{P}\) with \(n(q,N)>0,\) we get \[MD^{*}(\mathcal{S}_{b}(\mathcal{A}_{N}))\,\leq\,\frac{CN}{\log(N)}\sum_{q\in \mathbb{P}}\left(\frac{\log\log(q)}{q\log(q)}\left(1+\mathcal{O}\left(\frac{1} {\log(N)}\right)\right)+\frac{1}{q^{2}}\right). \tag{13}\] Since the series \(\sum_{q\in\mathbb{P}}\log\log(q)/(q\log(q))\) and \(\sum_{q\in\mathbb{P}}1/q^{2}\) both are convergent, it follows from (13) that \[MD^{*}(\mathcal{S}_{b}(\mathcal{A}_{N}))\leq\frac{CN}{\log(N)}\sum_{q\in \mathbb{P}}\left(\frac{\log\log(q)}{q\log(q)}+\frac{1}{q^{2}}\right)\leq\frac {CN}{\log(N)}. \tag{14}\] From (14) and (10), we have \[D^{*}(\mathcal{S}_{b}(\mathcal{A}_{N}))=\mathcal{O}\left(\frac{1}{\log\log \log(N)}\right).\] **Acknowledgements** The author is grateful to Professor A. Dubickas (Vilnius) for pointing out the references [1, 3].
2309.07052
Low-energy flavour probes of light vector bosons
In this work, we construct the chiral Lagrangian for a light spin-1 boson $X$ possessing both vectorial and axial couplings to the light Standard Model quarks $u, d, s$. We then use it in order to describe the tree-level, model-independent contributions to the $\Delta S = 1$ transition $K^\pm \rightarrow \pi^\pm X$, which is induced by Standard Model charged currents and is possibly enhanced by the emission of a longitudinally polarized $X$ boson. Such a flavour observable is then shown to set the best model-independent bounds on the diagonal axial couplings of $X$ to light quarks in the mass range allowed by the decay kinematics, improving the currently available constraints from beam-dump experiments and collider searches.
Luca Di Luzio, Gabriele Levati, Paride Paradisi, Xavier Ponce Díaz
2023-09-13T16:08:51Z
http://arxiv.org/abs/2309.07052v1
# Low-energy flavour probes of light vector bosons ###### Abstract In this work, we construct the chiral Lagrangian for a light spin-1 boson \(X\) possessing both vectorial and axial couplings to the light Standard Model quarks \(u,d,s\). We then use it in order to describe the tree-level, model-independent contributions to the \(\Delta S=1\) transition \(K^{\pm}\to\pi^{\pm}X\), which is induced by Standard Model charged currents and is possibly enhanced by the emission of a longitudinally polarized \(X\) boson. Such a flavour observable is then shown to set the best model-independent bounds on the diagonal axial couplings of \(X\) to light quarks in the mass range allowed by the decay kinematics, improving the currently available constraints from beam-dump experiments and collider searches. InFN, Sezione di Padova - Via Marzolo 8, 35131, Padova, Italy, 12.39.FeChiral Lagrangians 13.25.-kHadronic decays of mesons 12.60.-iModels beyond the standard models. ## 1 Introduction The lack of any detection of heavy New Physics (NP) at the LHC has been pushing the theoretical community to explore new Beyond the Standard Model (BSM) physics scenarios. These generically consider either new particles that are too heavy to be possibly detected at collider experiments, or focus on new light and feebly interacting massive particles that have so far gone undetected. The new BSM particles introduced in the second scenario have been receiving a steadily increasing attention, both from a theoretical point of view and from an experimental one. Several studies in this direction were devoted to the analysis of the properties of a hypothetical "dark photon", a new massive spin-1 boson which is kinetically mixed with the ordinary photon and whose interactions with SM particles can act as a portal to a dark sector [1, 2]. Efforts in detecting the dark photon include beam-dump [3], fixed-target [4, 5], collider [6, 7, 8, 9, 10, 11, 12], and meson decay [13, 14, 15, 16, 17, 18, 19] experiments. Generalisations of the dark photon scenario featuring a light spin-1 boson \(X\) possessing general couplings to SM fermions have been envisaged and analysed as well (see e.g. [20, 21, 22, 23]). Interestingly enough, if the \(X\) couples to non-conserved currents of SM fields, processes involving its longitudinal component will result in being possibly enhanced by the ratio \((\text{energy}/m_{X})^{2}\)[24, 25], thus amounting for the largest contribution to the related observables. In this article we will discuss, based on our work in [26], the sensitivity of this scenario to the rare flavour-changing process \(K^{\pm}\to\pi^{\pm}X\). In order to do so, we will show how to build the most general \(\Delta S=1\) Chiral Lagrangian up to order \(\mathcal{O}(p^{4})\) that is necessary to account for all of the weak-induced flavour transitions \(s\to d\) prompting the aforementioned decay process. The weak-induced flavour-changing interactions one has to consider fall in either one of two categories: they can be either \(\mathcal{O}(p^{2})\) terms stemming from an effective \(sdX\) vertex generated by the one-loop exchange of a W boson and up-type quarks [24, 25], or they can be \(\mathcal{O}(p^{4})\) contributions arising from the tree-level initial- or final-state radiation of an \(X\) boson from external quark legs. 
We will show that the two contributions are comparable in size, the former being of lower order in the Chiral expansion but necessarily arising at one-loop level, while the latter is a tree-level one appearing however only at next-to-leading order in the Chiral expansion, _i.e._ when four-fermion \(\Delta S=1\) operators are included in the Lagrangian. The tree-level, \(\mathcal{O}(p^{4})\) contributions moreover have the virtue of being model-independent, therefore representing a robust prediction of any Ultraviolet (UV) complete NP model predicting the existence of extra \(U(1)\) light spin-1 bosons. The loop-induced effects discussed in [24, 25] are instead sensitive to the specific realisation of the UV completion mechanism providing the \(X\) boson with a mass. ## 2 \(\Delta S=1\) chiral Lagrangian for spin-1 bosons The most general Lagrangian describing the interactions of a new spin-1 boson \(X\) with the SM light quarks \(q=(u,d,s)^{T}\) can be written as \[\mathcal{L}_{X}^{\text{int}}=g_{x}X_{\mu}\,\bar{q}\,\gamma^{\mu}(x_{V}+x_{A} \gamma_{5})\,q\,, \tag{1}\] where \(g_{x}\) measures the strength of the universal coupling of \(X\) to quarks. The vectorial and axial charges, \(x_{V,A}\), are matrices in flavour space and may include off-diagonal entries in the 2-3 sector. ### Lowest-order chiral Lagrangian The description we have outlined in the previous section is of course valid at energies above few GeV, where the Lagrangian in eq. (1) can be directly employed to analyse the interactions of \(X\) with quarks. Below the QCD scale however quarks confine and they are no longer the most adequate degrees of freedom for describing physical processes and one should rather resort to a description in terms of mesons and baryons. In order to discuss the interactions of mesons and baryons with other particles one can then make use of Chiral Perturbation theory (\(\chi\)PT) techniques [27, 28]. In particular, the interaction of an extra spin-1 boson \(X\) with quarks can be implemented in a \(\chi\)PT setup as follows: first one considers the massless QCD Lagrangian with chiral symmetry group \(G=SU(3)_{L}\times SU(3)_{R}\) \[\mathcal{L}_{\text{QCD}}^{0}=-\frac{1}{4}G_{\mu\nu}^{a}G_{a}^{\mu\nu}+i\bar{q }_{L}\gamma^{\mu}\left(\partial_{\mu}+ig_{s}\frac{\lambda_{a}}{2}A_{\mu}^{a} \right)q_{L}+i\bar{q}_{R}\gamma^{\mu}\left(\partial_{\mu}+ig_{s}\frac{\lambda _{a}}{2}A_{\mu}^{a}\right)q_{R}\,, \tag{2}\] where \(q=(u,d,s)^{T}\) and \(\lambda_{a}\) are the Gell-Mann matrices. Chiral symmetry-breaking terms (like mass terms or interactions with external gauge fields other than gluons) can be implemented by introducing appropriate spurions (\(r_{\mu}\), \(l_{\mu}\), \(s\), \(p\)) as external source fields [27]. The resulting Lagrangian \({\mathcal{L}}^{\rm ext}_{\rm QCD}\) then reads \[{\mathcal{L}}^{\rm ext}_{\rm QCD}={\mathcal{L}}^{0}_{\rm QCD}+\bar{q}\gamma^{ \mu}(2r_{\mu}P_{R}+2\ell_{\mu}P_{L})q+\bar{q}(s-ip\gamma_{5})q\,. \tag{3}\] Its chiral counterpart is found to be \[{\mathcal{L}}^{\rm ext}_{\chi{\rm PT}}=\frac{f_{\pi}^{2}}{4}\,{\rm Tr}\left[D _{\mu}U^{\dagger}D^{\mu}U+U^{\dagger}\chi+\chi^{\dagger}U\right]+{\mathcal{O} }(p^{4}) \tag{4}\] where \(U(x)=\exp{[i\lambda_{a}\pi_{a}(x)/f_{\pi}]}\) is the mesonic matrix transforming as \(U(x)\to LU(x)R^{\dagger}\) under \(SU(3)_{L}\times SU(3)_{R}\) and \(\pi_{a}(x)\) are the Goldstone boson fields of \(SU(3)_{L}\times SU(3)_{R}\to SU(3)_{V}\) spontaneous breaking. 
Moreover, the following quantities have been defined: \[D_{\mu}U=\partial_{\mu}U-ir_{\mu}U+iU\ell_{\mu}\qquad\mbox{and}\qquad\chi=2B_{ 0}\left(s+ip\right). \tag{5}\] In the model described by eq.(1), the covariant derivative \(D_{\mu}U\) reads \[D_{\mu}U=\partial_{\mu}U-ig_{x}X_{\mu}(Q_{R}^{x}U-UQ_{L}^{x})\,, \tag{6}\] where \(Q_{R/L}^{x}=Q_{V}^{x}\pm Q_{A}^{x}\), while \[Q_{V}^{x}=\begin{bmatrix}x_{V}^{u}&0&0\\ 0&x_{V}^{d}&x_{V}^{23}\\ 0&x_{V}^{32}&x_{V}^{s}\end{bmatrix}\qquad\mbox{and}\qquad Q_{A}^{x}=\begin{bmatrix} x_{A}^{u}&0&0\\ 0&x_{A}^{d}&x_{A}^{23}\\ 0&x_{A}^{32}&x_{A}^{s}\end{bmatrix} \tag{7}\] The Lagrangian in (4) can then be expanded in terms of the constituent meson fields. The lowest order terms in the NP coupling relevant to \(K^{\pm}\to\pi^{\pm}X\) read \[{\mathcal{L}}^{\rm ext}_{\chi{\rm PT}}\supset -ig_{x}X_{\mu}(x_{V}^{u}-x_{V}^{s})\left(\partial^{\mu}K^{-}K^{+} -\partial^{\mu}K^{+}K^{-}\right)\] \[-iX_{\mu}g_{x}(x_{V}^{u}-x_{V}^{d})\left(\partial^{\mu}\pi^{-} \pi^{+}-\partial^{\mu}\pi^{+}\pi^{-}\right) \tag{8}\] \[+\left[-ig_{x}X_{\mu}x_{V}^{32}\,\left(\partial^{\mu}K^{+}\pi^{-} -\partial^{\mu}\pi^{-}K^{+}\right)+\mbox{h.c.}\right]\,.\] Some comments are in order to be made about this result: * All the couplings in eq. (8) are vectorial in nature. This is a consquence of the fact that the matrix element of the axial-vector component of the quark bilinears in eq. (1) between external pseudo-scalar states is null; * In the limit of universal vector couplings, _i.e._\(x_{V}^{u}=x_{V}^{d}=x_{V}^{s}\), both the \(K^{+}K^{-}X\) and the \(\pi^{+}\pi^{-}X\) interaction terms vanish due to the underlying \(SU(3)_{V}\) chiral symmetry. Contrarily, since flavour-changing currents are not conserved, the \(K^{\pm}\pi^{\mp}X\) vector coupling does not vanish. It is important then to notice that \({\cal O}(p^{2})\) contributions to \(\Delta S=1\) processes such as \(K^{\pm}\to\pi^{\pm}X\) can be generated only if the vectorial couplings have a non-null off-diagonal entry \(x_{V}^{32}\). If this is absent at tree level, it can nonetheless be generated at one-loop level by the exchange of a virtual W boson and an up-type quark [24, 25]. Weak interactions however do not limit themselves to provide one-loop effects to the decay process we are considering, but they generate as well tree-level effects once higher-order terms in the momentum expansion that are the chiral equivalent of four-fermion operators are taken into account. The analysis of such contributions will be the topic covered by the next subsection. 
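Before moving on, the couplings quoted in eq. (8) can be cross-checked by expanding eq. (4) symbolically. The sketch below is illustrative only: the field and symbol names are ours, the off-diagonal axial entries of eq. (7) are dropped for brevity, and the overall sign conventions of the output should be compared against eq. (8). It truncates \(U=\exp(i\lambda_{a}\pi_{a}/f_{\pi})\) at second order in the meson fields, inserts the covariant derivative of eq. (6), and extracts the coefficient of the \(X_{\mu}\,\partial^{\mu}K^{+}\,\pi^{-}\) vertex, which, in agreement with eq. (8), should involve only the vectorial coupling \(x_{V}^{32}\).

```python
import sympy as sp

f, g, X = sp.symbols('f g X')
names = 'pi0 pip pim Kp Km K0 K0b eta'
pi0, pip, pim, Kp, Km, K0, K0b, eta = sp.symbols(names)
dpi0, dpip, dpim, dKp, dKm, dK0, dK0b, deta = sp.symbols('d' + names.replace(' ', ' d'))

def meson_matrix(p0, pp, pm, kp, km, k0, k0b, e8):
    s2, s3 = sp.sqrt(2), sp.sqrt(3)
    return sp.Matrix([[p0 + e8/s3,  s2*pp,        s2*kp],
                      [s2*pm,       -p0 + e8/s3,  s2*k0],
                      [s2*km,       s2*k0b,       -2*e8/s3]])

Phi  = meson_matrix(pi0, pip, pim, Kp, Km, K0, K0b, eta)
dPhi = meson_matrix(dpi0, dpip, dpim, dKp, dKm, dK0, dK0b, deta)

# charge matrices of eq. (7); off-diagonal axial entries are omitted here
xVu, xVd, xVs, xV32, xAu, xAd, xAs = sp.symbols('xVu xVd xVs xV32 xAu xAd xAs')
QV = sp.diag(xVu, xVd, xVs); QV[1, 2] = QV[2, 1] = xV32
QA = sp.diag(xAu, xAd, xAs)
QR, QL = QV + QA, QV - QA

# U = exp(i Phi / f) and its derivative, truncated at second order in the fields
I3 = sp.eye(3)
U     = I3 + sp.I*Phi/f - Phi*Phi/(2*f**2)
Udag  = I3 - sp.I*Phi/f - Phi*Phi/(2*f**2)
dU    =  sp.I*dPhi/f - (dPhi*Phi + Phi*dPhi)/(2*f**2)
dUdag = -sp.I*dPhi/f - (dPhi*Phi + Phi*dPhi)/(2*f**2)

# covariant derivatives of eq. (6) and the O(p^2) Lagrangian of eq. (4)
DU    = dU    - sp.I*g*X*(QR*U - U*QL)
DUdag = dUdag + sp.I*g*X*(Udag*QR - QL*Udag)
L2 = sp.expand(sp.Rational(1, 4) * f**2 * (DUdag * DU).trace())

# coefficient of the  X * dK+ * pi-  vertex at first order in g
fields = [pi0, pip, pim, Kp, Km, K0, K0b, eta,
          dpi0, dpip, dpim, dKp, dKm, dK0, dK0b, deta]
coeff = sp.diff(L2.coeff(g, 1), dKp, pim).subs({s: 0 for s in fields})
print(sp.simplify(coeff))     # compare with the third line of eq. (8)
```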
### Chiral Lagrangian for weak interactions In the SM, at energies above the chiral symmetry breaking scale, \(\Delta S=1\) transitions are induced by the effective Lagrangian [29] \[{\cal L}_{\rm SM}^{\Delta S=1}=G\sum_{i=1}^{10}C_{i}(\mu)O_{i}(\mu)\qquad{\rm with }\qquad G\equiv-\frac{G_{F}}{\sqrt{2}}V_{ud}V_{us}^{*}\,, \tag{9}\] where \[\begin{array}{ll}Q_{1}=4(\bar{s}_{L}\gamma_{\mu}d_{L})(\bar{u}_{L}\gamma_{ \mu}u_{L}),&Q_{2}=4(\bar{s}_{L}\gamma_{\mu}u_{L})(\bar{u}_{L}\gamma_{\mu}d_{L}),\\ Q_{3}=4(\bar{s}_{L}\gamma_{\mu}d_{L})(\bar{q}_{L}\gamma_{\mu}q_{L}),&Q_{4}=4( \bar{s}_{L}^{2}\gamma_{\mu}d_{L}^{\beta})(\bar{q}_{L}^{\beta}\gamma_{\mu}q_{L} ^{\alpha}),\\ Q_{5}=4(\bar{s}_{L}\gamma_{\mu}d_{L})\sum_{q}(\bar{q}_{R}\gamma_{\mu}q_{R}),&Q_{6 }=4(\bar{s}_{L}^{2}\gamma_{\mu}d_{L}^{\beta})\sum_{q}(\bar{q}_{R}^{\beta}\gamma _{\mu}q_{R}^{\alpha}),\\ Q_{7}=6(\bar{s}_{L}\gamma_{\mu}d_{L})\sum_{q}e_{q}(\bar{q}_{R}\gamma_{\mu}q_{R} ),&Q_{8}=6(\bar{s}_{L}^{2}\gamma_{\mu}d_{L}^{\beta})\sum_{q}e_{q}(\bar{q}_{R} ^{\beta}\gamma_{\mu}q_{R}^{\alpha}),\\ Q_{9}=6(\bar{s}_{L}\gamma_{\mu}d_{L})\sum_{q}e_{q}(\bar{q}_{L}\gamma_{\mu}q_{L }),&Q_{10}=6(\bar{s}_{L}^{\alpha}\gamma_{\mu}d_{L}^{\beta})\sum_{q}e_{q}(\bar{ q}_{L}^{\beta}\gamma_{\mu}q_{L}^{\alpha}),\end{array} \tag{10}\] \(q=u,d,s\), \(e_{u}=2/3\) and \(e_{d}=e_{s}=-1/3\); \(\alpha\) and \(\beta\) are colour indices which, if unspecified, are understood to be contracted between the two quarks in the same current. The construction of the chiral counterpart to eq. (10) proceeds in two steps: * Firstly, one constructs the chiral structures describing the product of two fermionic currents. These structures must possess the same chiral transformation properties of the corresponding quark currents and are obtained by exploiting the quark-hadron duality between the Lagrangians of eqs. (3) and (4), valid at low energies. One can then find the chiral counterparts to the various Dirac structures by taking appropriate functional derivatives of the QCD and the \(\chi\)PT actions with respect to the same external sources. * The product of quark currents can then be decomposed into the irreducible representations of the flavour algebra. This is done by defining appropriate projectors which have to be applied as well to the chiral realisation of the quark currents. In this way one can obtain a set of operators in the chiral theory that are automatically classified according to the irreducible representation of the flavour algebra they belong to and that can be thus directly related to the initial ones, expressed in terms of quark bilinears (see e.g. [30, 31, 32]). Once this two-step program is carried out, one can finally reproduce the \(\Delta S=1\) chiral Lagrangian of ref. [30], which takes the following simple form \[\begin{split}{\mathcal{L}}_{\rm eff}^{\Delta S=1}=Gf_{\pi}^{4}& \big{\{}g_{27}\left(L_{\mu,\,2}^{3}L_{1}^{\mu,\,1}+\frac{2}{3}L_{ \mu,\,2}^{1}L_{1}^{\mu,\,3}-\frac{1}{3}L_{\mu,\,2}^{3}{\rm tr}\left[L^{\mu} \right]\right)\right)+g_{8}^{S}\,L_{\mu,\,2}^{3}{\rm tr}\left[L^{\mu}\right]\\ &+g_{8}\left({\rm tr}\left[\lambda L_{\mu}L^{\mu}\right]+e^{2}g_ {\rm ew}f_{\pi}^{2}{\rm tr}\,\left[\lambda U^{\dagger}QU\right]\right)\big{\}} \,,\end{split} \tag{11}\] Here \(\lambda\equiv\frac{1}{2}(\lambda_{6}-i\lambda_{7})\) is responsible for the \(s\to d\) flavour transition and we have specialised \(Q=\frac{1}{3}{\rm diag}(2,-1,-1)\) to be the charge matrix for quarks. 
The left-handed current chiral \(L_{\mu}\) is defined via \(L_{\mu}\equiv iU^{\dagger}D_{\mu}U\). Out of the pieces making up eq. (11), the first one transforms in the \((27_{L},1_{R})\) representation of the flavour group, while the second and the third ones transform in the \((8_{L},1_{R})\) and \((8_{L},8_{R})\) representation, respectively. Clearly, no singlet term can have any effect on \(\Delta S=1\) transitions. The \({\mathcal{O}}(1)\) coefficients \(g_{27}\), \(g_{8}\), \(g_{8}^{S}\) and \(g_{\rm ew}\) are functions of non-perturbative effective parameters, as well as of the Wilson coefficients of the weak operators, see eq. (9). Expanding (11) and keeping only the contributions relevant for our analysis, we find \[\begin{split}{\mathcal{L}}_{\rm eff}^{\Delta S=1}& \supset\frac{2}{3}f^{2}g_{27}G\left(2\partial^{\mu}K^{+}\partial_{ \mu}\pi^{-}+g_{x}X_{\mu}\left[i\partial^{\mu}K^{+}\pi^{-}(4x_{A}^{u}-x_{A}^{d }-3x_{A}^{s}+2x_{V}^{u}-2x_{V}^{d})\right.\right.\\ &\left.\left.-i\partial^{\mu}\pi^{-}K^{+}(4x_{A}^{u}-3x_{A}^{d}-x _{A}^{s}+2x_{V}^{u}-2x_{V}^{d})+{\rm h.c.}\right]\right)\\ &+2f^{2}g_{8}^{S}Gg_{x}\left(x_{A}^{u}+x_{A}^{d}+x_{A}^{s}\right) X_{\mu}\left[i\,\left(\partial^{\mu}K^{+}\pi^{-}-\partial^{\mu}\pi^{-}K^{+} \right)+{\rm h.c.}\right]\\ &+2f^{2}g_{8}G\left(\partial^{\mu}K^{+}\partial_{\mu}\pi^{-}+g_{ x}X_{\mu}\left[i\partial^{\mu}K^{+}\pi^{-}(x_{A}^{u}+x_{A}^{s}+x_{V}^{u}-x_{V}^{ d})\right.\right.\\ &\left.\left.-i\partial^{\mu}\pi^{-}K^{+}(x_{A}^{u}+x_{A}^{d}+x_{ V}^{u}-x_{V}^{s})+{\rm h.c.}\right]\right)+2f^{4}Ge^{2}g_{8}g_{\rm ew}K^{+}\pi^{-} \,,\end{split} \tag{12}\] which includes both a \(K\pi\) mixing term and a flavour-violating \(K^{\pm}\to\pi^{\pm}X\) interaction. Interestingly enough, one is now sensitive to both vectorial and axial couplings since the hadronic matrix element \(\left\langle K|\,O_{i}\,|\pi\right\rangle\) -with \(O_{i}\) from eq. (10)- receives contributions from both vector and axial-vector currents. ## 3 \(-\)\(K^{\pm}\to\pi^{\pm}X\) in \(\chi\)Pt The lagrangian pieces in eqs. (8) and (11) can be used in order to compute the decay rate for the process \(K^{\pm}\to\pi^{\pm}X\). The Feynman diagrams describing the process under consideration are depicted in fig. 1: the \(X\) boson can be either emitted at the same vertex where the flavour transition takes place (first diagram) or at a different one (second and third diagrams). In the second case, weak interactions prompt a flavour transition while the \(X\) boson is radiated at a different interaction point from an external leg. Figure 1: Diagrams generating the tree-level transition \(K^{\pm}\to\pi^{\pm}X\) in \(\chi\)PT, see ref. [26]. A pretty simple expression for the decay rate can be found assuming generation universality of the couplings (\(x_{V,A}^{u}=x_{V,A}^{d}=x_{V,A}^{s}\)) and taking the limit \(m_{K}\gg m_{X},m_{\pi}\) \[\Gamma\approx\frac{m_{K}}{2\pi}\left(\frac{m_{K}}{m_{X}}\right)^{2}G_{F}^{2}f_ {\pi}^{4}\,|V_{us}|^{2}\,g_{x}^{2}\,(x_{A}^{u})^{2}\left(g_{8}+\frac{3}{4}g_{8 }^{S}\right)^{2}\,. \tag{13}\] It is interesting to notice that as a consequence of the \(SU(3)_{V}\) chiral symmetry, in the limit of universal vector couplings, the decay rate of \(K^{\pm}\to\pi^{\pm}X\) becomes independent of these couplings. Secondly, it has to be appreciated that the expected enhancement factor \((m_{K}/m_{X})^{2}\) in eq. 
(13) for small \(m_{X}\) is correctly recovered, and is here produced by the longitudinal component of the polarization vector: \(\sum\varepsilon_{\mu}^{*}(q)\varepsilon_{\nu}(q)=-\eta_{\mu\nu}+\frac{q_{\mu}q _{\nu}}{m_{X}^{2}}\). The one-loop effects discussed in [25] can be incorporated in eq. (1) via \[x_{V}^{32}\to x_{V}^{32}-x_{sd}^{\rm eff} \tag{14}\] where, in the limit of universal couplings, _i.e._\(x_{V,A}^{u_{i}}=x_{V,A}^{d}=x_{V,A}^{s}\), and keeping only the dominant loop effects stemming from the exchange of the top quark, we obtain \[x_{sd}^{\rm eff}\simeq\frac{g^{2}}{64\pi^{2}}\,V_{td}V_{ts}^{*}x_{A}^{u}f(x_{t}) \tag{15}\] with \[f(x_{t})=x_{t}\left[\frac{2}{\epsilon}+\log\frac{\mu^{2}}{m_{t}^{2}}-\frac{1} {2}-3\frac{(1-x_{t}+\log x_{t})}{(x_{t}-1)^{2}}\right]\,. \tag{16}\] This allows us to compare tree-level vs loop-induced effects, by studying the ratio \[\frac{x_{sd}^{\rm eff}}{4g_{8}f_{\pi}^{2}Gx_{A}^{u}}\approx f(x_{t})\,, \tag{17}\] where \(f(x_{t})\) is a model-dependent loop function which depends on the specific UV completion of the effective theory and that is expected to be of order \({\mathcal{O}}(1)\). Loop- and tree- level effects are thus seen to be comparable in magnitude. However, the former depend critically on the specifics of the UV completion of the theory, whereas the latter provide robust and model-independent results. ### Flavour bounds vs. beam-dump and collider searches The results of the previous section can be employed in order to explore the capability of the process \(K^{\pm}\to\pi^{\pm}X\) to probe new light vector bosons. The DarkCast package [21, 22] enables one to derive bounds on vector and axial couplings of NP scenarios featuring new spin-1 particles by imposing current and future experimental constraints on several processes. The bounds in the \((m_{X},g_{x})\) plane arising from a variety of beam-dump and collider searches [22] as well as from the flavour changing process \(K^{\pm}\to\pi^{\pm}X\) discussed in this paper are shown in fig. 2. The plot refers to the benchmark two-Higgs doublet model in [22], with charge assignment \(x_{V}^{e}=0.044\), \(x_{V}^{\nu}=0.05\), \(x_{V}^{u,c,t}=1.021\), \(x_{V}^{d,s,b}=0.015\), \(x_{A}^{e}=-0.1\), \(x_{A}^{\nu}=0.05\), \(x_{A}^{u,c,t}=-0.95\) and \(x_{A}^{d,s,b}=-0.1\). The bounds from the process \(K^{\pm}\to\pi^{\pm}X\) are obtained by assuming tree-level, flavour-diagonal (_i.e._ disregarding the one-loop effects) couplings in eq.(1), and exploiting the measurement of \({\rm BR}(K^{+}\to\pi^{+}\nu\nu)=(1.73^{+1.15}_{-1.05})\times 10^{-10}\) by the E949 experiment at BNL [33]. In particular, we imposed the \(2\sigma\) bound \({\rm BR}(K^{+}\to\pi^{+}X)\lesssim 4\times 10^{-10}\). Remarkably, in all scenarios of fig. 2, the process \(K^{\pm}\to\pi^{\pm}X\) sets the strongest to date model-independent bound in the \((m_{X},g_{x})\) plane for \(m_{X}<m_{K}-m_{\pi}\). ## 4 Conclusions Among the most studied scenarios for new physics beyond the Standard Model are the ones introducing a new, feebly interacting massive particle. A particularly interesting subclass of these models features light spin-1 bosons having masses smaller than a few GeVs. Such a possibility has been extensively analysed in the light of experimental searches at colliders and at beam-dump experiments. However, considerably less attention has been given to the flavour constraints by the rare decay \(K^{\pm}\to\pi^{\pm}X\), which is the object of our work. 
We extended previous analyses by building the most general \(\Delta S=1\) chiral Lagrangian as induced by the SM weak interactions up to order \({\cal O}(p^{4})\). In particular we observe that the \({\cal O}(p^{2})\) terms in such a Lagrangian describe the loop-induced effects from [25] to the decay process under consideration, while the \({\cal O}(p^{4})\) terms generate the flavour transition already at the tree-level. Due to a different \(\lambda\) suppression (\(\lambda\) is here the Wolfenstein parameter), the two effects turn out being comparable in strength. However, whereas the loop-induced effects suffer from a dependence from the details of the explicit UV mechanism providing the spin-1 boson with a mass, the tree-level ones are model-independent. With our work we showed that the flavour process \(K^{\pm}\to\pi^{\pm}X\) puts the strongest model-independent constraints on the diagonal axial-vector couplings to light quarks with a NP light spin-1 particle \(X\) in the mass range \(m_{X}<m_{K}-m_{\pi}\). Figure 2: The dark shaded area represents the tree-level \(K^{\pm}\to\pi^{\pm}X\) bound obtained in ref. [26]. Limits from beam-dump and collider searches are obtained with DarkCast [22] and are shown for the purpose of comparison for the three benchmark models given in Table 1.
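As a rough numerical illustration of the reach of this constraint, the following sketch evaluates the branching ratio implied by eq. (13) and inverts the E949 limit \(\mathrm{BR}(K^{+}\to\pi^{+}X)\lesssim 4\times 10^{-10}\) into a bound on \(g_{x}x_{A}^{u}\). The experimental inputs and, in particular, the values taken for \(g_{8}\) and \(g_{8}^{S}\) are placeholder assumptions made here for illustration only; they are not the fitted values used in ref. [26].

```python
from math import pi, sqrt

# illustrative inputs (GeV units; hbar in GeV*s); g8, g8S are assumed placeholders
G_F, f_pi, V_us = 1.166e-5, 0.0924, 0.2243
m_K, tau_K, hbar = 0.4937, 1.238e-8, 6.582e-25
g8, g8S = 3.6, 0.0

def br_K_to_pi_X(m_X, gx_xA):
    """BR(K+ -> pi+ X) from eq. (13), valid in the limit m_K >> m_X, m_pi."""
    gamma = (m_K / (2 * pi)) * (m_K / m_X)**2 * G_F**2 * f_pi**4 * V_us**2 \
            * gx_xA**2 * (g8 + 0.75 * g8S)**2
    return gamma * tau_K / hbar            # divide by the total width hbar / tau_K

# invert BR < 4e-10 (E949, 2 sigma) into a limit on g_x * x_A^u at fixed m_X
for m_X in (0.01, 0.05, 0.1, 0.2):         # GeV
    limit = sqrt(4e-10 / br_K_to_pi_X(m_X, 1.0))
    print(f"m_X = {m_X:4.2f} GeV :  g_x * x_A^u < {limit:.1e}")
```

For axial charges of order one, the resulting limits on \(g_{x}\) in this mass range are much stronger than typical collider and beam-dump constraints, in line with the message of fig. 2.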
2309.08361
Bystanders of Online Moderation: Examining the Effects of Witnessing Post-Removal Explanations
Prior research on transparency in content moderation has demonstrated the benefits of offering post-removal explanations to sanctioned users. In this paper, we examine whether the influence of such explanations transcends those who are moderated to the bystanders who witness such explanations. We conduct a quasi-experimental study on two popular Reddit communities (r/askreddit and r/science) by collecting their data spanning 13 months-a total of 85.5M posts made by 5.9M users. Our causal-inference analyses show that bystanders significantly increase their posting activity and interactivity levels as compared to their matched control set of users. Our findings suggest that explanations clarify and reinforce the social norms of online spaces, enhance community engagement, and benefit many more members than previously understood. We discuss the theoretical implications and design recommendations of this research, focusing on how investing more efforts in post-removal explanations can help build thriving online communities.
Shagun Jhaver, Himanshu Rathi, Koustuv Saha
2023-09-15T12:39:14Z
http://arxiv.org/abs/2309.08361v2
# Bystanders of Online Moderation: Examining the Effects of Witnessing Post-Removal Explanations ###### Abstract. Prior research on transparency in content moderation has demonstrated the benefits of offering post-removal explanations to sanctioned users. In this paper, we examine whether the influence of such explanations transcends those who are moderated to the bystanders who witness such explanations. We conduct a quasi-experimental study on two popular Reddit communities (r/askreddit and r/science) by collecting their data spanning 13 months--a total of 85.5M posts made by 5.9M users. Our causal-inference analyses show that bystanders significantly increase their posting activity and interactivity levels as compared to their matched control set of users. Our findings suggest that explanations clarify and reinforce the social norms of online spaces, enhance community engagement, and benefit many more members than previously understood. We discuss the theoretical implications and design recommendations of this research, focusing on how investing more efforts in post-removal explanations can help build thriving online communities. Key words and phrases: content moderation, social media, transparency
## 1. Introduction

A prior study has shown that when offered removal explanations in any online community, users tend to improve their posting behavior in that community in the future (Kumar et al., 2020). Such evidence has been used to motivate platforms, community moderators, and policymakers to continue to push for increased, meaningful transparency in their moderation practices. This study seeks to add further empirical evidence on the effects of offering transparency in content moderation on social media platforms. Specifically, we look at whether such transparency can serve users other than those sanctioned. Prior research has provided evidence for the educational benefits of offering removal explanations for users whose content is removed (Kumar et al., 2020; Kumar et al., 2020). However, the effects on _bystanders_ who witness the post-removal and the explanation behind it have not been tested. In this research, we ask the question: _Do public removal explanations intended for the sanctioned users influence the posting behavior of bystanders to those explanations?_

We collected a dataset of 85.5M posts from two large Reddit communities, _r/AskReddit_ and _r/science_, over the time period Dec 2021-Dec 2022, and developed a computational framework based on causal inference that matched users who witnessed a removal explanation in June 2022 with users who did not witness any explanation. Comparing the post-treatment behavior of these matched groups, we found that exposure to removal explanations significantly boosted the posting activity and interactivity of bystanders as compared to non-bystanders. This shows that the educational benefits of moderation transparency are more broadly applicable than previously understood (Kumar et al., 2020). Drawing upon this insight, we argue that community managers must invest more time and effort in increasing moderation transparency through explanation messages. On the other hand, witnessing explanation messages did not significantly enhance the posting quality of bystanders. We speculate on the causes of this empirical insight and offer directions for future research that may help us better understand the role of explanation messages.

## 2.
Background and Related Work ### Transparency in Content Moderation Moderation systems on social media platforms are designed for governance purposes and often impose measures such as removing content, muting, or banning offenders (Kumar et al., 2020; Kumar et al., 2020). These measures are implemented by content moderators, who may either be volunteers among the platform's user base or commercial content moderators hired by the platform (Kumar et al., 2020; Kumar et al., 2020). More recently, AI-driven tools have been used to assist in moderation processes (Kumar et al., 2020; Kumar et al., 2020; Kumar et al., 2020). We focus here on transparency in end-users' experience with moderation processes. _Transparency_ implies opening up "the working procedures not immediately visible to those not directly involved to demonstrate the good working of an institution" (Kumar et al., 2020). We situate our work within a line of research that examines the impact of content moderation on end-users. Scholars have investigated the impact of both user-level (Kumar et al., 2020; Kumar et al., 2020; Kumar et al., 2020; Kumar et al., 2020) and community-wide sanctions (Kumar et al., 2020; Kumar et al., 2020). This has included studies using a variety of methods, such as interviews (Kumar et al., 2020), design workshops (Kumar et al., 2020), surveys (Kumar et al., 2020; Kumar et al., 2020), and log analyses (Kumar et al., 2020; Kumar et al., 2020; Kumar et al., 2020). Some studies in this area have also highlighted the benefits of offering moderation explanations to sanctioned users (Kumar et al., 2020; Kumar et al., 2020). Our focus is also on end-users who witness, although they are not directly affected by, the moderation sanctions. By doing so, we contribute to building a theory (Kumar et al., 2020) that prescribes to community managers which moderation interventions should be deployed, under what circumstances, and with what expected outcomes. In examining the complexities of enacting content moderation, researchers have identified several issues regarding transparency in the procedures followed by platforms when applying punitive measures (Kumar et al., 2020). First, the criteria for determining inappropriate content might not be well-established before moderation decisions are made (Kumar et al., 2020). Legal experts have raised concerns that despite social media platforms publicly sharing their content policies, they often fail to adequately consider the contextual factors surrounding the content, such as its localized meaning and the identities of the speakers and audiences, when evaluating its appropriateness (Ross and Senn, 2017). Second, there are inter-platform differences in how norm violations are conceptualized. For example, an HCI study comparing the content policies of 15 platforms found a lack of consensus in defining what qualifies as online harassment and how forcefully content deemed as harassment should be moderated (Kasner et al., 2017). Consequently, when these vague content policies are implemented for content regulation, it can lead to ambiguity in resolving moderation cases (Ross and Senn, 2017). Finally, and most pertinent to our study, communication with end-users regarding moderation decisions is often found to be deficient in details (Ross and Senn, 2017; Ross and Senn, 2017). 
### Removal Explanations and Bystanders to Norm Violations Several prior studies have emphasized the significance of incorporating moderation notifications and explanations into the design of moderation systems (Kasner et al., 2017; Kasner et al., 2017; Kasner et al., 2018; Kasner et al., 2019). For example, researchers have shown that when Facebook and Reddit platforms do not inform users about their content removal (Ross and Senn, 2017), users question which platform policy they have violated (Kasner et al., 2017; Ross and Senn, 2017). Besides removal notification, users desire a justification for why their posts got removed, deeming it a significant factor in their perception of moderation fairness (Kasner et al., 2018). Users also express dissatisfaction with the inconsistent punishments meted out to them versus others, leading them to request explanations further (Kasner et al., 2017; Kasner et al., 2019). Many studies have empirically shown the benefits of offering removal explanations in improving the behavior of moderated users (Kasner et al., 2018; Kasner et al., 2019; Kasner et al., 2019). For example, Tyler et al. found that users who were provided education about platform rules in the week following their post removal were less likely to post new violating content (Kasner et al., 2019). We extend this research by investigating the utility of explanations in influencing the behavior of bystanders. Curiously, Reddit moderators offer explanations publicly by commenting on the removed submission. While this is not the sole communication mode--indeed, many moderators privately message users to inform them about moderation--prior research has argued that public explanations serve to enhance broader transparency efforts (Kasner et al., 2018; Kasner et al., 2019). On Reddit, users already engaging with a post retain access to it even after it is removed from the main subreddit; in this sense, removed submissions are not really _removed_, just hidden from the public view. By publicly explaining the reason behind post removal, explanation comments serve users who stumble upon it or are already engaged. Encouraging voluntary compliance with behavioral norms in a community requires that community members know the norms and be aware of them when being active within the community. Kiesler et al. (Kasner et al., 2019) argue that people learn the community norms in three ways: (1) observing other people's behavior and its consequences, (2) seeing codes of conduct, and (3) behaving and directly receiving feedback. Prior research has demonstrated the importance of users seeing codes of conduct (Ross and Senn, 2017) and directly receiving feedback in improving their subsequent behavior (Kasner et al., 2019; Kasner et al., 2019). We focus here on establishing the utility of bystanders observing other people's norm violations and the resulting consequences. In terms of reducing the posting of norm-violating content, some research has focused on the roles bystanders can play in the context of online harassment. Blackwell et al. found that labeling a variety of technology-enabled abusive experiences as 'online harassment' helps bystanders _understand_ the breadth and depth of this problem (Bradbury et al., 2017). Further, designs that motivate bystander intervention discourage harassment through normative enforcement (Bradbury et al., 2017). Taylor et al. 
(Taylor et al., 2017) additionally found that design solutions that encourage empathy and accountability can promote bystander intervention to cyberbullying. Extending this line of research to a broader range of norm violations, we analyze how bystanders are affected by their exposure to post-removal explanations.

### Observational Research on Social Media

Prior HCI and CSCW research has recognized that observational analyses of social media data can serve as a valuable tool for understanding society and evaluating changes in users' behavior, especially regarding their use of social network sites (Sakak et al., 2018). Regarding our study's context, empirical research on the effects of various content moderation interventions has often deployed observational analyses of social media logs (Bahdan et al., 2017; Bahdan et al., 2018; Sakak et al., 2018; Sakak et al., 2018). Similar to our work, such research has primarily examined behavior patterns over more extended timeframes, typically spanning months (Han et al., 2018; Krawczyk et al., 2018; Krawczyk et al., 2018). The impact of an intervention, whether internal or external, is best studied through causal-inference approaches such as randomized controlled trials (RCTs). However, these approaches have certain limitations. First, experimental studies requiring participant consent can be constrained by concerns about the observer effect (Sakak et al., 2018)--that individuals might alter their typical behavior when they are aware of being monitored or observed. Second, conducting experimental research without participants' awareness is considered unethical, especially within the human-centered research paradigm (Sakak et al., 2018; Krawczyk et al., 2018). Finally, conducting experiments without prior awareness of their potential impact on participants can lead to long-term adverse consequences for both platforms and individuals. As a result, observational studies can serve as a viable alternative in situations where experimental approaches may not be feasible or ethical. While observational studies may not establish true causality, they are structured to minimize confounds and investigate longitudinal data, offering stronger evidence than basic correlational analyses (Krawczyk et al., 2018). Recently, there has been growing interest in these types of studies within the fields of HCI and behavioral science, including those analyzing social media data (Bahdan et al., 2017; Sakak et al., 2018; Sakak et al., 2018; Sakak et al., 2018; Sakak et al., 2018; Sakak et al., 2018; Sakak et al., 2018). Significantly, the research conducted by Saha et al. prompted us to operationalize metrics for assessing social media behavior, including factors like activity and interactivity.

## 3. Data

In this paper, we study the effects of observing post-removal explanations on Reddit. We conducted an observational study on two major subreddits, _r/AskReddit_ (43M members) and _r/science_ (31M members). Fig. 1 shows example post-removals on these subreddits.

Figure 1. Examples of post-removals and explanations by a moderator on (a) _r/AskReddit_ (here, the explanation is provided by the AutoModerator), and (b) _r/science_ (here, the explanation is provided by a human moderator).

We downloaded the data from these subreddits over 13 months, between 01 December 2021 and 31 December 2022, using the _pushshift.io_ service. The downloaded data was in a Zstandard-compressed and encoded format. We iterated through this dataset, decompressing and decoding it in smaller chunks, and simultaneously stored the readable data in SQLite database tables. We queried the database to access the data for the ensuing analyses in the paper.
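As a rough illustration of this ingestion step, the following is a minimal sketch rather than the paper's actual pipeline: the dump file name, table schema, and selected JSON fields are our assumptions, and the `zstandard` package is an assumed dependency.

```python
import io
import json
import sqlite3

import zstandard as zstd  # assumed dependency for Zstandard decompression

DUMP_FILE = "RS_2022-06.zst"   # hypothetical pushshift dump name
DB_FILE = "reddit.db"

conn = sqlite3.connect(DB_FILE)
conn.execute(
    "CREATE TABLE IF NOT EXISTS posts "
    "(id TEXT, author TEXT, subreddit TEXT, created_utc INTEGER, removed INTEGER, body TEXT)"
)

# Stream-decompress the dump in chunks instead of loading it into memory.
dctx = zstd.ZstdDecompressor(max_window_size=2**31)
with open(DUMP_FILE, "rb") as fh:
    reader = io.TextIOWrapper(dctx.stream_reader(fh), encoding="utf-8")
    batch = []
    for line in reader:
        obj = json.loads(line)
        if obj.get("subreddit") not in ("AskReddit", "science"):
            continue
        batch.append((
            obj.get("id"), obj.get("author"), obj.get("subreddit"),
            obj.get("created_utc"),
            int(obj.get("removed_by_category") is not None),  # field name is an assumption
            obj.get("selftext") or obj.get("body", ""),
        ))
        if len(batch) >= 10_000:          # write in small chunks
            conn.executemany("INSERT INTO posts VALUES (?,?,?,?,?,?)", batch)
            conn.commit()
            batch = []
    if batch:
        conn.executemany("INSERT INTO posts VALUES (?,?,?,?,?,?)", batch)
        conn.commit()
conn.close()
```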
Table 1 summarizes the data (submissions and comments) collected for our study. Note that we use the term _post_ to indicate posting activity in the form of either submissions or comments; therefore, for any given period \(T\), \(N_{P}(T)=N_{S}(T)+N_{C}(T)\) (where \(N_{P}\), \(N_{S}\), and \(N_{C}\) denote the number of posts, submissions, and comments respectively).

### Defining Treated and Control Users

Our study employed a causal-inference framework, drawing on similar approaches in prior research (Bowman et al., 2017).

## 4. Methods

### Study Design and Rationale

Our study aims to understand the effects of providing explanations for post removals in an online community. Such a problem would be best studied using an A/B test or experimental approaches. However, conducting such an experiment has challenges and raises ethical concerns (Han et al., 2017; Wang et al., 2018; Wang et al., 2019). Given these considerations, we drew on quasi-experimental approaches to observational data. We adopted a causal-inference approach based on the potential outcomes framework proposed by Rubin (2017). This approach simulates an experimental setting by matching individuals (Treated and Control) on several covariates (Krishnan et al., 2017). For a given treatment, \(T\), two potential outcomes are compared: (1) when a user is exposed to \(T\) (\(T=1\)), and (2) when a user is not exposed to \(T\) (\(T=0\)). Because it is impossible to obtain both kinds of outcomes simultaneously for the same user, this framework estimates the missing counterfactual for a user based on the outcomes of a matched user--another user with similar covariates (attributes and behaviors) but not exposed to \(T\). Our work drew motivation from prior works that adopted similar causal-inference approaches on social media data (Han et al., 2017; Wang et al., 2019; Wang et al., 2019).

### Matching for Causal-Inference

#### 4.2.1. Covariates for Matching

We operationalized a number of covariates that we would use for matching the Treated and Control users, motivated by prior work (Han et al., 2017; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), as listed below. Each covariate was measured using the data in the user's pre-treatment history.

* _Frequency of Comments_: The normalized quantity of comments per day in the pre-treatment period.
* _Frequency of Submissions_: The normalized quantity of submissions per day in the pre-treatment period.
* _User Interactivity_: The ratio of the number of comments to the total number of posts.
* _Submission Removal Rate_: The ratio of removed submissions to total submissions posted by the user.
* _Karma_: The average karma across the comments and submissions made by the user.
* _Normalized \(n\)-grams_: The normalized occurrences of the top 1000 \(n\)-grams (\(n=1,2\)).

#### 4.2.2. Stratified Propensity Score Matching

As mentioned above, we used matching to find pairs (generalizable to groups) of Treated and Control users with statistically similar covariates. We adopted the propensity score matching approach that matches users based on propensity scores, which is essentially a user's _likelihood_ of receiving the treatment. However, exact one-to-one propensity score matching can suffer from biases (Wang et al., 2019).
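As a rough, illustrative sketch of the propensity-score step (the data-frame layout, column names, input file, and the pandas/scikit-learn dependencies are our assumptions, not details from the paper):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# users: one row per user; `treated` is 1 for bystanders of an explanation, 0 otherwise,
# and the remaining columns hold the pre-treatment covariates described above.
users = pd.read_csv("user_covariates.csv")          # hypothetical file
covariate_cols = [c for c in users.columns if c not in ("user_id", "treated")]

# Propensity score: a user's estimated likelihood of receiving the treatment.
model = LogisticRegression(max_iter=1000)
model.fit(users[covariate_cols], users["treated"])
users["propensity"] = model.predict_proba(users[covariate_cols])[:, 1]

# Group users into equal-width propensity strata (200 in the paper),
# keeping only strata with enough Treated and Control users.
users["stratum"] = np.floor(users["propensity"] * 200).clip(upper=199).astype(int)
counts = (users.groupby(["stratum", "treated"]).size()
          .unstack(fill_value=0).reindex(columns=[0, 1], fill_value=0))
valid = counts[(counts[0] >= 10) & (counts[1] >= 10)].index
matched = users[users["stratum"].isin(valid)]
```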
Therefore, motivated by prior work (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), we adopted _stratified propensity score matching_, which can balance the bias-variance tradeoff of either too biased (one-to-one match) or too variant (unmatched) data comparisons. In a stratified matching approach, users with similar propensity scores are grouped into strata. Hence, every stratum consists of users with similar covariates (Wang et al., 2019). Through this approach, we isolated and estimated treatment effects within each stratum. For the above matching, we computed the propensity scores by building a logistic regression model with the covariates as independent variables and a user's binary treatment score (1 for Treated users and 0 for Control users) as the dependent variable. We segregated the distribution of propensity scores into 200 strata of equal width. To ensure that our causal analysis was restricted to a sufficient number of similar users, we discarded strata with fewer than 10 Treated and 10 Control users. This led to a final matched dataset of 50 strata (4,842 Treated users and 146,922 Control users) in _r/AskReddit_ and 33 strata (4,890 Treated users and 176,324 Control users) in _r/science_.

### Measuring Treatment Effects

After matching the Treated and Control users, we measured the differences in the post-treatment behaviors of the users. For this, we operationalized three outcomes--1) _Frequency of posting_, 2) _Interactivity_, and 3) _Submission Removal Rate_--for the users in the post-treatment period. Drawing on the difference-in-differences approach in causal inference (Gomez et al., 2018), we calculated the average treatment effect (ATE) as the average of the difference of changes in the Treated users and the Control users per stratum. In addition, we obtained the effect size (Cohen's \(d\)) and evaluated the statistical significance of differences using relative \(t\)-tests. We also conducted Kolmogorov-Smirnov (\(KS\)) tests to evaluate the differences in the distributions of the Treated and Control groups' outcomes.

## 5. Results

Table 2 summarizes our observations of the differences in the post-treatment outcomes in our study. We describe our findings below.

_Posting Frequency._ We find significant differences in the posting frequency of Treated and matched Control individuals. On _r/AskReddit_, the ATE is 0.453, which can be roughly interpreted as the treatment increasing the frequency of posts by 1 for about 45.3% of the individuals. We see a high effect size (0.807) and significant differences as per the \(t\)-test and \(KS\)-test (\(p<0.0001\)). We also see convergent findings in _r/science_, with an ATE of 0.025, a Cohen's \(d\) of 1.075, and significant differences as per the \(t\)-test and \(KS\)-test (\(p<0.0001\)). Higher posting frequency indicates that the Treated users (bystanders) became more active in the subreddits after witnessing the post-removal explanations. This measure is an indicator of positive community behavior.
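The per-stratum effect estimation described in Section 4 can be sketched roughly as follows (illustrative only; the stratum data layout and the NumPy/SciPy dependencies are our assumptions):

```python
import numpy as np
from scipy import stats

# `strata` is assumed to be a list of tuples
# (pre_treated, post_treated, pre_control, post_control), one per stratum,
# where each element is an array of an outcome (e.g., posting frequency) per user.
treated_changes = np.array([(post_t - pre_t).mean() for pre_t, post_t, _, _ in strata])
control_changes = np.array([(post_c - pre_c).mean() for _, _, pre_c, post_c in strata])

# Difference-in-differences ATE, averaged over strata.
ate = float((treated_changes - control_changes).mean())

# Cohen's d using a pooled standard deviation of the per-stratum changes.
pooled_sd = np.sqrt((treated_changes.var(ddof=1) + control_changes.var(ddof=1)) / 2)
cohens_d = (treated_changes.mean() - control_changes.mean()) / pooled_sd

# Significance: relative t-test (paired by stratum) and a KS test on the two distributions.
t_stat, t_p = stats.ttest_rel(treated_changes, control_changes)
ks_stat, ks_p = stats.ks_2samp(treated_changes, control_changes)
```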
## 6. Discussion

We examined the effects of witnessing post-removal explanations on bystanders over two dimensions--their future posting activity and the frequency of their future post-removals. In this section, we examine the implications of our findings for moderators, site administrators, designers, and future research.

### Removal Explanations Help Boost Posting Frequency

We found that on both _r/AskReddit_ and _r/science_, users who got exposed to removal explanations directed at moderated others significantly increased their posting activity as compared to users who did not witness any explanations. It could be that seeing explanation messages indicated to bystanders that the community is well-moderated. This, in turn, could have enhanced their inclination to be active within the community. We note that this result contrasts with Jhaver et al.'s findings for moderated users: exposure to removal explanations reduced those users' future posting activity. One reason for this could be that users who suffer moderation may find it more difficult to accept the justification for their post-removals than other bystanders. Prior research has often grappled with the tradeoffs of moderation actions reducing posting traffic at the cost of improving posting quality (Han et al., 2015; D'Amico et al., 2016; D'Amico et al., 2016; D'Amico et al., 2016). However, in this study's context, for any given removed submission, there is only one moderated user but potentially many more bystanders. Thus, our results suggest that providing explanation messages may boost the overall posting frequency in a community. This empirical insight offers a powerful incentive to community managers considering the deployment of explanation messages.

### Removal Explanations Help Increase Community Engagement

We found that exposure to others' explanation messages increases posting interactivity. That is, bystanders' comments constitute a greater proportion of their posting volume after the treatment. Prior research has shown that this metric is an important factor in community engagement (Shen et al., 2016). Therefore, this finding suggests that observing the reasoned explanation for post removals can inform bystanders why certain types of posts are unacceptable in the community, help them learn its accepted norms (Bahdan et al., 2016), and thereby increase their confidence in instituting a deeper engagement with the community. This further demonstrates the utility of offering post-removal explanations. Another explanation for this finding is that users perceive that moderators attend to and regulate inappropriate submissions more than inappropriate comments. This perception may incline them to engage more in posting comments than submissions in an effort to avoid experiencing post removals themselves. As prior research shows, users often develop "folk theories" of content moderation processes in order to make sense of them (Jhaver et al., 2017; D'Amico et al., 2016). Going forward, qualitative studies could inquire whether the posting activity of users is shaped by their folk theories of where the content moderation efforts are focused.

### Removal Explanations Do Not Impact Post Removals

Our analysis shows that removal explanations do not significantly impact the future post-removals of bystanders. This contrasts with previous results for moderated users: Jhaver et al. showed that offering removal explanations reduced the future post-removals of moderated users (D'Amico et al., 2016). This suggests that explanation messages boost the posting quality of moderated users more than bystanders. Why is this the case? One reason could be that, having experienced a post removal, moderated users may be likelier to attend to _all_ community guidelines before posting their next submissions. On the other hand, witnessing a removal explanation may not be a strong enough incentive for bystanders to ensure compliance with community guidelines in their next submissions.
It is possible that witnessing explanation messages educates bystanders about the violated community norm specific to the corresponding removed post and leads them to avoid the same violation in the future, yet they continue violating other community norms. While beyond the scope of the current paper, a more granular analysis could examine whether norm-specific learning occurs through removal explanations among bystanders. ### Design Implications This work bears design implications regarding the positive impacts of enacting transparency in online content moderation. The empirical evidence presented here informs community managers to put more effort into providing explanations for their sanctions, and more importantly, make these explanations _publicly visible_, so that they can educate bystanders. While content moderation has proliferated as an important aspect in online communities, providing explanations is still not as prevalent. For instance, to conduct this study, we originally started with four large subreddits- we had also collected over \(\sim\)2M posts from _r/politics_ (8.4M users) and _r/technology_ (15M users). However, despite being large subreddits and having many moderators, neither of these communities provided any post-removal explanations (which also prevented us from including their data in our analyses). Prior work has noted challenges in providing explanations in all instances, such as moderator fatigue and limitations of automated moderation tools (Han et al., 2017; Krizhevsky et al., 2017). However, with the advent of generative AI and large-language model-based technologies, it would be interesting to explore the design space of curating automated explanation messages through these emerging technologies. The computational framework of our study can be easily extended to delineate the effects of different kinds of explanations, e.g., explanation length and politeness level. The results of such analyses can inform platform owners and community managers about the suitability of different explanation types. Community managers can also examine whether different approaches to explanations are warranted for different norm violations. ### Limitations and Future Directions Our analyses focused on two large Reddit communities. Therefore, our results are most readily applicable to other subreddits of similar size. Future analyses would benefit from investigating the circumstances under which these results replicate (or do not) on other platforms and communities. The computational framework we have presented here should help such inquiries. Prior similar efforts on developing extendable computational frameworks for evaluating moderation actions have similarly used data from a limited number of samples (Beng et al., 2019; Chen et al., 2019; Krizhevsky et al., 2017; Krizhevsky et al., 2017). For this project, we had initially planned a comparative analysis of the effects of human v/s bot explanations on bystanders. However, our data review showed that all r/AskReddit explanations were provided by bots and all r/science explanations by human moderators during the treatment period. Therefore, we could not conduct our planned comparative analysis for either community. Future work should explore how AI-generated explanations compare to human-offered explanations in influencing bystanders' behavior, extending similar inquiries in prior research (Krizhevsky et al., 2017). 
Our analysis does not take into account the in-situ practical concerns and constraints under which content moderators work (Krizhevsky et al., 2017; Krizhevsky et al., 2017). Examining how moderators create explanations and developing tools to ease that process may help them offer explanation messages at a higher rate. ## 7. Conclusion Transparency in communications is a key concern for moderated users (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). On the other hand, secretiveness about moderation decisions triggers speculation among users who suspect potential biases (Han et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). In this paper, we focus on one important mode of enacting greater transparency in moderation decisions: publicly visible messaging by moderators that reveals the reasons behind submission removals. Our analysis shows that witnessing such messages significantly boosts the posting and interactivity levels of bystanders. This suggests that adopting an educational approach to content moderation, as opposed to a strictly punitive one, can lead to enhanced community outcomes.
2301.13605
Aspects of the map from Exact RG to Holographic RG in AdS and dS
In earlier work the evolution operator for the exact RG equation was mapped to a field theory in Euclidean AdS. This gives a simple way of understanding AdS/CFT. We explore aspects of this map by studying a simple example of a Schroedinger equation for a free particle with time dependent mass. This is an analytic continuation of an ERG like equation. We show for instance that it can be mapped to a harmonic oscillator. We show that the same techniques can lead to an understanding of dS/CFT too.
Pavan Dharanipragada, Semanti Dutta, B. Sathiapalan
2023-01-31T13:06:17Z
http://arxiv.org/abs/2301.13605v1
# Aspects of the map from Exact RG to Holographic RG in AdS and dS ###### Abstract In earlier work the evolution operator for the exact RG equation was mapped to a field theory in Euclidean AdS. This gives a simple way of understanding AdS/CFT. We explore aspects of this map by studying a simple example of a Schroedinger equation for a free particle with time dependent mass. This is an analytic continuation of an ERG like equation. We show for instance that it can be mapped to a harmonic oscillator. We show that the same techniques can lead to an understanding of dS/CFT too. ###### Contents * 1 Introduction * 2 Mapping Free Particle with Time Dependent Mass to a Harmonic Oscillator * 2.1 Mapping Actions * 2.1.1 Lorentzian Case * 2.1.2 Euclidean Case * 2.2 Mapping Schrodinger Equations * 2.2.1 Lorentzian * 2.2.2 Euclidean * 2.2.3 Analytic Continuation * 2.3 Semiclassical Treatment * 2.3.1 Using Harmonic Oscillator Formulation * 2.3.2 Using ERG formulation * 3 * 3 ERG to field theory in dS * 3.1 Analytic Continuation * 3.1.1 Analytic Continuation of the Action * 3.2 Mapping * 3.2.1 Mapping from Quantum Mechanics * 3.2.2 Mapping from ERG * 3.3 Connections * 3.4 dS-CFT correspondence * 4 Obtaining Bulk field from ERG * 5 Summary and Conclusions Introduction It has been recognized from the early days of the AdS/CFT correspondence [1, 2, 3, 4] that the radial coordinate of the AdS space behaves like a scale for the boundary field theory. This observation follows directly from the form of the AdS metric in Poincare coordinates: \[ds^{2}=R^{2}\frac{dz^{2}+dx^{\mu}dx_{\mu}}{z^{2}} \tag{1.1}\] This leads naturally to the idea of the "Holographic" renormalization group: If the AdS/CFT conjecture is correct then radial evolution in the bulk must correspond to RG evolution in the boundary theory [9]-[25]. In [5, 6, 7] a mathematically precise connection was made between the exact RG (ERG) equation of a boundary theory and holographic RG equations of a bulk theory in Euclidean AdS (EAdS) space. It was shown that the ERG evolution operator of the boundary theory can be mapped by a field redefinition to a functional integral of a field theory in the bulk AdS space. This guarantees the existence of an EAdS bulk dual of a boundary CFT without invoking the AdS/CFT conjecture 1 Footnote 1: There is still the open question of the locality properties of interaction terms in this bulk field theory. For the case of the \(O(N)\) model some aspects of this issue were discussed in [7]. Given that the crucial ingredient in this connection with ERG is the form of the metric (1.1) with the factor \(z^{2}\) in the denominator, one is naturally led to ask if similar mappings can be done for the dS metric \[ds^{2}=L^{2}\frac{-d\eta^{2}+dx^{\mu}dx_{\mu}}{\eta^{2}} \tag{1.2}\] It too has a scaling form. The difference is that the scale is a time like coordinate - so RG evolution seems to be related to a real time evolution. In fact this metric is related to the EAdS metric by an analytic continuation: \(i\eta=z,\ iL=R\). Thus real time evolution should be related to RG evolution by analytic continuation. These points have been discussed in many of the early papers on de Sitter holography [[30]-[43]], (see also [44] for more recent work and further references.) This paper is an attempt to address the question of whether the mapping of [5] can be generalised to include for instance dS-CFT. One is also led to explore other kinds of mapping in an effort to understand the nature of this map better. 
In [5] the map was first introduced in the case of a 0-dimensional field theory on the boundary, which gave a one dimensional bulk field theory or equivalently a point particle quantum mechanical system. In this paper therefore we start by exploring maps for point particle quantum mechanical systems. In Section 2 we show that the dynamics of a free particle with a time dependent mass can be mapped to a harmonic oscillator. The Euclidean version of this is relevant for the ERG equation. In Section 3 the case of mapping a field theory ERG equation to de Sitter space is considered by starting with the analytically continued form. This complements the discussion of earlier papers where dS-CFT is described as an analytic continuation of EAdS-CFT. In Section 4 we give some examples of two point functions obtained using the techniques of [5] being analytically continued to dS space. Section 5 contains a summary and conclusions.

## 2 Mapping Free Particle with Time Dependent Mass to a Harmonic Oscillator

In this section we reconsider the construction of [5] where the action for a free field theory in \(D+1\) dimensions with a non-standard kinetic term was mapped to a free field in \(AdS_{D+1}\). When \(D=0\) this is just a particle: we will map a free particle with time dependent mass to a harmonic oscillator.

### Mapping Actions

#### 2.1.1 Lorentzian Case

Consider the following action. It defines an evolution operator for the wave function of a free particle with time dependent mass.

\[S=\frac{1}{2}\int_{t_{i}}^{t_{f}}dt\ M(t)\dot{x}^{2} \tag{2.3}\]

\[\Psi(x,t)=\int dx_{i}\int_{\substack{x(t_{i})=x_{i}\\ x(t)=x}}\mathcal{D}x\ e^{i\frac{1}{2}\int_{t_{i}}^{t}M(t^{\prime})\dot{x}^{2}dt^{\prime}}\,\Psi(x_{i},t_{i}) \tag{2.4}\]

Let \(x(t)=f(t)y(t)\) with \(f^{2}(t)=\frac{1}{M(t)}\). Substitute this in (2.3).

\[S=\frac{1}{2}\int dt\ (\dot{y}^{2}+(\frac{\dot{f}}{f})^{2}y^{2}+2\frac{\dot{f}}{f}\dot{y}y)\]

\[=\frac{1}{2}\int dt\ [\dot{y}^{2}+(\frac{d\ln f}{dt})^{2}y^{2}-(\frac{d^{2}}{dt^{2}}\ln f)y^{2}]+\frac{1}{2}\int dt\ \frac{d}{dt}(\frac{d\ln f}{dt}y^{2})\]

Thus, up to the boundary term, the action is

\[S=\frac{1}{2}\int dt\ [\dot{y}^{2}+e^{\ln f}(\frac{d^{2}}{dt^{2}}e^{-\ln f})y^{2}] \tag{2.5}\]

Now choose

\[e^{\ln f}(\frac{d^{2}}{dt^{2}}e^{-\ln f})=-\omega_{0}^{2} \tag{2.6}\]

and we get

\[\bar{S}=\frac{1}{2}\int dt\ [\dot{y}^{2}-\omega_{0}^{2}y^{2}] \tag{2.7}\]

which is the action for a harmonic oscillator. And we define \(\bar{\Psi}\) by absorbing the contribution from the boundary term:

\[\underbrace{e^{-\frac{1}{2}i\frac{d\ln f(t)}{dt}y^{2}(t)}\Psi(f(t)y,t)}_{\bar{\Psi}(y,t)}=\int dy_{i}\int_{\substack{y(t_{i})=y_{i}\\ y(t)=y}}\mathcal{D}y\ e^{i\frac{1}{2}\int_{t_{i}}^{t}[\dot{y}^{2}-\omega_{0}^{2}y^{2}]dt^{\prime}}\underbrace{e^{-\frac{1}{2}i\frac{d\ln f(t_{i})}{dt}y^{2}(t_{i})}\Psi(f(t_{i})y_{i},t_{i})}_{\bar{\Psi}(y_{i},t_{i})} \tag{2.8}\]

\(\bar{S}\) thus defines an evolution operator for the harmonic oscillator wave function \(\bar{\Psi}\). \(f\) satisfies

\[\frac{d^{2}}{dt^{2}}\frac{1}{f}=-\omega_{0}^{2}\frac{1}{f} \tag{2.9}\]

\(y\) obeys the same equation. Thus we can take

\[\frac{1}{f}=a\,\cos\omega_{0}(t-t_{0}) \tag{2.10}\]

which requires

\[M(t)=a^{2}\cos^{2}\omega_{0}(t-t_{0})\]

Note that one can treat more general cases if one is willing to reparametrize time [26, 27]. Thus let

\[d\tau=\frac{dt}{Mf^{2}} \tag{2.11}\]

Then one gets (2.7), (2.9) and (2.10) with \(\tau\) replacing \(t\).
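As a quick sanity check of (2.9)-(2.10), the following minimal sympy sketch (our illustration, not part of the original derivation) verifies that the chosen \(1/f\) satisfies the required equation and reproduces the stated time dependent mass:

```python
import sympy as sp

t, t0, a = sp.symbols('t t_0 a', real=True)
w0 = sp.symbols('omega_0', positive=True)

f_inv = a * sp.cos(w0 * (t - t0))          # 1/f from Eq. (2.10)

# Eq. (2.9): (d^2/dt^2)(1/f) = -omega_0^2 (1/f)
assert sp.simplify(sp.diff(f_inv, t, 2) + w0**2 * f_inv) == 0

# M(t) = 1/f(t)^2 = a^2 cos^2(omega_0 (t - t_0)), the time dependent mass that allows the map
M = sp.simplify(f_inv**2)
print(M)
```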
In terms of \(t\), (2.9) becomes

\[\frac{d}{dt}(M\dot{f})=\frac{\omega_{0}^{2}}{Mf^{3}} \tag{2.12}\]

Very interestingly, as pointed out in [26], it is clear from (2.7) that the energy of the harmonic oscillator given by

\[E=\frac{1}{2}(\dot{y}^{2}+\omega_{0}^{2}y^{2})\]

is a conserved quantity. In terms of the original variables this is

\[E=\frac{1}{2}((\frac{\dot{x}f-x\dot{f}}{f^{2}})^{2}+\omega_{0}^{2}(\frac{x}{f})^{2})\]

These are known as Ermakov-Lewis invariants - see [26] for references to the literature on these invariants - and we see a nice interpretation for them.

#### 2.1.2 Euclidean Case

In the Euclidean case the functional integral is

\[\Psi(x,\tau)=\int dx_{i}\int_{\substack{x(\tau_{i})=x_{i}\\ x(\tau)=x}}\mathcal{D}x\ e^{-\frac{1}{2}\int_{\tau_{i}}^{\tau}M(\tau^{\prime})\dot{x}^{2}d\tau^{\prime}}\,\Psi(x_{i},\tau_{i}) \tag{2.13}\]

\(\Psi\) in this case is not a wave function. It was shown in [5] that the evolution operator for a \(D\)-dimensional Euclidean field theory is of this form if we take \(M_{E}(\tau)=-\frac{1}{G(\tau)}\) and \(D=0\). In this case \(\Psi\) can be taken to be \(e^{-\mathcal{H}[x_{i},\tau_{i}]}\) where \(\mathcal{H}\) is a Hamiltonian or Euclideanized action. Alternatively (depending on what \(M_{E}(\tau)\) is) it can also be \(e^{W[J]}\) - a generating functional or partition function. Setting \(x=fy\) with \(f^{2}=\frac{1}{M_{E}(\tau)}\), one goes through the same manipulations but replacing (2.6) by

\[e^{\ln f}(\frac{d^{2}}{d\tau^{2}}e^{-\ln f})=+\omega_{0}^{2} \tag{2.14}\]

and (2.7), (2.8) and (2.9) are replaced by

\[\bar{S}=\frac{1}{2}\int d\tau\ [\dot{y}^{2}+\omega_{0}^{2}y^{2}] \tag{2.15}\]

\[\bar{\Psi}(y,\tau)=\int dy_{i}\int_{\substack{y(\tau_{i})=y_{i}\\ y(\tau)=y}}\mathcal{D}y\ e^{-\frac{1}{2}\int_{\tau_{i}}^{\tau}[\dot{y}^{2}+\omega_{0}^{2}y^{2}]d\tau^{\prime}}\,\bar{\Psi}(y_{i},\tau_{i}) \tag{2.16}\]

and

\[\frac{d^{2}}{d\tau^{2}}\frac{1}{f}=\omega_{0}^{2}\frac{1}{f} \tag{2.17}\]

The solutions are of the form

\[f=A\,\mathrm{sech}\,\omega_{0}(\tau-\tau_{0}) \tag{2.18}\]

which means \(M_{E}(\tau)=\frac{1}{A^{2}}\cosh^{2}\omega_{0}(\tau-\tau_{0})\). (2.16) has a \(\tau\) independent action. In this case there are well known physical interpretations for the Euclidean theory. The evolution operator, \(K(y,\tau;y_{i},0)\), where

\[K(y,\tau;y_{i},0)=\int_{\substack{y(0)=y_{i}\\ y(\tau)=y}}{\cal D}y\ e^{-\frac{1}{2}\int_{0}^{\tau}[\dot{y}^{2}+\omega_{0}^{2}y^{2}]d\tau^{\prime}} \tag{2.19}\]

is the density operator of a QM harmonic oscillator in equilibrium at a temperature specified by \(\beta=\tau\). Less well known is that the evolution operator of the Fokker-Planck equation in stochastic quantization can be written in the form given in (2.16). \(\bar{\Psi}\) is then related to the probability function (see, for instance, [29] for a nice discussion). In the next section we discuss the mappings directly for the Schroedinger equation, rather than its evolution operator.

### Mapping Schrodinger Equations

#### 2.2.1 Lorentzian

Let us consider the same mapping from the point of view of the Schroedinger equation for the free particle wave function. Schrodinger's equation for the free particle is

\[i\frac{\partial\Psi(x,t)}{\partial t}=-\frac{1}{2M(t)}\frac{\partial^{2}\Psi(x,t)}{\partial x^{2}} \tag{2.20}\]

\(\Psi\) given by (2.4) obeys this equation. We make a coordinate transformation and a wave function redefinition. Both can be understood as canonical transformations [28].
Let \(x=f(t)y\) with \(f^{2}=\frac{1}{M(t)}\). We take \(f,M\) to be dimensionless. We treat this as a \(0+1\) dimensional field theory where \(x\) has the canonical dimension of \(-\frac{1}{2}\). So \(x=L^{\frac{1}{2}}X\) would define a dimensionless \(X\). \(L\) is some length scale. \[\frac{\partial\Psi(x,t)}{\partial t}=\frac{\partial\Psi(f(t)y,t)}{\partial t} -\frac{\dot{f}y}{f}\frac{\partial\Psi(f(t)y,t)}{\partial y}\] Let \[\Psi(f(t)y,t)=e^{-\frac{1}{2}\alpha y^{2}}\bar{\Psi}(y,t)\] \[\frac{\partial\Psi}{\partial t}=e^{-\frac{1}{2}\alpha y^{2}}(-\frac{1}{2} \dot{\alpha}y^{2}+\frac{\partial}{\partial t})\bar{\Psi}(y,t)\] \[-i\frac{\dot{f}y}{f}\frac{\partial\Psi(f(t)y,t)}{\partial y}=ie^{-\frac{1}{2} \alpha y^{2}}(\alpha\frac{\dot{f}}{f}y^{2}-\frac{\dot{f}}{f}y\frac{\partial}{ \partial y})\bar{\Psi}(y,t)\] \[\frac{1}{M}\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\Psi=\frac{1}{2} \frac{\partial^{2}}{\partial y^{2}}e^{-\frac{1}{2}\alpha y^{2}}\bar{\Psi}=( \frac{1}{2}e^{-\frac{1}{2}\alpha y^{2}}(\alpha^{2}y^{2}-2\alpha y\frac{ \partial}{\partial y}-\alpha+\frac{\partial^{2}}{\partial y^{2}})\bar{\Psi})\] Collecting all the terms one finds that (2.20) becomes: \[i\frac{\partial\bar{\Psi}}{\partial t}=(\frac{1}{2}i\dot{\alpha}-i\alpha \frac{\dot{f}}{f}-\frac{1}{2}\alpha^{2})y^{2}\bar{\Psi}+(i\frac{\dot{f}}{f}y \frac{\partial}{\partial y}+\alpha y\frac{\partial}{\partial y})\bar{\Psi}+ \frac{1}{2}\alpha\Psi-\frac{1}{2}\frac{\partial^{2}}{\partial y^{2}}\bar{\Psi} \tag{2.21}\] We choose \(\alpha=-i\frac{f}{f}\) to get rid of the second term on the RHS. We get \[i\frac{\partial\bar{\Psi}}{\partial t}=[(\frac{1}{2}\frac{d^{2}\ln f}{dt^{2}}- \frac{1}{2}(\frac{d\ln f}{dt})^{2})y^{2}+\frac{1}{2}\alpha-\frac{1}{2}\frac{ \partial^{2}}{\partial y^{2}}]\bar{\Psi}\] As before it can be rewritten as \[i\frac{\partial\bar{\Psi}}{\partial t}=\frac{1}{2}[-e^{\ln f}(\frac{d^{2}}{dt^ {2}}e^{-\ln f})y^{2}-\frac{\partial^{2}}{\partial y^{2}}+\alpha]\bar{\Psi} \tag{2.22}\] Set \[\frac{d^{2}}{dt^{2}}\frac{1}{f}=-\omega_{0}^{2}\frac{1}{f}\] again as before to get \[i\frac{\partial\bar{\Psi}}{\partial t}=\frac{1}{2}[-\frac{\partial^{2}}{ \partial y^{2}}+\omega_{0}^{2}y^{2}+\alpha]\bar{\Psi} \tag{2.23}\] The term \(\frac{1}{2}\alpha\) generates a scale transformation \(e^{-\frac{1}{2}\ln\frac{f(t)}{f(t)}}\) for \(\bar{\Psi}\). #### 2.2.2 Euclidean The Euclidean version is \[\frac{\partial\Psi(x,\tau)}{\partial\tau}=\frac{1}{2M_{E}(\tau)}\frac{ \partial^{2}\Psi(x,\tau)}{\partial x^{2}} \tag{2.24}\] As mentioned above, this is of the form of a Polchinski ERG equation (with \(\frac{1}{2M_{E}(\tau)}=-\dot{G}(\tau)\)) for \(\mathcal{H}\) defined by \(\Psi\equiv e^{-\mathcal{H}}\). 
Going through the same steps one finds, with \(f^{2}=\frac{1}{M_{E}(\tau)}\), \[\frac{\partial\bar{\Psi}}{\partial\tau}=(\frac{1}{2}\dot{\alpha}-\alpha\frac{ \dot{f}}{f}+\frac{1}{2}\alpha^{2})y^{2}\bar{\Psi}+(\frac{\dot{f}}{f}y\frac{ \partial}{\partial y}-\alpha y\frac{\partial}{\partial y})\bar{\Psi}-\frac{1 }{2}\alpha\Psi+\frac{1}{2}\frac{\partial^{2}}{\partial y^{2}}\bar{\Psi} \tag{2.25}\] the condition \(\alpha=\frac{f}{f}\) and the equation becomes \[\frac{\partial\bar{\Psi}}{\partial t}=\frac{1}{2}[-\underbrace{e^{\ln f}( \frac{d^{2}}{dt^{2}}e^{-\ln f})}_{=\ \omega_{0}^{2}}y^{2}+\frac{\partial^{2}}{ \partial y^{2}}-\alpha]\bar{\Psi} \tag{2.26}\] Thus \[\frac{\partial\bar{\Psi}}{\partial\tau}=\frac{1}{2}[\frac{\partial^{2}}{ \partial y^{2}}-\omega_{0}^{2}y^{2}-\alpha]\bar{\Psi} \tag{2.27}\] And \(f\) obeys \[\frac{d^{2}}{dt^{2}}\frac{1}{f}=\omega_{0}^{2}\frac{1}{f} \tag{2.28}\] This is a Euclidean harmonic oscillator equation. Various physical interpretations of this equation were given in the last section. The term \(\alpha\) in (2.27) provides a multiplicative scaling \(e^{-\frac{1}{2}\int_{t_{i}}^{t}dt^{\prime}\ \partial_{\prime}\ln f}=(\frac{f(t_{i})}{f(t)})^{\frac{1}{2}}\) of \(\bar{\Psi}\). #### 2.2.3 Analytic Continuation If we set \(it=\tau\), (2.20) becomes (2.24) provided \(M(-i\tau)=M_{E}(\tau)\). Similarly (2.23) becomes (2.27). Note that in (2.23) \(\alpha=-i\frac{\dot{f}}{f}\). This analytically continues to \(\frac{\dot{f}}{f}\) as required. ### Semiclassical Treatment Most of the AdS/CFT calculations invoke large N to do a semiclassical treatment of the bulk theory- one can evaluate boundary Green's function. The analysis in [5, 7] did this for the ERG treatment - the evolution of the Wilson action/Generating functional were calculated. In [32] a semiclassical treatment was used to obtain the ground state wave function in dS space. For completeness we do the same for the simple systems discussed in this paper. This illustrates the connection between ERG and dS. #### 2.3.1 Using Harmonic Oscillator Formulation Since \[\Psi(x,t)=\int dx_{i}\int\limits_{x}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!x(t _{i}) = x_{i}\ \ {\cal D}x\ e^{i\int_{t_{i}}^{t}L(x(t^{\prime}),\dot{x}(t^{\prime} ),t^{\prime})dt^{\prime}}\Psi(x_{i},t_{i}) \tag{2.29}\] \[x(t) = x\] solves Schroedinger's equation. For the Harmonic Oscillator \[L=\frac{1}{2}(\dot{x}^{2}-\omega_{0}x^{2}) \tag{2.30}\] for the Lorentzian version. One can evaluate the path integral semiclassically by plugging in a classical solution with some regular boundary condition. We choose \(x=0\) at \(t=-\infty\). The initial state wave function is thus a delta function. Classical solution of the EOM is of the form \[x(t)=ae^{-i\omega_{0}t}+a^{*}e^{i\omega_{0}t}\] Since \(a\) should annihilate the vacuum state in the far past we would like the solution to look like \[x(t)\to e^{i\omega_{0}t}\] in order to ensure that we are in the ground state. \[x(t)=x_{f}e^{-i\omega_{0}(t_{f}-t)} \tag{2.31}\] At \(t=-\infty\) we assume that the solution vanishes. This is justified by an infinitesimal rotation \(t\to t+i\epsilon t\). Evaluated on this solution, the action becomes \[S_{classical}=\ \frac{1}{2}x(t)\dot{x}(t)|_{-\infty}^{t_{f}}\] We get \[S_{classical}=\frac{1}{2}i\omega_{0}x_{f}^{2} \tag{2.32}\] Plugging (2.31) into (2.29) we obtain \[\Psi(x_{f})\approx e^{-\frac{1}{2}i\omega_{0}x_{f}^{2}} \tag{2.33}\] If we repeat this for the free field in dS space we get the ground state wave functional [32]. 
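As a small consistency check of (2.31)-(2.32), the following sympy sketch (our illustration; it assumes the \(t\to-\infty\) boundary contribution is damped away by the \(i\epsilon\) prescription) evaluates the on-shell boundary term:

```python
import sympy as sp

t, t_f, x_f = sp.symbols('t t_f x_f', real=True)
w0 = sp.symbols('omega_0', positive=True)

x = x_f * sp.exp(-sp.I * w0 * (t_f - t))        # classical solution, Eq. (2.31)

# On shell the action reduces to the boundary term (1/2) x xdot evaluated at t_f;
# the lower limit is taken to vanish under the i*epsilon rotation described above.
S_cl = sp.Rational(1, 2) * (x * sp.diff(x, t)).subs(t, t_f)
print(sp.simplify(S_cl))    # I*omega_0*x_f**2/2, matching Eq. (2.32)
```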
#### 2.3.2 Using ERG formulation For the Euclidean version, we set \(it=\tau\) and we write \[\Psi(x,\tau)=\int dx_{i}\int_{x}(\tau_{i}) = x_{i}\ \ \mathcal{D}x\ e^{-\int_{\tau_{i}}^{\tau}L_{E}(x(\tau^{ \prime}),\dot{x}(\tau^{\prime}),\tau^{\prime})d\tau^{\prime}}\Psi(x_{i},\tau_{i}) \tag{2.34}\] \[x(\tau) = x\] It is well known that if one does the semiclassical analysis for the Euclidean case with general boundary condition one recovers the thermal density matrix. This is for the time independent Hamiltonian - such as the harmonic oscillator. We will not do this here. Instead we proceed directly to the ERG interpretation of the calculation. Here the Hamiltonian is time dependent. In [5] the analysis given below was applied to \(W[J]\). We repeat it here for the Wilson action. Our starting action in this case is (Note \(\dot{G}<0\)): \[S=-\frac{1}{2}\int_{\tau_{i}}^{\tau_{f}}\frac{\dot{x}^{2}}{\dot{G}} \tag{2.35}\] EOM is given by, \[\partial_{\tau}(\frac{\dot{x}}{\dot{G}})=0\] \[\frac{\dot{x}}{\dot{G}}=b\implies x=bG+c\] We choose \(G\) so that it vanishes at \(\tau=\infty\). For the Euclidean Harmonic oscillator case \(G\) has then to be \[G=-\frac{1}{\omega_{0}}(tanh\ \omega(\tau-\tau_{i})-1)\] Also \(x\to 0\) as \(\tau\rightarrow\infty\). So \(c=0\). \[x=bG \tag{2.36}\] \[x(\tau)=-\frac{b}{\omega_{0}}(tanh\ \omega(\tau-\tau_{i})-1)\] On shell \[S=-\frac{1}{2}\int_{\tau_{i}}^{\tau_{f}}d\tau\ \frac{d}{d\tau}(\frac{x\dot{x}}{G})\] \[=\frac{1}{2}(x(\tau_{f})-x(\tau_{i}))b=\frac{1}{2}[\frac{x(\tau_{f})x(\tau_{ f})}{G(\tau_{f})}-\frac{x(\tau_{i})x(\tau_{i})}{G(\tau_{i})}]\] If we add this change to the initial Wilson action \(\frac{1}{2}\frac{x(\tau_{i})x(\tau_{i})}{G(\tau_{i})}\) we get the final Wilson action \[\mathcal{H}_{f}=\frac{1}{2}\frac{x(\tau_{f})x(\tau_{f})}{G(\tau_{f})}\] If, for instance, we are interested in evaluating \(\mathcal{H}\) semiclassically at \(\tau=\tau_{i}\). \[x(\tau_{i})=\frac{b}{\omega_{0}}\implies b=x(\tau_{i})\omega_{0}\] \[x(\tau)=-x(0)(tanh\ \omega(\tau-\tau_{i})-1)\] \[\dot{x}(\tau)=-x(0)\omega_{0}sech^{2}\omega_{0}(\tau-\tau_{i})\] The classical action is \[S_{classical}=\frac{1}{2}\omega_{0}x(\tau_{i})^{2}\] Thus since \(G(\tau_{i})=\frac{1}{\omega_{0}}\), \(\mathcal{H}\) evaluated semiclassically is: \[\mathcal{H}[x,\tau_{i}]\approx\frac{1}{2}\omega_{0}x(\tau_{i})^{2} \tag{2.37}\] Then \[\Psi=e^{-\mathcal{H}[x,\tau_{i}]}=e^{-\omega_{0}x(\tau_{i})^{2}}\] which coincides with the ground state wave function of the harmonic oscillator. This is essentially the Hartle Hawking prescription [45]. This also motivates the dS-CFT correspondence statement [30, 31, 32] that \(\Psi_{dS}=Z_{CFT}\) This concludes the discussion of the mapping of ERG equation to a Euclidean harmonic oscillator. In higher dimensions this gives free field theory in flat space. We now return to the case of interest, namely dS space. ## 3 ERG to field theory in dS We first map the system to Euclidean AdS. Then analytically continue and obtain dS results. Alternatively, one can analytically continue the ERG equation to the Schroedinger equation (when \(D=0\) this is a free particle with a time dependent mass) and then map to de Sitter space. This is all exactly as was done for the harmonic oscillator. 
### Analytic Continuation The EAdS metric in Poincare coordinates is \[ds^{2}=R^{2}[\frac{dx_{i}dx^{i}+dz^{2}}{z^{2}}] \tag{3.38}\] The dS metric in Poincare coordinates is: \[ds^{2}=L^{2}[\frac{dx_{i}dx^{i}-d\eta^{2}}{\eta^{2}}] \tag{3.39}\] The metrics are related by analytic continuation: \[i\eta=z,\quad iL=R\] #### 3.1.1 Analytic Continuation of the Action The action generically is \[S=-\frac{1}{2}\int d^{D+1}x\sqrt{g}[g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu} \phi+m^{2}\phi^{2}] \tag{3.40}\] de SitterIn this case we write \(\sqrt{-g}\) since \(g\) is negative: \(g=-(\frac{L^{2}}{\eta^{2}})^{D+1}\). Also \(g^{00}=-\frac{\eta^{2}}{L^{2}}\) and \(g^{ij}=\delta^{ij}\frac{\eta^{2}}{L^{2}}\). Thus \[S_{dS}=\int d^{D}x\int_{0}^{\infty}d\eta\ (\frac{L}{\eta})^{D+1}[\frac{\eta^{2} }{L^{2}}\partial_{\eta}\phi\partial_{\eta}\phi-\frac{\eta^{2}}{L^{2}}\partial_ {i}\phi\partial_{i}\phi-m^{2}\phi^{2}] \tag{3.41}\] In momentum space: \[S_{dS}=\int\frac{d^{D}p}{(2\pi)^{D}}\int_{0}^{\infty}d\eta\ (\frac{L}{\eta})^{D+1}[ \frac{\eta^{2}}{L^{2}}\partial_{\eta}\phi(p)\partial_{\eta}\phi(-p)-(\frac{ \eta^{2}}{L^{2}}p^{2}+m^{2})\phi(p)\phi(-p)] \tag{3.42}\] The functional integral description of the quantum mechanical evolution operator for the wave functional of the fields in dS space-time is \[\bar{\Psi}[\phi(p),t]=\int d\phi_{i}(p)\int\phi(p,t_{i}) = \phi_{i}(p)\ \ \mathcal{D}\phi(p,t)\ e^{i\frac{1}{2}\int_{t_{i}}^{t}[ \dot{\phi}(p,t^{\prime})^{2}-\omega_{0}^{2}\phi(p,t^{\prime})^{2}]dt^{\prime} }\bar{\Psi}[\phi_{i}(p),t_{i}] \tag{3.43}\] \[\phi(p,t) = \phi(p)\] Euclidean Anti de Sitter\(g=(\frac{R^{2}}{z^{2}})^{D+1}\). Also \(g^{00}=\frac{z^{2}}{R^{2}}\) and \(g^{ij}=\delta^{ij}\frac{z^{2}}{R^{2}}\). \[S_{EAdS}=\int d^{D}x\int_{0}^{\infty}dz\ (\frac{R}{z})^{D+1}[\frac{z^{2}}{R^{2 }}\partial_{z}\phi\partial_{z}\phi+\frac{z^{2}}{R^{2}}\partial_{i}\phi\partial _{i}\phi+m^{2}\phi^{2}] \tag{3.44}\] In momentum space \[S_{EAdS}=\int\frac{d^{D}p}{(2\pi)^{D}}\int_{0}^{\infty}dz\ (\frac{R}{z})^{D+1}[ \frac{z^{2}}{R^{2}}\partial_{z}\phi(p)\partial_{z}\phi(-p)+(\frac{z^{2}}{R^{2} }p^{2}+m^{2})\phi(p)\phi(-p)] \tag{3.45}\] If we set \(i\eta=z\) and \(iL=R\) we see that the functional integral (3.43) becomes \[\bar{\Psi}[\phi(p),t]=\int d\phi_{i}(p)\int\phi(p,t_{i}) = \phi_{i}(p)\ \ \mathcal{D}\phi(p,t)\ e^{-\frac{1}{2}\int_{t_{i}}^{t}[ \dot{\phi}(p,t^{\prime})^{2}+\omega_{0}^{2}\phi(p,t^{\prime})^{2}]dt^{\prime} }\bar{\Psi}[\phi_{i}(p),t_{i}] \tag{3.46}\] \[\phi(p,t) = \phi(p)\] In holographic RG this is interpreted as a Euclidean functional integral giving the evolution in the radial direction. \(\bar{\Psi}\) is to be interpreted as \(e^{-S_{I}[\phi(p),t]}\) where \(S_{I}\) is the Wilson action. It was shown in [5] (see below) that this can be obtained by mapping an ERG evolution operator. The dS functional integral (3.43) above is thus an analytically continued version of this. ### Mapping #### 3.2.1 Mapping from Quantum Mechanics Let us go back to Section (2.1) and consider the mapping from the Quantum Mechanics of a free particle with time dependent mass. We think of it as a \(0+1\) dimensional field theory. \(M(t)\) is taken to be dimensionless and \(x\) has canonical dimensions of \(-\frac{1}{2}\). \[S=\frac{1}{2}\int dt\ M(t)\dot{x}^{2} \tag{3.47}\] (In the ERG version \(M(t)=\frac{1}{G}\)) The path integral is \[\int\mathcal{D}x\ e^{iS} \tag{3.48}\] As before \(x(t)=f(t)y(t)\) with \(f^{2}(t)=\frac{1}{M(t)}\). 
Substitute this in (3.47) and go through the same steps to obtain: \[S=\frac{1}{2}\int dt\ [\dot{y}^{2}+e^{\ln f}(\frac{d^{2}}{dt^{2}}e^{-\ln f})y^{2}] \tag{3.49}\] Now choose \[e^{\ln f}(\frac{d^{2}}{dt^{2}}e^{-\ln f})=-(\frac{\eta^{2}}{L^{2}}p^{2}+m^{2}) \tag{3.50}\] where \(\eta=Le^{\frac{t}{L}}\). to obtain \(S_{dS}\) \[S_{dS}=\frac{1}{2}\int dt\ [\dot{y}^{2}-(\frac{\eta^{2}}{L^{2}}p^{2}+m^{2})y^{2}]\] \[=\frac{1}{2}\int d\eta\ (\frac{L}{\eta})[\frac{\eta^{2}}{L^{2}}\partial_{\eta}y \partial_{\eta}y-(\frac{\eta^{2}}{L^{2}}p^{2}+m^{2})y^{2}] \tag{3.51}\] \(p,m\) here are just some parameters. When \(D>0\) they will stand for momentum and mass of the field respectively. So starting from a free particle with time dependent mass we obtain the free field action in de Sitter space \(dS_{D+1}\) with \(D=0\). **Schroedinger Equation:** \[i\frac{\partial\Psi(x,t)}{\partial t}=-\frac{1}{2M(t)}\frac{\partial^{2}\Psi( x,t)}{\partial x^{2}} \tag{3.52}\] Using the same mapping as in Section (2.2.1), \(x=fy\) \[\Psi(f(t)y,t)=e^{-\frac{1}{2}\alpha y^{2}}\bar{\Psi}(y,t)\] with \(\alpha=-i\frac{\dot{f}}{f}\) one obtains \[i\frac{\partial\bar{\Psi}}{\partial t}=[(\frac{1}{2}\frac{d^{2}\ln f}{dt^{2}} -\frac{1}{2}(\frac{d\ln f}{dt})^{2})y^{2}+\frac{1}{2}\alpha-\frac{1}{2}\frac{ \partial^{2}}{\partial y^{2}}]\bar{\Psi}\] Using (3.50) this becomes \[i\frac{\eta}{L}\frac{\partial\bar{\Psi}}{\partial\eta}=[-\frac{1}{2}\frac{ \partial^{2}}{\partial y^{2}}+\frac{1}{2}(\frac{\eta^{2}}{L^{2}}p^{2}+m^{2})y ^{2}+\frac{1}{2}\alpha]\bar{\Psi} \tag{3.53}\] If we construct the Schroedinger equation corresponding to the action (3.51) one obtains \[i\frac{\eta}{L}\frac{\partial\bar{\Psi}}{\partial\eta}=[-\frac{1}{2}\frac{ \partial^{2}}{\partial y^{2}}+\frac{1}{2}(\frac{\eta^{2}}{L^{2}}p^{2}+m^{2})y ^{2}]\bar{\Psi} \tag{3.54}\] which barring the field independent term \(\alpha\) is exactly the same as (3.53). This term as we have seen provides an overall field independent scaling for all wave functions. It is a consequence of the ordering ambiguity in going from classical to quantum treatment. (3.54) (or its extension to \(D>0\)) describes the quantum mechanical time evolution of the matter field wave functional in de Sitter space. #### 3.2.2 Mapping from ERG ActionWe now consider the Euclidean version of (3.47), which is the Polchinski ERG equation. This is what was done in [5]. Thus we replace \(M(t)\) by \(-\frac{1}{\dot{G}}\). \[S=-\frac{1}{2}\int d\tau\ \frac{\dot{x}^{2}}{\dot{G}} \tag{3.55}\] The path integral is (\(\dot{G}<0\)) \[\int{\cal D}x\ e^{\frac{1}{2}\int d\tau\ \frac{\dot{x}^{2}}{\dot{G}}} \tag{3.56}\] which can be obtained from (3.52) by setting \(it=\tau\). We take \(z=Re^{\frac{\tau}{\dot{\tau}}}\) If we let \(i\eta=z,\ iL=R,\ it=\tau\) then this can be obtained from the corresponding Minkowski case. As before \(x(\tau)=f(\tau)y(\tau)\) with \(f^{2}(\tau)=\dot{G}\). Substitute this in (3.55) and go through the same steps to obtain: \[S=\frac{1}{2}\int d\tau\ [\dot{y}^{2}+e^{\ln f}(\frac{d^{2}}{d\tau^{2}}e^{- \ln f})y^{2}] \tag{3.57}\] Now choose \[e^{\ln f}(\frac{d^{2}}{d\tau^{2}}e^{-\ln f})=(\frac{z^{2}}{R^{2}}p^{2}+m^{2}) \tag{3.58}\] where \(z=Re^{\frac{\tau}{\dot{\tau}}}\). 
to obtain \(S_{EAdS}\) \[S_{EAdS}=\int dz\ (\frac{R}{z})[\frac{z^{2}}{R^{2}}\partial_{z}y\partial_{z}y+( \frac{z^{2}}{R^{2}}p^{2}+m^{2})y^{2}] \tag{3.59}\] ERG EquationBy analogy with the Schroedinger equation we can see that (3.56) is the evolution operator corresponding to the ERG equation \[\frac{\partial\Psi(x,\tau)}{\partial\tau}=-\frac{1}{2}\dot{G}\frac{\partial^ {2}\Psi(x,\tau)}{\partial x^{2}} \tag{3.60}\] By the same series of transformations as in the de Sitter case, but using (3.58), one obtains \[\frac{z}{R}\frac{\partial\bar{\Psi}}{\partial z}=[\frac{1}{2}\frac{\partial^ {2}}{\partial y^{2}}-(\frac{z^{2}}{R^{2}}p^{2}+m^{2})y^{2}-\frac{1}{2}\alpha] \bar{\Psi} \tag{3.61}\] with \(\alpha=\frac{\dot{f}}{f}\) generating an overall scale transformation for \(\bar{\Psi}\). In the ERG context \(\bar{\Psi}\) represents \(e^{W[J]}\) upto a quadratic term. This equation is the holographic RG equation in the AdS/CFT correspondence for an elementary scalar field [5]. ### Connections Let us summarize the various connections obtained above. * We start with the quantum mechanics of a free particle having a time dependent mass. The Schroedinger equation (SE) for this is (2.20). Analytical continuation of this equation (generalized to higher dimensions) gives the Polchinski ERG equation (2.24). * The free particle SE (2.20) can be mapped to a SE for a harmonic oscillator (2.23). The ERG equation (2.24) can similarly be mapped to a Euclidean harmonic oscillator (2.27)-analytically continued version of (2.23). * The evolution operators for the above equations are defined in terms of path integrals over some actions. The same mapping function \(f\) maps the corresponding actions to each other. Thus the evolution operator for the free particle Schroedinger equation is given by the action in (2.3) which is mapped to a harmonic oscillator action (2.7). The analytical continuation of these are the Euclidean ERG evolution operator (2.13) mapped to a harmonic oscillator Hamiltonian (2.16). These steps are summarized in the flow diagram in Figure 1. * The mapping function \(f\) was originally chosen in [5] to map the free particle ERG action (3.55) to an action for free fields in \(EAdS_{0+1}\) given in (3.60). The analytical continuation of this problem to real time gives us an action in \(dS_{0+1}\) (3.51). * One can also repeat these steps for the corresponding "wave" equations. The Polchinski ERG equation for \(e^{W[J]}\) gets mapped to an equation in EAdS for \(e^{W[J]}\) which is nothing but the holographic RG equations. Analytically continuing this, the Schroedinger equation for a wave functional is mapped to a Schroedinger equation for wave functionals of fields in dS. These are summarized in the figure below (Fig.2). The analytic continuation can be done before the map with \(f\) is applied or after as shown in the figure. It can be done both for the actions as well as for the equations. ### dS-CFT correspondence The connections with ERG mentioned above should, if pursued, provide some insights into dS-CFT correspondence. We restrict ourselves to some preliminary observations in this paper. Figure 1: Mapping ERG to Harmonic Oscillator The idea of dS-CFT correspondence was suggested in [30, 31, 32]. This has been investigated further by many authors, e.g. [33, 34, 38, 39, 35, 37, 36]. 
What we see from the above analysis is that, considering the _relation between the evolution equations_, one can say that \[\Psi[\phi,J]_{wave-functional\ in\ dS}=\{Z[\phi,J]_{CFT}\}_{analytically\ continued} \tag{3.62}\] Thus we see that the dS-CFT correspondence suggested by this analysis is one between an ERG equation for a CFT generating functional and a _real time quantum mechanical evolution_ of a wave functional in dS spacetime. The LHS of (3.62) is a QM wave functional of fields on a \(D\)-dimensional spatial slice of a \(D+1\) dimensional dS spacetime. The RHS is the analytically continued partition function of a \(D\)-dimensional Euclidean CFT - more precisely, either \(e^{W_{\Lambda}[J]}\) or \(e^{-S_{I,\Lambda}[\phi]}\). The precise statement has to involve a specification of the boundary conditions. In the next section we give a concrete example with boundary conditions specified. Note that the LHS is a complex probability _amplitude_. Expectation values will involve \(\Psi^{*}\Psi\) and were calculated first in [30, 31, 32]. One can proceed to ask whether the expectations on the spatial slice calculated using \(\Psi^{*}\Psi\) also correspond to some other Euclidean CFT on the spatial slice. This was explored further in [38]. We do not address this question here. In the next section we give some examples that explicitly illustrate the connection made by (3.62).

Figure 2: Mapping ERG to Holographic RG

## Obtaining Bulk field from ERG

The ERG formulation stated in this paper starts with the boundary fields. The evolution operator for this involves bulk fields but with a non-standard action. When this action is mapped to the EAdS action, one can interpret the newly mapped field as the EAdS bulk field. This analysis for Euclidean AdS is well defined and has been done in [5, 7]. However, this treatment does not have a natural interpretation in the context of dS space. We elaborate on this in this section.

### Bulk scalar field in Euclidean AdS and dS

There are conceptual barriers if one tries to do a similar analysis to map the ERG evolution operator directly to Lorentzian dS. First of all, it is not clear, as it is in EAdS, whether the function \(G(t)\), with \(f^{2}(t)=\dot{G}(t)\), is the Green's function of the field theory dual to dS. It has an oscillatory cutoff function. Therefore we analytically continue the ERG action to a Lorentzian action first, and then do the mapping. The result thus obtained (4.74) matches the value found in [39], where the authors found the bulk field in the semiclassical approximation from the dS bulk action. For the Lorentzian dS analysis presented here the RG interpretation is not clearly understood - except as an analytic continuation. We have presented it here for the sake of completeness.

**Euclidean AdS:** The Euclidean action of the ERG evolution operator in momentum space, \[S=-\frac{1}{2}\int d\tau\int_{p}\,\frac{\dot{\phi}^{2}}{\dot{G}} \tag{4.63}\] is mapped to \[S_{EAdS}=\int\frac{d^{D}p}{(2\pi)^{D}}\int_{\epsilon_{EAdS}}^{\infty}dz\,(\frac{R}{z})^{d+1}[\frac{z^{2}}{R^{2}}\partial_{z}y^{EAdS}(p)\partial_{z}y^{EAdS}(-p)+(\frac{z^{2}}{R^{2}}p^{2}+m^{2})y^{EAdS}(p)y^{EAdS}(-p)] \tag{4.64}\] with \(z=Re^{\frac{\tau}{R}}\), as described in [5]. We have rescaled the field as \(\phi=fy^{EAdS}\), where \(f\) is related to the boundary Green's function \(G\) as \(f^{2}=-\left(\frac{z}{R}\right)^{-d}\dot{G}\).
The constraint on \(\frac{1}{f}\) is given by, \[\frac{\partial}{\partial z}\{\left(\frac{z}{R}\right)^{-d+1}\frac{\partial}{\partial z}\frac{1}{f}\}=\left(\frac{z}{R}\right)^{-d+1}\left(p^{2}+\frac{m^{2}R^{2}}{z^{2}}\right)\frac{1}{f} \tag{4.65}\] The solutions are \(z^{d/2}K_{\alpha}(pz)\) and \(z^{d/2}I_{\alpha}(pz)\), where \(\alpha^{2}=m^{2}R^{2}+\frac{d^{2}}{4}\). So \(\frac{1}{f}\) can be taken as, \[\frac{1}{f(p,z)}=(z)^{d/2}\left(AK_{\alpha}(pz)+BI_{\alpha}(pz)\right) \tag{4.66}\] The Green's function is \[G(p,z)=\frac{CK_{\alpha}(pz)+DI_{\alpha}(pz)}{AK_{\alpha}(pz)+BI_{\alpha}(pz)} \tag{4.67}\] The large-argument asymptotic forms of the modified Bessel functions \(I_{\alpha}(z)\) and \(K_{\alpha}(z)\) are given by, \[I_{\alpha}(z)\sim\frac{e^{z}}{\sqrt{2\pi z}}\left(1+\mathcal{O}(\frac{1}{z})\right)\ \ for\ \ |arg\ z|<\frac{\pi}{2}\] \[K_{\alpha}(z)\sim\sqrt{\frac{\pi}{2z}}e^{-z}\left(1+{\cal O}(\frac{1}{z})\right)\ \ for\ \ |arg\ z|<\frac{3\pi}{2}\] Putting two constraints on \(G\): i) \(G(pz\rightarrow\infty)=0\), and ii) \(G(pz\to 0)=\gamma_{EAdS}\ p^{-2\alpha}\), we get, \[D=0;\ C(p)=\gamma_{EAdS}\ p^{-\alpha};\ B(p)=-\frac{1}{\gamma_{EAdS}}p^{\alpha}\] In the semiclassical approximation the bulk field is \(y^{EAdS}=b_{EAdS}\frac{G}{f}\). If \(y^{EAdS}\) takes the value \(y^{EAdS}_{0}\) at \(z=\epsilon\), the bulk field is given by, \[y^{EAdS}=y^{EAdS}_{0}\frac{z^{d/2}}{\epsilon^{d/2}}\frac{K_{\alpha}(pz)}{K_{\alpha}(p\epsilon)} \tag{4.68}\] Now let's check by analytic continuation \(i\eta=z\) and \(iL=R\). First of all, \(\alpha\) becomes \(\nu\). \(\epsilon\) is replaced by \(i\epsilon\). We get, \[y^{EAdS}|_{z=i\eta,\ R=iL}=y^{EAdS}_{0}|_{z=i\eta,\ R=iL}\frac{(i\eta)^{d/2}}{(i\epsilon)^{d/2}}\frac{K_{\nu}(ip\eta)}{K_{\nu}(ip\epsilon)} \tag{4.69}\] As, \[y^{EAdS}_{0}=b_{EAdS}\ \epsilon^{d/2}_{EAdS}\frac{\gamma_{EAdS}\ K_{\alpha}(p\epsilon)}{p^{\alpha}} \tag{4.70}\]

**de Sitter:** We would like to do the same analysis as above for the Lorentzian case. The Lorentzian action obtained from (4.63) by analytic continuation is, in momentum space, \[S=-\int dt\ \int\frac{d^{D}p}{(2\pi)^{D}}\frac{1}{2\dot{G}(p)}\dot{\phi}(p)\dot{\phi}(-p)\] and needs to be mapped to \[=\frac{1}{2}\int^{\infty}_{\epsilon_{dS}}d\eta\int\frac{d^{D}p}{(2\pi)^{D}}\left[\left(\frac{L}{\eta}\right)^{D-1}\left\{(\partial_{\eta}y^{dS})^{2}-p^{2}{y^{dS}}^{2}-\frac{m^{2}L^{2}}{\eta^{2}}{y^{dS}}^{2}\right\}\right]\] Here \(\eta=Le^{\frac{t}{L}}\). We do the field redefinition of the boundary field \[\phi=fy^{dS}\] where \(f\) is a scale-dependent quantity related to the Green's function \(G\) as \(f^{2}=-\left(\frac{\eta}{L}\right)^{-D}\dot{G}\). Performing the same manipulations as in [5], one can get the constraint on \(f\) as, \[\left(\frac{\eta}{L}\right)^{d-1}\left(\left(\frac{\eta}{L}\right)^{-d+1}\frac{d}{d\eta}\right)^{2}e^{-\ln f}=\left(\frac{\eta}{L}\right)^{-d+1}\left(-p^{2}-\frac{m^{2}L^{2}}{\eta^{2}}\right)e^{-\ln f}\] \[\frac{-d+1}{\eta}\frac{\partial}{\partial\eta}\frac{1}{f}+\frac{\partial^{2}}{\partial\eta^{2}}\frac{1}{f}=\left(-p^{2}-\frac{m^{2}L^{2}}{\eta^{2}}\right)\frac{1}{f}\] The solutions are \(\left(\frac{\eta}{L}\right)^{d/2}H^{(1)}_{\nu}(p\eta)\) and \(\left(\frac{\eta}{L}\right)^{d/2}H^{(2)}_{\nu}(p\eta)\) with \(\nu^{2}=\frac{d^{2}}{4}-m^{2}L^{2}\).
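As a quick numerical check of these radial solutions, the following minimal sketch (my own illustration, not part of the original analysis; the sample values of \(d\), \(m\), \(R\), \(L\), and \(p\) are arbitrary choices) verifies that \(z^{d/2}K_{\alpha}(pz)\) satisfies the expanded form of (4.65) and that \(\eta^{d/2}H^{(2)}_{\nu}(p\eta)\) satisfies the dS constraint just derived:

```python
# Numerical check of the EAdS and dS radial equations (illustrative values only).
import mpmath as mp

mp.mp.dps = 30         # extra precision for the numerical derivatives
d, p = 3.0, 1.3        # sample boundary dimension and momentum
R = L = 1.0            # EAdS / dS radii set to 1
m = 0.4                # sample mass

alpha = mp.sqrt(m**2 * R**2 + d**2 / 4)   # EAdS index
nu = mp.sqrt(d**2 / 4 - m**2 * L**2)      # dS index (real for this choice of m)

# EAdS, Eq.(4.65) expanded: u'' + ((1-d)/z) u' = (p^2 + m^2 R^2/z^2) u
u = lambda z: z**(d / 2) * mp.besselk(alpha, p * z)
res_eads = lambda z: (mp.diff(u, z, 2) + (1 - d) / z * mp.diff(u, z)
                      - (p**2 + m**2 * R**2 / z**2) * u(z))

# dS constraint above: v'' + ((1-d)/eta) v' = (-p^2 - m^2 L^2/eta^2) v
v = lambda e: e**(d / 2) * mp.hankel2(nu, p * e)
res_ds = lambda e: (mp.diff(v, e, 2) + (1 - d) / e * mp.diff(v, e)
                    + (p**2 + m**2 * L**2 / e**2) * v(e))

for x in (0.5, 1.0, 2.0):
    # both residuals should be negligibly small
    print("residuals at", x, ":", res_eads(x), res_ds(x))
```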
In general, \(\frac{1}{f}\) can be written as (note that \(f\) is dimensionless), \[\frac{1}{f(p,\eta)}=\left(\frac{\eta}{L}\right)^{d/2}\left(AH^{(1)}_{\nu}(p\eta)+BH^{(2)}_{\nu}(p\eta)\right) \tag{4.71}\] and the Green's function is\({}^{2}\)

Footnote 2: We use the term Green's function by analogy with the EAdS case, where \(G\) is the propagator of the boundary CFT. Also see for instance [39].

\[G(p\eta)=\frac{CH_{\nu}^{(1)}(p\eta)+DH_{\nu}^{(2)}(p\eta)}{AH_{\nu}^{(1)}(p\eta)+BH_{\nu}^{(2)}(p\eta)}\] Physically one can expect \(G(p\eta\rightarrow\infty)=0\), which yields, \[CH_{\nu}^{(1)}(p\eta)+DH_{\nu}^{(2)}(p\eta)=0 \tag{4.72}\] The asymptotic forms of the Hankel functions of both kinds for large arguments are, \[H_{\nu}^{(1)}(z)\sim\sqrt{\frac{2}{\pi z}}e^{i(z-\frac{\nu\pi}{2}-\frac{\pi}{4})}\ \ for\ \ -\pi<arg\ z<2\pi\] \[H_{\nu}^{(2)}(z)\sim\sqrt{\frac{2}{\pi z}}e^{-i(z-\frac{\nu\pi}{2}-\frac{\pi}{4})}\ \ for\ \ -2\pi<arg\ z<\pi\] The presence of the oscillatory functions does not allow Eq. (4.72) to be satisfied. Hence we analytically continue the argument of the Green's function \(G\). The choice of the direction of the analytic continuation is based on the anticipation that the bulk field will have positive frequency. Hence we take \[\eta=-iz \tag{4.73}\] which prompts us to set \(C=0\). Also, from the constraint \(AD-BC=1\) we get \(A=\frac{1}{D}\). Hence the Green's function now takes the form, \[G(pz)=\frac{DH_{\nu}^{(2)}(ipz)}{\frac{1}{D}H_{\nu}^{(1)}(ipz)+BH_{\nu}^{(2)}(ipz)}\] Next, another constraint comes from the fact that the boundary Green's function is \(\gamma_{dS}\ p^{-2\nu}\). So, in the limit \(z\to 0\), using the formulae \[H_{\nu}^{(1)}(z)=iY_{\nu}(z);\ H_{\nu}^{(2)}(z)=-iY_{\nu}(z);\ Y_{\nu}(z)=-\frac{\Gamma(\nu)}{\pi}\left(\frac{2}{z}\right)^{\nu}\] one gets, \[\frac{-iD}{\frac{i}{D}-iB}=\gamma_{dS}\ p^{-2\nu}\] On the other hand, \(f\) should become a \(p\)-independent constant at the boundary \(z=0\), so that it does not modify the boundary Green's function; also, \(y^{dS}\) and \(\phi\) should become the same field in the boundary field theory. This gives, \[\frac{i}{D}-iB=p^{\nu}\] Finally we get, \[D=i\gamma_{dS}\ p^{-\nu}\ ;\ B=i\left(1-\frac{1}{\gamma_{dS}}\right)p^{\nu}\] The bulk field \(y^{dS}\) is given by, \[y^{dS}=b_{dS}\frac{G}{f}=b_{dS}(i\gamma p^{-\nu})\frac{1}{L^{d/2}}z^{d/2}H_{\nu}^{(2)}(ipz)\] If we analytically continue back to \(\eta\) we get, \[y^{dS}=b_{dS}(i\gamma p^{-\nu})\frac{1}{L^{d/2}}(-i\eta)^{d/2}H_{\nu}^{(2)}(p\eta)\] If the field \(y^{dS}\) takes the value \(y_{0}^{dS}\) at \(\eta=\epsilon_{dS}\), then \[y^{dS}=y_{0}^{dS}\frac{\eta^{d/2}}{\epsilon_{dS}^{d/2}}\frac{H_{\nu}^{(2)}(p\eta)}{H_{\nu}^{(2)}(p\epsilon_{dS})} \tag{4.74}\] \(y^{dS}\) satisfies the Bunch-Davies condition.

**Relation between bulk fields in EAdS and dS:** The bulk field in EAdS space is given by, \[y^{EAdS}=y_{0}^{EAdS}\frac{z^{d/2}}{\epsilon^{d/2}}\frac{K_{\alpha}(pz)}{K_{\alpha}(p\epsilon)} \tag{4.75}\] Let's apply the analytic continuation \(i\eta=z\) and \(iL=R\). First of all, \(\alpha\) becomes \(\nu\). \(\epsilon\) is replaced by \(i\epsilon\).
We get, \[y^{EAdS}|_{z=i\eta,\ R=iL}=y_{0}^{EAdS}|_{z=i\eta,\ R=iL}\frac{(i\eta)^{d/2}}{(i\epsilon)^{d/2}}\frac{K_{\nu}(ip\eta)}{K_{\nu}(ip\epsilon)} \tag{4.76}\] As, \[y_{0}^{EAdS}=b_{EAdS}\ \epsilon_{EAdS}^{d/2}\frac{\gamma_{EAdS}\ K_{\alpha}(p\epsilon)}{p^{\alpha}} \tag{4.77}\] Using the relation between \(K_{\alpha}(x)\) and \(H_{\alpha}(x)\), \[\begin{split} K_{\alpha}(x)&=\frac{\pi}{2}i^{\alpha+1}H_{\alpha}^{(1)}(ix);\ -\pi<arg\ x\leq\frac{\pi}{2}\\ &=\frac{\pi}{2}(-i)^{\alpha+1}H_{\alpha}^{(2)}(-ix);\ -\ \frac{\pi}{2}<arg\ x\leq\ \pi\end{split} \tag{4.78}\] Here also we want the bulk field to be of positive frequency, hence we choose \(H^{(2)}(x)\). \[y_{0}^{EAdS}|_{z=i\eta,\ R=iL}=\frac{\pi}{2}(i)^{d/2+\alpha+1}b_{EAdS}\epsilon^{d/2}\gamma_{EAdS}\frac{H_{\alpha}^{(2)}(p\epsilon)}{p^{\alpha}}\] \[=\frac{b_{EAdS}}{b_{dS}}\frac{\gamma_{EAdS}}{\gamma_{dS}}\frac{\pi}{2}(i)^{d/2+\alpha+1}y_{0}^{dS}\] Hence, \[y^{EAdS}|_{z=i\eta,\ R=iL}= \frac{b_{EAdS}}{b_{dS}}\frac{\gamma_{EAdS}}{\gamma_{dS}}\frac{\pi}{2}(i)^{d/2+\alpha+1}y_{0}^{dS}\frac{\eta^{d/2}}{\epsilon^{d/2}}\frac{H_{\alpha}^{(2)}(p\eta)}{H_{\alpha}^{(2)}(p\epsilon)}\] \[=\frac{b_{EAdS}}{b_{dS}}\frac{\gamma_{EAdS}}{\gamma_{dS}}\frac{\pi}{2}(i)^{d/2+\alpha+1}y^{dS} \tag{4.79}\] Up to various normalization constants, we see that they agree.

## Summary and Conclusions

In [5, 6] an evolution operator for an ERG equation of a perturbed \(D\)-dimensional free field theory in flat space was mapped to a field theory action in \(AdS_{D+1}\). Similar mappings were done subsequently for the interacting \(O(N)\) model at both the free fixed point and at the Wilson-Fisher fixed point [7]. The main aim of this paper was to understand better the mapping used in these papers and to see if there are other examples. A related question was that of the analytic continuation of these theories. These questions can be posed both for the ERG equation and for its evolution operator. It was shown that a mapping of this type can map the ERG evolution operator of a (zero-dimensional) field theory to the action of a Euclidean harmonic oscillator. Furthermore, the analytic continuation of the ERG evolution operator action gives the path integral for a free particle with a time dependent mass. A similar mapping takes this to a harmonic oscillator. This method also gives a new way of obtaining the Ermakov-Lewis invariants for the original theory. The analytically continued ERG equation is a Schroedinger-like equation for a free field theory wave functional. This gets mapped to the Schroedinger equation for a wave functional of a free field theory in de Sitter space. These are summarized in Figures 1 and 2. This is one version of the dS-CFT correspondence. From this point of view, the QM evolution of the dS field theory is also an ERG evolution of a field theory, but accompanied by an analytic continuation. An example was worked out to illustrate this correspondence. To understand these issues further, it would be useful to apply these techniques to the \(O(N)\) model ERG equation written in [7]. This ERG equation has extra terms, and thus the theory naturally has interaction terms in the EAdS bulk action. Similarly, it would be interesting to study the connection between bulk Green functions and the QM correlation functions on the space-like time slice of these theories, as considered originally in [30, 31, 32].

**Acknowledgements:** SD would like to thank IMSc, Chennai, where part of the work was done.
2309.11630
Rotational Components of the Sun's Mean Field
This paper uses wavelet transforms to look for the rotational frequencies of the Sun's mean line-of-sight magnetic field. For a sufficiently high wavelet frequency, the spectra of the dipole, quadrupole, and hexapole field components each show a time-dependent fine structure with periods in the range of 26.5-30 days and their harmonics. These maps confirm that a large enhancement of 30-day power occurred in the dipole field during 1989-1990, as recorded previously using Fourier techniques (Sheeley 2022). Also, during some years the maps show power at 26.5 days (or its harmonics), that is clearly distinguishable from the 26.9-27.0 day rotation period at the Sun's equator. In at least one case, the 26.5-day period was a wave phenomenon caused by the systematic eruption of active regions at progressively more western locations in the Carrington coordinate system, as if the flux were emerging from a fixed longitude in a faster rotating subsurface layer. Based on previous studies of the mean field (Sheeley et al 1985, Sheeley & DeVore 1986, Sheeley 2022), I conclude that the enhanced wavelet patterns in this paper are regions where magnetic flux is emerging in configurations that strengthen the Sun's horizontal dipole, quadrupole, and hexapole fields, and (in the case of the more slowly rotating patterns) where this flux is being transported to mid-latitudes whose rotation periods are in the range 28-30 days.
Neil R. Sheeley Jr
2023-09-20T20:41:32Z
http://arxiv.org/abs/2309.11630v2
# Rotational Components of the Sun's Mean Field ###### Abstract This paper uses wavelet transforms to look for the rotational frequencies of the Sun's mean line-of-sight magnetic field. For a sufficiently high wavelet frequency, the spectra of the dipole, quadrupole, and hexapole field components each show a time-dependent fine structure with periods in the range of 26.5-30 days and their harmonics. These maps confirm that a large enhancement of 30-day power occurred in the dipole field during 1989-1990, as recorded previously using Fourier techniques (Sheeley, 2022). Also, during some years the maps show power at 26.5 days (or its harmonics), that is clearly distinguishable from the 26.9-27.0 day rotation period at the Sun's equator. In at least one case, the 26.5-day period was a wave phenomenon caused by the systematic eruption of active regions at progressively more western locations in the Carrington coordinate system, as if the flux were emerging from a fixed longitude in a faster rotating subsurface layer. Based on previous studies of the mean field (Sheeley et al., 1985; Sheeley and DeVore, 1986; Sheeley, 2022), I conclude that the enhanced wavelet patterns in this paper are regions where magnetic flux is emerging in configurations that strengthen the Sun's horizontal dipole, quadrupole, and hexapole fields, and (in the case of the more slowly rotating patterns) where this flux is being transported to mid-latitudes whose rotation periods are in the range 28-30 days. Solar magnetic fields (1503)-- Solar rotation (1524),--Solar cycle (1487)-- Stellar magnetic fields (1610) ## 1 Introduction In a previous paper, I used Fourier transforms of Wilcox Solar Observatory (WSO) measurements from 16 May 1975 to 16 November 2021 to study the rotational components of the Sun's mean line-of-sight magnetic field (Sheeley, 2022). With this 46.5-yr sequence of daily measurements, the transforms resolved long-lived 2-sector recurrence patterns of 27, 28.5, and 30 days, and showed the existence of 4-sector and 6-sector patterns. For suitable choices of the limits of integration, it was possible to determine the temporal origin of some of the 2-sector recurrence patterns. Now, in this paper, I present the results of applying wavelet transforms to the WSO mean-field measurements updated though 5 July 2022. These transforms are designed to provide spectral power as a function of time and wavelet scale (equivalent to spectral frequency, \(\omega\), or period, \(T=2\pi/\omega\)). The analysis differs from an earlier wavelet analysis of the WSO data by Boberg et al. (2002). They considered the general properties of oscillating structures over a broad range of periods from 90 minutes to 11 years, whereas in this study, I am focussing on the detailed spectral properties for periods comparable to the 27-day rotational period of the Sun and its second and third harmonics at 13.5 days and 9 days. By selecting the wavelet frequency sufficiently high, I have been able to resolve the components with periods of \(\sim\)27 days from those of periods 28.5 days and 30 days. In addition, the resulting two-dimensional maps show resolved spectral power, not only for the horizontal dipole fields with azimuthal number \(m=1\) found in the earlier paper (Sheeley, 2022), but also for the horizontal quadrupole (\(m=2\)) and hexapole (\(m=3\)) fields. As we will see, this approach verifies my previous results using Fourier transforms. 
Also, it reveals a long-lived quadrupole pattern in 1990-1991, having a period that is shorter than the equatorial rotation period at the Sun's surface (\(\sim\)26.5 days, compared to \(\sim\)26.9-27.0 days of synodic rotation). After describing the wavelet technique, I will show the resulting maps of spectral power (the so-called wavelet scale-ograms), and present a possible interpretation of this 2-dimensional power. Appendices A and B contain derivations of mathematical relations used in the text, and Appendices C and D contain derivations of the mean field and open flux of an idealized magnetic doublet representing the flux in a nominal bipolar magnetic region. ## 2 Analysis Techniques ### Range of Frequencies The WSO mean-field used in this paper consists of a single measurement every day for \(N=17,218\) days, corresponding to the 47-yr interval from 16 May 1975 to 5 July 2022. The corresponding range of frequencies can be interpreted as follows: The maximum frequency, \(\omega_{max}\) (the Nyquist frequency), is determined by one point every half wave. Consequently, \(\omega_{max}=\pi\) rad point\({}^{-1}\), which is \(\pi\) rad day\({}^{-1}\) at the observing rate of 1 point day\({}^{-1}\). The minimum frequency, \(\omega_{min}\), is determined by putting \(N/2\) points in the half wave, so that \(\omega_{min}=\pi/(N/2)=2\pi/N\) rad day\({}^{-1}\). Thus, we can express the range, R, in rad day\({}^{-1}\) as \[R\ =\ \left(\frac{2\pi}{N},\ \pi\right)\ =\ \frac{2\pi}{N}\left(1,\ \frac{N}{2}\right)\ =\ \omega_{min}\left(1,\ \frac{N}{2}\right). \tag{1}\] The smallest frequency, \(\omega_{min}=2\pi/N\), is also the spectral resolution, \(\Delta\omega\), of the data. In terms of this resolution, the full set of frequencies, \(\Omega\), becomes \[\Omega\ =\ \Delta\omega\left\{1,\ 2,\ 3,\...,\ (\frac{N}{2}-1),\ \frac{N}{2} \right\}. \tag{2}\] Of course, it was not possible to obtain mean-field measurements on every day due to a variety of conditions that presumably included adverse weather, and occasional hardware and software problems. Consequently, there were data gaps on 3069 of the 17218 days, corresponding to 17.8% of the data. On these days, I set the mean field equal to zero before computing the wavelet transforms. To get a feeling for how that may have affected the resulting maps, I also replaced those values by random numbers in the range (-0.32, +0.32) G, where 0.32 G was the root-mean-square value of the other measurements. This had essentially no effect on the distribution of wavelet power, and even when the threshold was set higher than 0.32 G, only the faint background distribution was affected. The enhanced regions of power were unchanged. I did not try interpolating between adjacent non-gap values because the gaps were not always isolated days, but came in a variety of combinations ranging from one to several days and sometimes to a few weeks or more. ### The Wavelet and the Wavelet Transform When using Fourier techniques, one combines the time series, f(t), with an oscillating factor, \(e^{i\omega t}\), that spans the entire time series. This gives the power as a function of frequency, \(\omega\), but does not indicate when the spectral peaks of this distribution originated. 
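As a concrete illustration of the bookkeeping described in Section 2.1, the short sketch below builds the frequency grid of Eqs. (1)-(2) and shows the two gap-filling options discussed above. It is a minimal illustration of my own, not the processing actually applied to the WSO record: the stand-in series and the random placement of the missing days are assumptions made only for the example.

```python
# Frequency grid of Eqs.(1)-(2) and the two gap-filling options described above.
import numpy as np

N = 17218                                     # daily samples, 16 May 1975 - 5 Jul 2022
domega = 2.0 * np.pi / N                      # spectral resolution in rad/day
omegas = domega * np.arange(1, N // 2 + 1)    # Eq.(2); the largest value is pi rad/day
periods = 2.0 * np.pi / omegas                # corresponding periods in days
print(f"{omegas.size} frequencies, omega_max = {omegas[-1]:.4f} rad/day")

# Gap handling: zeros, or random values with the 0.32 G rms of the measurements.
rng = np.random.default_rng(0)
B = rng.normal(0.0, 0.32, N)                  # synthetic stand-in for the mean field (G)
gap = rng.random(N) < 0.178                   # ~17.8% of days missing (placed at random here)
B_zeros = np.where(gap, 0.0, B)
B_random = np.where(gap, rng.uniform(-0.32, 0.32, N), B)
```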
With wavelets, one introduces a damping factor of the form, \(e^{-(1/2)\{(t-\tau)/s\}^{2}}\), to limit the range of oscillation to a temporal scale, s, around the time \(t=\tau\), and then combines the time series, f(t), with this damped and phase-shifted oscillation to obtain power as a function of the temporal shift, \(\tau\), and the temporal scale, s. In particular, the damped, but unshifted wavelet, \(\psi\), analogous to the Fourier factor, \(e^{i\omega t}\), is \[\psi\ \sim\ e^{i\omega_{0}t}\ e^{-\frac{t}{2}(t/s)^{2}}\ \sim\ e^{i\gamma(t/s)}\ e^{-\frac{t}{2}(t/s)^{2}}, \tag{3}\] where \(\gamma\) is a dimensionless constant, indicating the number of radians of oscillation in a scale length, s. Equivalently, \(\gamma/2\pi\) is the number of oscillations in a decay time scale, s. Now, \(\omega_{0}\) is a true frequency in rad day\({}^{-1}\), given by \[\omega_{0}\ =\ \frac{\gamma}{s}. \tag{4}\] This differs from the conventional notation, \(e^{i\omega_{0}(t/s)}\), in which \(\omega_{0}\) is regarded as a 'dimensionless frequency', and is usually taken to be a number approximately equal to 6 (Boberg et al., 2002; Podesta, 2009; Torrence & Compo, 1998; Shi et al., 2022). I prefer to assign another variable, \(\gamma\), to the dimensionless quantity and to reserve the symbol, \(\omega_{0}\), for a bonafide frequency with dimensions of rad day\({}^{-1}\). For simplicity, I have omitted a normalization factor of \(\pi^{-1/4}\) from these wavelet expressions, but I will include it when the actual calculations are performed. The idea is to shift this wavelet by an amount, \(\tau\), to form \[\psi\left(\frac{t-\tau}{s}\right)\,\sim e^{i\gamma((t-\tau)/s)}\ e^{-\frac{1}{2}( (t-\tau)/s)^{2}}, \tag{5}\] and then find the value of \[F(t,s)\ =\ \frac{1}{\sqrt{s}}\int_{-\infty}^{+\infty}B_{m}(\tau)\psi^{*}( \frac{t-\tau}{s})d\tau\ =\ \frac{\pi^{-1/4}}{\sqrt{s}}\int_{-\infty}^{+\infty}B_{m}(\tau)\ e^{-\frac{1}{2}((t- \tau)/s)^{2}}e^{-i\gamma((t-\tau)/s)}d\tau, \tag{6}\] where \(B_{m}(\tau)\) is the mean-field time series, and the factor of \(\pi^{-1/4}\) has now been included in the wavelet. Also, I have interchanged the roles of \(t\) and \(\tau\) so that \(t\) survives the integration and now refers to the time shift of the wavelet from its peak value. Because \(F(t,s)\) is usually a complex number, we plot the real number, \(F(t,s)F^{*}(t,s)=|F(t,s)|^{2}\), as a function of \(t\) (on the horizontal axis) and as a logarithmic function of spectral period, \(T=2\pi/\omega\), (on the vertical axis). In this case, the vertical axis indicates \(\log_{2}(2T)\) increasing downward from 1 at the top of the map. As we shall see in the next section, it will be necessary to choose \(\gamma>>6\) to resolve the individual solar rotation periods and see the effects of differential rotation. As shown in Appendix A, for such very large values of \(\gamma\), there is essentially no difference between \(\omega_{0}\), the frequency defined by Eq(4), and \(\omega\), the frequency of the oscillating mean field, obtained from the wavelet analysis. This means that there is no need to retain the 0 on \(\omega_{0}\) or on \(T_{0}=2\pi/\omega_{0}\). Finally, as a point of terminology, \(\psi\) is a Gabor wavelet that depends on \(\gamma\), and for the special case of \(\gamma=\pi\sqrt{2/ln2}\approx\) 5.336, the Gabor wavelet reduces to what is usually called a Morlet wavelet. 
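To make Eq. (6) concrete, here is a minimal numerical sketch (my own illustrative implementation, not the code used for this paper). It assumes daily sampling, truncates the Gaussian envelope at four decay scales, and replaces the integral by a sum over the samples; the synthetic test series and all variable names are assumptions made for the example.

```python
# Minimal numerical version of Eq.(6): correlate the daily series with the
# complex-conjugated Gabor wavelet, approximating the integral by a daily sum.
import numpy as np

def gabor_power(B, period_days, waves_per_decay):
    """Return |F(t,s)|^2 at the scale s = (gamma/2pi)*T for a daily series B."""
    gamma = 2.0 * np.pi * waves_per_decay
    s = waves_per_decay * period_days              # s = (gamma/2pi) T
    k = np.arange(-int(4 * s), int(4 * s) + 1)     # truncate the Gaussian at 4 scales
    x = k / s
    psi_conj = np.pi**-0.25 * np.exp(-0.5 * x**2 - 1j * gamma * x)
    F = np.convolve(B, psi_conj, mode="same") / np.sqrt(s)
    return np.abs(F)**2

# Synthetic test: a 27-day oscillation early on and a 30-day oscillation later.
days = np.arange(4000)
B = np.where((days > 500) & (days < 1300), np.cos(2 * np.pi * days / 27.0), 0.0) \
  + np.where((days > 2200) & (days < 3000), np.cos(2 * np.pi * days / 30.0), 0.0)

periods = np.linspace(25.0, 32.0, 36)
power = np.array([gabor_power(B, T, waves_per_decay=8) for T in periods])
# Rows near T = 27 d and T = 30 d should light up during the corresponding intervals.
print("period with the largest peak power:", periods[power.max(axis=1).argmax()], "days")
```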
## 3 Results Figure 1 compares the wavelet power in the WSO mean field (top panel) with the monthly averaged sunspot number from the Royal Observatory of Belgium (SILSO) shown in the lower panel. Here, \(\gamma=\pi\sqrt{2/ln2}\approx\) 5.33, corresponding to a Morlet wavelet transform of the WSO mean-field measurements. This is essentially what Boberg et al. (2002) obtained for the WSO data through 2001 using \(\gamma=6\). (See their Figure 1). The oscillation period, \(T\), is indicated in powers of 2 (like octaves of a musical scale), running logarithmically downward along the vertical axis as \(\log_{2}(2T)\). For convenience, I use the approximation that \(\log_{2}(2T)\approx\log_{2}(T/27)+5.75\) (equivalent to \(\log_{2}27\approx\) 4.75), which shows directly that the 27-day equatorial rotation period occurs at 5.75 on the vertical axis. Also, I used 32 steps per octave so that \[\log_{2}(2T)\ =\ 1\ +\ \log_{2}T\ =\ 1\ +\ \frac{j}{n}, \tag{7}\] with \(n=32\) and \(j\) running from 1 to 416 (13\(\times\)32). In this case, \(j=416\) gives \(\log_{2}(2T)=14\). Note that \(\log_{2}N=\log_{2}17218\approx\) 14.07, so that the ordinate values from 1 to 14 almost span the full set of 17218 measurements. The map shows a variety of spectral and temporal features. Toward the bottom of the map where ordinate values are in the range 11-13 (corresponding to times of years), coarse structures are aligned horizontally. Near the top of the map where ordinate values are in the range 4-7 (8-64 days), fine structures are aligned vertically. In between, the transition seems to consist of a multitude of round features scattered through the ordinate range of \(\log_{2}2T=7-9\), corresponding to periods in the range 64-256 days. Looking more closely, we can see that the vertical fine structures are distributed in rows running horizontally at locations of 5.75 (27 days), 4.75 (27/2 days), and more faintly at 4.16 (27/3 days), corresponding to the equatorial rotation period and its second and third harmonics. These features indicate rotational contributions from the dipole, quadrupole, and hexapole moments of the Sun's field. They fall into four time intervals corresponding to sunspot cycles 21-24. Their intensities are roughly correlated with the strengths of the sunspot cycles and are relatively weak during cycle 24. However, these features are widely distributed over each sunspot cycle, and often occur early in the declining phase of the cycle, as Sheeley & Wang (2015) found in their analysis of the Sun's large-scale field. If we represent the vertical dimension of this map by \(y=\log_{2}(2T)\), then we can relate small changes in \(y\) to changes in the period, T. We do this by converting to natural logarithms and then differentiating to obtain \[dy\ ln2\ =\ \frac{dT}{T}. \tag{8}\] Thus, small changes of \(y\) are proportional to the fractional change \(\Delta T/T\). In Figure 1, the rotational fine structures have a vertical extent of about \(\Delta y=0.5\), corresponding to \(\Delta T/T=0.5\,ln2=0.346\), which is \(\Delta T=9.4\) days at \(T=27\) days and 4.7 days at \(T=27/2\) days. So \(\gamma=5.33\) gives a relatively coarse picture of these rotational features. In other words, the Morlet transform shows the dipole, quadrupole and hexapole components with good temporal resolution (\(\lesssim 1\) solar rotation period), but it does not show the rotational fine structures of these components. 
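The conversion between a vertical extent \(\Delta y\) on these maps and the corresponding period spread implied by Eq. (8) is a one-line computation (an illustrative snippet only; the variable names are mine):

```python
# Eq.(8): dy * ln2 = dT/T, so a vertical extent dy maps to dT = T * dy * ln2.
import math

dy = 0.5                       # vertical extent of the rotational fine structures
for T in (27.0, 13.5):         # fundamental and second harmonic, in days
    print(f"T = {T:4.1f} d  ->  dT = {T * dy * math.log(2.0):.1f} d")   # ~9.4 d and ~4.7 d
```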
On the other hand, with 32 pixels per octave (corresponding to \(dy=1/32\) and \(dT/T=0.02\)), the display is capable of resolving features whose periods differ by \(dT=0.54\) days, which includes the 27-day, 28.5-day, and 30-day periods. However, to achieve this spectral resolution, it is necessary to increase \(\gamma\), or \(\gamma/2\pi\), which is the number of waves of period T in a decay time s, as indicated by Eq(4) rewritten in the form \[s\ =\ (\frac{\gamma}{2\pi})T. \tag{9}\] To understand this, recall from Section 2.1 that a Fourier transform of the entire set of \(N=17218\) points has a frequency resolution of \(1/N\) cycles day\({}^{-1}\). In other words, the frequency resolution is inversely related to the number of data points in the sample. Accordingly, we would expect wavelets of scale \(s\) to give a frequency resolution \(\sim\)1/\(s\) cycles day\({}^{-1}\). As shown in Eq(B18b) of Appendix B, the root-mean-square angular resolution is \((\Delta\omega)_{rms}=1/(s\sqrt{2})\). Dividing by the angular frequency, \(\omega\), given by Eq(4), we obtain \(\Delta\omega/\omega=1/(\gamma\sqrt{2})\). Also, because \(\Delta\omega/\omega=-\Delta T/T\), it follows that the spectral resolution is \[|\frac{\Delta\omega}{\omega}|\ =\ |\frac{\Delta T}{T}|\ =\ \frac{1}{\sqrt{2}}\frac{1}{\gamma}. \tag{10}\] For a given period, \(T=27\) days, we can obtain a large value of \(s\) (corresponding to a high spectral resolution) by choosing a large value of \(\gamma\). As a specific case, let's take \(\gamma/2\pi=20\), corresponding to \(|\Delta T/T|=0.0056\) and \(\Delta T=0.15\) days. Then, using Eq(9) and Eq(B18a), we obtain \[(\Delta t)_{rms}\ =\ \frac{s}{\sqrt{2}}\ =\ \frac{1}{\sqrt{2}}(\frac{\gamma}{2\pi})T\ =\ \frac{1}{\sqrt{2}}\times 20\times 27\ {\rm days}\ =\ 381.8\ {\rm days}\ =\ 1.04\ {\rm yr}. \tag{11}\] Thus, for this example of \(T=27\) days and \(\gamma/2\pi=20\), the spectral resolution is 0.15 days, which is sufficient to distinguish the 27-day, 28.5-day, and 30-day spectral features, but is obtained at the cost of degrading the temporal resolution to about 1 yr.

Figure 1: (top) Wavelet scaleogram of the WSO mean field measurements, showing power as a function of time, t, in years, and period, T, in days expressed logarithmically as \(\log_{2}(T/27)+5.75\). Here, \(\gamma=\pi\sqrt{2/ln2}\approx 5.33\), corresponding to a Morlet wavelet. (bottom) Monthly averaged sunspot number from the Royal Observatory of Belgium (SILSO).

Figure 2 illustrates this effect by changing \(\gamma/2\pi\) in powers of 2 ranging from 1 in the top panel to 16 in the bottom panel. In the top panel, the narrow slivers of power have widths on the order of a 27-day solar rotation or less. But their vertical extensions are about 0.5 unit, corresponding to several days of frequency resolution, as mentioned above. In the bottom panel, where \(\gamma/2\pi=16\), the spectral features are aligned in horizontal strips of widths \(\lesssim 0.03\) unit (corresponding to about 0.5 day) and separations of about 0.1 unit (corresponding to \(\approx\) 2 days), so that the rotation periods of 27, 28.5, and 30 days are clearly resolved. These three components are especially noticeable in 1990 when the dipole field has significant power at all three frequencies. By comparison, in the second panel from the top, where \(\gamma/2\pi=2\), the distribution of wavelet power looks very much like the temporal plots of power in the \(m=1\), \(m=2\), and \(m=3\) modes, as shown in Figure 7 of Sheeley (2022).
This is as one would expect because those plots were obtained by inverting the Fourier transform of the entire data set for the three frequency bands corresponding to \(m=1\), \(m=2\), and \(m=3\). I used maps like those in Figure 2 to make a movie of the wavelet power, but with a wider and finer range of \(\gamma/2\pi\). When viewed in a single-step mode, the movie allows one to investigate the transition from high temporal resolution to high spectral resolution in detail. The idea is to identify individual features and track them backward and forward as a function of \(\gamma/2\pi\). In this way, one might be able to identify the separate eruptions of flux that contribute to the long-term patterns of the mean field. Figure 2 is sufficient to make these identifications for some simple cases. For example, the 30-day, 2-sector pattern in 1989-1990 seems to have originated in two bursts, one around April 1989 and the other in January 1990. Likewise, the 27-day, 2-sector pattern in 2003-2004 seems to have originated in several eruptions during that time. Further discussion of this transition between high temporal resolution and high spectral resolution is contained in Appendix B. Figure 3 shows the wavelet power in the dipole (m=1), quadrupole (m=2), and hexapole (m=3) regions of the spectrum near 5.75, 4.75, and 4.16 on the vertical axis. In the middle and upper panels, the values of \(\gamma/2\pi\) have been increased to 20 and 40 cycles per decay time, respectively, and in both panels, the display resolution has been increased to 64 pixels per octave, corresponding to \(dT/T=0.022\) and \(dT=0.27\) days. For reference, faint dotted lines are drawn at values of \(T=27\) days, 28.5 days, and 30-days in the middle panel, and at their second harmonics in the top panel. Only the line corresponding to \(T=9\) days is shown for the third harmonic in the top panel. As before, the bottom panel shows the monthly averaged sunspot number from the Royal Observatory of Belgium plotted for cycles 21-24 and the rising phase of cycle 25. In Figure 3, the rotational fine structure is clearly visible in the dipole and quadrupole modes, and to a lesser extent in the hexapole mode. As found previously (Sheeley, 2022), the splitting is mainly one-sided, ranging from the equatorial rotation period of about 27 days to 30 days, as one might expect for magnetic patterns that are subject to differential rotation at latitudes between the equator and 45\({}^{\circ}\)(Newton & Nunn, 1951; Snodgrass, 1983; Sheeley & DeVore, 1986). In addition, meridional flow and the poleward component of supergranular diffusion ought to affect the rotation by making its latitudinal profile more rigid (Sheeley et al., 1987; DeVore, 1987; Wang, 1998). The strong 30-day period is visible for the dipole field in 1989-1990, and there is some weak 27-day power in 2022 at the start of the rising phase of sunspot cycle 25. As noted previously (Sheeley, 2022), the fine structures in one sectoral mode are not reproduced in the other sectoral modes. These fine structures refer to different multipole patterns of field, and are not the same structures rotating at different rates. A puzzling feature of this map is the presence of power at 4.73 on the vertical scale in the years 1990-1991. This corresponds to a magnetic quadrupole rotating with a period of 13.24 days. 
Interpreted as a second harmonic, this would imply that the fundamental is recurring with a period of \(\sim\)26.48 days, which is noticeably smaller than the equatorial period of 26.9-27 days. There is a similar feature near 5.73 in the middle panel during 2002-2004, corresponding to a magnetic dipole field rotating with a period of 26.5 days, which is again noticeably smaller than the equatorial rotation period of the Sun's surface. How can these magnetic features drift in longitude faster than the equatorial surface rotates? By means of waves, as I will explain in the next section of this paper.

Figure 2: A sequence of Gabor scaleograms for \(\gamma/2\pi=1\) (top panel) to 16 (bottom panel) in steps of a factor of 2, showing the transition from high temporal resolution (on the horizontal axis) to high spectral resolution (on the vertical axis). The vertical axis is a logarithmic indication of spectral period, T, expressed in days as \(\log_{2}(2T)\approx\log_{2}(T/27)+5.75\).

## 4 Summary and Discussion

The purpose of this paper was to see if a wavelet analysis would reveal the fine structure within each of the harmonic components of the mean field and show how that fine structure varied with time during past sunspot cycles. These objectives were achieved, but, to resolve the rotational fine structures, it was necessary to increase \(\gamma/2\pi\), the number of waves per decay time of the wavelet, well beyond the value of approximately 1 that is customarily used in the conventional Morlet transform. This was illustrated in Figures 1-3, which showed the distribution of wavelet power when \(\gamma/2\pi\) ranged from 0.85 to 40.

Figure 3: Wavelet power in the region of the solar rotation period (middle panel) and its second and third harmonics (top panel), displayed using \(\gamma/2\pi=20\) and \(\gamma/2\pi=40\), respectively. Faint dotted lines indicate periods of \(T=27\) days, 28.5 days, and 30 days near 5.8 on the vertical scale, and for their second harmonics near 4.8. A single dotted line indicates the third harmonic of \(T=27\) days near 4.2. The arrows indicate features at 4.73 in 1990-1991 and 5.73 during 2002-2004, corresponding to rotation periods \(\sim\)26.5 days. (bottom) Monthly averaged sunspot number from the Royal Observatory of Belgium (SILSO).

The middle and upper panels of Figure 3 showed the rotational fine structure in the 2-sector, 4-sector, and 6-sector fields, expressed as a function of time during sunspot cycles 21-24 and early in the rising phase of cycle 25. In addition to power with the 27-day equatorial rotation period during each sunspot cycle, a substantial amount of 2-sector power was visible with a 30-day rotation period during 1989-1990, reproducing the result obtained previously using Fourier transforms (Sheeley, 2022). Figure 3 showed puzzling examples of apparent solar rotation with periods of approximately 26.5 days, noticeably lower than the 26.9-27 day equatorial rotation period of the Sun's surface. In an attempt to understand this behavior, I examined a time-lapse movie constructed from MWO Carrington maps of Ca II K-line emission during the years 1915-1979 and from NSO Carrington maps of the photospheric magnetic field during 1980-2002. This movie was made previously as part of the work for a paper on the long-term variation of K-line emission (Sheeley et al., 2011).
Because these Carrington maps were displayed with the 27.27-day Carrington cadence, magnetic patterns poleward of about 15\({}^{\circ}\) (where the synodic rotation period is about 27.27 days) moved rapidly eastward, and features equatorward of about 15\({}^{\circ}\) drifted westward. The visual appearance was dramatic; high-latitude regions of each new sunspot cycle seemed to 'fly' off to the left, and the flux diffusing away from the last equatorial active regions of each cycle drifted very slowly to the right. In addition to these obvious indications of differential rotation, there were some flux concentrations that seemed to move more rapidly to the west. Their speeds were about twice the drift speed of long-lived active regions at the Sun's equator, which is consistent with a rotation period \(\sim\)26.5 days. (A period of 26.5 days corresponds to a rotation rate of 13.58\({}^{\circ}\)day\({}^{-1}\), or 0.38\({}^{\circ}\)day\({}^{-1}\) faster than the Carrington rate of 13.20\({}^{\circ}\)day\({}^{-1}\). By comparison, the 26.9-day equatorial rate is 13.38\({}^{\circ}\)day\({}^{-1}\), or 0.18\({}^{\circ}\)day\({}^{-1}\) faster than the Carrington rate. Thus, in the Carrington maps, the 26.5-day feature moves westward at a rate that is 0.38/0.18 = 2.1 times faster than flux drifting at the equatorial rate.) However, a close examination of the movie showed that these fast westward motions were not due to the differential rotation of long-lived magnetic regions. Rather, they were waves - apparent motions caused by the emergence of new active regions at progressively more western longitudes in the Carrington frame. Similar wavelike motions have been observed previously in Carrington stackplots of the NSO magnetic measurements by Gaizauskas et al. (1983). In fact, their oxymoron,'simple complex IIIa', is a very clear example of this fast wavelike motion in the N10-40\({}^{\circ}\) latitude band during Carrington rotations 1656-1659 (14 June - 03 September, 1977). My measurements of their Figure 2 give an effective rotation period of 26.3 days, compared to their value of 26.5 days. But the essential point is that the period was significantly smaller than the 26.9-27.0-day equatorial rotation period of the Sun. Consequently, the motion cannot be caused by longitudinal transport of flux by solar rotation, at least at the Sun's surface. It is interesting that the 26.5-day period obtained from our wavelet measurements is nearly the same as the \(\sim\)26.4-day equatorial rotation period found at a depth of 0.06\(R_{\odot}\) (\(\sim\)42 Mm) using GONG helioseismology measurements (Howe, 2009). Thus, the fast wavelike motion in Figure 3 occurs as if the flux were emerging from a fixed longitude at that depth. In terms of the popular word, 'nesting' which Castenmiller et al. (1986) introduced for repeated eruptions of sunspot activity at the same location (see also the reviews by van Driel-Gesztelyi & Green (2015) and by Hathaway (2015)), we could call this fast wavelike motion'subsurface nesting'. This raises the question of whether all of the enhanced wavelet patterns of the mean field are due to nesting. We know that the long-lived recurrent patterns of 28-30-day periods owe their existence to the continued emergence of bipolar regions with their polarities in phase. Otherwise, differential rotation winds up their flux and leaves a weaker field rotating with the 27-day period of the unwound flux at the equator. 
This is consistent with our more recent demonstration that the mean field is dominated by the horizontal dipole and quadrupole fields (and to a lesser extent the hexapole field) (Sheeley, 2022). So, the active regions must emerge in a way that reinforces the lower-order multipoles of the non-axisymmetric field. It does not matter whether this systematic emergence occurs in active longitudes as described by Gaizauskas et al. (1983), or whether it occurs by chance as in the randomized doublets of Sheeley & DeVore (1986). More recently, Hudson et al. (2014) have discussed this effect as a way of reinforcing long-lived polarity patterns at Hale sector boundaries. It is interesting that when Sheeley & DeVore (1986) removed supergranular diffusion and meridional flow from their simulations during the declining phase of the sunspot cycle (when all of the sources emerged below 20\({}^{\circ}\) latitude), the 28-29-day recurrence patterns disappeared. In other words, a poleward component of transport was essential for obtaining the 28-29-day periods, and with this transport, the slowly rotating patterns could be obtained from the flux in low-latitude active regions (providing that they continued to emerge in phase). Also, this is consistent with the results of Sheeley et al. (1987), who found that the large-scale field rotates differentially when the source rate is high, but begins to weaken and rotate rigidly with the 27-day equatorial period when the flux stops emerging and the surviving non-equatorial flux migrates poleward and is eliminated by differential rotation and supergranular diffusion. Thus, it seems likely that spectrally resolved wavelet patterns correspond to active-region nests at different latitudes and with strong horizontal dipole and/or quadrupole moments. Wavelet maps like those in Figure 3 indicate the sunspot-cycle distribution of those nests, and \(\gamma\)-sequences, like those in Figure 2, provide a way of finding the individual sources that maintain the strengths of these patterns.

I am grateful to Kristopher Klein (LPL/UA) for suggesting wavelet transforms as a way of tracking solar features of different rotational periods. Also, I am grateful to Chen Shi (UCLA) for providing details of how he made wavelet maps of the radial component of the interplanetary magnetic field observed at the Parker Solar Probe. Wilcox Solar Observatory data used in this study were obtained _via_ the web site [http://wso.stanford.edu](http://wso.stanford.edu) courtesy of J.T. Hoeksema. NSO data were acquired by SOLIS instruments operated by NISP/NSO/AURA/NSF. Sunspot numbers were obtained from WDC-SILSO, Royal Observatory of Belgium, Brussels.

## Appendix A Relations between \(\omega\) and \(\omega_{0}\)

As described in Sections 2.1 and 2.2 of the text, the basic concepts of wavelets and wavelet transforms are relatively straightforward. The main difficulty I had was in describing the quantity that was plotted on the vertical axis of the map of wavelet power. In principle, it ought to be the logarithm of the wavelet scale, \(\log_{2}s\). But in practice, what I really wanted was a logarithmic measure of the oscillation frequency, \(\omega\), or equivalently, its corresponding period, \(T=2\pi/\omega\).
Consequently, I needed to find a suitable relation between wavelet scale and period similar to what others have obtained in the past using a different notation for the wavelet frequency and a more limited range of \(\gamma/2\pi\)(Boberg et al., 2002; Podesta, 2009; Torrence and Compo, 1998; Shi et al., 2022; Wolfram Research, Inc., 2023). I proceeded as follows: First, I note that the Gaussian decay factor, \(e^{-(1/2)(t/s)^{2}}\), does not change the frequency of the wavelet because its Fourier transform, \(\hat{\psi}(\omega)\) varies as \(e^{-(1/2)(\gamma-\omega s)^{2}}\), which peaks at \(\omega=\gamma/s=\omega_{0}\). However, to be admissible as a wavelet, \(\psi\) must have a mean value of zero (Farge, 1992; Torrence and Compo, 1998), which can be achieved by subtracting a constant term from the oscillating factor, and then adjusting that term so that \(\int_{-\infty}^{\infty}\psi(t,s)dt=0\). In this case, the constant term is \(e^{-\gamma^{2}/2}\), and the shifted wavelet becomes \[\psi\left(\frac{t-\tau}{s}\right)\,\sim e^{-\frac{1}{2}\left((t-\tau)/s\right) ^{2}}\ \left[e^{i\gamma((t-\tau)/s)}\ -\ e^{-\gamma^{2}/2}\right].\] (A1) Now, the Fourier transform changes to \[\hat{\psi}(\omega)\ =\ sA\sqrt{2\pi}\left[e^{-\frac{1}{2}(\gamma-\omega s)^{2}}\ -\ e^{-\gamma^{2}/2}\ e^{-\frac{1}{2}(\omega s)^{2}}\right],\] (A2) where \(A=\pi^{-1/4}s^{-1/2}[1\ +\ e^{-\gamma^{2}}\ -\ 2e^{-(3/4)\gamma^{2}}]^{-1/2}\) is a normalization constant chosen so that \(\int_{-\infty}^{\infty}\psi\psi^{*}dt=1\). With this change, the location of the peak of \(\hat{\psi}(\omega)\) changes, and it is necessary to find its new location by computing \(\partial\hat{\psi}/\partial\omega\) and setting it equal to 0. This gives the relation \[e^{\gamma^{2}x}\ =\ \frac{x}{x-1}\] (A3) where \(x=\omega/\omega_{0}\). For the exponential to become large, the value of \(x\) on the right hand side of this equation must approach 1. Consequently, we can set \(x=1+\epsilon\) and solve for the small quantity, \(\epsilon\). The result is \(\epsilon\approx e^{-\gamma^{2}}\). Therefore, \[x\ =\ \frac{\omega}{\omega_{0}}\approx 1+e^{-\gamma^{2}},\] (A4) and the correction is less than 1% for \(\gamma\geq 2.14\). We can find a second-order correction, \(\delta\), by defining \(x_{1}=1+e^{-\gamma^{2}}\), and substituting \(x=x_{1}-\delta\) into Eq(A3). After some algebra, we obtain \(\delta=x_{1}-\{1-e^{-\gamma^{2}x_{1}}\}^{-1}\), and \(x=\{1-e^{-\gamma^{2}x_{1}}\}^{-1}\) This second-order solution lies within 2% of the numerical solution of Eq(A3) for \(\gamma=1\), and rapidly agrees to better than 1% as \(\gamma\) exceeds 1.25. So the frequency of the wavelet is very close to \(\omega_{0}\). What about the frequency obtained from the wavelet transform? To find out, I will compute the wavelet power \(|F(t,s)|^{2}\) for a time series of angular frequency, \(\omega\), given by \(\cos\omega t\), and look for the value of \(s\) that makes this power a maximum. This will provide another relation between \(\omega\) and \(s\), that can be combined with Eq(4) to relate \(\omega\) and \(\omega_{0}\). 
For this purpose, we return to Eq(6) where we set \(B_{m}(t)=\cos\omega t\) and use the value of \(\psi\) given by Eq(A1): \[F(t,s)\ =\ \frac{1}{\sqrt{s}}\int_{-\infty}^{+\infty}\cos\omega t\ \psi^{*}(\frac{t-\tau}{s})d\tau\ =\ \frac{\pi^{-1/4}}{\sqrt{s}}\int_{-\infty}^{+\infty}\cos\omega t\ e^{-\frac{1}{2} \{(t-\tau)/s\}^{2}}[e^{-i\gamma\{(t-\tau)/s\}}\ -\ e^{-\gamma^{2}/2}]d\tau.\] (A5) Evaluating this integral, we obtain \[F(t,s)\sim s^{1/2}e^{-\gamma^{2}/2}\left[\cos\omega t\{\cosh(\gamma\omega s)- 1\}\ -\ i\ \sin\omega t\sinh(\gamma\omega s)\right],\] (A6) where constant factors depending on \(\pi\) have been dropped. Next, because \(F(t,s)\) is a complex number, we multiply \(F(t,s)\) by its complex conjugate to obtain \[|F(t,s)|^{2}\sim se^{-\gamma^{2}}\ [\cosh(\gamma\omega s)-1]\ [\cosh(\gamma \omega s)-\cos 2\omega t].\] (A7) At this point, we recognize that \(\omega s\ \raise 1.29pt\hbox{$>$}\kern-7.5pt\lower 3.01pt\hbox{$\sim$}\ \gamma\), in which case \(\cosh(\gamma\omega s)\ \raise 1.29pt\hbox{$>$}\kern-7.5pt\lower 3.01pt\hbox{$\sim$}\ \cosh(\gamma^{2})>>1\). Consequently, Eq(A7) reduces to \[|F(t,s)|^{2}\sim se^{-\gamma^{2}}\cosh^{2}(\gamma\omega s),\] (A8) and \(|F(t,s)|\) becomes \[|F(t,s)|\sim s^{1/2}e^{-\gamma^{2}/2}\cosh(\gamma\omega s).\] (A9) It is interesting to note that if we had ignored the \(e^{-\gamma^{2}/2}\) term in Eq(A1) from the beginning, we would have obtained \[|F(t,s)|^{2}\sim se^{-\gamma^{2}}e^{-(\omega s)^{2}}\ [\cosh^{2}(\gamma \omega s)-\cos^{2}\omega t]\] (A10) instead of Eq(A7). Then, the condition that \(\cosh^{2}(\gamma\omega s)>>\cos^{2}\omega t\) would have led to Eq(A8) and Eq(A9). This means that the approximation that we used to get there the first time is equivalent to neglecting the extra term in the wavelet that arose when we forced \(\int_{-\infty}^{\infty}\psi dt=0\). So, looking ahead, whatever we find from Eq(A9) will be produced by the original wavelet without the extra \(e^{-\gamma^{2}/2}\) term. Now, let us use Eq(A9) to evaluate \(\partial|F(t,s)|/\partial s\) and then set it equal to 0 to find the value of \(s\) that makes \(|F(t,s)|\) a maximum. The derivative is \[\frac{\partial|F(t,s)|}{\partial s}\ =\ \frac{e^{-(\omega s)^{2}/2}}{2\sqrt{s}} \left[-\{2(\omega s)^{2}-1\}\cosh(\gamma\omega s)+2(\gamma\omega s)\sinh( \gamma\omega s)\right],\] (A11) and the resulting equation for \(s\) is \[\tanh(\gamma\omega s)\ =\ \frac{2(\omega s)^{2}-1}{2(\gamma\omega s)}.\] (A12) Substituting \(s=\gamma/\omega_{0}\) from Eq(4), and converting from the hyperbolic tangent to ordinary exponentials, we obtain \[e^{2\gamma^{2}x}\ =\ \frac{x+x^{2}-\frac{1}{2\gamma^{2}}}{x-x^{2}+\frac{1}{2 \gamma^{2}}},\] (A13) where \(x=\omega/\omega_{0}\). Eq(A13) is analogous to Eq(A3), and can be solved the same way. As before, we assume that the denominator of the right hand side is close to 0, and solve for the positive root of \(x-x^{2}+(1/2\gamma^{2})=0\). The result is \[x\ =\ \frac{\omega}{\omega_{0}}\ =\ (1/2)\left[1+\sqrt{1+\frac{2}{\gamma^{2}}} \right],\] (A14) consistent with Eq(B8) of Podesta (2009) using slightly different notation. If we regard Eq(A14) as the first-order solution, \(x_{1}\), then we can write \(x=x_{1}-\epsilon\) and substitute it into Eq(A13) to obtain \(\epsilon=\left[2x_{1}/(2x_{1}-1)\right]\ e^{-2\gamma^{2}x_{1}}\) as the second-order correction. 
In this case, the second-order solution for \(x\) becomes \[x\ =\ \frac{\omega}{\omega_{0}}\ =\ x_{1}-\left(\frac{2x_{1}}{2x_{1}-1}\right)e^{ -2\gamma^{2}x_{1}},\] (A15) where \(x_{1}\) is the first-order solution given by Eq(A14). Thus, for \(\gamma\gtrsim 6\), this second-order correction is negligible and the first-order solution, given by Eq(A14), is accurate to about 1%. There is no difference between \(\omega\) and \(\omega_{0}\) for the larger values of \(\gamma\) that resolve the 27-day, 28.5-day, and 30-day components of differential rotation. For smaller values of \(\gamma/2\pi\), one can use Eqs(A14) and (A15). ## Appendix B The transition between high spectral and spatial resolution Regardless of the values of \(s\) and \(\omega_{0}\) (or \(\gamma\)), the wavelet and its Fourier transform satisfy the Heisenberg uncertainty principle in the form \(\Delta\omega\Delta t=1/2\), provided that \(\Delta\omega\) and \(\Delta t\) are both defined as the root-mean-square differences from their average values. As mentioned below, this provides a basis for understanding the transition between the maps of high spectral resolution and high spatial resolution like those shown in Figure 2. Let's begin with \(\psi(t)\) and its Fourier transform, \(\hat{\psi}(\omega)\): \[\psi(t)\ =\ e^{-\frac{1}{2}(t/s)^{2}}e^{i\omega_{0}t},\] (B16a) \[\hat{\psi}(\omega)\ =\ \int_{-\infty}^{\infty}\psi(t)e^{-i\omega t }dt\ =\ \sqrt{2}s\sqrt{\pi}e^{-(s^{2}/2)(\omega-\omega_{0})^{2}}.\] (B16b) In this case, the mean squared averages are \[<(\Delta t)^{2}>\ =\ \frac{\int_{-\infty}^{\infty}t^{2}\psi(t) \psi^{*}(t)dt}{\int_{-\infty}^{\infty}\psi(t)\psi^{*}(t)dt}\ =\ \frac{s^{2}}{2},\] (B17a) \[<(\Delta\omega)^{2}>\ =\ \frac{\int_{-\infty}^{\infty}(\omega- \omega_{0})^{2}\hat{\psi}(\omega)\hat{\psi}^{*}(\omega)d\omega}{\int_{-\infty }^{\infty}\hat{\psi}(\omega)\hat{\psi}^{*}(\omega)d\omega}\ =\ \frac{1}{2s^{2}}.\] (B17b) Consequently, the root-mean-square values are \[(\Delta t)_{rms}\ =\ \frac{s}{\sqrt{2}},\] (B18a) \[(\Delta\omega)_{rms}\ =\ \frac{1}{s\sqrt{2}},\] (B18b) and their product is \[(\Delta\omega)_{rms}(\Delta t)_{rms}\ =\ \frac{1}{2},\] (B19) independent of \(s\). Also, if the energy, \(E\), of the wave packet were \(E=\hbar\omega\), then \[(\Delta E)_{rms}(\Delta t)_{rms}\ =\ \frac{\hbar}{2},\] (B20) which is the conventional form of the Heisenberg uncertainty principle. As mentioned in the text, the spectral resolution varies as \(1/s\), so that Eq(B18b) gives the spectral resolution in terms of the scale, \(s\). Likewise, Eq(B18a) gives the temporal resolution in terms of \(s\). When combined with Eq(9) of Section 3 (\(s=(\gamma/2\pi)T\)), these equations provide a way of calculating these resolutions as a function of \(\gamma/2\pi\), and therefore of tracking the evolution of wavelet power from high temporal resolution to high spectral resolution as was done in Figure 2. Let's pursue this matter further by recalling from Figure 2 that well-defined frequencies in the high spectral resolution maps (with \(\gamma/2\pi\sim 8-16\)) typically correspond to bursts of short-lived features in the high temporal resolution maps (with \(\gamma/2\pi\sim 1\)). In particular, the quadrupole feature at 4.73 on the vertical axis during 1990-1991 (also marked by an arrow in Figure 3) corresponds to a 0.5-yr burst of 2-3 short-lived features in the high temporal resolution maps at the top of Figure 2. Other bursts are visible with lifetimes ranging from 0.4 yr to about 1 yr. 
The closely spaced doublet at 4.73-4.75 during 2002-2004 (also marked by an arrow in Figure 3) corresponds to one of the strongest and longest-lived bursts in the high spatial resolution maps. And, as mentioned in Section 3, the 30-day 2-sector pattern in 1989-1990 seems to have originated in separate bursts in April 1989 and January 1990. It is tempting to wonder if some of these temporal components are produced by the interference between closely spaced frequencies in the high spectral resolution maps, analogous to the periodic stripes that occur in Bartels displays of interplanetary magnetic field polarity when long-lived patterns of 27 days and 28.5 days overlap in time (Svalgaard & Wilcox 1975; Wang & Sheeley 1994). However, the resulting 'beat' frequencies correspond to periods that are somewhat larger than the durations of these bursts. In fact, 27-day and 28.5-day features give a period of 27\(\times\)28.5/1.5 = 513 days = 1.4 yr, and the other frequency pairs give even larger periods. This does not include the damping associated with the wavelet scale, \(s\), and additional work is necessary to resolve this issue with confidence. Meanwhile, I suppose that these components of temporal fine structure are enhancements of the Sun's equatorial dipole, quadrupole, and hexapole moments caused by the ongoing emergence of active regions in 'nests' rotating with 27-day, 28.5-day, and other periods. ## Appendix C The mean-field of a photospheric doublet Having arrived at the idea that the mean field is due to the suitable juxtaposition of magnetic doublets, I am interested to know what the contribution of a single doublet is, and the circumstances under which it could appreciably affect the mean field. An idealized magnetic doublet can be represented by an expression of the form \[B_{r}\ =\ \frac{\Phi_{0}}{R^{2}}\left[\frac{\delta(\phi-\phi_{L})\delta( \theta-\theta_{L})}{\sin\theta_{L}}\ -\ \frac{\delta(\phi-\phi_{F})\delta(\theta-\theta_{F})}{\sin\theta_{F}}\right],\] (C21) where \(L\) and \(F\) refer to the leader and follower poles of the doublet, respectively, and the delta functions indicate the concentrated nature of those poles. It is easy to confirm that this magnetic doublet satisfies \[\int_{0}^{\pi}\int_{0}^{2\pi}B_{r}(\theta,\phi)R^{2}\sin\theta d\theta d\phi\ =\ \Phi_{0}-\Phi_{0}\ =\ 0.\] (C22) To obtain the mean field of this doublet, we simply evaluate the integral \[B_{m}\ =\ \frac{1}{\pi}\int_{0}^{\pi}\int_{0}^{\pi}B_{r}(\theta,\phi)(\sin \theta\cos\phi)^{2}\sin\theta d\theta d\phi.\] (C23) The result is \[B_{m}\ =\ (\frac{\Phi_{0}}{\pi R^{2}})\left[\sin^{2}\theta_{L}\cos^{2}\phi_{L} \ -\ \sin^{2}\theta_{F}\cos^{2}\phi_{F}\right].\] (C24) Next, we define the separations between the leader and follower poles and their mid-points by \(\theta_{L}=\theta_{0}+\Delta\theta/2\), \(\theta_{F}=\theta_{0}-\Delta\theta/2\), \(\phi_{L}=\phi_{0}+\Delta\phi/2\), and \(\phi_{F}=\phi_{0}-\Delta\phi/2\), where \(\Delta\theta\) and \(\Delta\phi\) are the pole separations and \(\theta_{0}\) and \(\phi_{0}\) are the mid-points between the respective poles. 
Substituting these relations into Eq(C24) and using the small-angle relations for \(\Delta\theta\) and \(\Delta\phi\), we obtain \[B_{m}\approx(\frac{\Phi_{0}\Delta\theta}{\pi R^{2}})\cos^{2}\phi_{0}\sin 2 \theta_{0}\ -\ (\frac{\Phi_{0}\Delta\phi}{\pi R^{2}})\sin^{2}\theta_{0}\sin 2\phi_{0}.\] (C25) Now, let's evaluate the root-mean-square value of the expression in Eq(C25) using the relation \(<B_{m}^{2}>\ =\ \int_{0}^{\pi}B_{m}^{2}d\phi_{0}/\int_{0}^{\pi}d\phi_{0}\). The result is \[<B_{m}^{2}>\ \ =\ \frac{3}{8}(\frac{\Phi_{0}\Delta\theta}{\pi R^{2}})^{2}\sin^{ 2}2\theta_{0}\ +\ \frac{1}{2}(\frac{\Phi_{0}\Delta\phi}{\pi R^{2}})^{2}\sin^{4}\theta_{0}.\] (C26) If we neglect \(\Delta\theta\) compared to \(\Delta\phi\) and take the square root, we obtain \[B_{m}^{rms}\ =\ \frac{1}{\sqrt{2}}(\frac{\Phi_{0}\Delta\phi}{\pi R^{2}})\sin^{2} \theta_{0}.\] (C27) Thus, the rms mean field of a doublet, located near the equator where \(\theta_{0}=\pi/2\), is roughly equal to its doublet moment, \(\Phi_{0}\Delta\phi\), divided by the area of the visible disk, \(\pi R^{2}\). More accurately, \(B_{m}^{rms}\approx 0.707(\Phi_{0}\Delta\phi/\pi R^{2})\). If this idealized magnetic doublet has a pole strength \(\Phi_{0}=10\times 10^{21}\)Mx and a longitudinal pole separation \(\Delta\phi\sim 10^{\circ}\) (\(\sim\)\(10^{5}\) km), corresponding to a mid-sized active region, then its contribution to the Sun's mean field would be only about 0.08 G. This is an order of magnitude smaller than the 1 G peaks that typically occur when the Sun's equatorial dipole and quadrupole moments are strong. To achieve these strong fields, it would take a nest of several large bipolar magnetic regions arranged with their polarities in phase, as we have found in spatially resolved observations around sunspot maximum and during the initial declining phase of the sunspot cycle (Sheeley & Wang, 2015). ## Appendix D The open flux of a photospheric doublet Here, we extend the previous calculation to determine how much open flux is contributed by the idealized magnetic doublet given by Eq(C21). To obtain this open flux, we assume a potential field whose angular components, \(B_{\theta}\) and \(B_{\phi}\) vanish at a spherical source surface located at a radial distance \(R_{ss}\), and whose radial component, \(B_{r}\) matches the radial component of the doublet field given by Eq(C21). Eventually, we will select \(R_{ss}=2.5R\), where R is the solar radius, but, for the moment, let's consider a general value of \(R_{ss}\). Our objective is to find the source surface field and then integrate its positive value over the source surface. Rather than repeating the solution of this well-known boundary-value problem, I will take the solution from Eq(4a) of a previous paper by Nash et al. (1988). 
Setting \(r=R_{ss}\) in their equation, the source-surface field becomes: \[B_{ss}\ =\ B_{r}(R_{ss},\theta,\phi)\ =\ \sum_{l=0}^{\infty}\sum_{m=-l}^{l}\left[\frac{(2l+1)(R/R_{ss})^{l+2}}{l+1+l(R/R_{ss})^{2l+1}}\right]c_{lm}Y_{l}^{m}(\theta,\phi),\] (D28) where \(c_{lm}\) are the spherical harmonic components of the doublet, given by \[B_{ph}\ =\ \sum_{l=0}^{\infty}\sum_{m=-l}^{l}c_{lm}Y_{l}^{m}(\theta,\phi).\] (D29) When \(B_{ph}\) is given by the doublet field of Eq(C21) with \(\theta_{L}=\theta_{F}=\theta\) and \(\phi_{L}-\phi_{F}=\Delta\phi\ll\pi/2\), the solution to Eq(D29) is \[c_{lm}\ =\ -imN_{lm}P_{l}^{m}(0)\left(\frac{\Phi_{0}\Delta\phi}{R^{2}}\right),\] (D30) where \(i=\sqrt{-1}\), \(N_{lm}=\sqrt{\frac{2l+1}{4\pi}\frac{(l-m)!}{(l+m)!}}\), and \(P_{l}^{m}(0)\) is the associated Legendre polynomial \(P_{l}^{m}(x)\) evaluated at \(x=0\). Next, we suppose that the resulting field will be dominated by the contributions of the horizontal dipole and quadrupole, and limit the sum in Eq(D28) to terms of the form \(Y_{1}^{\pm 1}\) and \(Y_{2}^{\pm 2}\). In this case, the source-surface field is \[B_{ss}(\theta,\phi)\ =\ a\sin\theta\sin\phi\ +\ b\sin^{2}\theta\sin 2\phi,\] (D31) where \[a\ =\ \frac{3}{4}\left[\frac{3(R/R_{ss})^{3}}{2+(R/R_{ss})^{3}}\right]\left(\frac{\Phi_{0}\Delta\phi}{\pi R^{2}}\right)\ \approx\ \frac{9}{8}(\frac{R}{R_{ss}})^{3}\left(\frac{\Phi_{0}\Delta\phi}{\pi R^{2}}\right)\] (D32) and \[b\ =\ \frac{15}{8}\left[\frac{5(R/R_{ss})^{4}}{3+2(R/R_{ss})^{5}}\right]\left(\frac{\Phi_{0}\Delta\phi}{\pi R^{2}}\right)\ \approx\ \frac{25}{8}(\frac{R}{R_{ss}})^{4}\left(\frac{\Phi_{0}\Delta\phi}{\pi R^{2}}\right).\] (D33) It is interesting that \(b/a\approx(25/9)(R/R_{ss})\), which is approximately 1.11 when \(R_{ss}/R\) has the conventional value of 2.5. Thus, \(a\sim b\), so that the horizontal dipole and quadrupole probably make comparable contributions to the open flux. In fact, we can confirm this by evaluating their separate integrals over the regions of positive flux: \[\Phi_{open}\ \approx\ \frac{9}{8\pi}(\frac{R}{R_{ss}})\int_{0}^{\pi}\sin^{2}\theta d\theta\int_{0}^{\pi}\sin\phi d\phi\ (\Phi_{0}\Delta\phi)\ =\ \frac{9}{8}(\frac{R}{R_{ss}})(\Phi_{0}\Delta\phi)\ =\ 0.45\ (\Phi_{0}\Delta\phi)\] (D34) for the dipole field, and \[\Phi_{open}\ \approx\ (\frac{25}{8\pi})(\frac{R}{R_{ss}})^{2}\int_{0}^{\pi}\sin^{2}\theta\sin\theta d\theta\ \ 2\int_{0}^{\pi/2}\sin 2\phi d\phi\ (\Phi_{0}\Delta\phi)\ =\ (\frac{25}{3\pi})(\frac{R}{R_{ss}})^{2}\ (\Phi_{0}\Delta\phi)\ =\ 0.42\ (\Phi_{0}\Delta\phi)\] (D35) for the quadrupole field. In each case the \(\theta\) integrand is the product of the field's \(\theta\) dependence and the factor of \(\sin\theta\) from the area element. The extra factor of 2 in Eq(D35) allows for the fact that the quadrupole has 2 separate quadrants of positive field. The result that \(\Phi_{open}=0.45(\Phi_{0}\Delta\phi)\) in Eq(D34) agrees with Eq(1) of Sheeley & Wang (2015), and its derivation in Eqs(D28)-(D34) provides the documentation for the steps that were only outlined there. When both harmonics are present at the same time, we can obtain the positive flux by numerically integrating the absolute value of the field over the entire sphere and dividing the result by 2. Schematically, the flux is given by \[\Phi_{open}\ =\ \frac{1}{2}\int_{0}^{\pi}\int_{0}^{2\pi}|p\sin\theta\sin\phi+q\sin^{2}\theta\sin 2\phi|\sin\theta d\theta d\phi\ (\Phi_{0}\Delta\phi),\] (D36) where \(p=aR_{ss}^{2}/(\Phi_{0}\Delta\phi)\approx(9/8\pi)(R/R_{ss})=0.143\) and \(q=bR_{ss}^{2}/(\Phi_{0}\Delta\phi)\approx(25/8\pi)(R/R_{ss})^{2}=0.159\). 
The result is \[\Phi_{open}\ =\ 0.55\ (\Phi_{0}\Delta\phi).\] (D37) So when both the horizontal dipole and quadrupole are included, the amount of open flux is approximately \(0.55(\Phi_{0}\Delta\phi)\) where the approximation was obtained by neglecting terms of order \((R/R_{ss})^{3}=(0.4)^{3}=0.064\) in the expression for \(a\) and \((0.4)^{5}=0.010\) in \(b\). Without this approximation, the combined open flux drops slightly to 0.54 of the doublet moment, mainly due to a drop of the dipole contribution from 0.45 to 0.44. The essential point is that for a current-free corona, the open flux of an idealized doublet located near the equator is about half of its doublet moment. This is comparable to \(0.71(\Phi_{0}\Delta\phi)\), the flux obtained from Eq(C27) by multiplying the rms mean field by the area of the visible solar disk. Like the mean field, several suitably arranged doublets of modest size will be required to contribute appreciably to the amount of open flux.
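As a quick consistency check (not part of the original analysis), the worked numbers in Appendices C and D can be reproduced numerically. The short Python sketch below assumes a solar radius of \(6.96\times 10^{10}\) cm and a simple midpoint grid for the integral in Eq(D36); it recovers the \(\sim\)0.08 G rms mean field of the mid-sized doublet and the combined open flux of \(\approx 0.55(\Phi_{0}\Delta\phi)\).

```python
import numpy as np

# Eq(C27): rms mean field of an equatorial doublet (theta_0 = pi/2).
Phi0 = 10e21                # pole strength in Mx (mid-sized active region)
dphi = np.radians(10.0)     # longitudinal pole separation of ~10 degrees
Rsun = 6.96e10              # assumed solar radius in cm
B_rms = Phi0 * dphi / (np.sqrt(2.0) * np.pi * Rsun**2)
print(f"rms mean field of the doublet: {B_rms:.3f} G")      # ~0.08 G

# Eq(D36): open flux when dipole and quadrupole are present together.
R_over_Rss = 1.0 / 2.5
p = (9.0 / (8.0 * np.pi)) * R_over_Rss           # ~0.143
q = (25.0 / (8.0 * np.pi)) * R_over_Rss**2       # ~0.159
n = 1000
theta = (np.arange(n) + 0.5) * np.pi / n         # midpoint grid, 0..pi
phi = (np.arange(2 * n) + 0.5) * np.pi / n       # midpoint grid, 0..2*pi
TH, PH = np.meshgrid(theta, phi, indexing="ij")
Bss = p * np.sin(TH) * np.sin(PH) + q * np.sin(TH)**2 * np.sin(2.0 * PH)
cell = (np.pi / n)**2                            # d(theta) * d(phi)
flux = 0.5 * np.sum(np.abs(Bss) * np.sin(TH)) * cell
print(f"combined open flux: {flux:.2f} (Phi0 * dphi)")      # ~0.55
```

Setting \(q=0\) or \(p=0\) in the same sum reproduces the separate dipole and quadrupole values of 0.45 and 0.42 from Eqs(D34) and (D35).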
2310.04438
A Brief History of Prompt: Leveraging Language Models. (Through Advanced Prompting)
This paper presents a comprehensive exploration of the evolution of prompt engineering and generation in the field of natural language processing (NLP). Starting from the early language models and information retrieval systems, we trace the key developments that have shaped prompt engineering over the years. The introduction of attention mechanisms in 2015 revolutionized language understanding, leading to advancements in controllability and context-awareness. Subsequent breakthroughs in reinforcement learning techniques further enhanced prompt engineering, addressing issues like exposure bias and biases in generated text. We examine the significant contributions in 2018 and 2019, focusing on fine-tuning strategies, control codes, and template-based generation. The paper also discusses the growing importance of fairness, human-AI collaboration, and low-resource adaptation. In 2020 and 2021, contextual prompting and transfer learning gained prominence, while 2022 and 2023 witnessed the emergence of advanced techniques like unsupervised pre-training and novel reward shaping. Throughout the paper, we reference specific research studies that exemplify the impact of various developments on prompt engineering. The journey of prompt engineering continues, with ethical considerations being paramount for the responsible and inclusive future of AI systems.
Golam Md Muktadir
2023-09-30T22:27:37Z
http://arxiv.org/abs/2310.04438v2
# A Brief History of Prompt: Leveraging Language Models. (Through Advanced Prompting) ###### Abstract This paper presents a comprehensive exploration of the evolution of prompt engineering and generation in the field of natural language processing (NLP). Starting from the early language models and information retrieval systems, we trace the key developments that have shaped prompt engineering over the years. The introduction of attention mechanisms in 2015 revolutionized language understanding, leading to advancements in controllability and context-awareness. Subsequent breakthroughs in reinforcement learning techniques further enhanced prompt engineering, addressing issues like exposure bias and biases in generated text. We examine the significant contributions in 2018 and 2019, focusing on fine-tuning strategies, control codes, and template-based generation. The paper also discusses the growing importance of fairness, human-AI collaboration, and low-resource adaptation. In 2020 and 2021, contextual prompting and transfer learning gained prominence, while 2022 and 2023 witnessed the emergence of advanced techniques like unsupervised pre-training and novel reward shaping. Throughout the paper, we reference specific research studies that exemplify the impact of various developments on prompt engineering. The journey of prompt engineering continues, with ethical considerations being paramount for the responsible and inclusive future of AI systems. Prompt Engineering, Prompt Generation, Information Retrieval, Large Language Models, Language Queries ## I Pre-Introduction The goal is to show how to use Graph-of-Thought prompting and previously generated content to achieve the desired results without human editing. Often, it is easier to generate better output after generating a basic output instead of searching for the perfect prompt. _The explored prompting technique goes beyond Graph-of-Thought and follows a path to optimize and re-organize contents._. **The rest of the contents are generated by ChatGPT 3.5 Turbo**. ## II Introduction **Prompt #1:**_You are a scholar in machine learning and language models. I am writing a paper on the history of prompt engineering and generation. Can you give me a timeline for prompt engineering evolution? (We used this timeline to create prompts for each section later)_ **Prompt #2:**_Write the introduction of this paper. Emphasize that this paper focuses on how language prompts and queries have been used so far._ Natural Language Processing (NLP) has witnessed remarkable advancements over the years, revolutionizing how we interact with machines and bridging the gap between human language and artificial intelligence. One critical aspect that has played a pivotal role in shaping the capabilities of NLP systems is the design and usage of language prompts and queries. Recently, researchers and practitioners have explored various techniques to harness the full potential of language models, leveraging the power of prompts to guide, control, and tailor the output of these sophisticated AI systems. This paper embarks on a retrospective journey, offering a comprehensive review of the captivating history of language prompts and queries in NLP. From its nascent stages, where prompts were merely a means of input for rudimentary language models, to its current state as an influential component in modern deep learning models, we aim to unravel the captivating story behind this remarkable facet of NLP research. 
The inception of this narrative can be traced back to the early language models of the pre-2010 era. These rudimentary models, such as n-grams and statistical language models, laid the groundwork for understanding the concept of prompt engineering - a concept that would become indispensable in the future. While the terminology of "prompt engineering" had not yet emerged, the seeds of its significance were planted in information retrieval systems, where queries were employed as inputs to retrieve relevant information from vast datasets. As the decade progressed, the 2010s witnessed a seismic shift in NLP, brought about by the advent of neural networks. Groundbreaking innovations such as Word2Vec, introduced in 2013 by Mikolov et al., paved the way for capturing semantic relationships between words. Subsequently, the revolutionary Sequence-to-Sequence (Seq2Seq) model, presented by Sutskever et al. in 2014, unlocked the potential for machine translation through its encoder-decoder architecture. The transformational power of attention mechanisms came into the spotlight in 2015 with the release of the Transformer model, as proposed by Vaswani et al. The attention mechanism allowed models to process and understand context with unprecedented efficiency, facilitating long-range dependencies in sequence data. This seminal development formed the cornerstone for future advances in prompt engineering. By 2017, the NLP community began to explore reinforcement learning techniques for language generation. Researchers like Ranzato et al. and Li et al. employed reinforcement learning to fine-tune language models based on external reward signals. This pivotal step not only enhanced fluency and relevance in generated text but also sowed the seeds for prompt engineering to emerge as a key factor in guiding the model's behavior and generating more contextually relevant responses. In 2018, a revolutionary breakthrough emerged with the introduction of BERT (Bidirectional Encoder Representations from Transformers). Developed by Devlin et al., BERT showcased the remarkable potential of pre-training language models on vast corpora, followed by fine-tuning for specific tasks. This epoch-making moment brought prompt engineering to the forefront, demonstrating its power in leveraging pre-trained models for diverse NLP applications. The late 2010s and early 2020s marked an era of innovation in prompt engineering, as researchers explored control codes and template-based generation methods. These techniques allowed for more targeted and controlled text output, enabling researchers to guide language models towards specific behaviors and desirable outcomes. The advent of massive language models like GPT-3 in the year 2020 redefined the landscape of NLP, elevating prompt engineering to new heights. With an astonishing 175 billion parameters, GPT-3 showcased unparalleled language generation capabilities, sparking widespread interest in prompt design and its profound impact on fine-tuning language models for specific domains. As the journey through time continues, we delve deeper into the history of prompt engineering, exploring the rich tapestry of methodologies, milestones, and applications that have shaped the course of NLP. By understanding this evolution, we gain valuable insights into the challenges conquered, the accomplishments celebrated, and the promising future that lies ahead for this ever-evolving field of language prompts and queries in NLP. 
## III Prehistoric Prompting: Pre NN-Era **Prompt #3:**_Now generate history of prompting or querying in early language models an information retrieval_ Before the advent of neural networks and modern deep learning models, early language models and information retrieval systems laid the foundation for the concept of prompting or querying in natural language processing. Although not explicitly referred to as "prompting" or "querying" at the time, these early approaches formed the basis for how researchers and systems interacted with textual data and sought relevant information. ### _Early Language Models:_ Early language models, such as n-grams and statistical language models, emerged in the early days of computational linguistics. In these models, text was processed as a sequence of words or characters, and the probability of the next word was predicted based on the occurrence frequencies of n-grams (sequences of n words). While these models did not involve explicit user prompts or queries, the process of predicting the next word in a sequence can be considered an implicit form of prompting, where the previous words serve as input guiding the model's prediction. ### _Information Retrieval Systems:_ Information retrieval systems have a long history predating modern NLP techniques and were primarily used to retrieve relevant documents from large collections based on user queries. Early information retrieval systems utilized keyword-based search, where users provided queries consisting of specific words or phrases, and the system returned documents containing those keywords. These queries often required careful phrasing and choice of keywords to obtain relevant results, effectively serving as explicit prompts to retrieve desired information. ### _1960s-1970s: Early Natural Language Interfaces:_ In the 1960s and 1970s, early attempts were made to develop natural language interfaces for information retrieval systems. Researchers explored methods to process user queries in natural language and transform them into suitable search queries for information retrieval systems. These early systems provided rudimentary prompting capabilities, allowing users to interact with the system using natural language queries, albeit with limited understanding and accuracy. ### _1980s-1990s: Rule-Based Approaches:_ In the 1980s and 1990s, rule-based approaches were developed to enable more sophisticated interaction with information retrieval systems. Researchers designed complex query languages that allowed users to express search criteria in a more structured and flexible manner. Users could specify constraints and relationships between search terms, effectively prompting the system to narrow down search results based on user preferences. ### _1990s-2010: Advances in Natural Language Processing:_ The field of natural language processing witnessed significant advancements, especially with the introduction of machine learning techniques and neural networks. Modern information retrieval systems, search engines, and question-answering systems leverage sophisticated language models and deep learning techniques to interpret user queries and provide more accurate and contextually relevant results. These systems employ explicit prompting through user input and implicitly leverage the context within queries to optimize search results and understand user intent better. 
The historical development of prompting and querying in early language models and information retrieval systems laid the groundwork for modern NLP approaches that now utilize neural networks and large-scale language models for more powerful and context-aware language understanding and generation. The evolution of prompt engineering and querying continues to progress, with ongoing research and innovations propelling the field further towards new frontiers in natural language processing. ## IV History of NLP between 2010 and 2015: The Pre-Attention Mechanism Era **Prompt #4:**_Now write the history between 2010 and 2015 before attention mechanism was invented_ The period between 2010 and 2015 was marked by significant strides in Natural Language Processing (NLP), laying the groundwork for revolutionary advancements that would follow in subsequent years. During this era, researchers explored various techniques to enhance language understanding and generation, all leading up to the eventual introduction of the attention mechanism, which would become a pivotal development in NLP. Here's an overview of the key developments during this period: ### _Neural Network Renaissance (2010-2013):_ NLP research saw a resurgence of interest in neural network-based approaches, popularly known as the "neural network renaissance." Word embeddings gained prominence as an effective way to represent words in continuous vector spaces. The Word2Vec model proposed by Mikolov et al. in their paper "Efficient Estimation of Word Representations in Vector Space" (2013) was a breakthrough in generating high-quality word embeddings. Researchers realized that these continuous word representations allowed models to capture semantic relationships between words, which led to better performance on various NLP tasks. ### _The Rise of Seq2Seq Models (2014):_ In 2014, Sutskever et al. introduced the Sequence-to-Sequence (Seq2Seq) model, which is an encoder-decoder architecture powered by recurrent neural networks (RNNs). Seq2Seq models revolutionized machine translation by effectively learning to encode the input sentence into a fixed-length vector (the "context" or "thought"), which is then used to generate the output sentence. This architecture marked a shift towards more context-aware language modeling, with the ability to process variable-length sequences and produce coherent and contextually appropriate responses. ### _The Emergence of Recursive Neural Networks (RNNs) and Long Short-Term Memory (LSTM) (2010-2015):_ Recurrent Neural Networks (RNNs) gained popularity as a means to process sequential data, including natural language. LSTM, proposed by Hochreiter and Schmidhuber in their paper "Long Short-Term Memory" (1997) but gaining more traction in the early 2010s, was a significant advancement over traditional RNNs. LSTM addressed the vanishing gradient problem, allowing for the successful training of deeper networks. LSTM-based models offered better long-term dependency handling, making them suitable for tasks where maintaining context over longer sequences was crucial. ### _Limitations of Early Approaches:_ Despite the progress made, early language models faced challenges in handling long-range dependencies and maintaining context over lengthy sentences. RNNs, while capable of capturing sequential information, suffered from vanishing or exploding gradient problems during training, limiting their effectiveness in understanding long sequences. 
### _Pre-Attention Mechanism Approaches:_ Prior to the introduction of the attention mechanism, researchers devised various methods to tackle the limitations of existing models. Techniques such as Recursive Neural Tensor Networks (RNTNs) and Convolutional Neural Networks (CNNs) were employed to capture hierarchical and compositional structures within text. While these approaches showed promise, they still struggled to effectively handle longer sequences and complex dependencies. The period between 2010 and 2015 can be characterized as a time of exploration, where researchers experimented with various neural network architectures and representation techniques to improve language modeling and generation. The advancements during this era laid the foundation for the groundbreaking introduction of the attention mechanism in 2015, which would revolutionize NLP by enabling more efficient context understanding and long-range dependency handling in language models. ## V The Advent of Attention Mechanism: A Game-Changer for Prompt Engineering in 2015 ## VI 2015: Birth of the Transformer **Prompt #5: now write a section on how attention mechanism changed the future of prompt engineering in 2015** In 2015, the field of Natural Language Processing (NLP) witnessed a groundbreaking development that would forever change the landscape of prompt engineering - the introduction of the attention mechanism. Proposed by Vaswani et al. in the seminal paper "Attention is All You Need", [1], the attention mechanism revolutionized the way language models processed input data, enabling them to focus on relevant information and establish meaningful contextual relationships. This transformative breakthrough had a profound impact on prompt engineering, leading to improved language understanding, better controllability, and enhanced performance across various NLP tasks. ### _Contextual Understanding and Long-Range Dependencies:_ The attention mechanism fundamentally changed how language models processed sequences by allowing them to pay varying degrees of attention to different parts of the input. Unlike traditional approaches that treated each word in the sequence equally, attention-equipped models could dynamically assign weights to each word based on its relevance to the context. This enabled the model to establish meaningful relationships between words across long sequences, overcoming the limitations of earlier methods and greatly improving the contextual understanding of prompts. ### _Better Prompt-Model Interaction:_ With the attention mechanism, prompt engineering took on a new dimension as the models could now focus on specific parts of the prompt that mattered most for generating the desired output. Attention mechanisms facilitated an interactive process, where the model could attend to important aspects of the prompt, query, or context, refining its responses accordingly. This newfound flexibility allowed prompt designers to craft more informative, targeted, and nuanced queries, effectively guiding the model's behavior and achieving more desirable outcomes. ### _Enhanced Controllability and Interpretability:_ The attention mechanism empowered prompt engineers to exert finer control over the generated text by explicitly specifying relevant portions of the input that should receive heightened attention. By directing the model's attention, prompt engineers could guide the model's focus and influence the style, content, and tone of the generated output. 
Moreover, attention weights could be interpreted and visualized, providing valuable insights into the decision-making process of the language model, enhancing transparency, and aiding in bias mitigation. ### _Applications in Machine Translation and Language Generation:_ The attention mechanism significantly improved machine translation systems by allowing the model to focus on relevant source words when generating target translations. In language generation tasks, such as text summarization and question answering, attention mechanisms helped the model attend to crucial context, leading to more coherent and contextually appropriate responses. ### _Multi-Head Attention and Transfer Learning:_ The concept of multi-head attention, introduced alongside the attention mechanism, enabled models to attend to multiple aspects of the input simultaneously, leading to improved performance and flexibility. The attention mechanism also became instrumental in transfer learning. Pre-trained models could be fine-tuned using prompt engineering to adapt to specific tasks effectively, leveraging their context-awareness to improve performance on various domains. The introduction of the attention mechanism in 2015 represented a paradigm shift in prompt engineering, propelling the field into a new era of language understanding and generation. By empowering language models to focus on relevant information and establish context-dependent relationships, the attention mechanism unlocked new possibilities for fine-tuning models, interpreting their decisions, and achieving higher levels of controllability in natural language processing. As the attention mechanism continued to evolve, researchers and practitioners explored new frontiers in prompt engineering, opening the door to even more sophisticated and context-aware AI systems. ## VII The second wave in 2017: Rise of RL _Prompt #6: now discuss how the advent of reinforcement learning techniques in 2017 changed the prompt engineering_ The introduction of reinforcement learning (RL) techniques in 2017 brought another transformative wave to the field of prompt engineering. Reinforcement learning involves training an agent (in this case, a language model) to take actions in an environment (generate text) to maximize a reward signal. The use of RL in conjunction with prompt engineering led to significant advancements in fine-tuning language models, improving fluency, relevance, and controllability of generated text, [2]. Here's how the advent of RL techniques changed the landscape of prompt engineering: ### _Improving Fluency and Relevance:_ Reinforcement learning allowed prompt engineers to define appropriate reward signals that could incentivize the language model to generate more fluent and contextually relevant responses. Traditional supervised fine-tuning using maximum likelihood estimation (MLE) often led to models that were overly conservative and lacked creativity, but RL opened up possibilities for more exploratory behavior. With RL, language models could explore the space of potential responses, learning from their own generated samples and adjusting their behavior based on the received reward signal, leading to more fluent and contextually appropriate language generation, [3, 4]. ### _Addressing Exposure Bias:_ One significant challenge in language model training was exposure bias, where a model is trained on teacher-forced input during training but experiences a discrepancy during inference, often resulting in a gap between training and testing performance. 
RL helped mitigate exposure bias by enabling models to sample from their own predictions during training, aligning the training and inference process more closely, [5, 6]. Prompt engineers could design reward functions to encourage self-correcting behavior, leading to more consistent and robust language generation during deployment. ### _Controlling Model Behavior through Reward Shaping:_ Moreover, prompt engineers could use RL's reward shaping to encourage specific behaviors in the language model. By defining custom reward functions, prompt engineers could guide the model to generate responses that adhered to desired criteria, such as maintaining a specific tone, style, or level of formality, [7]. This controllability allowed language models to be tailored for specific applications, ensuring the generated content aligned with domain-specific requirements. ### _Addressing Biases in Language Models:_ Another important aspect was addressing biases in language models. Reinforcement learning provided a means to mitigate biases in language models by using carefully designed reward functions. By penalizing biased responses or rewarding unbiased behavior, prompt engineers could encourage the model to produce more equitable and unbiased language generation, contributing to fairer AI systems, [8, 9]. ### _Advancements in Task-Specific Prompt Engineering:_ RL-powered prompt engineering also opened up avenues for researchers to optimize language models for task-specific objectives. By defining tasks as reinforcement learning problems and designing corresponding reward functions, language models could be fine-tuned for applications such as text summarization, dialogue generation, and question answering, leading to state-of-the-art performance in various NLP tasks. ### _Challenges in Reward Design and Exploration:_ However, while RL provided powerful tools for prompt engineering, designing effective reward functions remained a challenging task. The choice of reward functions could strongly influence the behavior of the model, requiring careful consideration to ensure desired outcomes. Additionally, exploration-exploitation trade-offs in RL training required balancing novelty and diversity in generated text with the need for generating contextually appropriate responses. Overall, the incorporation of reinforcement learning techniques in prompt engineering brought about a new level of controllability and performance to language models. By shaping model behavior through reward functions and addressing issues like exposure bias and biases in generated text, RL-powered prompt engineering advanced the capabilities of language models, making them more powerful, context-aware, and adaptable for a wide range of NLP applications. ## VIII The third wave 2018: the rise of Transformers **Prompt #6:**_Now write the section on how research in 2018 shaped prompt engineering_ The year 2018 was a pivotal period in the evolution of prompt engineering, witnessing transformative breakthroughs that propelled the field to new heights. Researchers explored innovative approaches to leverage pre-trained language models, making use of transfer learning and fine-tuning techniques. This year marked the rise of BERT (Bidirectional Encoder Representations from Transformers), [10], which revolutionized NLP and significantly impacted prompt engineering. Here's an overview of how research in 2018 shaped the landscape of prompt engineering: ### _The Emergence of BERT and Pre-training:_ BERT, introduced by Devlin et al. 
in 2018, was a major milestone in transfer learning for NLP. The model was pre-trained on a large corpus of unlabeled text, allowing it to learn powerful language representations. BERT's architecture, based on the Transformer model with a bidirectional encoder, captured contextual information from both left and right contexts, leading to deeper and richer language understanding. The pre-training paradigm enabled prompt engineers to leverage BERT's knowledge and context-awareness while fine-tuning for specific tasks, revolutionizing prompt engineering by providing a starting point for more specialized models, [11]. ### _Transfer Learning and Fine-tuning:_ BERT popularized the concept of transfer learning in NLP. Researchers realized that pre-training a language model on a vast corpus enabled it to capture general linguistic patterns and context. Fine-tuning allowed prompt engineers to adapt pre-trained models to specific downstream tasks with minimal additional training data, [12, 13]. This transfer learning paradigm drastically reduced the need for large task-specific datasets, making prompt engineering more practical and effective. ### _Task-Specific Prompt Engineering with BERT:_ Task-specific prompt engineering with BERT became prevalent in 2019. Prompt engineers began utilizing BERT for a wide range of NLP tasks, such as sentiment analysis, named entity recognition [14], and question answering, among others. Task-specific prompt engineering involved providing the model with carefully designed inputs, such as question-context pairs or masked sentences, to guide its behavior and improve task performance. By fine-tuning BERT on task-specific data, researchers achieved state-of-the-art results in various NLP benchmarks, showcasing the power of prompt engineering with pre-trained models. ### _Masked Language Model (MLM) and Cloze-Style Prompts:_ BERT's pre-training involved a masked language model (MLM) objective, where random words in the input text were masked, and the model was tasked with predicting the masked words, [15]. This MLM pre-training opened up new possibilities for prompt engineering, as researchers could use cloze-style prompts (with masked words) to guide the model to fill in the missing information, generating coherent and contextually appropriate responses. ### _Contextualized Word Embeddings (ELMo):_ Another influential development in 2018 was the introduction of ELMo (Embeddings from Language Models) by Peters et al. ELMo generated contextualized word embeddings by training bidirectional LSTM language models on large corpora, [16]. ELMo embeddings allowed for richer word representations that captured different meanings of a word in different contexts, contributing to more sophisticated prompt engineering strategies. ### _Unsupervised and Semi-Supervised Prompt Engineering:_ BERT's ability to capture linguistic patterns from unlabeled data allowed for unsupervised and semi-supervised prompt engineering approaches. Researchers explored methods to fine-tune BERT on limited labeled data, using unsupervised techniques to leverage abundant unlabeled data for improved task performance. The research breakthroughs of 2018, centered around BERT and pre-training techniques, profoundly shaped prompt engineering. Leveraging transfer learning and fine-tuning, prompt engineers could tailor language models to specific tasks with remarkable efficiency. 
The rise of BERT and contextualized embeddings opened up new avenues for designing cloze-style prompts and leveraging large amounts of unlabeled data for prompt engineering. These advancements propelled the field of prompt engineering into a new era of efficiency, adaptability, and state-of-the-art performance in NLP applications. ## IX 2019: The Year of Control **Prompt #7:**_(a) Now write the section on developments in prompt engineering in 2019. (b) Now can you rewrite the section on developments in prompt engineering in 2019? Please organize your thoughts in paragraphs instead of bullet points._ The year 2019 saw remarkable progress in the field of prompt engineering, with a strong focus on improving the controllability, interpretability, and targeted generation of language models. Researchers explored novel techniques to guide language models more effectively, shaping their behavior and responses to suit specific requirements. This period witnessed advancements in control codes, template-based generation, and innovations in fine-tuning strategies, ushering in an era of more precise and context-aware language generation. Here's an overview of the key developments in prompt engineering in 2019: ### _Control Codes and Conditioning:_ One of the key developments in prompt engineering in 2019 was the incorporation of control codes into language model inputs. These control codes are special tokens or markers added to the prompt, indicating desired attributes, styles, or behavior in the generated text. By conditioning the language model on control codes, researchers could guide it to produce content adhering to specific criteria, such as sentiment, formality, or language style. This controllability allowed for more targeted and contextually appropriate responses, empowering prompt engineers to tailor language generation to various use cases. ### _Template-Based Generation:_ Template-based generation also gained prominence during this period. Prompt engineers designed prompts in the form of templates, with placeholders for dynamic content. By providing specific values for the placeholders, researchers ensured that the generated output followed the structure and format defined in the template. Template-based approaches enabled more structured and controlled text generation, making them valuable in applications where precise and consistent responses were essential. ### _Reinforcement Learning for Improved Controllability:_ Advancements in reinforcement learning techniques further improved the controllability of prompt-engineered language models. By refining reward functions, prompt engineers could encourage the model to produce more desirable responses, reducing biases and generating content that aligned better with user preferences. Reinforcement learning played a crucial role in refining prompt engineering strategies, enabling AI systems to learn from human preferences and judgments. ### _De-biasing Strategies:_ Addressing biases in language models remained a critical focus in prompt engineering in 2019. Researchers explored methods to de-bias prompt inputs and mitigate potential biases present in the training data. Carefully crafting prompts that avoid biased language or specifying fairness-related control codes aimed to generate more equitable and unbiased language output. Human-AI collaboration also became a significant aspect of prompt engineering. In some cases, human-in-the-loop approaches were employed, where human-generated responses served as reward signals for reinforcement learning. 
This allowed the model to learn from human preferences and judgments, enhancing the quality and relevance of generated content. ### _Adapting to Low-Resource Languages:_ Prompt engineering extended its impact to low-resource languages, where fine-tuning large language models might be challenging due to limited training data. Researchers explored methods to leverage transfer learning and unsupervised pre-training, adapting prompt engineering techniques to address the specific challenges of low-resource settings. ### _Contextual Prompting and Dynamic Response Generation:_ Furthermore, contextual prompting emerged as a powerful approach. Using preceding context or user interactions as prompts allowed language models to provide more dynamic and interactive conversation generation. Incorporating contextual information enabled the model to provide coherent and contextually appropriate responses, enhancing the overall user experience. The developments in prompt engineering in 2019 propelled the field towards enhanced controllability, targeted generation, and reduced biases in language models. The integration of control codes, template-based generation, and reinforcement learning techniques brought greater precision to prompt engineering, aligning the generated text more closely with the desired outputs. The exploration of human-AI collaboration and de-biasing strategies marked significant steps towards responsible and fair AI systems. As the field advanced, researchers continued to refine prompt engineering approaches, laying the groundwork for future breakthroughs in making AI systems more adaptable, interpretable, and human-centric. ## X 2020-2021: The rise of LLMs **Prompt #8:**_(a) now write the section for 2020 and 2021 in prompt engineering (b) now rewrite the section for 2020 and 2021 in prompt engineering? Please organize your thoughts in paragraphs instead of bullet points_ The years 2020 and 2021 marked a period of unprecedented progress in prompt engineering, largely driven by advancements in large-scale language models and the democratization of AI technology. With the release of models like GPT-3 and innovations in prompt design, researchers and developers alike explored new frontiers in controllability, interpretability, and domain adaptation. These years witnessed the rise of massive language models, diversification of prompt formats, and a growing emphasis on addressing ethical concerns. Here's an overview of the key developments in prompt engineering during this period: ### _The Age of Massive Language Models:_ One of the defining features of this period was the emergence of massive language models, [17, 18], exemplified by models like GPT-3 developed by OpenAI. With its impressive 175 billion parameters, GPT-3 demonstrated extraordinary language generation capabilities and found applications across diverse domains. Prompt engineering techniques allowed researchers to leverage these large models' contextual understanding and capabilities, making them more versatile and contextually intelligent in generating content across various tasks and domains. ### _Advancements in Prompt Format and Style:_ Advancements in prompt format and style were another key focus during 2020 and 2021. Researchers explored novel ways to design prompts, enabling more targeted and controlled text generation. By utilizing different prompt types, such as completions, instructions, or role-playing scenarios, prompt engineers could guide models to produce specific styles, tones, or perspectives, [19, 20]. 
Creative prompt design empowered developers to create engaging and contextually appropriate language generation systems, tailored to suit diverse use cases and applications. ### _Domain Adaptation and Fine-Tuning:_ Furthermore, domain adaptation and fine-tuning became integral to prompt engineering in 2020 and 2021, [21, 22]. The widespread availability of pre-trained language models allowed for more efficient adaptation to specific domains and tasks. By fine-tuning models on smaller, task-specific datasets, researchers achieved domain-specific performance without the need for training massive models from scratch. Fine-tuning techniques made prompt engineering more practical and effective, enabling the application of language models to address specific industry or domain requirements. ### _Ethical Considerations and Bias Mitigation:_ Addressing ethical considerations, including bias mitigation and fairness, was a crucial aspect of prompt engineering during this period. As large language models became more accessible, prompt engineering techniques placed increased emphasis on responsible AI usage. Researchers developed strategies to de-bias prompts, [23], define fairness-related control codes, [24], and craft reward functions that encouraged unbiased and equitable language generation. The aim was to ensure prompt engineering adhered to ethical guidelines, promoting responsible and fair AI systems. ### _Interpretability and Explainability:_ Furthermore, interpretability and explainability became important areas of focus in prompt engineering. Researchers explored methods to visualize attention weights and understand how language models made decisions based on their input prompts, [25]. Explainable prompt engineering allowed users to gain insights into model behavior, enhancing transparency and trust in AI systems. ### _Democratization of AI and Low-Resource Language Support:_ Moreover, the democratization of AI technology during 2020 and 2021 enabled wider access to powerful language models and prompt engineering techniques, [26, 27]. Developers could integrate prompt engineering into various applications and services, making AI-driven language generation more accessible to diverse industries and use cases. Additionally, efforts were made to extend prompt engineering to support low-resource languages, ensuring that language models were inclusive and representative of a wide range of linguistic diversity. In conclusion, the years 2020 and 2021 marked a transformative era in prompt engineering, with the rise of massive language models and innovations in prompt design. The integration of prompt engineering with large-scale language models democratized AI technology and made it more inclusive. Ethical considerations and interpretability played a significant role in ensuring responsible AI usage. Prompt engineering techniques enabled researchers and developers to create AI systems that were more contextually aware, controllable, and adaptable, paving the way for a wide range of applications and advancements in natural language processing. ## XI 2022-Current: Beyond Language Generation **Prompt #9:** _(a) Can you now write a section on 2022 and 2023 on advanced prompt techniques? (b) can you write the section on 2022 and 2023 on advanced prompt techniques in paragraphs instead of bullet points?_ The years 2022 and 2023 witnessed remarkable advancements in prompt engineering, pushing the boundaries of language models and extending their applications beyond conventional language generation tasks. 
Prompt techniques evolved to address complex challenges, such as multimodal inputs, multi-turn conversations, and domain-specific language understanding. Researchers explored techniques that augmented prompt engineering with additional context and domain knowledge, making AI systems more contextually aware, interactive, and versatile. Here's an overview of the key advanced prompt techniques that shaped the field during this period: ### _Multimodal Prompting and Integration:_ One of the most notable advancements during this period was the integration of multimodal prompting. Prompt engineering expanded to include various input modalities, combining textual prompts with visual, auditory, or other sensory information. By seamlessly incorporating visual elements like images or videos into textual prompts, AI systems gained the ability to process and generate content from diverse sources. This paved the way for applications in image captioning, visual question answering, and interactive chatbots, enabling AI systems to understand and generate responses in a more holistic and comprehensive manner. ### _Multi-Turn Conversational Prompting:_ In parallel, the focus of prompt engineering expanded from single-turn language generation to multi-turn conversational prompting. Techniques were developed to maintain and utilize context across multiple interactions, allowing AI systems to engage in more coherent and interactive conversations with users. Multi-turn conversational prompting greatly enhanced chatbot capabilities, making AI assistants more responsive and proficient in handling complex user queries and extended discussions.. ### _Domain-Specific Knowledge Integration:_ Moreover, the integration of domain-specific knowledge became a significant aspect of prompt engineering in 2022 and 2023. Researchers explored methods to incorporate external knowledge bases or leverage pre-existing domain-specific models to enhance the language understanding and generation process. By tapping into specialized knowledge, AI systems demonstrated improved performance in domain-specific applications such as medicine, law, finance, and more, effectively bridging the gap between AI and domain expertise. ### _Reinforcement Learning with Human Feedback:_ Reinforcement learning techniques continued to play a crucial role in prompt engineering, especially in refining AI systems' controllability and adaptability. Researchers explored strategies to integrate human feedback as reward signals for fine-tuning models. Human-in-the-loop prompt engineering allowed for more effective and interactive model refinement, reducing the need for extensive manual annotation and providing users with personalized and tailored responses. ### _Contextual Intent Prediction:_ Contextual intent prediction emerged as another significant area of focus. Prompt engineering techniques in 2022 and 2023 aimed to identify user queries' underlying intent to generate contextually relevant and accurate responses. By predicting user intent from input prompts, AI systems could better align with user goals and needs, offering more personalized and targeted language generation. ### _Personalization and User Modeling:_ Furthermore, prompt engineering techniques in 2022 and 2023 saw an increased emphasis on personalization and user modeling. AI systems were designed to learn from past interactions and form user-specific models, which informed prompt engineering decisions to generate personalized content tailored to individual users. 
Personalization strategies enhanced user engagement and satisfaction, allowing AI systems to adapt responses based on user preferences and behavior. In conclusion, the years 2022 and 2023 brought forth a series of groundbreaking advancements in prompt engineering. The integration of multimodal inputs, multi-turn conversational prompting, domain-specific knowledge, reinforcement learning, contextual intent prediction, and user modeling collectively transformed AI systems into contextually intelligent and versatile language generators. As prompt engineering continued to evolve, researchers and developers explored novel applications and use cases, paving the way for AI systems that are highly adaptable, interactive, and capable of serving diverse user needs across various domains. ## XII Conclusions In conclusion, the history of prompt engineering in natural language processing has been a journey of groundbreaking advancements and innovation. From the early language models and information retrieval systems to the transformative introduction of attention mechanisms, reinforcement learning, and advanced language models, prompt engineering has continually evolved to enhance language understanding, generation, and controllability. The attention mechanism, introduced in 2015, revolutionized language modeling by allowing models to focus on relevant information and establish contextual relationships, leading to improved performance across various NLP tasks. Reinforcement learning techniques in 2017 further enhanced controllability, enabling prompt engineers to shape language models with rewarding strategies and address biases. Advancements in 2018 and 2019 introduced fine-tuning strategies, control codes, and template-based generation, refining controllability and context-awareness in language models. Research focused on fairness, human-AI collaboration, and adaptation to low-resource languages, marking strides towards responsible and inclusive AI systems. In 2020 and 2021, contextual prompting and transfer learning were prominent, making language models more interactive and adaptable. These developments extended to 2022 and 2023, introducing advanced techniques like unsupervised pretraining and novel reward shaping. The history of prompt engineering showcases the collaborative efforts of researchers and practitioners driving the field's evolution. With ethical considerations, prompt engineering promises to revolutionize AI applications, fostering responsible human-AI collaboration and empowering language models to align closely with human intentions. In conclusion, prompt engineering continues to shape the landscape of NLP, opening new frontiers for intelligent and context-aware language models. The journey is ongoing, and with continuous research and responsible development, prompt engineering will propel the field towards more versatile, interpretable, and human-centric AI systems. ## Acknowledgment Authors extensively used ChatGPT for content generation.
2309.11156
CNN-based local features for navigation near an asteroid
This article addresses the challenge of vision-based proximity navigation in asteroid exploration missions and on-orbit servicing. Traditional feature extraction methods struggle with the significant appearance variations of asteroids due to limited scattered light. To overcome this, we propose a lightweight feature extractor specifically tailored for asteroid proximity navigation, designed to be robust to illumination changes and affine transformations. We compare and evaluate state-of-the-art feature extraction networks and three lightweight network architectures in the asteroid context. Our proposed feature extractors and their evaluation leverages both synthetic images and real-world data from missions such as NEAR Shoemaker, Hayabusa, Rosetta, and OSIRIS-REx. Our contributions include a trained feature extractor, incremental improvements over existing methods, and a pipeline for training domain-specific feature extractors. Experimental results demonstrate the effectiveness of our approach in achieving accurate navigation and localization. This work aims to advance the field of asteroid navigation and provides insights for future research in this domain.
Olli Knuuttila, Antti Kestilä, Esa Kallio
2023-09-20T09:03:59Z
http://arxiv.org/abs/2309.11156v2
# CNN-based local features for navigation near an asteroid ###### Abstract This article addresses the challenge of vision-based proximity navigation in asteroid exploration missions and on-orbit servicing. Traditional feature extraction methods struggle with the significant appearance variations of asteroids due to limited scattered light. To overcome this, we propose a lightweight feature extractor specifically tailored for asteroid proximity navigation, designed to be robust to illumination changes and affine transformations. We compare and evaluate state-of-the-art feature extraction networks and three lightweight network architectures in the asteroid context. Our proposed feature extractors and their evaluation leverages both synthetic images and real-world data from missions such as NEAR Shoemaker, Hayabusa, Rosetta, and OSIRIS-REx. Our contributions include a trained feature extractor, incremental improvements over existing methods, and a pipeline for training domain-specific feature extractors. Experimental results demonstrate the effectiveness of our approach in achieving accurate navigation and localization. This work aims to advance the field of asteroid navigation and provides insights for future research in this domain. ## I Introduction Asteroid exploration missions, such as the Hayabusa-2 [1] and OSIRIS-REx missions [2], have demonstrated the significance of vision-based proximity navigation in complex and dynamic environments. The emerging industry of on-orbit servicing necessitates proximity navigation, which shares many aspects with navigation in close proximity to asteroids. The upcoming Hera mission [3] to the asteroid 65803 Didymos and its accompanying cubesats (Milani and Juventas) has been of particular interest to the authors due to our involvement in the Milani precursor cubesat APEX [4]. Although the APEX project was discontinued due to changes in the programme, the insights gained from it remain valuable. Vision-based navigation near asteroids or satellites presents a unique challenge due to the limited amount of scattered light that illuminates the object, resulting in significant appearance variations depending on the direction of sunlight. Traditional feature extractors such as Scale-Invariant Feature Transform (SIFT) and Oriented FAST and Rotated BRIEF (ORB) cope poorly with such drastic changes in appearance. Addressing this challenge requires the development of a robust and efficient feature extraction method to enable accurate navigation and localization in space. Feature extraction plays a crucial role in most vision-based navigation methods in the field of robotics, including simultaneous localization and mapping (SLAM), as well as absolute navigation approaches like on-board rendering-based Synthetic Photometric Landmarks (SPLs) [5]. Additionally, SLAM methods can incorporate pre-built feature databases. Relative navigation methods also benefit from feature extraction, particularly in environments where rapid changes occur compared to the amount of relative movement present. For instance, when a spacecraft orbits an asteroid, the asteroid's rotation can be significantly faster than the spacecraft's orbital velocity. Visual odometry techniques that follow the rotating asteroid generate a relative path much longer than the spacecraft's actual travel along its orbit, resulting in poor accuracy even over short orbital paths. By matching features across asteroid rotations, the error per distance traveled can be substantially reduced. 
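To make the classical baseline concrete, a typical hand-crafted pipeline of the kind referred to above (here ORB with brute-force Hamming matching and a ratio test) can be sketched with OpenCV as follows; the image file names are placeholders, and this is an illustration of the conventional approach rather than the method developed in this work. It is exactly this type of matching that degrades under the strong illumination changes discussed above.

```python
import cv2

# Two navigation-camera frames of the target body (placeholder file names).
img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe keypoints with a classical hand-crafted extractor.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with a ratio test to drop ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative matches from {len(kp1)} and {len(kp2)} keypoints")
```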
Another scenario where feature extraction is crucial is when orbiting the L4/L5 points of a binary asteroid system such as Didymos. The spacecraft's location relative to the secondary body changes much more slowly than the secondary body's appearance due to its orbit around the primary body. In such cases, visual odometry based solely on optical flow would not be effective. In this work, our primary objective is to develop a lightweight feature extractor specifically tailored for asteroid proximity navigation. We aim to address the challenges posed by asteroid environments by designing a feature extraction algorithm that exhibits invariance to illumination changes, moderate rotations, scaling, and affine transformations. Furthermore, we aim to compare and evaluate different local feature algorithms based on their mean matching accuracy (MMA), ratio of correct matches to ground truth matches (M-Score), spatial accuracy of correct matches, and orientation error when the matches are used to estimate relative pose, assuming that the feature 3D coordinates are known. The main contributions of this work are as follows: * We present a trained lightweight feature extractor specifically designed for asteroid proximity navigation. * We demonstrate incremental improvements over state-of-the-art feature extractors, particularly in the context of asteroid navigation. * We compare two state-of-the-art feature extraction algorithms and three lightweight network architectures in the asteroid context. * We develop a pipeline for training feature extractors specialized in a given domain. Our code together with the trained models is available at [https://github.com/oknuutti/navex](https://github.com/oknuutti/navex), while the data used [6] is published through Zenodo. The rest of this paper is organized as follows: Section II provides an overview of related work. Section III presents the methodology and experimental setup, including data augmentation, evaluation metrics and hyperparameter optimization. Section IV provides details about image data preprocessing and the resulting datasets. Section V presents the results and performance analysis of our proposed feature extractor. Finally, Section VI concludes the paper and discusses potential future research directions. ## II Related Work ### _Features for Navigation Near an Asteroid_ Before discussing existing methods for local feature extraction that detect salient features and describe them using vectors of a certain length (descriptor/embedding), we will briefly review somewhat similar methods proposed for proximity navigation in space. Traditionally, terrain relative navigation (TRN) near asteroids involved creating textured 3D maplets of small salient regions (natural landmarks) on the asteroid. This approach utilized a priori knowledge of the expected relative pose and direction of light to render the maplets, followed by template matching to locate them in the query image [7]. For the Rosetta mission, all the processing was performed on the ground by mission operators [8]. However, during the OSIRIS-REx mission, maplet rendering and template matching could be done either on the ground or automatically on board, but maplet creation was always performed on the ground [9]. An alternative method employed by Hayabusa-2 was the deployment of bright balls called target markers on the asteroid, which could subsequently be used as features [10].
Due to the challenges of creating maplets on board and the limited performance of traditional photometric features, different approaches have been suggested. One such approach is Synthetic Photometric Landmarks (SPLs) [5], which involves rendering a global shape model using a priori information. Traditional AKAZE features are then extracted and matched between the query image and the synthetic image. The AKAZE features perform adequately due to matching lighting conditions. However, creating the global shape model on board remains a non-trivial task and would likely be performed on the ground instead. Convolutional neural networks (CNNs) have been extensively used in studies to automatically detect and describe craters for use as landmarks in navigation [11, 12, 13, 14]. However, recent visits to small solar system bodies suggest that sub-kilometer objects do not possess many suitable craters for this purpose. For on-orbit rendezvous with an uncooperative spacecraft, a common approach is to encode a few tens of target object features in the network weights. This allows a CNN classifier to output a heat map for each trained feature [15, 16, 17, 18]. This approach provides the benefit of estimating feature location uncertainty, which can be subsequently used by the navigation filter. However, this approach requires prior knowledge of the target shape model, and the computational performance of the network degrades with each additional feature included. In the context of asteroid navigation, there are methods that use deep learning to regress the center of volume, sub-solar point, and various points on the limb. This enables the extraction of pseudo range for subsequent use by the navigation filter [19, 20]. Pugliatti and Topputo [21] utilize a CNN classifier trained on the target asteroid to determine the approximate position of the spacecraft. They then refine the relative pose solution using custom template matching. Instead of directly using navigation camera images as input, they employ a custom U-Net-type architecture derived from MobileNetV2 to preprocess the images into segmentation maps. Each pixel is classified to belong to boulders, craters, the background, the terminator, or the rest of the asteroid surface. This method assumes near-pointing images taken within a specific distance range and oriented so that the asteroid rotation axis points upwards in the image frame. Mancini et al. [22] employ a network that calculates a per-image (global) descriptor from a central patch of a near-pointing image. The extracted descriptor is compared using L2-distance to a reference map of precomputed global descriptors spanning the area of interest on the target object. A heat map is then generated, incorporating the results from odometry to indicate the most likely spacecraft location. Similar assumptions as those made by Pugliatti and Topputo [21] apply to the images used for navigation. However, it is not necessary to train the network specifically on the target body. ### _CNN-based Local Features_ The field of local feature detection and description has a rich history and encompasses both traditional and deep learning-based methods. Comprehensive surveys by Csurka et al. [23] and Chen et al. [24] cover the topic, with the latter focusing on deep learning for localization and mapping, including feature extraction methods. Jin et al. [25] introduce a benchmarking framework to facilitate comparisons of local feature extraction methods for relative pose estimation of wide baseline image pairs. 
Various approaches have been developed for local feature extraction. Some methods utilize small patches around keypoints detected by external feature detectors, such as the traditional difference of Gaussians (DoG). Examples include HardNet [26], SOSNet [27], GeoDesc [28], and ContextDesc [29]. Key.net [30] focuses exclusively on feature detection by combining gradient-based detection with subsequent CNN layers. On the other hand, methods such as D2-Net [31], ASLFeat [32], D2D [33], and UR2KiD [34] compute dense descriptors and use processing steps in the descriptor space to detect a sparse set of features, sometimes leveraging middle CNN layers. SuperPoint [35], HF-Net [36], R2D2 [37], and DISK [38] directly compute both dense descriptors and detection scores. To the best of our knowledge, there are only two previously published works that employ learning-based feature extractors for navigation in the proximity of small solar system bodies. Beccari [39] trains a SuperPoint feature extractor using the related MagicPoint teacher network [35], using images from the MS-COCO dataset and synthetic images of Eros and Bennu. The author compares the resulting SuperPoint extractors with traditional methods like SIFT, concluding that the latter outperform the former. In contrast, a concurrent study by Driver et al. [40], published during the final stages of our research, utilizes ASLFeat as the base feature extractor trained with real data acquired from 16 small bodies across eight separate missions. This study reports superior performance of the proposed feature extractor compared to traditional methods. In the Conclusion section of our paper (Section VI), we will briefly discuss their results in relation to our findings. Considering the available methods and the benchmark by Jin et al. [25], our focus lies on SuperPoint, HF-Net, R2D2, and DISK as potential candidates for our specific use case. Table I provides key details of these methods. SuperPoint [35] adopts a VGG-style architecture with three 2x2 non-overlapping max-pooling layers to reduce computation. The resulting feature map has cells of size 8x8 pixels. The model comprises two heads: one for descriptors and one for feature detection. The detection head consists of a hidden 3x3 256-channel convolution layer followed by a 1x1 convolution layer with 65 channels, generating a full-sized map of feature salience through a soft-max operation and reordering. The descriptor head has also a hidden 3x3 256-channel convolution layer followed by a 1x1 convolution layer with 256 channels producing L2-normalized descriptors that are up-sampled to the original image size. SuperPoint is trained using a teacher model called MagicPoint, which has been trained on synthetically warped data with ground truth pixel correspondences, employing a loss function combining cross-entropy and hinge losses for feature detection and descriptor matching. HF-net [36], a lightweight variant of SuperPoint, employs the first seven layers of the MobileNetV2 architecture [44] as its backbone. This variant reduces the hidden layer channels in the detector head and utilizes the remaining part of MobileNetV2 to calculate a global image descriptor for building and querying a global index, which can be used for relocalization/loop-closing in a SLAM system such as ORB-SLAM2 [45] or VINS [46]. 
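To make the detection-head layout concrete, the reordering of SuperPoint's 65-channel output into a full-resolution salience map, as described above, can be sketched in PyTorch as follows. This is our own illustration rather than the reference implementation:

```python
import torch
import torch.nn.functional as F

def detection_logits_to_salience(logits: torch.Tensor) -> torch.Tensor:
    """Convert a (B, 65, H/8, W/8) detection-head output into a (B, 1, H, W)
    salience map: channel-wise softmax, removal of the "no-detection"
    channel, and reordering of each remaining 64-vector into its 8x8 cell."""
    probs = F.softmax(logits, dim=1)   # softmax over the 65 channels
    probs = probs[:, :64]              # drop the extra "no-detection" channel
    return F.pixel_shuffle(probs, 8)   # scatter each cell back to full resolution
```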
HF-net is trained with teacher models for all outputs: SuperPoint serves as the teacher for the detector and local descriptors, while NetVLAD [47] serves as the teacher for the global descriptor. The training scheme employs multi-task learning [48] with adjusted weights for different loss terms. R2D2 [37] jointly trains a local feature detector and descriptor. Its feature detection is divided into two aspects: repeatability, which ensures consistent detection across differently warped images, and reliability, which measures the likelihood of correct matches. The R2D2 architecture resembles VGG but replaces the max-pooling layers with dilated convolutions to maintain feature map resolution at the expense of increased computational load. The resulting feature map serves as the descriptor output. The repeatability and reliability heads process the squared feature map using 2-channel, 1x1 convolution layers followed by soft-max operations. The R2D2 loss function comprises three equally weighted terms: two from the repeatability head promoting local similarity and peakiness, and one based on a differentiable approximation of average precision (AP) for descriptor ranking. The DISK architecture [38] is a variation of the original U-Net [49]. It uses a single convolutional layer per block instead of two, instance- instead of batch-normalization, and PReLU instead of ReLU non-linearities. The last 129-channel layer of the U-Net is split to produce the descriptor and the detector outputs. The descriptor part is L2-normalized, while the detector part is left unchanged. The cost function takes on a reinforcement learning perspective, where the network implements a probabilistic policy that is trained to maximize a simple reward function, which rewards correct matches and penalizes incorrect ones. ## III Proposed approach We were particularly interested in the HF-net architecture for our use case and intended to closely follow its design and learning scheme. However, HF-net relies on SuperPoint as the teacher network for local features, which is trained on the MS-COCO dataset and NetVLAD for global descriptors, trained on the Google Street View Time Machine. Since no existing feature extraction networks are trained on asteroid imagery, our first task was to train our own teacher network. While HF-net utilizes MobileNetV2 as its backbone, we also considered MobileNetV3 [50] and EfficientNet [51] for our proposed feature extractor named the Light-weight Asteroid Feature Extractor (LAFE). For the teacher network, which we will refer to as the High-performance Asteroid Feature Extractor (HAFE), we evaluated both R2D2 and DISK. SuperPoint was excluded due to its suboptimal training scheme requiring a teacher network. After preliminary testing, we found that the U-Net backbone of DISK outperformed the VGG-style backbone of the R2D2 network. Therefore, we also adopted a U-Net backbone for the R2D2 network, which we refer to as R2D2-U. It is important to consider that the majority of CNN architectures do not possess inherent scale or rotation invariance. Although some resilience against scale changes and rotations can be achieved through appropriate data augmentation, this form of invariance is acquired through brute force, consequently consuming network capacity. A more effective approach to achieve scale invariance involves extracting features at various scales during test time. 
This is accomplished by constructing an image pyramid with a specific scaling interval and then feeding each scale into the feature extraction network. Traditional methods like SIFT and ORB also adopt this approach. In certain applications, rotation invariance might not be a prerequisite, especially when input images consistently exhibit a particular orientation. For instance, images from self-driving cars are typically captured horizontally, keeping the sky consistently at the top. Drones, when equipped with downward-facing cameras, can potentially leverage magnetometer readings to align images with north as the upper direction. Similarly, asteroids with stable rotation axes can adopt a similar strategy by utilizing star-tracker readings. Alternatively, images can be rotated based on the position of the Sun. If these input-conditioning techniques prove inadequate, achieving rotation invariance can be pursued through a spatial transformer [52]. However, for this study, we presumed that rotation-axis-based input conditioning would suffice, leaving the exploration of a spatial transformer's integration for future investigation. In order to narrow the scope of this study, our focus was exclusively on local feature extraction, omitting components necessary for global feature extraction. Following the approach of HF-net, we employed grayscale images as input, generating 128-dimensional floating-point descriptors as part of the output. The assessment of RGB image inputs presents challenges, given the prevalent use of grayscale imagery in available asteroid data. The evaluation of binary descriptors was intentionally deferred to a subsequent phase to further constrain the study's scope. The evaluation of various design choices for both LAFE and HAFE was difficult as the outcome is affected by a large number of hyperparameters. These hyperparameters are free parameters related to data augmentation, cost function, training, and the network itself. For instance, the effectiveness of a specific backbone architecture may hinge on an appropriate weight decay value. To tackle this challenge, we optimized these parameters using the Ray Tune framework [53], which integrates the ASHA scheduling algorithm [54] and Bayesian Optimization [55]. Bayesian Optimization, in this context, utilizes Gaussian Processes to forecast network performance based on hyperparameter values, facilitating suggestions for new trial configurations. Ray Tune also supports parallel execution of trials across multiple machines and incorporates error recovery mechanisms. The ASHA scheduler aids in the early termination of trials that do not demonstrate top-tier performance, while accommodating concurrent trials. However, it is important to note that ASHA may favor parameter values that lead to good initial performance rather than optimal performance after full training. Consequently, certain parameters such as learning rates were excluded from the optimization process and were determined through less rigorous methods. Moreover, hyperparameters that exhibited negligible impact during preliminary testing were omitted from the optimization to limit the dimensionality of the search space. ### _High-performance Asteroid Feature Extractor (HAFE)_ #### Iii-A1 Architecture From the outset, it was unclear which feature extractor was better, R2D2 with a U-net backbone (R2D2-U) or DISK. Consequently, an evaluation of both was deemed necessary. 
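As a concrete illustration of the test-time image-pyramid extraction described earlier, a single-scale extractor can be wrapped as sketched below. The extractor interface and the default scaling factor are our own assumptions, not the released pipeline:

```python
import torch
import torch.nn.functional as F

def extract_multiscale(extract_fn, image: torch.Tensor, k: float = 2 ** 0.25,
                       min_size: int = 256):
    """Run a single-scale extractor over an image pyramid.

    `extract_fn(img) -> (xy, desc, score)` is assumed to return pixel
    coordinates, descriptors and detection scores for one (B, C, H, W) image.
    Coordinates are mapped back to the original resolution and the scale at
    which each feature was found is stored alongside it.
    """
    feats, scale = [], 1.0
    img = image
    while min(img.shape[-2:]) >= min_size:
        xy, desc, score = extract_fn(img)
        feats.append((xy / scale, desc, score, scale))
        scale /= k                                   # next, a smaller pyramid level
        h, w = image.shape[-2:]
        img = F.interpolate(image, size=(int(h * scale), int(w * scale)),
                            mode='bilinear', align_corners=False)
    return feats
```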
The principal distinction between R2D2 and DISK, when utilizing the same backbone, lies in their respective loss functions and output heads. Unlike DISK, which employs a descriptor and detection head, the R2D2 loss function distributes detection into two heads referred to as "repeatability" and "reliability". Within R2D2, both detection heads are fed the squared output of the descriptor head (as illustrated in Fig. 2), whereas DISK directly derives descriptors and detection scores from the final layer of the backbone (as depicted in Fig. 1). The U-net backbone employed is identical to the original one used by DISK, comprising four down-blocks and up-blocks, each with a single 5x5 convolution layer, PReLU nonlinearities, and instance normalization. Down-sampling employs 2x2 average pooling, while up-sampling uses bilinear interpolation. Preliminary testing demonstrated inferior performance upon reducing down and up block count to three or utilizing two convolution layers per block. Conversely, elevating the count to five yielded negligible performance gains. The last layer comprises 129 channels (128 descriptor channels + 1 detector channel) when employed by DISK. Fig. 1: DISK architecture. The U-Net backbone is simplified in the figure by omitting the three deepest levels. See the text for additional details. However, in the context of R2D2-U, the last layer has only 128 channels. The R2D2-U heads consist of a single 1x1 convolution layer featuring L2-normalization for descriptors and a special non-linearity for the detection heads: \[f(x)=\frac{\log(1+\exp(x))}{\log(1+\exp(x))+1}. \tag{1}\] In preliminary testing, we explored both R2D2 with output directly from the final layer of the backbone and DISK with R2D2-style head configuration. Both altered feature extraction approaches exhibited inferior performance compared to their corresponding baselines, prompting the retention of their original head structures. Nevertheless, we observed a performance enhancement upon reducing the channel count of R2D2's reliability head from two to one and employing the same single-channel activation function as the repeatability head (as defined in Equation (1)) instead of the previous two-channel softmax. We experimented with employing a sigmoid function, yet it appeared to degrade the performance of R2D2-U. Similarly, DISK's performance was improved by replacing the sigmoid detection score function with the function given by Equation (1). #### Iii-B2 Loss functions The training process for both R2D2 and our modified version of DISK relies on utilizing image pairs with established pixel correspondences. In contrast, the original DISK training employed image triplets with pixel correspondences derived either from depth maps and camera poses, or through the application of epipolar constraints when depth maps were unavailable. Our training methodology involves batches of multiple image pairs, all of which are processed by the same network. We will begin by going through the R2D2 loss function, after which we continue with the DISK loss function. _R2D2's loss function_ aims to promote the sparse detection of highly distinctive and precisely localized features. 
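Before detailing the individual loss terms, the detection non-linearity of Equation (1) can be written compactly in PyTorch (a minimal sketch; the module name is ours):

```python
import torch
import torch.nn.functional as F

class DetectionActivation(torch.nn.Module):
    """f(x) = softplus(x) / (softplus(x) + 1), i.e. Equation (1).

    Maps any real-valued logit into (0, 1), which is what the detection
    heads described above expect."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        sp = F.softplus(x)          # log(1 + exp(x)), computed in a stable way
        return sp / (sp + 1.0)
```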
The function comprises three key components: one that combines the descriptor and reliability outputs (\(L_{AP\kappa}\)), and two other components that are related to the repeatability output: \[L_{R2D2}=L_{AP\kappa}+aL_{cosim}+bL_{peaky}, \tag{2}\] where \(a\) and \(b\) are weights that could be optimized, in contrast to the original R2D2 publication by Revaud et al. [37], where they were statically set as \(a=b=1\). To facilitate optimization of these parameters, we find it more intuitive to reparameterize \(a\) and \(b\) as follows: \[a=2(1-\beta)\alpha,\quad b=2\beta\alpha, \tag{3}\] where \(\alpha>0\) is the weight given to repeatability, while \(\beta\in[0,1]\) is the weight given to peakiness at the expense of cosine similarity. The corresponding values for the original R2D2 are then \(\alpha=1\) and \(\beta=0.5\). In the original R2D2 formulation, all three loss terms vary within the range of 0 to 1 due to the addition of a constant value of 1 to each term. However, since constant terms bear no influence on the training process, we have omitted them. Consequently, the loss terms now vary within the range of -1 to 0. The loss term for cosine similarity, denoted as \(L_{cosim}\), guides the repeatability output to exhibit local similarity across the image pairs. This term can be expressed as: \[L_{cosim}=-\frac{1}{|P|}\sum_{p\in P}\frac{\mathbf{s}_{p}\cdot\mathbf{s}_{p}^{\prime}}{\left\|\mathbf{s}_{p}\right\|\left\|\mathbf{s}_{p}^{\prime}\right\|}, \tag{4}\] where \(P\) is a set of overlapping patches of size \(n_{rep}\times n_{rep}\), extracted from the repeatability output of the first image within each pair, and subsequently flattened into vectors denoted as \(\mathbf{s}_{p}\). Conversely, vectors \(\mathbf{s}_{p}^{\prime}\) are drawn from the second image's repeatability values, leveraging the known pixel correspondences. The patch size, dictated by \(n_{rep}\), directly impacts the frequency of local maxima in the repeatability output. If certain pixels within \(\mathbf{s}_{p}\) lack corresponding matches, the corresponding repeatability values within \(\mathbf{s}_{p}^{\prime}\) are set to those located at the bottom-right corner of the repeatability map. This approach, employed in the original R2D2 study, proves superior to entirely discarding these values from both patches. The loss term for "peakiness", denoted as \(L_{peaky}\), serves to enforce the sparsity of the repeatability output while discouraging the trivial solution of a constant value, which is permitted by the cosine similarity. This term can be expressed as: \[L_{peaky}=-\frac{1}{|R|}\sum_{r\in R}\left[\max\left(\mathbf{s}_{r}\right)-\mathrm{mean}\left(\mathbf{s}_{r}\right)\right], \tag{5}\] where \(R\) is a set of patches \(\mathbf{s}_{r}\), extracted with a sliding window of size \(n_{rep}\times n_{rep}\) from the repeatability output of all training images. Notably, the output of the first and second images within each pair is treated individually and equivalently. The \(\max\) function returns the highest value within each patch, while the \(\mathrm{mean}\) function computes the average value of a given patch. Finally, the term aimed at optimizing Average Precision (AP) can be written as: \[L_{AP\kappa}=-\frac{1}{|Q|}\sum_{q\in Q}\left[AP(q)R_{q}+\kappa(1-R_{q})\right], \tag{6}\] where \(Q\) represents a set of query descriptors sampled from the first image within each image pair. Here, \(R_{q}\) denotes the reliability output of the descriptor \(q\), \(\kappa\) defines the threshold for acceptable AP, and \(AP(q)\) stands for a differentiable approximation of the actual AP. Fig. 2: R2D2 with the same U-Net backbone (R2D2-U) as in DISK. Note that the figure shows one less level of the U-Net than Fig. 1.
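Before returning to the AP term, the two repeatability terms of Equations (4) and (5) can be sketched as follows. This is a simplified illustration that assumes the second repeatability map has already been warped into the first image's frame; it is not the actual training code:

```python
import torch
import torch.nn.functional as F

def repeatability_losses(s1: torch.Tensor, s2_warped: torch.Tensor, n: int = 16):
    """Sketch of L_cosim (Eq. 4) and L_peaky (Eq. 5).

    `s1` and `s2_warped` are (B, 1, H, W) repeatability maps, the second one
    already resampled into the first image's frame via the known pixel
    correspondences (that resampling step is omitted here)."""
    # Flatten overlapping n x n patches into column vectors: (B, n*n, num_patches).
    p1 = F.unfold(s1, kernel_size=n, stride=n // 2)
    p2 = F.unfold(s2_warped, kernel_size=n, stride=n // 2)
    l_cosim = -F.cosine_similarity(p1, p2, dim=1).mean()

    # Peakiness: max minus mean inside each patch, averaged over both images.
    l_peaky = 0.0
    for s in (s1, s2_warped):
        p = F.unfold(s, kernel_size=n, stride=n // 2)
        l_peaky = l_peaky - (p.max(dim=1).values - p.mean(dim=1)).mean()
    l_peaky = l_peaky / 2
    return l_cosim, l_peaky
```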
The original formulation employed a fixed \(\kappa=0.5\). However, we observed that an initial warm-up phase (comprising 1500 steps) during which \(\kappa\) gradually increases to its final value improves training performance. \(AP(q)\) provides an approximation of the precise AP, which is calculated by initially populating an array with cosine similarity scores between descriptor \(q\) and the descriptors within a set \(B\) sampled from the second images across all pairs in the training batch. Next, a label array is created, with cell values being 0 except for the cell corresponding to the correct match, which is assigned a value of 1. This label array is arranged in descending order based on the similarity score, followed by a computation of the cumulative sum. The average of this cumulative sum yields the AP. The approximation circumvents the non-differentiable sorting process by quantizing the descriptor distances into a specified number of bins, assigning values between 0 and 1 depending on the proximity of the distance value to the bin center. The precise mathematical formulation for the AP calculation can be found in Balntas et al. [56], while the details of the approximation can be found in He et al. [57]. Due to memory limitations, a subset of descriptors is used for each image. Specifically, we randomly sample one query descriptor \(q\) for every 64 available descriptors. The positive match in the corresponding second image is identified by determining the closest matching descriptor within a radius \(r_{pos}\) from the ideal pixel, thus accommodating some degree of error in pixel correspondences. A set of challenging distractor descriptors is sampled within a circular region around the location of the ideal match, at a distance of \(r_{neg}\) from the optimal positive match. Additional distractors are randomly sampled (also at a 1/64 ratio) across all the second images in the batch, with the exclusion of the circular region in the corresponding second image defined by \(r_{neg}\). _The loss function of DISK_[38] is derived from reinforcement learning, particularly the REINFORCE method [58], which seeks to maximize the expected reward \(\mathbb{E}[R|\mathbf{\theta}]\) given a policy parameterized by \(\mathbf{\theta}\). This entails stochastic gradient ascent, where the policy, denoted as a probability function \(P(A|\mathbf{I},\mathbf{\theta})\), governs possible actions \(A\) in relation to the input \(\mathbf{I}\) and policy parameters \(\mathbf{\theta}\). The fundamental approach involves iterative sampling of actions from the policy, evaluating the gradient at the sampled actions with respect to \(\mathbf{\theta}\), and then updating \(\mathbf{\theta}\) according to the gradient. In DISK, the input \(\mathbf{I}\) is divided into image pairs (\(\mathbf{I}_{A}\) and \(\mathbf{I}_{B}\)), each generating feature sets (\(F_{A}\) and \(F_{B}\), respectively). The set of possible actions \(A\) is defined in a relaxed manner, allowing simultaneous matching of each potential feature pair between \(F_{A}\) and \(F_{B}\), meaning feature \(i\in F_{A}\) can be concurrently matched with both \(j,k\in F_{B}\) with non-zero probability.
The policy probability function \(P(i\leftrightarrow j|\mathbf{I}_{A},\mathbf{I}_{B},\mathbf{\theta})\) is factorized into a descriptor matching function and two identical feature detection functions: \[P(i\leftrightarrow j|\mathbf{I}_{A},\mathbf{I}_{B},\mathbf{\theta})= P(i\leftrightarrow j|\mathbf{\delta}_{i},\mathbf{\delta}_{j},\theta_{M}) \tag{7}\] \[\cdot P(i|\mathbf{K}_{A})\cdot P(j|\mathbf{K}_{B}),\] \[(\mathbf{K}_{k},\mathbf{\delta}_{k})= f(I_{k},\mathbf{\theta}_{w})_{i},\ \ k\in\{A,B\},\] where \(\mathbf{\delta}_{i}\) and \(\mathbf{\delta}_{j}\) represent the descriptors of features \(i\in F_{A}\) and \(j\in F_{B}\), respectively, while \(\theta_{M}\) indicates the scale of descriptor match L2-distances. Symbols \(\mathbf{K}_{A}\) and \(\mathbf{K}_{B}\) stand for feature detection maps of images \(\mathbf{I}_{A}\) and \(\mathbf{I}_{B}\), respectively. The function \(f(\mathbf{I}_{k},\mathbf{\theta}_{w})\) represents the DISK network. Similar to R2D2, only a subset of features is sampled. In DISK, sampling entails: 1) dividing the detection output \(\mathbf{K}\) into cells \(\mathbf{K}^{u}\) of size \(h\times h\) (\(h=8\)); 2) employing \(softmax\) for probability normalization within each cell \(\mathbf{K}^{u}\); 3) randomly proposing one sample per cell based on normalized probabilities, and finally; 4) accepting each proposed sample \(i\) with the probability given by the original detection output \(K_{i}\). Consequently, the detection probability functions from Equation (7) can be reformulated as: \[P(i|\mathbf{K}_{k})=softmax(\mathbf{K}_{k}^{u})_{i}\cdot K_{k,i},\ \ k\in\{A,B\}. \tag{8}\] The probability function for descriptor matching is factored into the forward matching \(i\to j\) and backward matching \(i\gets j\) components, each normalized individually via \(softmax\): \[P(i\leftrightarrow j|\mathbf{\delta}_{i},\mathbf{\delta}_{j},\theta_{M})= softmax(-\theta_{M}\mathbf{D}_{i,}.)_{i} \tag{9}\] \[\cdot softmax(-\theta_{M}\mathbf{D}_{,j})_{j},\] where \(\mathbf{D}\) represents a distance matrix computed between each descriptor \(\mathbf{\delta}_{i}\) and \(\mathbf{\delta}_{j}\). The notation \(\mathbf{D}_{i,}\) extracts the \(i\)-th row from \(\mathbf{D}\), while \(\mathbf{D}_{.j}\) extracts the \(j\)-th column. The descriptor distance scale is given by \(\theta_{M}\), which is the reciprocal of the softmax temperature. With these foundational concepts established, we can delve into the overall loss function, expressed as: \[L_{DISK}=L_{RE}+\lambda_{kp}L_{KP}, \tag{10}\] where \(L_{RE}\) stands for the loss component tied to the REINFORCE method, while \(L_{KP}\) represents an additional feature detection cost serving as a regularizer. This cost is weighted by \(\lambda_{kp}\) (the original DISK employs \(\lambda_{kp}=0.001\)). The cost term is defined as: \[L_{KP}=\sum_{i\in F_{A}}\log P\left(i|\mathbf{K}_{A}\right)+\sum_{j\in F_{B}}\log P \left(j|\mathbf{K}_{B}\right). \tag{11}\] By reformulating the gradient estimator from [38], the REINFORCE loss component can be written as: \[\begin{split} L_{RE}=&-\sum_{i\in F_{A}}\sum_{j\in F _{B}}P(i\leftrightarrow j|\mathbf{\delta}_{i}^{*},\mathbf{\delta}_{j}^{*},\theta_{M})R_ {ij}\Gamma_{ij},\\ \Gamma_{ij}=&\log P(i\leftrightarrow j|\mathbf{I}_{A},\bm {I}_{B},\mathbf{\theta}),\end{split} \tag{12}\] where \(R_{ij}\) denotes the reward associated with matching feature \(i\) with feature \(j\), while \(\Gamma_{ij}\) corresponds to the logarithm of the function given in Equation (7). 
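Before the reward values are specified, the cell-wise feature sampling of Equation (8) and the match probability of Equation (9) can be illustrated with a short sketch. This is our own simplification; the detection map is assumed to already lie in (0, 1) after the detection non-linearity:

```python
import torch
import torch.nn.functional as F

def sample_keypoints(K: torch.Tensor, h: int = 8):
    """Sketch of Equation (8): propose one feature per h x h cell of the
    detection map `K` (shape (H, W)), then accept it with probability K_i."""
    H, W = K.shape
    cells = K.reshape(H // h, h, W // h, h).permute(0, 2, 1, 3).reshape(-1, h * h)
    probs = F.softmax(cells, dim=1)                     # normalise within each cell
    proposal = torch.multinomial(probs, num_samples=1)  # one proposal per cell
    det = torch.gather(cells, 1, proposal)              # raw detection value K_i
    accept = torch.rand_like(det) < det                 # assumes K_i already in (0, 1)
    return proposal, accept

def match_probability(desc_a: torch.Tensor, desc_b: torch.Tensor, theta_m: float):
    """Sketch of Equation (9): forward/backward softmax over the L2-distance matrix."""
    D = torch.cdist(desc_a, desc_b)                     # (N_A, N_B) descriptor distances
    return F.softmax(-theta_m * D, dim=1) * F.softmax(-theta_m * D, dim=0)
```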
For correct matches, \(R_{ij}=\rho_{tp}\), for false matches, \(R_{ij}=\rho_{fp}\), and for cases where pixel correspondence is missing for feature \(i\), \(R_{ij}=0\). A feature match is considered correct if it lies within an \(\epsilon\)-pixel distance from the true corresponding pixel. In the original DISK formulation, the reward values were \(\rho_{tp}=1.0\) and \(\rho_{fp}=-0.25\). Notably, \(\mathbf{\delta}_{i}^{*}\) and \(\mathbf{\delta}_{j}^{*}\) are detached copies of the descriptors, so that the gradient with respect to \(\mathbf{\theta}_{w}\) is solely affected by the weight given by the probability function that the detached copies affect. In essence, this function assigns weight to matches with similar descriptors while largely diminishing the impact of matches with highly dissimilar descriptors. The detachment results in the anomaly of the loss potentially increasing during training, while simultaneously, the network's performance continues to improve as expected. ### _Light-weight Asteroid Feature Extractor (LAFE)_ #### Iv-B1 Architecture The architecture of the lightweight feature extractor is independent of the choice of the teacher network. It consists of a single detection head. In case the R2D2-U network is selected as the teacher, the target output for the single detection head is computed as the product of the two R2D2 detection heads. The architecture, as depicted in Figure 3, follows the design of HF-net [36], which is itself based on SuperPoint [35]. In both networks, to mitigate the loss of spatial resolution in the backbone (\(W/8\times H/8\)), the 65-channel detection head output is reorganized such that each cell covers an \(8\times 8\) region. Before this reorganization, the output undergoes a channel-wise softmax operation, followed by the removal of the extra "no-detection" channel. The descriptor head output then restores the full spatial resolution through bilinear interpolation. In contrast to HF-net and SuperPoint, LAFE adopts the same activation function as R2D2, defined in Equation (1), while also omitting the "no-detection" channel. Additionally, LAFE employs 128-channel descriptors, in alignment with DISK and R2D2. We evaluated three distinct lightweight backbones: MobileNetV2 [44], MobileNetV3 [50], and EfficientNet-B0 [51]. These backbones were modified to support grayscale input and the channel count in their final layer was increased, a modification similar to that applied in HF-net. The HF-net backbone is based on MobileNetV2, with the channel counts of the last two layers increased from 32 to 64 and from 32 to 128, as per the source code referenced in the article [36]. However, the article itself cites the channel widths as 48 and 96. Since our descriptors only require 128 channels (instead of 256), we concluded that for MobileNetV2, elevating the last layer's channel count from 32 to 64 suffices. For both MobileNetV3 and EfficientNet-B0, we raised the channel count of the final layer from 40 to 72, approximating the geometric mean of 40 and 128. Inspired by the block architectures within the various lightweight backbones, we propose replacing the first \(3\times 3\) convolution layer in the descriptor head with a generalized inverted residual block (GIRB), as illustrated in Figure 4. All three lightweight backbones can be implemented using this generalized block. For instance, MobileNetV2 omits Squeeze Excitation (SE) entirely, while MobileNetV3 employs it for some blocks, and EfficientNet uses it for all blocks. 
Furthermore, characteristics such as kernel size, stride, output channel count, expansion factor, and the chosen activation function vary. During preliminary testing, we experimented with employing a GIRB for the detection head, but discovered that detection performed well without any hidden layers. Initially, we had planned to employ hyperparameter optimization to determine the specific details of the descriptor head's GIRB. However, it soon became evident that optimization consistently favored higher channel counts and expansion factors, even for marginal gains in performance, thus increasing the network's capacity and undermining the lightweight nature of LAFE. Ultimately, we limited hyperparameter optimization to choosing one of the three available backbones and deciding whether SE would be used in the descriptor head or not. The channel count and expansion factor for the descriptor head were fixed at 128 and 6, respectively. #### Iv-B2 Loss function The loss function employed for LAFE closely follows that of HF-net, with the exception of excluding the global descriptor term: \[\begin{split} L_{LAFE}=& e^{-w_{1}}\sum_{i\in F}\left\| \mathbf{\delta}_{i}^{*}-\mathbf{\delta}_{i}^{t}\right\|_{2}^{2}\\ &+2e^{-w_{2}}\sum_{i\in F}BCE\left(K_{i}^{*},K_{i}^{t}\right)\\ &+w_{1}+w_{2},\end{split} \tag{13}\] Fig. 4: General inverted residual block (GIRB) used by different LAFE backbones as building blocks. The typical expansion factor \(\varepsilon=6\) and stride \(s\) is 1 or 2. A skip-connection is used if output channel width \(C_{\text{out}}\) equals input channel width \(C_{in}\). Squeeze-excitation is optional. The activation function (AF) can be Hard-swish or ReLU6. Fig. 3: LAFE architecture. Various backbones that use GIRB as a building block (see Fig. 4) are considered. where \(w_{1}\) and \(w_{2}\) are weights associated with multitask learning [48], and they are jointly optimized with the network weights. The index \(i\in F\) iterates through all the detection values \(\mathbf{K}^{s}\) and their associated descriptors \(\mathbf{\delta}^{s}\). The matching target values are denoted as \(\mathbf{K}^{t}\) and \(\mathbf{\delta}^{t}\). Unlike training the teacher network, no sampling is required. In principle, LAFE could be trained using R2D2 or DISK loss functions. However, as argued by the authors of HF-net [36], learning to predict the output of a teacher network is a more straightforward learning task. This allows us to expect reasonable performance from LAFE, even though it is less capable than HAFE. ### _Data augmentation_ An essential aspect of neural network training is data augmentation, where training data is transformed to retain essential information while altering non-essential aspects. This helps prevent overfitting and leads to improved performance on new data. _Our data augmentation pipeline for HAFE_ training data consists of operations that can affect either one or both images in any given image pair: 1. First image: Random scaling so that the shortest edge width is 256-1024 pixels. 2. Second image: The true relative scale between the image pair is inferred from pixel correspondences, and the second image is randomly scaled so that the scale difference \(k_{rnd}\) between the pair is at most half of the image pyramid scaling factor \(k\) used during inference, i.e., \(k_{rnd}\in[k^{-1/2},k^{1/2}]\). 3. First image: Random cropping weighted by available pixel correspondences in potential cropping areas. 4. 
Second image: Deterministic cropping that maximizes the number of pixel correspondences. 5. Both images: Random horizontal flip, either flipping both images or none. 6. Both images: Add uniformly distributed pixel noise with amplitude \(\lambda_{n}\). 7. Second image: Random brightness change by multiplying the image with gain \(g\) distributed as \(\ln(g)\sim\mathcal{U}(-\ln(\lambda_{g}),\ln(\lambda_{g}))\), where \(\mathcal{U}\) represents the uniform distribution. We have chosen to follow R2D2 [37] and selected the image pyramid to have \(s=4\) images per octave (doubling of scale), making the scaling factor \(k=2^{1/s}\approx 1.189\). Depending on the final application, a trade-off analysis between resource usage and accuracy should be performed to select an optimal \(s\). For certain datasets lacking geometry backplanes that allow image pair construction, we generate image pairs by warping each image with a random homography before using the data augmentation pipeline outlined above. The homography transformation can be factored into the rotation, translation, shear, and projection components. Based on initial testing, we found that rotation and projection components seemed sufficient. The random transformation matrix used for the synthetic pairs becomes: \[H_{rnd} =\begin{bmatrix}\cos(\phi)&-\sin(\phi)&0\\ \sin(\phi)&\cos(\phi)&0\\ 0&0&1\end{bmatrix}\begin{bmatrix}1&0&0\\ 0&1&0\\ p_{1}/w&p_{2}/h&1\end{bmatrix},\] \[\phi^{2} \sim\mathcal{U}\left(-\lambda_{r}^{2},\lambda_{r}^{2}\right),\] \[p_{1}^{2},p_{2}^{2} \sim\begin{cases}\mathcal{U}\left(0,\lambda_{p}^{2}\right)&\text {if }z=1,z\sim B(0.5)\\ \mathcal{U}\left(\left[(\lambda_{p}+1)^{-1}-1\right]^{2},0\right)&\text{ otherwise,}\end{cases} \tag{14}\] where \(B\) represents the Bernoulli distribution, \(\lambda_{r}\), and \(\lambda_{p}\) are hyperparameters determining the extremeness of the generated rotations and projections, and \(w\) and \(h\) are the image width and height in pixels, respectively. Note that the square of \(\phi\), \(p_{1}\), and \(p_{2}\) are (piece-wise) uniformly distributed. This has the advantage of generating more extreme values, which tend to be more valuable for training purposes. _LAFE training_ does not require paired images, leading to a slightly different data augmentation pipeline: 1. Random rotation and projection \(H_{rnd}\) (same as used for synthetic pairs). 2. Random scaling so that the shortest edge width is 256-1024 pixels. 3. Random cropping. 4. Random horizontal flip. 5. Uniform pixel noise. 6. Random exposure. All these steps use the same hyperparameter values as the paired image pipeline. There is an additional opportunity to use data augmentation because a slightly modified image can be fed to the student network compared to the one given to the teacher network. Here, we consider using random exposure with a maximum gain \(\lambda_{g}^{st}\), followed by adding normally distributed noise with a standard deviation \(\lambda_{g}^{st}\). To reduce randomness in validation performance metrics, validation data is not processed by these pipelines. However, image scaling and cropping are still necessary. If the shortest edge width of the image is not in the 256-1024 pixel range, the image is either upscaled or downscaled so that the shortest edge becomes either 256 pixels or 1024 pixels. Central cropping is used for LAFE. For HAFE, both the first and second image cropping locations are chosen to maximize the pixel correspondence count of the resulting cropped image pair. 
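For illustration, the random rotation-plus-projection homography of Equation (14) could be generated along the following lines. This is a sketch in which the signed-square sampling convention is written out explicitly; angles are assumed to be in radians:

```python
import numpy as np

def signed_sqrt(u):
    """Invert the 'squared value uniformly distributed' convention of Eq. (14)."""
    return np.sign(u) * np.sqrt(np.abs(u))

def random_homography(w, h, lam_r, lam_p, rng=None):
    """Sketch of Equation (14): a random rotation combined with a random
    projective component, used to warp single images into synthetic pairs."""
    rng = np.random.default_rng() if rng is None else rng
    phi = signed_sqrt(rng.uniform(-lam_r ** 2, lam_r ** 2))
    R = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                  [np.sin(phi),  np.cos(phi), 0.0],
                  [0.0,          0.0,         1.0]])
    p = np.empty(2)
    for i in range(2):
        if rng.random() < 0.5:
            # positive branch: p in [0, lam_p]
            p[i] = signed_sqrt(rng.uniform(0.0, lam_p ** 2))
        else:
            # negative branch: p in [-lam_p / (lam_p + 1), 0]
            p[i] = -signed_sqrt(rng.uniform(0.0, (1.0 / (lam_p + 1.0) - 1.0) ** 2))
    P = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [p[0] / w, p[1] / h, 1.0]])
    return R @ P
```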
### _Metrics_ The matching pipeline employed for evaluating feature extractors proceeds as follows: it first extracts a sparse set of features from both images using non-maxima suppression (NMS), discards features with low detection scores, calculates a descriptor distance matrix between all retained descriptor pairs, and applies mutual nearest neighbor criteria to eliminate non-circular matches. Matches are labeled as "possible to match" if a pixel correspondence exists for the first image descriptor location. Additionally, if the matching second image descriptor is within 5 pixels of the ground-truth location, the match is labeled as correct. In case a network has multiple detection outputs (as in R2D2), the repeatability output is employed for NMS. NMS is executed by initially filtering out the highest spatial frequencies using a \(3\times 3\) averaging kernel and then selecting all locations that are the maximum in their \(3\times 3\) neighborhoods. A feature is discarded after NMS if the corresponding detection output (values \(\in[0,1]\)) falls below the threshold of 0.5 or if its combined detection score (repeatability times reliability) is lower than that of the top \(N\) features, where \(N=round(0.001hw)\), and \(h\) and \(w\) represent the height and width of the image, respectively. During the training and hyperparameter optimization phases, features are extracted at a single scale level. However, when evaluating the final feature extractors, features are extracted at various scale levels by creating an image pyramid. Along with image coordinates, the features also retain the scale of the image from which they were extracted. To account for these feature scales during matching, we follow the methodology proposed in [59]. Matches are initially established without considering their scales. Subsequently, the final matches are determined by adjusting the scales based on the estimated intrinsic scale difference and constraining the matches to the nearest scale levels. Several metrics can be derived from the matching pipeline to evaluate the performance of the feature extractors: * The ratio of correct matches over all possible matches (Matching Score, _M-Score_[35, 37]), which serves as the target metric for hyperparameter optimization, as it is the only metric that cannot be improved simply by detecting fewer features. * The ratio of correct matches over proposed matches (Mean Matching Accuracy, _MMA_[31, 37]), which quantifies the quality of proposed features without considering missed opportunities. * Mean Average Precision (_mAP_, [56]), discussed previously in the context of the R2D2 loss function. * Pixel Localization Error (_LE_, [35]), providing the average image-space distance between correct matches and their associated ground truth. ### _Hyperparameter search_ The performance of the feature extractor designed in this study is highly dependent on the hyperparameter values inherent to the extractor network and its training process. Hyperparameter optimization is, therefore, indispensable if we hope to produce a state-of-the-art feature extractor. Various hyperparameter optimization frameworks are available, and in this study, we have chosen Ray Tune [53]. Ray Tune not only distributes the training workload across different computing nodes but also provides interfaces for exploring the search space and scheduling trials. A trial involves evaluating the search space at a specific point determined by the search method. 
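Since the M-Score produced by this matching pipeline is the metric that the hyperparameter search targets, a compact sketch of the detection and mutual nearest-neighbour steps may be helpful. It is our own single-scale simplification; the top-\(N\) selection and the feature-scale bookkeeping are omitted:

```python
import torch
import torch.nn.functional as F

def detect(score_map: torch.Tensor, thr: float = 0.5):
    """NMS as described above: 3x3 average filtering, 3x3 local maxima, then a
    detection-score threshold. `score_map` has shape (1, 1, H, W)."""
    s = F.avg_pool2d(score_map, 3, stride=1, padding=1)
    is_max = s == F.max_pool2d(s, 3, stride=1, padding=1)
    keep = is_max & (score_map > thr)
    ys, xs = torch.nonzero(keep[0, 0], as_tuple=True)
    return torch.stack([xs, ys], dim=1)           # (N, 2) pixel coordinates

def mutual_nearest_neighbours(desc1: torch.Tensor, desc2: torch.Tensor):
    """Mutual nearest-neighbour matching on L2-normalised descriptors."""
    D = torch.cdist(desc1, desc2)
    nn12 = D.argmin(dim=1)            # best match in image 2 for each feature in image 1
    nn21 = D.argmin(dim=0)            # and vice versa
    idx1 = torch.arange(len(desc1))
    mutual = nn21[nn12] == idx1       # keep only circular ("mutual") matches
    return torch.stack([idx1[mutual], nn12[mutual]], dim=1)
```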
The evaluation entails training the feature extractor for a specified number of epochs and calculating the target performance metric on the validation dataset. The trial scheduler manages which trials are processed by computing nodes and decides whether to pause or terminate a trial before it reaches the maximum number of epochs. For our search method, we have employed a Bayesian Optimization (BO) variant implemented by the Scikit-Optimize software package [55]. This approach is combined with the Asynchronous Successive Halving Algorithm (ASHA) scheduler [54], which enables us to prioritize promising trials while discontinuing non-promising ones early in the process. Both the Aalto University HPC cluster (Triton) and the CSC IT Center for Science HPC cluster (Puhti) were utilized during the study. Six computing nodes featuring NVIDIA Tesla V100 Volta GPUs were utilized in parallel during the optimization phase. #### Iv-E1 Ash Scheduling The core concept behind ASHA is to allocate a small initial resource budget, denoted as \(r_{0}\) (e.g., one training epoch), to each trial. Subsequently, only the top-performing \(1/\eta\) trials are allowed to continue with an increased budget of \(r_{1}=\eta r_{0}\) per trial. Trials are terminated when they reach the maximum resource usage per trial, denoted as \(r_{max}\). Instead of measuring resource use in terms of epochs, we opted for 1500 training batches to obtain more frequent validation results and reach the first ASHA decision point sooner. With \(\eta=3\), \(r_{0}\) set to 1500 training steps, and \(r_{max}\) at 24000 training steps, stopping decisions are made at 1500, 4500, and 13500 steps. We maintained a fixed total of 243 trials, expecting at least 9 fully trained trials, although the asynchronous nature of the algorithm may yield a larger number. The approximate total resource usage can be estimated as \(792r_{0}\). The ASHA variant implemented by Tune differs slightly from the one described in [54]. In this variant, a decision is taken immediately whether to halt or continue a trial at each rung, as opposed to pausing trials for potential promotion later. This modification allows us to update the search algorithm with intermediate and lower-fidelity evaluation results from stopped trials. However, it's worth noting that Scikit-Optimize does not explicitly support multifidelity evaluations, as discussed by Klein et al. [60], something which is beyond the scope of our current work. In the same study, the authors also compared the original trial-promoting ASHA with the Tune variant featuring trial stopping, with the latter yielding superior performance [60]. #### Iv-E2 Scikit-Optimize Bayesian Optimization Scikit-Optimize's implementation of Bayesian Optimization (BO) compares favorably to other BO methods [61], even though some methods, such as Trust Region Bayesian Optimization (TuRBO) [62] and ensembles of TuRBO and Scikit-Optimize, outperform it to a certain extent. Our understanding of Scikit-Optimize is primarily derived from an analysis of its source code, as comprehensive articles on the topic were not readily available. Scikit-Optimize's BO recommends hyperparameter values for new trials by constructing a Gaussian Process (GP) surrogate model to predict network performance based on hyperparameter values. Whenever a new evaluation result becomes available, the prior surrogate model is discarded, and a fresh one is fitted using the expanded training data. 
The training process involves estimating GP kernel parameters, such as hyperparameter length scales and Gaussian noise, through maximum likelihood gradient ascent. These length scales offer valuable insights into the influence of specific hyperparameters on network performance; larger length scales correspond to lower impact. The parameter space is normalized to the 0-1 range, and the reported length scales are also presented in this normalized space. After GP fitting, new suggestions for evaluating points in the search space are generated by randomly selecting one of three acquisition functions: Probability of Improvement (PI) [63], Expected Improvement (EI) [64], and Upper Confidence Bound (UCB) [65]. This strategy of combining multiple acquisition functions has been shown to be more effective than relying on any single criterion [66]. To mitigate the risk of converging to a local maximum, the surrogate model search involves sampling 10,000 random locations, with the best five serving as starting points for gradient ascent optimization. Generating multiple distinct points with identical knowledge is necessary due to parallel search space evaluation. This is achieved by assuming a poor dummy response for previously generated points that lack real responses [67]. Intuitively, Successive Halving, and by extension, ASHA, may exhibit a bias toward early performance at the expense of overall performance because only the early high performers receive full training. However, the extensive exploration of the search space made possible by ASHA outweighs this bias, as demonstrated by studies comparing random search ASHA to random search with full training [54, 68]. Moreover, although Scikit-Optimize treats intermediate, low-fidelity performance evaluations from trials stopped by ASHA as full evaluations, based on Wulff et al. [68], the combination of fixed-fidelity BO with ASHA outperforms random search ASHA. Consistent with Wulff et al. [68], we excluded the learning rate from optimization, given its potential to introduce early performance bias. To ensure that we did not overlook trials achieving their best performance early, we assessed performance based on the maximum M-Score reached by a trial during any validation run conducted after each 1500 training step interval. #### Iii-C3 Hyperparameter selection All free parameters involved in the process of generating a feature extractor that are not optimized through backpropagation can be classified as hyperparameters. However, due to limited computational resources and the chosen optimization method, only a subset of possible hyperparameters can be optimized. To restrict the search space, most parameters related to network architecture, such as the number of layers, channel widths, activation functions, etc., have been excluded. Other optimization methods, which exploit weight sharing to reduce training times, may be better suited for network architecture search (NAS), but NAS was considered to be beyond the scope of this study. Additionally, we excluded parameters that are expected to have values within a reasonable range or minimal impact on the target performance metric. The optimization outcome corresponds to the hyperparameter configuration that yielded the best performance. However, due to the stochastic nature of evaluations, we also extracted the best configuration as indicated by the surrogate model. We focused initially on hyperparameter optimization of the two HAFE models, R2D2-U and DISK. 
The best-performing model among the two was subsequently chosen as the teacher for LAFE during its hyperparameter optimization. As the teacher network training did not include synthetic data, we also excluded it in the LAFE training. Section V Results presents the detailed hyperparameters optimized, their search spaces, and the optimization results. Before delving into the results, we will provide an overview of the image data used for training, validation, and testing. ## IV Asteroid/Comet data Various image sets are available thanks to several exploration missions targeting solar-system small bodies. Notable recent endeavors include the OSIRIS-REx mission, which orbited and later sampled asteroid 101955 Bennu, and Hayabusa-2, which undertook a similar mission with asteroid 162173 Ryugu. Both missions successfully reached their target asteroids in 2018. The first mission that provided extensive imagery of an asteroid was NEAR Shoemaker, which orbited 433 Eros in 2000 and subsequently landed on its surface in 2001. Following this, the Hayabusa spacecraft imaged asteroid 25143 Itokawa while orbiting it in 2005. In 2014, Rosetta achieved the distinction of becoming the first spacecraft to orbit a comet, 67P/Churyumov-Gerasimenko. The images acquired during these missions served as the basis for training our feature extractor. Additionally, we supplemented our dataset with synthetic data generated using a Bennu shape model [69] and OpenGL-based rendering software [70, 5]. Images from various missions can be accessed through the NASA Planetary Data System (PDS) or, in the case of the Rosetta mission, via the ESA Planetary Science Archive [71]. The datasets employed in this study are detailed in Table II. It's worth noting that some missions encompass multiple instruments, each contributing to a distinct dataset. Interpreting the available data requires caution, as each mission adopts its own image and metadata formats. None of the datasets provide readily available spacecraft-to-target-body relative pose information. To overcome this limitation, we estimated relative poses based on pixel georeferencing, camera instrument intrinsics, and employed them during image pair creation. For datasets such as 67P/Churyumov-Gerasimenko (67P/C-G) NAVCAM and Bennu TAGCAMS, which lack geometry backplanes, we opted for image warping to generate synthetic image pairs, as we did not pursue geometry estimation through structure-from-motion (SfM) algorithms. Images of Ryugu were not included in this study due to their late availability. When dividing the datasets for training and validation, we excluded both the synthetic images and the synthetic pairs of real images from the validation set. This approach ensures that hyperparameter optimization focuses solely on real image pair performance. The total count of available images, as presented in Table II, only includes images that are georeferenced, if supported by the respective dataset. For datasets lacking georeferencing, the total count encompasses images taken during proximity operations, with those acquired during the cruise and approach phases being excluded. A selection process was applied to filter out images that are corrupted, saturated, or contain only a small portion of the target body. In cases where datasets contained a surplus of acceptable images, we randomly selected a subset to create a balanced combined dataset. Notably, Bennu TAGCAMS images often suffer from saturation due to the navigation mode's requirement to capture background stars. 
Due to the limited number of available Itokawa images, we reserved that dataset exclusively for testing purposes. Additionally, the datasets 67P/C-G OSIWAC and Bennu OCAMS were omitted due to time constraints imposed by our work schedule. ### _Preprocessing_ The images utilized in this study encompass various processing levels, ranging from raw images to cleaned-up radiance factor (I/F) images that may have undergone resampling to correct for geometric distortions. As a compromise between efficient CNN training and image quality, we reduce the image depth to 8 bits and save them with lossless PNG compression. To mitigate information loss when encoding pixel values with only 8 bits, we calculate the percentiles of the image pixel values at \(p_{lo}=0.05\%\) and \(p_{hi}=99.99\%\), denoted as \(v_{lo}\) and \(v_{hi}\), respectively. We then rescale the pixel values based on these percentiles and apply gamma correction with \(\gamma=1.8\) as follows: \[v^{\prime}=255\left[\min\left(\max\left(\frac{v-v_{lo}}{v_{hi}-v_{lo}},\,0\right),\,1\right)\right]^{1/\gamma}.\] The image pairs created from the real data tend to exhibit similar lighting directions within each pair. Notably, the direction of light was not considered during image pair creation, which is left as a potential area for future research due to the challenges in extracting this information from image metadata. To address concerns about lighting robustness, we incorporate synthetic image pairs that exhibit significant variations in lighting. These synthetic images are generated using an existing OpenGL-based image rendering pipeline, as introduced by Knuuttila et al. [5], and integrated with SISPO [70]. The renderer leverages camera intrinsics, lighting direction, surface normals at specific locations, and a bidirectional reflectance distribution function (BRDF) to calculate the irradiance (\(W/m^{2}\)) received by each pixel. Self-shadowing is handled through shadow mapping. Post-processing includes the addition of background stars, followed by scaling of irradiance values to digital numbers (DNs), assuming arbitrary aperture and optimal integration time. Subsequently, shot noise, moderate readout noise, and dark noise are incorporated. Given the relative brightness of the target object compared to the stars, the stars are mostly imperceptible. The Hapke 2012 BRDF [79] is employed with default parameter values, which align with values derived from light curve analysis of Bennu [80], as shown in Table III. However, for each image pair, we introduce randomness by multiplying each parameter value with coefficients drawn from a log-normal distribution with \(\sigma^{2}=0.2\). Consistent with the selected target, TAGCAMS [81] is chosen for the camera model.
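The per-pair randomization of the Hapke parameters can be sketched as follows. The parameter names are placeholders; only the log-normal multiplier with \(\sigma^{2}=0.2\) comes from the text, and we assume \(\sigma^{2}\) refers to the variance of the underlying normal distribution:

```python
import numpy as np

def randomize_brdf(params: dict, sigma2: float = 0.2, rng=None):
    """Multiply each Hapke BRDF parameter by a log-normally distributed
    coefficient, drawn once per synthetic image pair."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.sqrt(sigma2)
    return {name: value * rng.lognormal(mean=0.0, sigma=sigma)
            for name, value in params.items()}
```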
The renderer utilizes a shape model based on the 3.17 m resolution stereo photoclinometry (SPC) derived shape model of Bennu provided by the ORX Altimetry Working Group in 2019 [69]. The original model's vertex count (98,306) led to visible triangles, so we increased the count through smooth interpolation to approximately 2,360,000 vertices. It is worth noting that, subsequent to our data generation stage, various higher resolution shape models derived from laser altimeter data with resolutions of 1.68 m, 0.88 m, and 0.4 m became available as SPICE kernels. The shape model lacks an accompanying albedo map (texture). In the spirit of data augmentation, we procedurally generate a new albedo map for each image pair. The generated texture, with zero mean, is added locally to the randomized single-scattering albedo \(\overline{w_{0}}\) of the entire body. Our generation scheme is based on Gaussian processes (GPs) [82]. Texture values are generated for each vertex of a low-resolution version of the shape model. We construct the associated covariance matrix using white noise and two Matérn kernels of different scales, with the shape model's 3D coordinates as inputs. High-resolution shape model vertex texture values are interpolated from the low-resolution values. Additionally, we introduce a high-frequency noise component, modulated by the low-frequency amplitude generated using the same GP covariance matrix. In retrospect, employing three-dimensional Perlin noise [83] or Simplex noise [84] would likely result in a simpler and more efficient texture generation process. For the first image in each pair, we randomly select the relative orientation, while determining the relative position to ensure that the target fits within the image with some margin. The direction of light is also randomly chosen, ensuring that the phase angle falls within the 0-90\({}^{\circ}\) range. For the second image, we perturb the direction of light by introducing two random angles, with their squares uniformly distributed to sample fewer moderate values. \[\begin{split}\alpha^{2}\sim&\mathcal{U}(-\alpha_{max}^{2},\alpha_{max}^{2}),\\ \beta^{2}\sim&\mathcal{U}(-\beta_{max}^{2},\beta_{max}^{2}),\end{split} \tag{17}\] where \(\alpha\) rotates the direction of light away from (or towards) the camera axis, affecting only the phase angle, while \(\beta\) rotates it around the camera axis. We limit the maximum rotations to \(\alpha_{max}=45^{\circ}\) and \(\beta_{max}=180^{\circ}\). If the resulting phase angle falls outside the 0-90\({}^{\circ}\) range, \(\alpha\) is resampled. ## V Results ### _Hyperparameter optimization_ For all network training, we employ the Adam optimizer with a learning rate of 0.001. The batch size for DISK and R2D2-U networks is set to 8, with an image size of \(224\times 224\). The LAFE network, on the other hand, is trained with a batch size of 32 and an image size of \(448\times 448\). In the following sections, we will present the optimization results for DISK, R2D2-U, and LAFE one by one. #### V-A1 Disk Table IV provides details about the parameters selected for optimization, their initial values, search space, and results. The parameters are categorized into three groups, which correspond to loss function, optimizer, and data augmentation. The only parameter optimized for the Adam optimizer is the weight decay (wd).
In terms of loss function, we optimize the false match penalty \(\rho_{fp}\), which affects \(R_{ij}\) in Equation (12), the sampling cell size \(h\), which influences \(\mathbf{K}_{k}^{u}\) in Equation (8), the match distance scale \(\theta_{M}\) in Equation (9), and the pixel error margin \(\epsilon\), used as the threshold for true/false matches. The data augmentation group includes parameters such as pixel noise amplitude \(\lambda_{n}\) (section III-C pipeline step 6), synthetic pair maximum rotation \(\lambda_{r}\) and projection \(\lambda_{p}\) in Equation (14), and "synth", which determines whether synthetic images (section IV-C) are used for training or not. The parameter types include log-uniform (log), uniform (uni), uniform integer (int), or categorical (cat). Different preprocessing methods are applied depending on the parameter type. Log-uniform parameters are transformed into log space before normalization for the GP model. Uniform integer parameters do not require any transformation before normalization, but the values suggested by the surrogate model are rounded before use. To account for nonlinear parameter effects, categorical parameters are label-encoded using integers, avoiding the creation of multiple binary parameters for each label. The "Initial" column displays the range of values randomly sampled for the first ten trials. These ranges are centered around the values we converged upon after the preliminary testing phase. We designed the full parameter ranges to encompass all reasonable values while avoiding an unnecessarily large search space. The "Scale" column presents the length scales estimated by the surrogate model during optimization. These length scales provide insight into the parameter's impact on model performance within the specified search space. A longer length scale corresponds to a smaller impact, with a maximum value of 100 indicating negligible impact. The "Result" and "SMMM" columns show the optimization results and the parameter values that maximize the surrogate model mean. Ideally, the optimization result should closely align with the model's optimizing values, especially for parameters with short length scales. Parameter values near the edges of the search space suggest that the search space may have been too narrow. The M-Score on the validation set for the best DISK model is 34.6, while the maximum mean surrogate model M-Score is 29.1, indicating how much of the performance variation between trials the model attributes to noise. Analyzing the resulting parameter values (see Table IV), it seems that for DISK, further optimization of \(\theta_{M}\) might be possible by exploring values less than 20. However, when considering the cost of redoing the optimization, this was not deemed worthwhile. An interesting observation is the discrepancy between the weight decay (wd) and \(\lambda_{r}\) values in the result and SMMM. This implies that rotation data augmentation consumes network capacity, reducing the need for regularization through weight decay. Additionally, the high value for the rotation augmentation may indicate that the scheme, which normalizes scene orientation by rotating the images so that the target object's rotation axis points upward, is not ideal. Parameters \(\lambda_{p}\) and \(\rho_{fp}\) appear to have minimal impacts. Another potential approach to examine the hyperparameter optimization results is to analyze the partial dependence of the optimized metric on different hyperparameter pairs, as estimated by the surrogate model. 
However, the resulting figures are large and non-essential for presenting our results and are therefore omitted. #### Vi-C2 R2d2-U Table V presents the hyperparameter optimization results for R2D2-U. The selected hyperparameters are the same as those for DISK in the optimizer and data augmentation groups, while the parameters related to the loss function are different. We optimize the repeatability weight \(\alpha\) and peakiness weight \(\beta\) (Equation (3)), the acceptable AP threshold \(\kappa\) (Equation (6)), the cosine similarity window size \(n_{rep}\) (affecting \(P\) in Equation (4)), the maximum distance for positive samples \(r_{pos}\), and the minimum distance for negative samples \(r_{neg}\). Please refer to the discussion related to DISK's results in Table IV for explanations of common hyperparameter and column meanings. The optimized R2D2-U M-Score on the validation set is 39.9, outperforming DISK, and its performance appears more stable when re-training it with similar hyperparameter values. Examining the resulting parameter values (see Table V), it is evident that \(\alpha\) and weight decay (wd) are located at the edge of the search space, suggesting potential for further optimization in the future. Similar to DISK, R2D2-U also exhibits a discrepancy between the result and the SMMM for \(\lambda_{p}\) and \(\lambda_{n}\), which seem to impact network capacity and provide regularization, respectively. We select the optimized R2D2-U model as the authoritative high-performance HAFE model, which serves as a teacher for our lightweight LAFE model. As synthetic Bennu images did not improve performance, LAFE training is also conducted without them. However, we will still use synthetic images to evaluate the performance of the resulting feature extractors when the direction of sunlight changes. #### Vi-C3 Lafe Table VI provides the hyperparameter optimization results for LAFE. Due to the relative simplicity of the loss function combined with the multitask learning scheme [48], there is no need to optimize any parameters related to the loss function. This allowed us to use the negated validation loss as the optimization metric. Regarding data augmentation, none of the previous parameters are required, as single images are used for training. We have included two parameters: student image random gain \(\lambda_{g}^{st}\) and noise SD \(\lambda_{\sigma}^{st}\) (introduced at the end of section III-C). There is also a new group for network model-related parameters, "arch", determining which backbone to use (mn2 for MobileNetV2, mn3 for MobileNetV3, and en0 for EfficientNet-B0), and "desc-se", determining if the descriptor head should use squeeze-excitation or not. The negated validation loss achieved by the best model was 1.014, while the maximum mean value given by the surrogate model was 0.803. Examining the length scales of \(\lambda_{g}^{st}\) and \(\lambda_{\sigma}^{st}\), it appears that adding noise to images used as input by the student network during training has a negligible effect on the resulting network. Interestingly, the resulting model from the optimization is based on MobileNetV3, while the surrogate model suggests that MobileNetV2 might be a better choice. The length scale of the backbone selection is very small at 0.412, prompting us to train the predicted best model, resulting in a negated validation loss of 0.877. 
Although this metric is lower than that of the best model, we will include this surrogate model-suggested feature extractor in our subsequent analysis, referring to it as LAFE-SM. ### _Evaluation_ To assess the expected performance of the resulting feature extractors in our specific use case of visual navigation in close proximity to asteroids, we will evaluate them by calculating typical feature matching metrics, such as M-Score, MMA, and pixel localization error. Additionally, we will estimate the relative poses between the asteroid and spacecraft based on the matched features, assuming knowledge of their 3D coordinates. This involves initial geometric verification of matches using RANSAC and then refining the pose through a simplified bundle adjustment (BA) scheme, where adjustments are made solely to the pose parameters. To enhance robustness against high reprojection errors, we employ a pseudo-Huber loss function. The local 3D coordinates are derived from depth maps extracted for Eros, 67P/C-G OSINAC, Itokawa, and synthetic image datasets. Ground truth relative poses are obtained using the same pose estimation pipeline, but instead of feature matches, it relies on ground truth pixel correspondences. For feature matches, the maximum allowable reprojection error for RANSAC is set to 5 pixels, while for ground truth poses, it is reduced to 0.75 pixels. When the number of pixel correspondences exceeds 20,000, we subsample by dropping every \(k=floor(n/10000)\) pixel correspondences, ensuring a minimum of 10,000 correspondences for pose estimation. Bennu and 67P/C-G NAVCAM datasets are excluded from the evaluation since real image pairs were not available for them. The Itokawa dataset was initially reserved solely for the final evaluation of the feature extractors and was not used for hyperparameter optimization. Even though we considered including the synthetic image dataset in the training process, the final feature extractors were not trained with it. The inclusion of synthetic images in hyperparameter optimization is unlikely to introduce subtle overfitting, as the dataset was excluded from the training set. All feature extractors were trained using the Eros and 67P/C-G OSINAC datasets, which may lead to some bias toward these datasets. In addition to our DISK and R2D2-U HAFE models (HAFE-DISK, HAFE-R2D2), and the two LAFE models (regular LAFE and LAFE-SM), we also include RootSIFT in our evaluation. RootSIFT consistently outperformed regular SIFT and AKAZE, which performed at a similar level to regular SIFT. RootSIFT is essentially SIFT with the descriptor elements subjected to the square root operation and the resulting vectors normalized to unit length [85]. To ensure comparability in feature performance, we extract the same number of features with all the feature detectors. We set the number of layers per octave (halving of image size) to 4 for both RootSIFT and our learning-based extractors. Due to challenges encountered in training the original R2D2 feature extractor with our dataset, we cannot directly compare our proposed feature extractors with the original R2D2. Nevertheless, we include the original R2D2 feature extractor, trained on urban scenery (R2D2-Orig, model file r2d2_WASF_N16.pt), in our evaluation as a reference baseline to emphasize the significance of training the network with asteroid data. Additionally, we developed our own version of R2D2 (R2D2-VGG), closely resembling the original but using our dataset. 
It's worth noting that there are some differences between our implementation and the original, including variations in data augmentation and loss function annealing. Consequently, we cannot conclusively assert that our HAFE-R2D2 model outperforms the original R2D2. The results are reported separately for each dataset, as shown in Table VII, Table VIII, Table IX, and Table X. The image pairs within each dataset are categorized as "easy" or "hard" based on the magnitude of the change in viewing angle \(|\varphi|\). The synthetic dataset, however, is classified based on both the magnitude of the change in phase angle \(|\alpha|\) and the light direction \(|\beta|\), as defined in Equation (17). An image pair is considered "easy" if \(|\varphi|<15^{\circ}\), and "hard" otherwise. For the synthetic dataset, both \(|\alpha|<20^{\circ}\) and \(|\beta|<30^{\circ}\) must hold for the pair to be deemed "easy". For clarity and conciseness, we include only M-Score, orientation estimation failure rate, and orientation error 50- and 85-percentiles as metrics in these tables. Orientation estimation is considered to have failed for an image pair if the result comprises fewer than 12 features with a reprojection error of less than five pixels or if the orientation error exceeds 20\({}^{\circ}\). To enable comparability of orientation error percentiles across different failure rates, we treat failures as having an arbitrarily large orientation error. The best-performing method for each metric and dataset is highlighted in gray. For the Eros, 67P/C-G, and synthetic datasets, HAFE-R2D2 dominates in terms of M-Score, failure rate, and orientation error percentiles. The synthetic dataset produces unreliable results for the "easy" subset due to the limited number of samples therein. This is a consequence of the sampling distribution that prioritizes significant changes in lighting, resulting in numerous "hard" samples. Consequently, the "hard" subset exhibits a bias toward more challenging image pairs, leading to excessively pessimistic performance metric values. On these datasets, LAFE performs slightly worse than HAFE-R2D2 but outperforms all other feature extractors, including those with more sophisticated architectures such as HAFE-DISK and R2D2-VGG. Interestingly, R2D2-VGG outperforms the others on the Itokawa dataset, while both LAFE and LAFE-SM significantly outperform HAFE-R2D2, the model used to train both extractors. The small sample size may contribute to the observed outcome, although further investigation is required to determine the exact cause with certainty. As expected, the original R2D2, trained solely on urban scenery, performs poorly compared to other learning-based feature extractors trained on relevant data. Nevertheless, it still outperforms RootSIFT. The primary objective of this work has been to develop a lightweight feature extractor for navigation near asteroids. Therefore, Figures 5-8 focus solely on LAFE performance metrics. The first three figures illustrate how the distribution (including the median) of the metrics is influenced by \(|\varphi|\). The samples are grouped into bins based on \(|\varphi|\), and intra-bin empirical distributions are estimated using Gaussian kernels, presented as violin plots. For the synthetic dataset (Fig. 8), we display the medians in specific bins determined by \(|\alpha|\) and \(|\beta|\). The metrics include M-Score, MMA, LE, and orientation error.
For orientation error median calculations, estimation failures are considered as arbitrarily large errors. Due to the challenge in interpreting the mAP metric, it is excluded from the figures to enhance readability. The violin plots reveal substantial variation in performance among different image pairs within the view angle change bins. This variability could stem from varying lighting conditions, which we were unable to extract from these images during this study. Notably, this variance is more pronounced in the 67P/C-G dataset, while it is relatively subdued in the Itokawa dataset. Based on the median orientation error in the synthetic dataset, it appears that LAFE performs reasonably well up to a 50\({}^{\circ}\) change in phase angle (\(\alpha\)) and a 50\({}^{\circ}\) change in the direction of light (\(\beta\)). However, this should be verified with real asteroid imagery and properly extracted lighting-direction information.
Fig. 5: Eros: LAFE performance metrics on the vertical axes versus changes in viewing angle in degrees.
Fig. 6: 67P/C-G: LAFE performance metrics on the vertical axes versus changes in viewing angle in degrees.
Fig. 7: Itokawa: LAFE performance metrics on the vertical axes versus changes in viewing angle in degrees.
Fig. 8: Synthetic data: LAFE median performance versus changes in lighting in degrees. See text for details.
Fig. 9 provides an example of an image pair from the 67P/C-G dataset. It displays a corresponding LAFE detection map along with feature matches that have undergone geometric validation. Observations suggest that the detections primarily focus on smaller shadows, which may raise some level of concern since shadows tend to shift with changes in lighting conditions. However, under normal circumstances, small shadows typically move only short distances, resulting in reprojection errors of 5 pixels or less. To assess the computational performance of LAFE, its PyTorch model was exported to the ONNX format [86], which was then utilized in conjunction with ONNX Runtime [87] to extract features at a single scale from 756 Itokawa images. Memory usage and execution time (excluding initialization) were measured on an Ultra96 board [88] equipped with a Xilinx Zynq UltraScale+ MPSoC, model ZU3EG. The primary four-core ARM Cortex A53 CPU, operating at 1200 MHz, was utilized for processing. Throughout the experiment, memory usage remained below 400 MB, and with all four CPU cores in use, the average execution time per image was 0.66 seconds. If only one core was used, the average time was 1.34 seconds. For comparison, we also measured the single-core performance of the HAFE-R2D2 model. In this case, memory usage was approximately 1200 MB, and the average execution time for the first 12 images was 54.1 seconds. The breakdown of tasks performed for each image, along with the respective time allocations, was as follows: loading the image from the filesystem (6.4%), resizing it to a resolution of 512x512 (2.3%), performing LAFE inference (84.1%), extracting sparse features (6.1%), and saving the features back to the filesystem (0.4%). The remaining 0.7% accounted for managing the image processing loop, ensuring a cumulative total of 100%. We did not conduct experiments with multiscale extraction. However, we can provide a rough estimation by assuming a constant per-pixel computation time.
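The multiscale estimate quoted next can be reproduced with a few lines. The sketch below simply sums the per-level pixel counts of the scale pyramid under the stated constant per-pixel cost assumption and scales the measured 1.34 s single-core time accordingly; it is a back-of-the-envelope check, not part of the measurement pipeline.

```python
import math

def equivalent_image_side(top=512, bottom=128, levels_per_octave=4):
    """Sum pixel counts over a scale pyramid and return the side of an equivalent single image."""
    num_octaves = math.log2(top / bottom)                  # 2 octaves from 512 down to 128
    num_levels = int(num_octaves * levels_per_octave) + 1  # include both endpoints
    scale = 2 ** (-1.0 / levels_per_octave)                # linear downscaling factor per level
    total_pixels = sum((top * scale ** k) ** 2 for k in range(num_levels))
    return math.sqrt(total_pixels)

side = equivalent_image_side()                 # about 925 pixels
single_core_time = 1.34 * (side / 512) ** 2    # about 4.37 seconds
print(round(side), round(single_core_time, 2))
```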
If we were to extract features ranging from images of size 512x512 to 128x128, with four scale levels per octave, the total pixel count would be equivalent to an image of size 925x925. Based on the pixel count ratio, the estimated time using a single core would be 4.37 seconds. ## VI Conclusion In this study, we have successfully developed a lightweight asteroid feature extractor (LAFE) designed for onboard execution on a reasonably capable CPU, such as the Xilinx Zynq 7000-series SoC or the more capable Xilinx Zynq Ultrascale+ MPSoC. Real-time performance is achievable for mission profiles with slow dynamics that require infrequent feature extraction as permitted by CPU execution. For higher-frequency feature extraction, hardware acceleration can be employed by utilizing Xilinx Vitis AI [89] to convert and fine-tune the model for execution on an 8-bit DPU (Deep Learning Processing Unit) provided by Xilinx, which can be implemented on the FPGA section of an Ultrascale+ MPSoC. During our research, we also trained a high-performance asteroid feature extractor (HAFE), which served as the teacher for LAFE. HAFE incorporates several improvements over the state-of-the-art R2D2 feature extractor, including the use of a U-Net backbone, a warmup period for the loss function parameter \(\kappa\) in Equation (6), optimized loss function weights \(\alpha\) and \(\beta\) in Equation (2), and a single-channel output for the reliability head instead of using softmax and two channels. Through hyperparameter optimization, we were able to reliably compare two state-of-the-art feature extractors and determined that our incrementally improved R2D2-U extractor outperformed DISK on asteroid imagery. Additionally, we compared three lightweight architectures for LAFE and found that MobileNetV3 outperformed both EfficientNet-B0 and MobileNetV2. Concurrent to our research, Driver et al. [40] introduced an intriguing ASLFeat-based feature extractor, AV-ASLFeat (originally ASLFeat-CVGBEDTRPJMU), along with a valuable small body dataset called AstroVision. However, a notable limitation of their work is the inability to compare AV-ASLFeat with other learning-based feature extractors trained on relevant data. Their comparisons were limited to traditional feature extractors and learning-based extractors trained on various terrestrial datasets. Unfortunately, their dataset and trained model were not available at the time of writing, preventing direct comparisons between our proposed feature extractors, HAFE and LAFE, with AV-ASLFeat. While Driver et al. utilized a true match limit of 5 pixels and employed metrics such as recall, precision, and orientation error, which align with our M-Score, MMA, and orientation error, respectively, the differences in image pairing, pose estimation algorithm, and allowed feature count per image make direct comparison of results impossible. However, since Driver et al. also included the original R2D2 model in their results, it may be possible to gain some insight into the relative performance of AV-ASLFeat and HAFE by comparing their M-Score/Recall to the original R2D2 evaluated in their respective studies. Table XI provides a summary of the relative improvements of all proposed methods over R2D2 for different subsets of data. It is important to note that even this form of comparison, as highlighted by the Itokawa data and its easy and hard subsets, is severely limited due to the significant influence of image pairing on the relative gains in M-Score. 
Fig. 9: An example 67P/C-G OSINAC image pair with overlaid LAFE feature detection map (top) and successfully matched features (bottom). Image credit: ESA
Future work should include consolidating and directly comparing our proposed methods with AV-ASLFeat. Beyond this, several avenues remain unexplored for potentially enhancing our feature extraction performance. Notably, preprocessing the datasets to include information about the direction of light could improve image pairing, resulting in better training data and more informative evaluation results, which can be presented as performance metrics as a function of the magnitude of change in the direction of light. Additionally, investigating the impact of descriptor dimensionality on performance and computational resource usage is important, as the choice of 128 dimensions was based solely on convention. Other unoptimized parameters, such as learning rate (currently set at 1e-3), match filtering using the ratio test, and non-maximum suppression radius (currently set at 3 pixels), warrant exploration. It may also be instructive to test the feature extractors by replacing the feature detection component with classical methods such as Harris corners. Eliminating the need to rotate images based on an asteroid rotation model prior to feature extraction could potentially be achieved using a spatial transformer [52]. Another promising avenue, which could result in full illumination invariance, is to investigate a two-stage feature extraction architecture consisting of a depth-estimating first stage and a feature-extracting second stage, connected by a 3D spatial transformer. The first stage could be directly trained with depth information and could include outputs for depth uncertainty and possibly an albedo estimate. Furthermore, in addition to improving local feature extraction, the shape-model-based SPL algorithm [5] used for absolute navigation could be significantly improved by substituting the AKAZE features with a lightweight feature extractor specifically trained for this purpose. A crucial next step involves enhancing LAFE by incorporating a global descriptor head and integrating the resulting network with a SLAM algorithm, allowing the evaluation of navigation performance on a specified mission profile using appropriate simulation software. ## Acknowledgment The authors wish to acknowledge the Aalto Science-IT project and CSC - IT Center for Science, Finland, for computational resources.
2309.09719
FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data
Federated learning is an emerging distributed machine learning method that enables a large number of clients to train a model without exchanging their local data. The time cost of communication is an essential bottleneck in federated learning, especially for training large-scale deep neural networks. Some communication-efficient federated learning methods, such as FedAvg and FedAdam, share the same learning rate across different clients. However, they are not efficient when data is heterogeneous. To maximize the performance of optimization methods, the main challenge is how to adjust the learning rate without hurting convergence. In this paper, we propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate based on local historical gradient squares and synchronized learning rates. Theoretical analysis shows that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients, which enables promising scalability in federated optimization. We also empirically compare our method with several communication-efficient federated optimization methods. Extensive experimental results on Computer Vision (CV) tasks and a Natural Language Processing (NLP) task show the efficacy of our proposed FedLALR method and also coincide with our theoretical findings.
Hao Sun, Li Shen, Shixiang Chen, Jingwei Sun, Jing Li, Guangzhong Sun, Dacheng Tao
2023-09-18T12:35:05Z
http://arxiv.org/abs/2309.09719v1
# FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data ###### Abstract Federated learning is an emerging distributed machine learning method that enables a large number of clients to train a model without exchanging their local data. The time cost of communication is an essential bottleneck in federated learning, especially for training large-scale deep neural networks. Some communication-efficient federated learning methods, such as FedAvg and FedAdam, share the same learning rate across different clients. However, they are not efficient when data is heterogeneous. To maximize the performance of optimization methods, the main challenge is how to adjust the learning rate without hurting convergence. In this paper, we propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate based on local historical gradient squares and synchronized learning rates. Theoretical analysis shows that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients, which enables promising scalability in federated optimization. We also empirically compare our method with several communication-efficient federated optimization methods. Extensive experimental results on Computer Vision (CV) tasks and a Natural Language Processing (NLP) task show the efficacy of our proposed FedLALR method and also coincide with our theoretical findings. Federated learning, Non-convex optimization, Non-IID, linear speedup. ## I Introduction Federated learning (FL) [1, 2, 3] stems from distributed machine learning and allows multiple clients to train a model while communicating with a central server. The clients do not share their local data during the training period due to privacy concerns and data protection policies. As the number of clients increases, the bottleneck of FL lies in the communication between clients and the central server. Efficient federated optimization methods are therefore needed to relieve this pain point. One practical way to reduce communication cost is to train the local model for several steps on each client and exchange information with the server at a low frequency. FedAvg [4, 5] is a representative method that updates the parameters using the stochastic gradient descent (SGD) method in the local steps and exchanges parameters at a fixed period. Recent work [6] finds that this method also converges and performs better than mini-batch SGD when the objective function is quadratic. The other way is to accelerate convergence by adopting adaptive SGD methods to reduce the number of iterations; such methods are widely used in many tasks, such as NLP and recommender systems, and achieve faster convergence and better performance than vanilla SGD without manual tuning of the learning rate. In federated optimization, each client is associated with heterogeneous data. Each client has its unique objective function, which motivates several works to apply adaptive methods to federated optimization, e.g., FedAdam [7] and Local Adaalter [8]. Specifically, FedAdam applies the Adam method on the server side. The clients update their parameters using a fixed learning rate and the server collects the difference before updating the parameters.
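As a rough illustration of this server-side adaptive pattern (a sketch of the general idea, not the reference implementation of [7]), the averaged client difference can be treated as a pseudo-gradient for an Adam-style server step while each client runs plain SGD with a fixed learning rate. All names and toy settings below are illustrative.

```python
import numpy as np

def server_adaptive_step(x, delta, state, lr=1e-2, beta1=0.9, beta2=0.99, eps=1e-3):
    """Adam-style server update driven by the averaged client difference (pseudo-gradient)."""
    m, v = state
    m = beta1 * m + (1 - beta1) * delta
    v = beta2 * v + (1 - beta2) * delta ** 2
    return x + lr * m / (np.sqrt(v) + eps), (m, v)

def communication_round(x, state, client_grad, n_clients=4, local_lr=0.1, local_steps=5):
    """Clients run fixed-learning-rate SGD locally; the server aggregates their differences."""
    deltas = []
    for i in range(n_clients):
        xi = x.copy()
        for _ in range(local_steps):
            xi -= local_lr * client_grad(i, xi)   # plain SGD on client i
        deltas.append(xi - x)                     # model difference sent to the server
    return server_adaptive_step(x, np.mean(deltas, axis=0), state)

# Toy heterogeneous clients: client i pulls x toward the constant vector i.
client_grad = lambda i, x: x - np.full_like(x, float(i))
x, state = np.zeros(3), (np.zeros(3), np.zeros(3))
for _ in range(50):
    x, state = communication_round(x, state, client_grad)
```

Note that in this pattern the adaptivity lives entirely on the server; the local steps themselves use a single shared learning rate.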
On the other hand, Local Adaalter tunes the learning rates for the clients during training: the learning rate is updated at each synchronization period and is required to remain consistent across the local steps. Both FedAdam and Local Adaalter share the same learning rate in the local steps, which cannot fully exploit the adaptive gradient method, as demonstrated in [9].
Fig. 1: An overview of FedLALR. The parameter server first broadcasts weight parameters to the selected clients. Then the clients train the received model with several local steps based on their local data and then send the parameters to the server. In FedLALR, each client automatically tunes its learning rate with its local data distribution.
Moreover, the theoretical analysis in [9] is based on the full gradient, which could be unavailable and thus does not fit the online learning setting. Besides, when the local dataset is very large, computing the full gradient is extremely expensive, which limits its application. More importantly, they do not establish linear speedup for the proposed algorithms. To reduce the effect of heterogeneity among local functions, we propose a **F**ederated **L**ocal **A**daptive **L**earning **R**ate method based on AMSGrad [10], dubbed FedLALR. Our method automatically tunes the learning rate during the local training steps using AMSGrad. Compared to existing works, our method allows clients to adjust their learning rate in local steps to accelerate convergence by exploiting the curvature information with respect to the local data. We prove that our method achieves **linear speedup**, namely our convergence rate can be linearly improved with respect to the number of clients. We emphasize that the main difficulty in analyzing the linear speedup property lies in the inconsistency of the local learning rate across clients due to local data heterogeneity. We separate the local learning rate and momentum by leveraging a delayed expectation technique and decompose this problem into two subproblems. The first is that the inconsistency of the local learning rate in each client at every parameter optimization step is still controlled by the algorithm and does not diverge. We prove that the local learning rate in each client is bounded in a small region that does not hurt convergence. The second is that the change of the local learning rate is limited. We find that the change of the local learning rate is constrained by the optimization update rules when the bounded stochastic gradient assumption holds. The limited inconsistency of the adaptive local learning rates then follows from these two results: the learning rates are bounded at every step, do not change much, and therefore do not hurt convergence. To further reduce communication, we also extend FedLALR with an adaptive local interval, as in [11]. Local updating with a large interval leads to low communication frequency, but it may diverge in non-convex settings. Taking the adaptive local interval into account, we further prove that when the local interval is not greater than \(O(\log(t))\), where \(t\) is the global number of iterations, our method still converges and achieves linear speedup. Finally, we apply our proposed FedLALR to train several deep neural networks on various benchmarks. Experiments also demonstrate that client-specific adaptive learning rates can significantly improve the convergence speed. To summarize, our contributions are listed as follows.
* We develop a local adaptive SGD for communication-efficient federated optimization, dubbed FedLALR, which allows a client to adapt the learning rate in local updating steps. To the best of our knowledge, it is the first study introducing the client-specific adaptive stochastic gradient descent method with convergence analysis. * We present a rigorous analysis for the convergence rate of FedLALR under full clients participation, which achieves linear speedup with respect to the number of clients, _i.e._, \(O(\frac{1}{\sqrt{NKT}})\). Besides, we prove that combining our method with adaptive local interval reduces the communication overhead and linear speedup still holds. The theoretical results show that our method is efficient for federated learning while the number of clients is large. * We conduct extensive experiments on computer vision (CV) and natural language processing (NLP) tasks. The results show that our method achieves a faster convergence, which coincides with our theoretical findings. ## II Related work **Adaptive SGD methods.** Adaptive SGD methods are a class of gradient-based optimization methods that use history gradient to adjust the learning rate. Adagrad [12, 13] is the first adaptive algorithm, and it is better when the gradient is sparse. In the deep learning problem, the objective function is non-convex and the dimension of the parameter is high. AdaGrad accumulates all previous gradients leading the learning rate to decay rapidly. Some variants are proposed such as Adadelta [14], Adam [15] and Nadam [16] which use exponential moving averages of squared history gradients to avoid learning rate decaying rapidly. Notably, Adam is the most popular adaptive stochastic gradient in practical applications due to its high performance. Reddi et al. [10] point out that Adam might diverge and propose AMSGrad to fix it. Zou et al. [17] and Chen et al. [18, 19, 20] give a sufficient condition that guarantees the global convergence of Adam in the stochastic non-convex setting and extend Adam to the distributed Adam, respectively. Many recent works [21, 22] have also given theoretical analysis on these algorithms. **Federated optimization.** FedAvg [4] is one of the most popular methods of reducing communication in federated learning, which updates its parameters with \(H\) local steps then synchronizes with the central parameter server. Several works [23, 24, 25, 26, 5, 6] prove that this method can largely save the communications and converge in both the convex and non-convex settings. A lot of works [27, 28, 29, 30] have made significant progress in advancing the convergence analysis of federated learning. Some works [31, 32, 33] try to handle heterogeneous data. Slomo [34] introduces a method that applies the momentum technique at the server-side and FedCM [35] introduces a client-level momentum technique. Adopting the proximal operator [36, 37] during the local training is another way to deal with the problem caused by heterogeneous data. Recently, SCAFFOLD [38] can also achieve remarkable performance for the federated learning task by adopting variance reduction technique. In the federated learning approach, clients typically train using fixed local steps. However, some studies have discussed an adaptive interval method where the number of local steps can be adjusted during the training process. In the work cited as [11], the authors discuss a scenario involving a local training process with varying intervals. Qin et al. 
[39] delve into the impact of local steps in the context of Local Stochastic Gradient Descent (SGD), a type of federated learning. On the other hand, training with an adaptive interval bears resemblance to the adaptive batch size method, which involves training the model with varying batch sizes during different periods of training. Ma et al. [40] incorporated this adaptive batch size method into edge computing. Meanwhile, in the study referenced as [41], their method refines the batch size and local epoch, enhancing computational efficiency by eliminating stragglers. It also scales the local learning rate to boost the model's convergence rate and accuracy. In addition, there also exist several works have been proposed to tackle the federated learning task with an adaptive learning rate. FedAdam [7] updates the parameter using Adam on the server-side. Local Adaalter [8] adjusts its learning rate periodically by using AdaGrad. Chen et al. [42] propose a similar method like Local Adaalter with a linear speedup convergence rate, which uses AMSGrad to adjust the learning rate periodically. [7, 43] analyze server-side adaptive methods. Compared to existing work, our method can adjust the learning rate in the local steps to explore the curvature information with respect to the local heterogeneous data to accelerate the training speed. ## III Methodology In this section, we describe the proposed FedLALR. Below, we first present several preliminaries. ### _Preliminary_ We consider the finite-sum optimization problem \[\min_{x}f(x):=\frac{1}{N}\sum_{i=1}^{N}f_{i}(x), \tag{1}\] where \(f_{i}(x):=\mathbb{E}_{\xi_{i}\sim D_{i}}\nabla f_{i}(x,\xi_{i})\) denotes the local objective function at the client \(i\), \(\xi_{i}\) is a random variable obeying distribution \(D_{i}\), and \(N\) is the number of clients. Note that the distribution \(D_{i}\) for \(i=1,2,\cdots,N\) could be heterogeneous in this work. Here, we are particularly interested in the non-convex optimization, _i.e._, \(f_{i}(x)\) being a non-convex function. **Notations**. We define a stochastic gradient \(g_{t,k,i}=\nabla f(x_{t,k,i},\xi_{t,k,i})\), where \(\xi_{t,k,i}\) is a data point sampled from node \(i\) at global time \(t\) and local time \(k\). The expectation of the gradient is unbiased, i.e., \(\mathbb{E}_{\xi_{i}\sim D_{i}}[\nabla f(x,\xi_{i})]=\nabla f_{i}(x)\). The expectation of the global gradient is defined as \(\nabla f(x)=\mathbb{E}_{i\sim N}\nabla f_{i}(x)\). We use \(d\) to represent the dimension of the parameter \(x\). We use \(\|a\|\) to denote the \(L^{2}\) norm of vector \(a\). We represent a Hadamard product as \(a\odot b\), where \(a\),\(b\) are two vectors. ### _FedLALR Algorithm_ In this part, we will describe our method and explain how it reduces communication costs. We will discuss two situations, full client participation with a fixed interval and an adaptive interval, respectively. #### Iii-B1 Full client participation Compared with FedAvg, our FedLALR (Algorithm 1) replaces the SGD with the AMSGrad. In FedLALR, clients can adjust their learning rate based on the local step and the local dataset. Here we consider the case that a fixed local interval is adopted by setting \(K_{t}=K\) in Algorithm 1. ``` Input: Initial parameters \(x_{0}\), \(m_{-1}=0\), \(\hat{v}_{-1}=\epsilon^{2}\), learning rate \(\alpha\), momentum parameters \(\beta_{1}\), \(\beta_{2}\). 
Output: Optimized parameter \(x_{T+1}\)
 1: for iteration \(t\in\{0,1,2,...,T-1\}\) do
 2:   for client \(i\in\{1,2,3,...,N\}\) in parallel do
 3:     \(x_{t,1,i}=x_{t}\), \(m_{t,0,i}=m_{t-1}\), \(v_{t,0,i}=\hat{v}_{t,0,i}=\hat{v}_{t-1}\)
 4:     for local iteration \(k=1,2,...,K_{t}\) do
 5:       \(g_{t,k,i}=\nabla f(x_{t,k,i},\xi_{t,k,i})\)
 6:       \(m_{t,k,i}=\beta_{1}m_{t,k-1,i}+(1-\beta_{1})g_{t,k,i}\)
 7:       \(v_{t,k,i}=\beta_{2}v_{t,k-1,i}+(1-\beta_{2})[g_{t,k,i}]^{2}\)
 8:       \(\hat{v}_{t,k,i}=\max(\hat{v}_{t,k-1,i},\ v_{t,k,i})\)
 9:       \(\eta_{t,k,i}=1/\sqrt{\hat{v}_{t,k,i}}\)
10:       \(x_{t,k+1,i}=x_{t,k,i}-\alpha m_{t,k,i}\odot\eta_{t,k,i}\)
11:     end for
12:   end for
13:   At server: Receive \(x_{t,K_{t}+1,i}\), \(m_{t,K_{t},i}\), \(\hat{v}_{t,K_{t},i}\) from clients
14:   Update \(x_{t+1}=\frac{1}{N}\sum_{i=1}^{N}x_{t,K_{t}+1,i}\)
15:   \(m_{t}=\frac{1}{N}\sum_{i=1}^{N}m_{t,K_{t},i}\)
16:   \(\hat{v}_{t}=\frac{1}{N}\sum_{i=1}^{N}\hat{v}_{t,K_{t},i}\)
17:   Broadcast \(x_{t+1}\), \(m_{t}\), \(\hat{v}_{t}\) to clients
18: end for
```
**Algorithm 1** FedLALR
In Algorithm 1, each client has the same initial parameters \(x_{0}\), \(m_{-1}=0\) and \(\hat{v}_{-1}=\epsilon^{2}\), where \(\epsilon^{2}\) is a small positive scalar to avoid the denominator diminishing. For the global iteration \(t\), the clients start to process local updating in parallel. When a client updates the parameters locally, it starts from the initial parameters or the received parameters. Each client \(i\) computes the stochastic gradient \(g_{t,k,i}=\nabla f_{i}(x_{t,k,i},\xi_{t,k,i})\) according to the i.i.d. random variable \(\xi_{t,k,i}\). Then the parameters are updated by using AMSGrad in the local steps. AMSGrad computes the momenta following \[m_{t,k,i}=\beta_{1}m_{t,k-1,i}+(1-\beta_{1})g_{t,k,i}, \tag{2}\] and the second-order momenta following \[v_{t,k,i}=\beta_{2}v_{t,k-1,i}+(1-\beta_{2})[g_{t,k,i}]^{2}. \tag{3}\] It then updates the larger second-order momenta following \[\hat{v}_{t,k,i}=\max(\hat{v}_{t,k-1,i},\ v_{t,k,i}). \tag{4}\] Then the parameters are updated locally following \[x_{t,k+1,i}=x_{t,k,i}-\alpha m_{t,k,i}\odot\frac{1}{\sqrt{\hat{v}_{t,k,i}}}, \tag{5}\] where \(\sqrt{\cdot}\) is the element-wise square root. After \(K\) local update steps, the client sends its information to the central parameter server, including \(x_{t,K+1,i}\), \(m_{t,K,i}\), and \(\hat{v}_{t,K,i}\). After the server receives this information, it averages it by computing \[\begin{cases}x_{t+1}=\frac{1}{N}\sum_{i=1}^{N}x_{t,K+1,i},\\ m_{t}=\frac{1}{N}\sum_{i=1}^{N}m_{t,K,i},\\ \hat{v}_{t}=\frac{1}{N}\sum_{i=1}^{N}\hat{v}_{t,K,i}.\end{cases} \tag{6}\] Lastly, the central server broadcasts the averaged information to all clients and continues with the next iteration of the loop (a minimal illustrative sketch of one such communication round is given below, after the discussion of the adaptive interval). #### Iii-B2 Adaptive interval In this part, we consider adaptively tuning the local interval \(K_{t}\). Recent works [44, 45] show that large mini-batches can improve algorithm performance since a large mini-batch estimates the gradient more accurately. Local training with an adaptive interval shares a similar idea: more computation in the local steps could accelerate convergence. Bijral et al. [46] claim that the interval should be small at the beginning stage, which yields faster convergence, while large intervals reduce the communication rounds. On the other hand, when the data is Non-IID, a large local step will encourage each client to converge to its own local minima, which vary across different clients. This may lead the algorithm to a bad solution for the global model or even result in divergence.
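As referenced above, the following is a minimal, self-contained NumPy sketch of one FedLALR communication round with full participation and a fixed local interval, following Eqs. (2)-(6) on a toy quadratic problem; all names and toy settings are illustrative and not taken from any released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_gradient(x, sample):
    """Toy local gradient of the quadratic loss 0.5 * ||x - sample||^2."""
    return x - sample

def local_amsgrad(x, m, v_hat, samples, alpha=0.05, beta1=0.9, beta2=0.99, K=5):
    """K local AMSGrad steps on one client, i.e., Eqs. (2)-(5) / lines 4-10 of Algorithm 1."""
    v = v_hat.copy()                                   # v_{t,0,i} = \hat{v}_{t-1}
    for _ in range(K):
        g = stochastic_gradient(x, samples[rng.integers(len(samples))])
        m = beta1 * m + (1 - beta1) * g                # Eq. (2)
        v = beta2 * v + (1 - beta2) * g ** 2           # Eq. (3)
        v_hat = np.maximum(v_hat, v)                   # Eq. (4)
        x = x - alpha * m / np.sqrt(v_hat)             # Eq. (5), element-wise
    return x, m, v_hat

def fedlalr_round(x, m, v_hat, client_data, **kwargs):
    """Full-participation round: broadcast, parallel local training, server averaging (Eq. (6))."""
    results = [local_amsgrad(x.copy(), m.copy(), v_hat.copy(), data, **kwargs)
               for data in client_data]
    xs, ms, vs = zip(*results)
    return np.mean(xs, axis=0), np.mean(ms, axis=0), np.mean(vs, axis=0)

# Heterogeneous toy data: client i's samples are centered at a client-specific mean.
dim, num_clients = 4, 8
client_data = [rng.normal(loc=i, scale=1.0, size=(32, dim)) for i in range(num_clients)]
x, m, v_hat = np.zeros(dim), np.zeros(dim), np.full(dim, 1e-8)  # small eps^2 in \hat{v}_{-1}
for t in range(100):
    x, m, v_hat = fedlalr_round(x, m, v_hat, client_data)
print(x)  # roughly approaches the average of the client means (about 3.5 per coordinate)
```

On heterogeneous data, larger values of \(K\) allow each client's adaptive state and iterate to drift further toward its own minimizer between synchronizations, which is the trade-off discussed above.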
Therefore, we are motivated to use a small interval \(K_{t}\) to achieve a good initialization at the beginning stage and then gradually increase \(K_{t}\) to stabilize the training process and reduce the communication cost. In the next section, we show that FedLALR with an adaptive interval can also achieve linear speedup. **Remark 1**.: _To conclude this section, we have two comments on Algorithm 1._ **(i)** _Compared with FedAdam and Local Adaalter, FedLALR adjusts the learning rate at the local training steps and synchronizes it periodically, which allows a more thorough and accurate estimate of the local learning rate by exploiting the local data structure. FedAdam applies SGD in the local training steps and uses the adaptive method, Adam, for the server-side update. Local Adaalter also applies SGD in the local training steps, while the learning rate is updated at every communication round and calculated by the server._ **(ii)** _On the other hand, our proposed FedLALR supports the adaptive interval update, which is more flexible and could further reduce the communication cost._ ## IV Convergence Analysis In this section, we establish the linear speedup for the FedLALR algorithm in the difficult non-convex setting. Below, we make several commonly used assumptions for characterizing the convergence of stochastic non-convex optimization. ### _Assumptions_ **Assumption 1**.: _Smoothness. For all \(i\in[N]\), \(f_{i}\) is differentiable and its gradient is L-Lipschitz._ **Assumption 2**.: _Bounded variances. Each gradient estimator is unbiased, i.e., \(\mathbb{E}[g_{i,t}]=\nabla f_{i}(x_{i,t})\). We assume there exists \(\sigma\) such that \(\mathbb{E}\|g_{i,t}-\nabla f_{i}(x_{i,t})\|^{2}\leq\sigma^{2},\forall i,t\)._ **Assumption 3**.: _Bounded stochastic gradients. Each coordinate of the stochastic gradient \(g_{i,t}\) is bounded, i.e., \(|(g_{i,t})_{j}|\leq G_{\infty}\), or simply \(\|g\|_{\infty}\leq G_{\infty}\), and the local gradient is also uniformly bounded: \(\|\nabla f_{i}(x)\|_{\infty}\leq G_{\infty}\)._ Please note that in Assumption 3 we use a bounded stochastic gradient \(g\), which is stronger than a bounded gradient \(\|\nabla f(x)\|^{2}\). The bounded stochastic gradient assumption is adopted in [22, 21] and the bounded gradient assumption is adopted in [47]. Both of them are widely adopted in adaptive stochastic gradient methods. Under the finite-sum setting, these two are similar in that one can be derived from the other. **Remark 2**.: _Usually, data heterogeneity in federated learning in the stochastic non-convex setting is measured by_ \[\|\nabla f(x)-\nabla f_{i}(x)\|^{2}\leq\sigma_{G}^{2},\] _where \(\sigma_{G}\) is a constant that is the upper bound of the dataset heterogeneity. Here, we comment that the bounded stochastic gradient assumption implies the above data heterogeneity since_ \[\|\nabla f(x)-\nabla f_{i}(x)\|^{2}\leq 2\|\nabla f(x)\|^{2}+2\|\nabla f_{i}(x )\|^{2}\leq 4dG_{\infty}^{2},\] _when we set \(\sigma_{G}^{2}=4dG_{\infty}^{2}\), where \(d\) is the dimension of \(x\)._ ### _Full clients participation_ The following theorem characterizes the linear speedup of FedLALR in the stochastic non-convex setting. **Theorem 1** (Full clients participation).: _We update the parameters with full client participation. Under Assumptions 1, 2, 3, \(\alpha\leq\frac{3\epsilon}{20L}\) and \(K_{t}=K\) is a fixed constant in Algorithm 1. 
We have_ \[\mathbb{E}\left[\frac{\sum_{t=0}^{T-1}\sum_{k=1}^{K}\|\nabla f(\bar{x}_{t,k}) \|^{2}}{KT}\right]\leq\frac{2G_{\infty}(f(Z_{1})-f^{*})}{\alpha KT}+\Phi\] _where \(K\) is the period of the local updates, \(T\) is the iteration number of the global synchronization, \(\bar{x}_{t,k}=\frac{1}{N}\sum_{i=1}^{N}x_{t,k,i}\), \(N\) is the number of the clients, and_ \[\Phi= 2G_{\infty}\left(\Big{(}\frac{2L^{2}\beta_{1}^{2}G_{\infty}^{2} d}{(1-\beta_{1})^{2}\epsilon^{4}}+\frac{K^{2}L^{2}G_{\infty}^{2}}{\epsilon^{4}}(1+4K^{ 2}(1-\beta_{1})^{2}d)\Big{)}\alpha^{2}\right.\] \[+\Big{(}(2-\beta_{1})\frac{G_{\infty}^{2}Kd(G_{\infty}^{2}- \epsilon^{2})}{(1-\beta_{1})\epsilon^{3}}+\frac{3d(G_{\infty}^{2}-\epsilon^{2} )G_{\infty}^{2}}{2\epsilon^{3}(1-\beta_{1})}\Big{)}\frac{1}{T}\] \[+\Big{(}\frac{5LG_{\infty}^{2}d(G_{\infty}^{2}-\epsilon^{2})^{2}} {8\epsilon^{6}(1-\beta_{1})^{2}}(2\beta_{1}^{2}+(1-\beta_{1})^{2})\] \[+\left.\frac{5LKG_{\infty}^{2}d(G_{\infty}^{2}-\epsilon^{2})^{2} }{2\epsilon^{6}}\right)\frac{\alpha N}{T}+\frac{5Ld\sigma^{2}}{4\epsilon^{2}} \frac{\alpha}{N}\right).\] **Corollary 1** (Linear speedup).: _When taking base learning rate \(\alpha=\min\left(\sqrt{\frac{N}{KT}},\frac{3\epsilon}{20L}\right)\), we have the convergence rate:_ \[\mathbb{E}\left[\frac{\sum_{t=0}^{T-1}\sum_{k=1}^{K}\|\nabla f(\bar{x}_{t,k}) \|^{2}}{KT}\right]=O\left(\frac{1}{\sqrt{NKT}}\right). \tag{7}\] **Remark 3** (Communication complexity).: _To achieve an \(O(\epsilon)\) accurate solution, our method has \(O(\frac{1}{\sqrt{NKT}})\) convergence. Then it needs \(O(\frac{1}{NK\epsilon^{2}})\) iterations. The communication complexity = number of communication rounds \(\times\) number of communicated clients each communication rounds. So the communication complexity is \(O(\frac{1}{K\epsilon^{2}})\)._ **Corollary 2** (Restart momentum).: _In each global synchronization step, the clients do not send their momenta and the server does not receive and broadcast the momenta. The client initializes their momenta with 0 at the start of the local steps._ Yu et al. [23] have proposed a similar strategy which is able to reduce communication cost. In Algorithm 1, in the line 4, we set \(m_{t,0,i}=0\). In this setting, the convergence rate can also achieve linear speedup. We have_ \[\mathbb{E}\left[\frac{\sum_{t=0}^{T-1}\sum_{k=1}^{K}\|\nabla f(\bar{x}_{t,k})\|^ {2}}{KT}\right]=O\left(\frac{1}{\sqrt{NKT}}\right).\] _In other words, the restart momenta strategy does not influence the domination item of convergence bound._ **Corollary 3** (Maximize the second order momentum).: _In Algorithm 1, the central server updates the second-order momenta \(\hat{v}_{t}\) by averaging the collected second-order momenta from the clients which reads_ \[\hat{v}_{t}=\frac{1}{N}\sum_{i=1}^{N}\hat{v}_{t,K,i}.\] _We can also replace the average operator as maximization:_ \[\hat{v}_{t}=\underset{i}{max}(\hat{v}_{t,K,i}).\] _This method can also achieve linear speedup._ ### _Adaptive interval_ **Theorem 2** (Adaptive interval).: _Under the Assumptions 1,2,3, we take \(\alpha=\min(\sqrt{\frac{N}{\sum_{t=0}^{T-1}K_{t}}},\frac{3\epsilon}{20L})\) and full clients participation in Algorithm 1. The adaptive local update is set as \(K_{t}<O(logt)\). 
We have_ \[\mathbb{E}\left[\frac{\sum_{t=0}^{T-1}\sum_{k=1}^{K_{t}}\|\nabla f(\bar{x}_{t, k})\|^{2}}{\sum_{t=0}^{T-1}K_{t}}\right]=O\left(\frac{1}{\sqrt{N\sum_{t=0}^{T-1 }K_{t}}}\right),\] _where \(N\) is the number of the clients, \(K_{t}\) is the period of the local updates, \(T\) is the iteration number of the global synchronization._ **Remark 4** (Communication complexity).: _Our method has a convergence rate of \(O\left(\frac{1}{\sqrt{N\sum_{t=0}^{T-1}K_{t}}}\right)\). We take \(K_{t}=\log(t)\) for simplicity. To achieve an \(O(\epsilon)\) accurate solution, it needs \(\frac{1}{N\epsilon^{2}W(\frac{1}{N\epsilon^{2}})}\) iterations where \(W(\cdot)\) is Lambert W-Function. So the communication complexity is \(O\left(\frac{1}{\epsilon^{2}W(\frac{1}{N\epsilon^{2}})}\right)\)._ ## V Experiments In this section, we demonstrate the efficacy of the proposed FedLAR by applying it to solve federated learning problems on CV and NLP tasks. ### _Implementation details_ **Dataset** We conduct experiments on two tasks, an image classifier task and a language model task. In the image classifier task, we evaluate two benchmarks, the CIFAR-10 and CIFAR-100 [48], which contain 50000 images with 10 classes and 100 classes, respectively. To generate a non-IID dataset, we use Dirichlet distribution same as [49, 50]. In Figure 2, we split the data into 100 parties and draw from the Dirichlet distribution with the parameter of 0.3 and 0.6, respectively. In the language model task, the Shakespeare dataset is collected from the work of William Shakespeare. Each client represents a speaking role. **Baselines.** In our study, we conducted a comparative analysis of our approach with four existing methods: FedAvg [4], FedAdam [7], FedAMS [51], and LaFedOPT [9]. Among these methods, FedAvg stands out as one of the most widely adopted federated optimization techniques. FedAdam, on the other hand, is recognized for its effectiveness in adaptive federated learning, as it automatically adjusts the learning rate on the server. Similarly, the FedAMS method utilizes the AMSGrad algorithm to implement an adaptive learning rate strategy during the server training phase. Notably, the FedAMS method has two variants FedAMSv1 and FedAMSv2 that differ in the computation of epsilon. Furthermore, during the client training period, Wang et al. propose a local adaptive federated optimization method referred to as LAFedOPT. **Network architectures.** For the CV task, we use ConvMixer [52] as the deep neural network. The kernel size is 5, the patch size is 2 and the number of repetitions of the ConvMixer layer is 8. The other task is training a language model on the Shakespeare dataset based on the LEAF [53] with a stacked LSTM, the same as [50]. **Hyperparameter setting.** To show the effect of heterogeneity, the CIFAR10&100 datasets are partitioned based on the Dirichlet distribution with the parameter of 0.6 and 0.3. We carefully tune the hyperparameter, including the base learning rate, weight decay parameter and the frequency of learning rate decay, to achieve reasonable results and report Fig. 2: Heterogeneity data partition using Dirichlet distribution with parameter 0.6 and 0.3 in CIFAR-10 and CIFAR-100. the tuned parameters as follows. The batch size is set to 50, the learning rate decay is set to 0.998, and the weight decay is set to 0.001. 
For the CIFAR10 dataset, we set the momentum parameter \((\beta_{1},\beta_{2})\) as \((0.9,\ 0.99)\) for FedAdam, FedAMSv1, and FedAMSv2 except FedLALR with \((0.9,\ 0.995)\) and LaFedOPT with \((0.8,\ 0.999)\). The \(\epsilon\) is set to 0.01, 1e-4, 1e-2, 1e-4 and 1e-8 for FedAdam, FedAMSv1, FedAMSv2, LAFedOPT and FedLALR, respectively. The local learning rate is set to 1.0 for FedAvg, 1e-4 for LAFedOPT, 2e-3 for FedLALR and 0.1 for FedAdam, FedAMSv1, and FedAMSv2.- The global learning rate of FedAMSv1 and FedAMSv2 is set to 0.1, and it is set to 1.0 for FedAdam. For the CIFAR100 dataset, the momentum parameter changes to \((0.9,\ 0.999)\) and \((0.9,\ 0.995)\) for LAFedOPT and FedLALR. The local learning rate is set to 2.0 for FedAvg, 1e-3 for LAFedOPT, 2e-3 for FedLALR and 1.0 for FedAdam, FedAMSv1, and FedAMSv2. The global learning rate of FedAdam, FedAMSv1 and FedAMSv2 is set to 0.1. In this paper, we conducted the training process with a considerable number of participating clients and a high number of local epochs. Specifically, we set the communication rounds to 100 to accommodate this extensive participation. The total training data has been iterated through approximately 250 epochs to achieve comprehensive learning. For the Shakespeare dataset, both the hyperparameter with the non-IID setting and the IID setting are the same. We set batch size as 100, and weight decay parameter as 1e-4. The local learning rate is 1.0 for FedAvg, 5e-2 for FedLALR, 1e-3 for LAFedOPT, and 0.1 for FedAdam, FedAMSv1, and FedAMSv2. The global learning rate of FedAdam, FedAMSv1 and FedAMSv2 is set to 0.1. The momentum parameter \((\beta_{1},\beta_{2})\) is set as \((0.9,\ 0.998)\) for FedAdam, FedAMSv1, and FedAMSv2 except FedLALR with \((0.8,\ 0.998)\) and LaFedOPT with \((0.5,\ 0.999)\), respectively. The local epoch is set to 5. ### _CV tasks_ Results in Figures 3 show the performance curves under the setting with 100 local clients and Dirichlet distribution parameter being 0.6 and 0.3, respectively. At each communication round, the server chooses 50 clients to update the model parameter. The local training epoch is set up to 5. The results show that our optimization method converges the fastest in the six algorithms. The experiment also shows that the result of our method achieves competitive generalization. In the early period of the communication rounds, our method converges fast. At the end of the train steps, our results indicate that FedLALR gets slightly better performance than other methods. ### _NLP task_ This dataset is built from the works of Shakespeare. Each client corresponds to a speaking role. For the Non-IID setting, each client has its data of the role. For the IID settings, the data is mixed and then split into pieces before distributing. For the Non-IID setting, we choose 100 roles for each clients. In each round, we choose 10 clients to participate in the communication. For the IID setting, 10 clients with mixed data are chosen randomly to participate in the local training step. The results are shown in Figure 4, which illustrates that our method gets the best result both on the Non-IID and IID settings. Under the Non-IID setting, both FedLALRand LAFedopt excel in convergence rate and accuracy when compared to other baseline methods. While our method converges slightly slower than LAFedopt, it achieves superior accuracy. Fig. 4: The loss and accuracy v.s. communication rounds. We choose 100 roles from the Shakespeare dataset as our clients. 
For the IID data distribution, we mix data across the clients. The server chooses 10 clients to participate in the training at each communication rounds. Fig. 3: The top-1 accuracy v.s. communication rounds. The dataset is split into 100 parties based on the Dirichlet distribution with the parameter 0.6 and 0.3, respectively. The two figures on the left come from the CIFAR-10 dataset, while the two on the right are from the CIFAR-100 dataset. The server chooses 50 clients to participate in the training at each communication round. Both FedAMSv1 and FedAMSv2 exhibit slow convergence and display instability during the initial phases. FedAvg, though converging quickly, falls short in its final performance compared to other methods. In the IID setting, our method stands out, boasting the highest accuracy and convergence rate. While FedAvg, LAFedopt, and FedAdam demonstrate rapid convergence, their ultimate accuracy does not match up to our method. Additionally, the trajectories of FedAMSv1 and FedAMSv2 show inconsistencies, lacking smoothness. These results indicate that our method performs well in NLP tasks. ### _Ablation study_ In this section, we conduct several ablation studies to further demonstrate the efficacy of the proposed FedLALR. **Heterogeneity.** In this section, we evaluate our method under two distinct heterogeneous settings. Specifically, we examine the effects of using parameters 0.3 and 0.6 for the Dirichlet distribution to dictate data participation. As illustrated in Figure 2, a lower parameter value for the Dirichlet distribution implies greater data heterogeneity, meaning that the data assigned to each client may encompass fewer labels in the classification task. Results specific to the CV task are depicted in Figure 3. Observing the Accuracy-Communication Rounds curve, our method demonstrates rapid growth among the six evaluated methods across both heterogeneous settings. For the CIFAR-10 dataset, our approach achieves superior accuracy. Meanwhile, in the CIFAR-100 dataset, our method's curve rises more swiftly compared to the curves of other methods. Notably, our technique proves more stable than both FedAdam, FedAMSv1, and FedAMSv2, as the latter three exhibit significant accuracy drops during training. This suggests that our method is well-equipped to handle situations where the data is highly heterogeneous. **Scalability.** In this section, we assess the scalability of our method by involving varying numbers of clients in the training process. We partition the CIFAR-10 and CIFAR-100 datasets into 100 groups using the Dirichlet distribution with a parameter of 0.6. We then select 25 clients and 75 clients to participate in the training during each communication round. When opting for 25 clients per communication round, the total number of communication rounds extends to 200. Conversely, with 75 participating clients, the total number of communication rounds reduces to 80 based on similar calculations. The results are presented in Figure 5. Notably, the accuracy curve of our method exhibits the steepest ascent in both CIFAR-10 and CIFAR-100 datasets. When compared to experiments involving 50 clients, our method produces equivalent results. This indicates that our method maintains its performance when scaling from 25 to 75 clients, underscoring its robustness in accommodating varying numbers of clients. 
**Adaptive interval.** We test adaptive interval settings on the CIFAR10 dataset and CIFAR100 dataset with 100 parties and 50 clients participation to show the effectiveness of the adaptive local interval. We set \(K_{t}=K_{initial}+\lfloor\log_{K_{a}}t\rfloor\) based on the findings of Theorem 2 and Remark 4. In our experiments, we evaluate two distinct adaptive interval settings. In the first setting, \(k_{init}\) is set to 3, and \(k_{init}\) is set to 4.0. In the second setting, \(k_{init}\) is set to 4.0 and \(k_{init}\) is set to 2.0. We also conduct an experiment with a fixed local interval number Fig. 5: The top-1 accuracy v.s. communication rounds. Two figures on the left were generated using the CIFAR-10 dataset, with different numbers of clients: 25 and 75, respectively. On the right, there are two additional figures generated using the CIFAR-100 dataset, with the same client numbers: 25 and 75. Fig. 6: The top-1 accuracy v.s. communication rounds. We choose 50 clients at each communication round and adaptively change the local interval. We compare optimization methods with a fixed interval and methods with an adaptive interval. set at 10 to serve as a comparison benchmark. We'd like to highlight that for the setting where \(k_{init}=4.0,k_{alpha}=2.0\), the number of the local interval is 10 when the number of communication rounds ranges between 64 and 100. The results of these experiments are reported in Figure 6. Our findings indicate that the adaptive interval method surpasses the fixed interval method in performance across two datasets. Specifically, when comparing the \(k_{init}=4.0,k_{alpha}=2.0\) setting with the fixed local interval, the adaptive interval approach exhibits similar performance to the fixed method. However, it's noteworthy that the loss curve for the adaptive interval descends more rapidly during the initial communication rounds, eventually stabilizing to a comparable final loss. This suggests that the adaptive technique gains an early advantage from its initial smaller intervals. Furthermore, when comparing the two adaptive interval configurations, the data indicates that the setting with \(k_{init}=3.0,k_{alpha}=4.0\) yields better results in terms of both loss and accuracy metrics. This suggests that with careful tuning, the adaptive method can enhance accuracy and expedite the reduction in loss. **Local learning rate trajectory.** According to our method, \(\hat{v}\) automatically controls the learning rate. Here, we drew the variation of square norm of \(\hat{v}\) from two random clients during local training. It is recorded at communication rounds of 1, 31, 61, and 91 on the CIFAR10 of 50 participation with 100 clients when the Dirichlet parameter is set to 0.6. The results are shown in Figure 7. Two clients have different \(\hat{v}\) due to the data heterogeneity. During the local training steps, the learning rate tunes in each client. As these \(\hat{v}\) increase with a different speed, the learning rate decreases with a different speed in each client. ## VI Conclusion In this paper, we propose FedLALR method for federated learning, which can automatically adjust the learning rate at local steps to exploit the curvature information related to local data distribution. Moreover, FedLALR is proposed with a fixed interval and adaptive interval, respectively. 
We theoretically analyze the convergence rate of the proposed FedLALR in the challenging non-convex stochastic setting, showing that our approach achieves linear speedup with respect to the number of clients for both the fixed and the adaptive local interval. Extensive experiments on CV and NLP tasks show that our method converges considerably faster, which is consistent with the theoretical analysis. Although FedLALR converges faster than FedAdam thanks to the local adaptive learning rate, the two methods show comparable generalization performance; in general, fast convergence does not imply high generalization accuracy. We leave the analysis of the generalization ability of FedLALR, especially on large-scale federated learning tasks, as a direction for future research.
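For reference, the adaptive local-interval schedule \(K_{t}=K_{initial}+\lfloor\log_{K_{a}}t\rfloor\) used in the ablation study can be written as a small helper; this is an illustration only, with the two parameter settings mirroring the ablation:

```python
import math

def local_interval(t, k_init, k_alpha):
    """Number of local update steps at communication round t (t >= 1)."""
    return k_init + math.floor(math.log(t, k_alpha))

for t in (1, 10, 64, 100):
    print(t, local_interval(t, k_init=3, k_alpha=4),  # setting 1
          local_interval(t, k_init=4, k_alpha=2))     # setting 2
# With k_init = 4 and k_alpha = 2 the interval reaches 10 for rounds 64..100,
# matching the fixed-interval baseline of 10 used for comparison.
```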
2305.19807
Variational quantum algorithms for scanning the complex spectrum of non-Hermitian systems
Solving non-Hermitian quantum many-body systems on a quantum computer by minimizing the variational energy is challenging as the energy can be complex. Here, based on energy variance, we propose a variational method for solving the non-Hermitian Hamiltonian, as zero variance can naturally determine the eigenvalues and the associated left and right eigenstates. Moreover, the energy is set as a parameter in the cost function and can be tuned to obtain the whole spectrum, where each eigenstate can be efficiently obtained using a two-step optimization scheme. Through numerical simulations, we demonstrate the algorithm for preparing the left and right eigenstates, verifying the biorthogonal relations, as well as evaluating the observables. We also investigate the impact of quantum noise on our algorithm and show that its performance can be largely improved using error mitigation techniques. Therefore, our work suggests an avenue for solving non-Hermitian quantum many-body systems with variational quantum algorithms on near-term noisy quantum computers.
Xu-Dan Xie, Zheng-Yuan Xue, Dan-Bo Zhang
2023-05-31T12:50:22Z
http://arxiv.org/abs/2305.19807v2
# Variational quantum eigensolvers for the non-Hermitian systems by variance minimization ###### Abstract Solving non-Hermitian quantum many-body systems on a quantum computer by minimizing the variational energy is challenging as the energy can be complex. Here, based on energy variance, we propose a variational method for solving the non-Hermitian Hamiltonian, as zero variance can naturally determine the eigenvalues and the associated left and right eigenstates. Moreover, the energy is set as a parameter in the cost function and can be tuned to obtain the whole spectrum, where each eigenstate can be efficiently obtained using a two-step optimization scheme. Through numerical simulations, we demonstrate the algorithm for preparing the left and right eigenstates, verifying the biorthogonal relations, as well as evaluating the observables. We also investigate the impact of quantum noise on our algorithm and show that its performance can be largely improved using error mitigation techniques. Therefore, our work suggests an avenue for solving non-Hermitian quantum many-body systems with variational quantum algorithms on near-term noisy quantum computers. ## I Introduction The exploration of non-Hermitian physics holds great importance in physics, as numerous natural physical phenomena exhibit non-Hermitian characteristics [1]. Recently, much attention has been paid to non-Hermitian physical systems due to their exceptional properties, including PT symmetry breaking [2; 3; 4; 5], skin effect [6; 7; 8], topological properties [9; 10; 11; 12] and so on [13; 14; 15; 16]. Nevertheless, tackling the many-body non-Hermitian physical systems presents a challenge for classical computers due to the exponential growth of Hilbert space [17]. While tensor network provides a potential powerful method to solve non-Hermitian systems [18; 19; 20], it is imperative to limit the matrix dimension to a moderate threshold value to alleviate computational complexity[21]. Quantum computing has the potential to provide solutions for hard problems [22; 23; 24; 25]. For near-term quantum computers, variational quantum eigensolver (VQE) is designed to solve eigenstates of many-body systems [26; 27]. It can efficiently calculate the ground state [28; 29; 30; 31] and low-lying excited states [32; 33; 34; 35] of a given Hamiltonian. The efficacy of VQE algorithms is based on minimizing the variational energy [28]. However, non-Hermitian systems may not have a minimum energy as the eigenvalue of the Hamiltonian may be a complex number, rendering the conventional VQE not implementable. Alternatively, the energy variance can be utilized as the cost function, as it can be used to determine an eigenstate since any eigenstate is characterized by zero energy variance [36; 37]. This approach allows us to bypass the energy minimization issue in non-Hermitian Hamiltonians [38]. Therefore, energy variance can be leveraged as a cost function to design the variational quantum algorithm to effectively solve the non-Hermitian Hamiltonian. Here, we develop a variational quantum algorithm for solving non-Hermitian quantum systems by using a cousin of the energy variance as the cost function. The cost functon differs from the conventional energy variance in that the energy is parameterized with a complex number. By zero-variance principle, the eigenstates and the corresponding eigenenergies can be obtained and self-verified by minimizing the energy variance to zero. 
For efficient optimization, we adopt a two-step optimization strategy, which allows the system to evolve towards the target quantum state by adjusting the optimization sequence among parameters. The effectiveness of the algorithm is demonstrated through numerical simulations. Additionally, we investigate the impact of quantum noise on the algorithm and improve its performance by incorporating error mitigation techniques. Therefore, our work highlights the significant potential of quantum variational algorithms for simulating non-Hermitian physics. The rest of this paper is organized as follows. In Sec. II, we first introduce the non-Hermitian Hamiltonian in brief, and then propose the variational quantum algorithm as well as the optimization. In Sec. III, we present the results of the numerical simulation of the algorithm, analyze the effects of quantum noise, and show the improvements using error mitigation. Finally, we draw conclusions in Sec. IV. ## II The quantum variational algorithm In this section, we initially provide a concise introduction to the non-Hermitian systems and the difficulties in utilizing the conventional VQE algorithm in this scenario. Then, we present a quantum variational algorithm that is suitable for the non-Hermitian Hamiltonian. Afterwards, a two-step optimization strategy is proposed to determine the desired eigenstates and eigenvalues. Finally, we show how to estimate operator expected values for non-Hermitian systems. ### Motivation In an isolated quantum system, Hermiticity is a fundamental postulate in the framework of quantum mechanics. This property guarantees that the expectation value of the Hamiltonian with respect to a given quantum state should be a real number. In contrast, for an open physical system, the Hamiltonian operator describing the system may not possess the property of Hermiticity [39], i.e., \(\hat{H}\neq\hat{H}^{\dagger}\). In this case, \(\hat{H}\) and \(\hat{H}^{\dagger}\) have distinct sets of eigenstates, known as right eigenstates and left eigenstates, respectively, \[\hat{H}\ket{\psi_{n}^{r}}=E_{n}\ket{\psi_{n}^{r}},\quad\hat{H}^{\dagger}\ket{ \psi_{n}^{l}}=E_{n}^{*}\ket{\psi_{n}^{l}}, \tag{1}\] where \(E_{n}^{*}\) is the complex conjugate of \(E_{n}\) and \(n\) is the spectral label. In the context of non-Hermitian physics, \(\{\psi_{n}^{r}\}\)and\(\{\psi_{n}^{l}\}\) are referred to as biorthogonal basis vectors, because a biorthogonal relationship exists between left and right eigenstates [40], \[\left\langle\psi_{m}^{l}|\psi_{n}^{r}\right\rangle=c_{n}\delta_{mn}, \tag{2}\] in which \(c_{n}=\left\langle\psi_{n}^{l}|\psi_{n}^{r}\right\rangle\) and \(\delta_{mn}\) denotes the Kronecker delta function. When the system is in the quantum state \(|\psi_{n}^{r}\rangle\), the expected value of the operator \(\hat{A}\) can be expressed as [39] \[\langle\hat{A}\rangle=\frac{\langle\psi_{n}^{l}|\hat{A}|\psi_{n}^{r}\rangle}{ \langle\psi_{n}^{l}|\psi_{n}^{r}\rangle}, \tag{3}\] It is worth noting that in the non-Hermitian case, the expectation value of the Hamiltonian cannot be guaranteed to be a real number. Computing the energy levels of many-body systems using classical computers becomes increasingly difficult as the system size grows. This is due to the NP-hard problem for solving eigenvalues of generic Hamiltonian, which scales exponentially with the system size [17]. Quantum computing has great potential for solving the eigenvalue problem of massive Hamiltonians using the variational quantum eigensolver (VQE) algorithm. 
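As a concrete reference point for Eqs. (1)-(3), the left/right eigenstates and the biorthogonal relation can be checked classically for a small non-Hermitian matrix. The following is a minimal sketch using dense diagonalization (illustration only; the variational algorithm below targets the same objects without diagonalizing \(\hat{H}\)):

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
dim = 4
H = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))  # generic non-Hermitian H

# Right eigenvectors satisfy H vr[:, n] = E_n vr[:, n];
# left eigenvectors satisfy H^dagger vl[:, n] = conj(E_n) vl[:, n].
E, vl, vr = eig(H, left=True, right=True)

# Biorthogonality, Eq. (2): <psi_m^l | psi_n^r> = c_n delta_mn (non-degenerate spectrum assumed).
overlap = vl.conj().T @ vr
print(np.round(np.abs(overlap), 8))  # approximately diagonal

# Expectation value of an operator A in the biorthogonal sense, Eq. (3).
A = rng.normal(size=(dim, dim))
n = 0
expval = (vl[:, n].conj() @ A @ vr[:, n]) / (vl[:, n].conj() @ vr[:, n])
print(expval)  # generally complex
```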
VQE is a highly promising quantum algorithm for near-term quantum computers, which can be utilized to compute the ground state and low-lying excited states of a given system. The VQE algorithm employs a parameterized quantum circuit to prepare a trial state, and the expectation value of the Hamiltonian is measured. Subsequently, the parameters of the trial state are optimized iteratively on a classical computer to minimize the cost function, which is designed based on the variational principle in the variational quantum algorithm [41]. In VQE, the energy of a system is commonly used to construct the cost function. However, the classical optimizer requires the expectation value to be a real number when minimizing the cost function. Hence, while the VQE algorithm has demonstrated great potential in solving the eigenvalue problem of Hermitian Hamiltonians, it has limitations in solving non-Hermitian physical systems due to the non-real expected value of such systems. To surmount this obstacle, we introduce a cost function in the variational quantum algorithm, which represents the energy variance of the system. The energy variance is zero if and only if the system is in an eigenstate. Thus, we can get the eigenstates of the system and the corresponding eigenvalues. Our algorithm facilitates the application of the VQE algorithm to the computation of non-Hermitian eigenenergies, enabling its use in a broader range of physical systems. In the following, we provide a detailed exposition of the algorithm. ### Variational quantum eigensolver Given an open physical system that can be described by a non-Hermitian Hamiltonian \(\hat{H}\), our aim is to employ quantum variational algorithms to compute the system's eigenvalues and eigenstates. To begin with, we employ the Hermitianization technique to construct a Hermitian Hamiltonian from the given non-Hermitian Hamiltonian \(\hat{H}\) \[\hat{M}(E)=(\hat{H}^{\dagger}-E^{*})(\hat{H}-E), \tag{4}\] where \(E^{*}\) is the complex conjugate of \(E\). The variable \(E\) in the Hamiltonian matrix \(M(E)\) is a complex number, which reflects the fact that the eigenvalues of the non-Hermitian Hamiltonian are generally complex. Through the construction method of Eq. (4), we obtain a Hermitian Hamiltonian matrix \(M(E)\) that is non-negative. The Hermitian Hamiltonian matrix \(M(E)\) is semi-positive definite which satisfies \[\left\langle\hat{M}(E)\right\rangle=\left\langle\psi\right|(\hat{H}^{\dagger}- E^{*})(\hat{H}-E)\left|\psi\right\rangle\geq 0. \tag{5}\] As the Eq. 5 shows, the expected value of \(\hat{M}\) is actually closely related to the variance energy of the Hamiltonian \(\hat{H}\). The difference is that in \(\left\langle\hat{M}(E)\right\rangle\) the energy is parameterized. The condition for \(\left\langle\hat{M}(E)\right\rangle\) to be equal to zero is if and only if \[(\hat{H}-E)\left|\psi\right\rangle=0. \tag{6}\] This condition implies that the state vector \(\left|\psi\right\rangle\) is a right eigenstate of the Hamiltonian \(\hat{H}\), with the corresponding eigenvalue Figure 1: Illustration of the variational quantum algorithm, which can prepare the eigenstates and compute the corresponding eigenenergies. The quantum circuit \(U(\theta)\) is parameterized by \(\theta\) and should be performed on a quantum computer. The variable parameters \((\theta,E)\) are updated and optimized with classical computing in order to minimize the cost function. \(E\). 
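This zero-variance condition can be checked directly with classical linear algebra for a small matrix: \(\langle\hat{M}(E)\rangle=\|(\hat{H}-E)\left|\psi\right\rangle\|^{2}\geq 0\), with equality exactly at a right eigenpair. A minimal numerical sketch (illustration only; on the quantum device this quantity is instead estimated from Pauli measurements of \(\hat{M}(E)\)):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4
H = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))

def cost(H, E, psi):
    """<M(E)> = <psi|(H^dag - E*)(H - E)|psi> = ||(H - E)|psi>||^2 for a normalized |psi>."""
    r = (H - E * np.eye(H.shape[0])) @ psi
    return np.vdot(r, r).real

E_all, vr = np.linalg.eig(H)
psi0 = vr[:, 0] / np.linalg.norm(vr[:, 0])
print(cost(H, E_all[0], psi0))      # ~0: exact right eigenpair

psi_rand = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi_rand /= np.linalg.norm(psi_rand)
print(cost(H, E_all[0], psi_rand))  # > 0: any other state
```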
Likewise, if we want to solve the left eigenstate, we only need to replace \(\hat{M}(E)\) with \(\hat{M}^{{}^{\prime}}(E)\), \[\hat{M}^{{}^{\prime}}(E)=(\hat{H}-E)(\hat{H}^{\dagger}-E^{*}). \tag{7}\] The cost function for the variational quantum algorithm can be designed as following, \[\mathcal{L}(\theta,E) =\left\langle\psi(\theta)\right|(\hat{H}^{\dagger}-E^{*})(\hat{H }-E)\left|\psi(\theta)\right\rangle \tag{8}\] \[=\left\langle\psi(\theta)\right|\hat{M}(E)\left|\psi(\theta) \right\rangle,\] where \(\left|\psi(\theta)\right\rangle=U(\theta)\left|0\right\rangle\) and \(U(\theta)\) is a unitary operation, which can be implemented with parameterized quantum circuits. As presented in Eq. (8), the cost function \(\mathcal{L}(\theta,E)\) is equivalent to the expected value of \(\hat{M}(E)\) in a specific quantum state \(\left|\psi(\theta)\right\rangle\). By decomposing \(\hat{M}(E)\) as a sum of Pauli operators and performing each Pauli measurement alone, the cost function can be obtained efficiently. The cost function can be minimized with a hybrid quantum-classical optimization method. As depicted in the Fig. 1, the \(U(\theta)\) is utilized to prepare the quantum state \(\left|\psi(\theta)\right\rangle\) on the quantum computer. Subsequently, the expected value of \(\hat{M}(E)\) is measured, and classical optimization techniques are applied to minimize the expected value by updating the parameters \((\theta,E)\). Upon achieving a minimum value of the cost function, a right eigenstate \(\left|\psi(\theta)\right\rangle\) of \(\hat{H}\) and the corresponding eigenvalue \(E\) are obtained. ### Optimization strategy For a given Hamiltonian \(\hat{H}\), there may exist multiple sets of eigenstates and corresponding eigenvalues which makes the cost function reach the minimum value of zero. In general, when minimizing a cost function using a classical optimizer, different initial parameter values can lead to different solutions for the optimization problem. Therefore, in order to obtain a specific eigenstate, such as the ground state, or the eigenstate with the largest loss, we need to design reasonable optimization schemes. To this end, we adopt a two-step optimization strategy, which consists of two different optimization processes. For convenience, we denote \(E\) as \(E_{r}+\mathrm{i}E_{i}\), where \(E_{r}\) is the real component and \(E_{i}\) is the imaginary component. Thus, the cost function \(\mathcal{L}(\theta,E)\) can be represented as \(\mathcal{L}(\theta,E_{r},E_{i})\). As described in Algorithm 1, in the first step, we leave the initial value of \(E_{r}\) unchanged and update the parameters \(E_{i}\) and \(\theta\) to minimize the cost function; in the second step, all parameters are updated together to minimize the cost function to 0. Adopting the two-step optimization strategy, we can obtain the eigenstate \(\left|\psi(\theta)\right\rangle\) of the Hamiltonian \(\hat{H}\) with high accuracy. The real component of the corresponding eigenvalue \(E\) is very close to the initial value \(E_{r0}\). To obtain the ground state of the Hamiltonian \(\hat{H}\), it is necessary to set the real component of \(E_{r0}\) to a sufficiently small value. This is because the ground state of the Hamiltonian has the lowest eigenenergy (concerning the real part of the energy). ``` Input: Input the initial energy value, \(E_{r}=E_{r0},E_{i}\), and the initial parameter set, \(\theta\). 
while \(\mathcal{L}(\theta,E_{r},E_{i})\) has not converged do
 \(\theta\leftarrow\theta-\alpha\frac{\partial\mathcal{L}}{\partial\theta}\);
 \(E_{i}\leftarrow E_{i}-\alpha\frac{\partial\mathcal{L}}{\partial E_{i}}\);
end while
while \(\mathcal{L}(\theta,E_{r},E_{i})\) has not converged do
 \(\theta\leftarrow\theta-\alpha\frac{\partial\mathcal{L}}{\partial\theta}\);
 \(E_{i}\leftarrow E_{i}-\alpha\frac{\partial\mathcal{L}}{\partial E_{i}}\);
 \(E_{r}\leftarrow E_{r}-\alpha\frac{\partial\mathcal{L}}{\partial E_{r}}\);
end while
return \(E_{r},E_{i},\theta\)
```
**Algorithm 1** Two-step optimization

By utilizing Algorithm 1, we can effectively identify an eigenstate whose eigenenergy is in close proximity to the initial value \(E_{r0}\). Consequently, by varying the initial value \(E_{r0}\), we can systematically traverse the eigenstates of the Hamiltonian. To calculate the ground state and the low-lying excited states, we adopt the strategy described in Algorithm 2 (see Appendix A), which is based on Algorithm 1. It first solves for the ground-state energy and then, starting from it, finds the low-lying excited states step by step by gradually increasing the value of \(E_{r0}\). This approach enables us to explore the complete spectrum. Similar to the procedure for the ground state, the two-step optimization can also be used to determine the eigenstate with the largest absolute value of the imaginary component of the energy. First, we keep the initial value of \(E_{i}\) constant and update the parameters \(E_{r}\) and \(\theta\); then, all parameters are updated together to drive the cost function to 0. In this way we obtain an eigenstate \(\left|\psi(\theta)\right\rangle\) whose eigenvalue \(E\) has an imaginary component close to the initial value \(E_{i0}\); hence we simply set \(E_{i0}\) to a large value.

### Biorthogonal relations and operator expected values

As shown in Eq. (2), there is a biorthogonal relationship between the left and right eigenvectors of the non-Hermitian Hamiltonian. Under the biorthogonal basis, the expectation value of an operator is not directly measurable as in the Hermitian case. In this section, we describe in detail how to use the Hadamard test to verify the biorthogonal relationship and to determine the expectation value of a given operator in an eigenstate. Initially, we employ the variational quantum algorithm to prepare both the left eigenstates \(\left|\psi_{m}^{l}\right\rangle\) and the right eigenstates \(\left|\psi_{n}^{r}\right\rangle\) of the Hamiltonian \(\hat{H}\) on a quantum computing device \[\left|\psi_{n}^{r}\right\rangle=U(\theta_{n}^{r})\left|0\right\rangle,\quad \left|\psi_{m}^{l}\right\rangle=U(\theta_{m}^{l})\left|0\right\rangle, \tag{9}\] where \(U(\theta_{m}^{l})\) and \(U(\theta_{n}^{r})\) are the parameterized quantum circuits associated with the eigenstates \(\left|\psi_{m}^{l}\right\rangle\) and \(\left|\psi_{n}^{r}\right\rangle\), respectively. The biorthogonality of the eigenstates of the non-Hermitian Hamiltonian can be verified through the evaluation of fidelity.
The fidelity between \(|\psi_{n}^{r}\rangle\) and \(\left|\psi_{m}^{l}\right\rangle\) can be written as \[\mathcal{F}=\sqrt{\|\left\langle\psi_{m}^{l}|\psi_{n}^{r}\right\rangle\|}=\sqrt{|c_{n}|^{2}}\,\delta_{m,n}=\sqrt{\|\left\langle 0|U^{\dagger}(\theta_{m}^{l})U(\theta_{n}^{r})|0\right\rangle\|}. \tag{10}\] Suppose that the operator \(\hat{A}\) can be expressed as a linear combination of Pauli operators \[\hat{A}=\sum_{i}a_{i}\hat{O}_{i}, \tag{11}\] where \(\hat{O}_{i}\) denotes a Pauli operator. According to Eq. (3), the expectation value of the operator \(\hat{A}\) in the eigenstate \(|\psi_{n}^{r}\rangle\) is given by \[\langle\hat{A}\rangle=\frac{\langle\psi_{n}^{l}|\hat{A}|\psi_{n}^{r}\rangle}{\langle\psi_{n}^{l}|\psi_{n}^{r}\rangle}=\frac{\sum_{i}a_{i}\langle\psi_{n}^{l}|\hat{O}_{i}|\psi_{n}^{r}\rangle}{\langle\psi_{n}^{l}|\psi_{n}^{r}\rangle}. \tag{12}\] As shown in Eqs. (10) and (12), to obtain the fidelity and the expectation value of the operator \(\hat{A}\), we only need the values of \(\left\langle\psi_{m}^{l}|\psi_{n}^{r}\right\rangle\) and \(\langle\psi_{m}^{l}|\hat{O}_{i}|\psi_{n}^{r}\rangle\). As portrayed in Fig. 2(a), the quantum circuit can obtain the real part of \(\langle\psi_{m}^{l}|\hat{O}|\psi_{n}^{r}\rangle\) through the Hadamard test. Upon measuring the first qubit, the probability of finding it in the state \(|0\rangle\) is given by [42] \[P_{r}(0)=\frac{1}{2}+\frac{1}{2}\Re(\langle 0|U^{\dagger}(\theta_{m}^{l})\hat{O}U(\theta_{n}^{r})|0\rangle)=\frac{1}{2}+\frac{1}{2}\Re(\langle\psi_{m}^{l}|\hat{O}|\psi_{n}^{r}\rangle). \tag{13}\] On the other hand, as depicted in Fig. 2(b), the same type of circuit can employ the Hadamard test to obtain the imaginary part of \(\langle 0|U^{\dagger}(\theta_{m}^{l})\hat{O}U(\theta_{n}^{r})|0\rangle\). The probability of observing the state \(|0\rangle\) after measuring the first qubit is [43] \[P_{i}(0)=\frac{1}{2}-\frac{1}{2}\Im(\langle 0|U^{\dagger}(\theta_{m}^{l})\hat{O}U(\theta_{n}^{r})|0\rangle)=\frac{1}{2}-\frac{1}{2}\Im(\langle\psi_{m}^{l}|\hat{O}|\psi_{n}^{r}\rangle). \tag{14}\] Thus, the value of \(\langle\psi_{m}^{l}|\hat{O}|\psi_{n}^{r}\rangle\) can be obtained as \[\langle\psi_{m}^{l}|\hat{O}|\psi_{n}^{r}\rangle=\Re(\langle\psi_{m}^{l}|\hat{O}|\psi_{n}^{r}\rangle)+\mathrm{i}\,\Im(\langle\psi_{m}^{l}|\hat{O}|\psi_{n}^{r}\rangle)=2P_{r}(0)-1+\mathrm{i}\left[1-2P_{i}(0)\right]. \tag{15}\] The overlap \(\langle\psi_{m}^{l}|\psi_{n}^{r}\rangle\) is obtained by setting \(\hat{O}=\hat{I}\).

## III Simulation results

In this section, we utilize a standard non-Hermitian Hamiltonian as a demonstration of our algorithm. To investigate the performance of our algorithm, we conduct numerical simulations under different conditions. Additionally, we assess the algorithm's practicality on noisy intermediate-scale quantum devices by considering the effects of quantum noise and utilizing error mitigation techniques to enhance its accuracy. We carried out the numerical simulations using the open-source packages qibo [44] and qutip [45].

### Performance of quantum algorithm

The selected non-Hermitian lattice model is the Ising quantum spin chain in the presence of a magnetic field in the \(z\)-direction as well as a longitudinal
Figure 3: The logarithm of the cost function \(\mathcal{L}(E,\theta)\) as a function of the circuit depth \(P\), for various system sizes \(L\).
Here, \(\lambda=1,\kappa=0.8\). model can be described by the following Hamiltonian \[H_{\lambda,\kappa}=-\frac{1}{2}\sum_{j}^{L}(\lambda\sigma_{j}^{x}\sigma_{j+1}^{x}+ \sigma_{j}^{z}+i\kappa\sigma_{j}^{x}), \tag{16}\] where \(\lambda,\kappa\in R\). To implement the Hamiltonian of interest, we device a unitary quantum circuit, which is composed of a series of single qubit rotation gates and two-qubit rotation gates. The design of the circuit is as follows [47; 48] \[U(\theta) =\prod_{j=1}^{P}U_{j}(\theta_{j}),\] \[U_{j}(\theta_{j}) =e^{-iH_{xx}(\alpha_{j})}e^{-iH_{x}(\beta_{j})}e^{-iH_{x}(\gamma_{ j})}, \tag{17}\] where \(H_{xx}(\alpha_{j})=\sum_{l}\alpha_{j,l}X_{l}X_{l+1}\), \(H_{z}(\beta_{j})=\sum_{l}\beta_{j,l}Z_{l}\), \(H_{x}(\gamma_{j})=\sum_{l}\gamma_{j,l}X_{l}\) and \(\theta_{j}=(\alpha_{j},\beta_{j},\gamma_{j})\) is the parameter set used to control the rotation angle of quantum gates. Fig. 3 demonstrates that the accuracy of the computed cost function by our quantum algorithm improves as the depth of the quantum circuit \(P\) increases, ultimately approaching zero. This indicates that the obtained eigenstates and corresponding eigenvalues become increasingly accurate. The observed improvement can be attributed to the increased complexity and expressiveness of the circuit, which facilitate a more faithful encoding of the target Hamiltonian's properties [49]. We utilize the quantum variational algorithm to evaluate the energy spectrum of \(H_{\lambda,\kappa}\), as shown in Fig. 4. To compute the ground state and low-lying excited states, we employ the method outlined in the algorithm 2. This involves initially obtaining the ground state energy and subsequently using it as a starting point to determine the low-lying excited states step by step. The outcomes exhibit strong conformity with the corresponding exact values, thereby providing compelling evidence for the effectiveness of our algorithm in computing non-Hermitian Hamiltonian eigenvalues. Additionally, the data presented in Fig. 4(a) manifests a gradual convergence of the energy levels for both the ground and the first excited states with increasing values of \(\kappa\), which culminates at the exceptional point. Furthermore, Fig. 4(b) discloses that beyond the exceptional point, both the ground and the first excited states feature the presence of imaginary components in their energy levels, indicating the occurrence of a real-to-complex spectral transition at the exceptional point. Using our variational quantum algorithm, we are able to prepare both the right and left eigenstates of a given Hamiltonian on a quantum device. Utilizing the Hadamard test, we can calculate the fidelity between the two eigenstates. As shown in Fig. 5, we have obtained the fidelity between the left and right eigenstates of the Hamiltonian \(H_{\lambda,\kappa}\) with \(L=3\), which confirms the biorthogonal relationship between the eigenstates of the non-Hermitian Hamiltonian. As can be seen from table 1, the results obtained by the quantum algorithm are in good agreement with the exact values. This verifies that our quantum algorithm is suitable for studying non-Hermitian physical systems. In Section II.4, we introduce an approach to determine the expected value of the operator by means of the Hadamard test. Figure 4: The energy levels of \(H_{\lambda,\kappa}\) as a function of \(\kappa\), in the case \(L=4,\lambda=1\). (a) illustrates the real component of the energy levels, while (b) depicts the imaginary part of the energy levels. 
Those lines denoted by \(E_{0}\),\(E_{1}\),\(E_{2}\) and \(E_{3}\) are obtained by exact diagonalization. To ascertain the soundness of our methodology, we proceed to evaluate the expected value of the Hamiltonian operator \(\hat{H}=H_{\lambda,\kappa}\) under biorthogonal base vectors. As depicted in the Fig 6, our quantum algorithm yields results that are in excellent agreement with the exact values, thus attesting to the efficacy of our algorithm. ### Error mitigation Quantum noise is a major challenge for implementing quantum algorithms on current quantum processors [50]. To evaluate and optimize the quamtun algorithms, it is crucial to consider the noise impact in numerical simulations. For demonstration, we adopt a noise model with depolarization. The quantum circuit consists primarily of single-qubit quantum gates and two-qubit quantum gates. Therefore, in the numerical simulations, after applying a single-qubit gate, noise in the form of depolarization can be added to each qubit with a probability of \(p_{1}\). \[\varepsilon(\rho)=(1-p_{1})\rho+\frac{p_{1}}{3}(X\rho X+Y\rho Y+Z\rho Z). \tag{18}\] Similarly, after applying a two-qubit gate, depolarization noise can be added to each qubit with a probability of \(p_{2}\). In order to study the influence of noise on quantum algorithms, we set the noise rate of single-qubit quantum gate \(p_{1}=0.001\), and the rate of noise of two-qubit quantum gate \(p_{2}=0.01\). Fig. 7 illustrates the variations of the cost function with respect to \(E\), both in the absence of noise and in the presence of noise. As illustrated in Fig. 7, the impact of noise exposure on the cost function landscape is not substantial. Nevertheless, the landscape's overall elevation resulting from such exposure prevents the attainment of a minimum loss value of zero. It is worth noting that the optimal solution for energy may undergo some variations in the presence of noise. This suggests that when implementing our algorithm on a noisy quantum device, the obtained results are likely to deviate to some extent from the ideal ones in the noiseless case. In order to enhance the performance of quantum algorithms on noisy quantum devices, we utilize an error-mitigation technique known as Richardson's deferred method [51, 52], which does not need extra quantum resources and can significantly reduce the error in the expected value of the observation caused by quantum noise. As depicted in Fig. 8, we investigate the variations of multiple variables in the optimization Figure 8: Comparison of optimization processes for idea, noisy and mitigated variational quantum algorithm. (a) The cost function as a function of the number of iterations; (b) The fidelity respect to target ground state as a function of the number of iterations; (c) The real component of the energy \(E\) as a function of the number of iterations; (d) The imaginary component of the energy \(E\) as a function of the number of iterations. In all cases \(L=4,P=4,\lambda=1,\kappa=0.4\). Figure 7: Illustration of effects of quantum noises on the landscapes. The minimum point for the complex energy can be shifted in the presence of noise in (b) compared with the noiseless case in (a). process with respect to the number of iterations. Specifically, we observe that in the presence of noise, the cost function fails to converge to zero due to the elevation of the cost function landscape caused by noise, as illustrated in Fig. 8(a). 
To address this issue, we apply error mitigation techniques, which yield a cost function value much closer to the ideal case, indicating an improvement in the algorithm's performance. Our findings are further supported by Fig. 8(c) and Fig. 8(d), which demonstrate that the energy eigenvalues obtained using error mitigation are consistent with those in the ideal case. Although error mitigation techniques can reduce the adverse effects of quantum noise on energy measurements, they cannot entirely eliminate quantum noise, and thus there is no significant improvement in the fidelity between the resulting eigenstates and those in the ideal case.

## IV Conclusion

In conclusion, we have proposed a variational quantum algorithm for solving the eigenvalues and eigenstates of non-Hermitian Hamiltonians by utilizing the zero-variance variational principle. We have also developed a two-step optimization method to efficiently compute specific eigenvectors and eigenvalues. Through numerical simulations, we have demonstrated the effectiveness of our algorithm in computing the eigenvalues of non-Hermitian Hamiltonians and estimating the expectation values of operators for non-Hermitian systems. Moreover, we have investigated the impact of quantum noise on our algorithm and incorporated error mitigation techniques to improve its performance. Overall, our work showcases the feasibility of simulating non-Hermitian many-body physics on near-term quantum computers.

###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China (Grants No. 12005065 and No. 12275090), the Guangdong Basic and Applied Basic Research Fund (Grants No. 2023A1515011460 and No. 2021A1515010317), the Guangdong Provincial Key Laboratory (Grant No. 2020B1212060066), and the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302300).

## Appendix A Spectrum scanning

Using Algorithm 1, we can efficiently locate an eigenstate whose eigenenergy is close to the initial estimate \(E_{r0}\). Therefore, by adjusting the initial estimate \(E_{r0}\), we can systematically explore the eigenstates of the Hamiltonian. To compute the ground state and the low-lying excited states, we employ the strategy outlined in Algorithm 2, which is based on Algorithm 1. It first determines the ground-state energy and then iteratively searches for the low-lying excited states by incrementing the value of \(E_{r0}\) from the ground-state energy. This approach enables us to obtain the complete spectrum.
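The scanning strategy of Algorithm 2 can be summarized by the following sketch. Here `two_step_minimize` is only a classical stand-in (exact diagonalization of a toy matrix) for the two-step optimization of Algorithm 1, which in the actual algorithm optimizes the circuit parameters \(\theta\) together with \(E\):

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 6
H = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))  # toy non-Hermitian H
eigvals = np.linalg.eigvals(H)

def two_step_minimize(E_r0):
    """Stand-in for Algorithm 1: return the eigenvalue whose real part is closest
    to the starting guess E_r0 (the quantum version converges to such a value)."""
    E = eigvals[np.argmin(np.abs(eigvals.real - E_r0))]
    return E.real, E.imag

def scan_spectrum(E_r_min, E_r_max, step):
    """Sketch of Algorithm 2: sweep E_r0 upwards and collect each new eigenvalue."""
    found, E_r0 = [], E_r_min
    while E_r0 <= E_r_max:
        E_r, E_i = two_step_minimize(E_r0)
        if not any(abs(E_r - e.real) + abs(E_i - e.imag) < 1e-8 for e in found):
            found.append(complex(E_r, E_i))
        E_r0 += step
    return sorted(found, key=lambda e: e.real)

print(scan_spectrum(-5.0, 5.0, 0.2))
```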
2308.16510
Robust GAN inversion
Recent advancements in real image editing have been attributed to the exploration of Generative Adversarial Networks (GANs) latent space. However, the main challenge of this procedure is GAN inversion, which aims to map the image to the latent space accurately. Existing methods that work on extended latent space $W+$ are unable to achieve low distortion and high editability simultaneously. To address this issue, we propose an approach which works in native latent space $W$ and tunes the generator network to restore missing image details. We introduce a novel regularization strategy with learnable coefficients obtained by training randomized StyleGAN 2 model - WRanGAN. This method outperforms traditional approaches in terms of reconstruction quality and computational efficiency, achieving the lowest distortion with 4 times fewer parameters. Furthermore, we observe a slight improvement in the quality of constructing hyperplanes corresponding to binary image attributes. We demonstrate the effectiveness of our approach on two complex datasets: Flickr-Faces-HQ and LSUN Church.
Egor Sevriugov, Ivan Oseledets
2023-08-31T07:47:11Z
http://arxiv.org/abs/2308.16510v1
# Robust GAN inversion ###### Abstract Recent advancements in real image editing have been attributed to the exploration of Generative Adversarial Networks (GANs) latent space. However, the main challenge of this procedure is GAN inversion, which aims to map the image to the latent space accurately. Existing methods that work on extended latent space \(W+\) are unable to achieve low distortion and high editability simultaneously. To address this issue, we propose an approach which works in native latent space \(W\) and tunes the generator network to restore missing image details. We introduce a novel regularization strategy with learnable coefficients obtained by training randomized StyleGAN 2 model - WRanGAN. This method outperforms traditional approaches in terms of reconstruction quality and computational efficiency, achieving the lowest distortion with 4 times fewer parameters. Furthermore, we observe a slight improvement in the quality of constructing hyperplanes corresponding to binary image attributes. We demonstrate the effectiveness of our approach on two complex datasets: Flickr-Faces-HQ and LSUN Church. ## 1 Introduction The emergence of generative adversarial neural networks (GANs) has made a great contribution to high quality image synthesis. A well-known model in this field is StyleGAN, which has achieved remarkable results. Moreover, several works [19, 14, 17, 25, 23, 10] have demonstrated that GANs possess a wide range of interpretable semantics, providing the basis for image editing. This property enables the alteration of certain attributes while preserving the identity of the image relative to others. However, the application of this property to real images has been limited due to the need to accurately map them into the latent space. This task, known as GAN inversion, initially focused on mapping images into the native latent space \(W\). Yet, authors in [1] have shown that this approach leads to significant differences between the original and generated images. Subsequent work has shifted focus to the extended latent space \(W+\)[2, 20, 3, 24, 26, 9], which improves the quality of image reconstruction but degrades editability. This issue, called the distortion-editability tradeoff [24], limits the possibility of using codes obtained in the \(W+\) space. Another way to solve the problem was proposed in [21], which includes a small change in the generator parameters when working with the latent space \(W\) - pivotal tuning inversion (**PTI**). We improve their idea by using adaptive regularization instead of one number, since each parameter has different contribution to the model performance. In this paper, we present a novel approach for learning regularization that allows for high-quality image reconstructions while preserving the ability of the model to generate realistic images. This approach is based on a randomized version of StyleGAN 2 called WRanGAN, in which part of the model weights are assumed to be normally distributed with trainable mean and variance. To apply different regularization coefficients, we use the reparameterization trick [15] during the inversion procedure. The effectiveness of our technique was evaluated on two complex datasets, the Flickr-Faces-HQ Dataset (FFHQ) [12] and LSUN Churches [28]. Our contributions are summarized as follows: * We present a novel adaptive regularization scheme based on an investigation of different regularization strategies and their effect on reconstruction quality and model corruption. 
* We introduce **WRanGAN**, a model that learns appropriate regularization coefficients via a randomization of the StyleGAN model. * We evaluate **WRanGAN** in terms of generation, reconstruction, binary attributes extraction and computational cost, and compare it to several baselines in a qualitative and quantitative manner. ## 2 Problem setting ### Latent Space Manipulation GANs allow the generation of images that are controlled by semantic directions [19, 14, 17, 25, 23, 10]. In particular, in the work [10] authors proposed estimating the subspaces that are invariant under random-walk diffusion for identification. Supervision in the form of facial attribute labels was used in [23] to find meaningful linear directions in the latent space. The identification of latent directions based on the principal component analysis (PCA) was proposed in [10]. ### GAN inversion Recent research has focused on the improving reconstruction quality of GAN inversion task, which involves finding the latent code that accurately reproduce real image. This task can be divided into two main groups: optimization methods that directly modify the latent code to minimize a loss function [1, 18], and encoder-based methods that use a trained encoder to generate an image [7, 3, 20, 24]. Generally, methods operate in the native latent space \(W\), which can lead to significant visual differences compared to the original image [1]. On the other hand, the extended latent space \(W+\) is much more expressive and allows for the reproduction of more unique image details. However, this approach is limited by the fixed generator parameters. To address this issue, some approaches have proposed to modify the generator network to fix visual artifacts, as demonstrated in [21]. Others have used hypernetworks to predict the change of generator parameters in order to minimize distortion and preserve the realism of the generated image, as seen in [4] and [6]. ### Distortion-editability tradeoff The GAN inversion in the extended latent space \(W+\) significantly improves reconstruction of real images, but at the same time it leads to degradation of editability called distortion-editability trade-off [24]. There are works [30] and [24], where the authors proposed to search for editable latent codes in an extended latent space \(W+\). A completely different way to solve this problem was proposed in the works [4] and [21]. Instead of trying to find a balance between editability and distortion, the authors suggest using the advantage of projection into the latent space \(W\) and updating the generator parameters to minimize distortion. In this paper we also used projection to native latent space to reach high editability. ### Generator tuning Model tuning significantly improves ability to reproduce real image [21, 4, 6]. But changing the parameters of the model can damage its quality. In order to improve the realism of generated images after modification of the generator weights, non-saturating GAN loss was used to train hypernetworks [4, 6] (encoders predicting the necessary weight shift). Despite the significant improvement in the quality of reproduction, these methods are still inferior to the PTI approach [21] based on direct weight optimization. But optimization of model parameters without any additional constraints requires imposing a regularization with a high coefficient in order not to damage the realism of the generated images and forces to optimize all the parameters of the model to reach low distortion. 
As a result, it leads to a significant increase in the computational costs. ## 3 Method In general, the proposed method solves the general problem by applying non-equal learnable regularization. This allows to set appropriate regularization coefficient for each parameter depending on its effect on model performance (realism of generated images). In general approach, the first stage learns appropriate regularization coefficients for the inversion task by adversarial training of a generator with partially randomized parameters. The second stage, then, uses these coefficients for an inversion procedure consisting of encoder projection and regularized optimization minimizing particular loss function. ### Model corruption assessment To measure how well the model after tuning is, we use a common technique known as Frechet Inception Distance (FID) [8] and Kernel Inception Distance (KID) [5]. In general case, evaluation is performed on a large set of images produced by the generator network. However, since our main focus is on edited images, we slightly change this tool by performing calculations over 1000 images obtained by shifting latent code (editing) in random directions. The shift norm is taken accordingly to the characteristic size of the style space (variance of style features). ### Regularized inversion In order to avoid degradation of realism in generated images, regularization term is often added to optimization procedure. The general task formulation looks like this: \[\hat{w},\hat{\theta}_{G}=\arg\min_{w,\theta_{G}}\mathcal{L}(G(w,\theta_{G}), \hat{x})+\alpha_{\mathrm{reg}}\|\theta_{G}-\theta_{G,0}\|_{2}^{2}\] Here, \(\hat{x}\) some real image, \(\alpha_{\mathrm{reg}}\|\theta_{G}-\theta_{G,0}\|_{2}^{2}\) the regularization term, \(\theta_{G,0}\) the initial values of the generator weights. For **WRanGAN** inversion we used \(\mathcal{L}=2\mathcal{L}_{2}+\mathcal{L}_{\mathrm{LPIPS}}\) and initialize intermediate latent code \(w\) by mapping the output of the trained encoder \(E\) to intermediate latent space \(W\): \(w=f(E(\hat{x}))\). \(\alpha_{\mathrm{reg}}\) - regularization coefficient, which choice is the balance between reconstruction quality and model corruption. We considered three strategies of regularization to illustrate this paradigm: * low regularization value (**Simple Weight Tune**) * high regularization value (**PTI**) * appropriate regularization coefficients (**WRanGAN**) The results of our experiments, presented in Figure 1, demonstrate that a low regularization value can impair visual quality and significantly reduce the variability of the model (the ability to manipulate certain image attributes suffers significantly). This is reflected in the FID metric used for model corruption evaluation, which shows the highest value for **Simple Weight Tune** strategy. Furthermore, applying a high regularization coefficient does not reach the lowest distortion, as evidenced by the mean squared error (MSE). Finally, the proposed **WRanGAN** model shows a good balance between both aspects: reconstruction quality and model corruption. A more detailed statistical view is presented in Figure 2. Here, we evaluated the model corruption for the **Simple Weight Tune** and **WRanGAN** strategies in comparison with the **PTI** approach - the difference for each image between the FID metric value for the particular strategy and the corresponding value for **PTI**. 
The results for the appropriate regularization strategy are much better than those for Figure 1: Comparison of different regularization strategies of GAN inversion. **Simple Weight Tune** represents optimization of model parameters with low regularization coefficient. **PTI** represents pivotal tuning approach - high regularization coefficient. And the last one is proposed **WRanGAN**. For each approach metrics were calculated: MSE (lower values is better) measuring distortion, FID (lower values is better) evaluated over images generated by shifting latent code, measuring model corruption. For each approach we presented result of latent code shifting in the 4 orthogonal directions corresponding to the maximal change in the image. Figure 2: Model corruption evaluation for two regularization strategies: **Simple Weight Tune** and **WRanGAN**. Calculations performed over 30 randomly taken images from FFHQ dataset. Corresponding mean values are also presented. **Simple Weight Tune**. The proposed method does not corrupt the model. ### WRanGAN inversion In this part we have discussed how to apply regularization to randomized model parameters \(\theta_{G}\sim N(\mu_{\theta},\sigma_{\theta})\). To this end, we employ the reparameterization trick, which states that \(\theta_{G}^{i}=\mu_{\theta}^{i}+\epsilon^{i}\sigma_{\theta}^{i}\) where \(\epsilon\sim N(0,1)\) and \(i\) is the index of a particular parameter. By regularizing the parameter \(\epsilon\), we get the following equation: \[\alpha_{\text{reg}}\|\epsilon\|_{2}^{2}=\sum_{i}\frac{\alpha_{\text{reg}}}{ \sigma_{\theta}^{i}}(\theta_{G}^{i}-\mu_{\theta}^{i})^{2}=\sum_{i}\alpha_{ \text{reg}}^{i}(\theta_{G}^{i}-\theta_{G,0}^{i})^{2}\] Here, we have used the notations \(\alpha_{\text{reg}}^{i}=\frac{\alpha_{\text{reg}}}{\sigma_{\theta}^{i}}\) and \(\mu_{\theta}^{i}=\theta_{G,0}^{i}\), and have obtained a standard regularization formulation with different regularization coefficients for each randomized model parameter. The tips discussed in this part are summarized in Algorithm 1. ``` Input: real image \(\hat{x}\), generator parameters \(\mu_{\theta},\sigma_{\theta}\) Parameter: regularization coefficient \(\alpha_{\text{reg}}\) Output: latent code \(w\) and parameterized randomization \(\epsilon\) Initialize \(w=E(x)\) by the output of encoder network Initialize \(\epsilon\) with small value (\(10^{-4}\)) for number of iterations do Set generator weights \(\theta_{G}\leftarrow\mu_{\theta}+\sigma_{\theta}\epsilon\) Update parameters \(w,\epsilon\) minimizing: \[\mathcal{L}(G(w,\theta_{G}),\hat{x})+\alpha_{\text{reg}}\|\epsilon\|_{2}^{2}\] endfor return\(w\),\(\epsilon\) ``` **Algorithm 1** Algorithm of WRanGAN inversion ### Weight randomization The idea of randomizing the model was inspired by Bayesian GAN [22], where the generator and discriminator networks both assumed to have some distribution over their internal parameters. However, randomizing the entire network is computationally expensive due to the increased number of parameters required for both training and generator parameters tuning during the inversion step. 
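For concreteness, the inversion step of Algorithm 1 with the reparameterized regularization \(\alpha_{\text{reg}}\|\epsilon\|_{2}^{2}\) can be sketched as follows. This is a minimal PyTorch-style sketch: the generator \(G\), encoder \(E\), reconstruction loss, and the per-layer dictionaries `mu`/`sigma` are placeholders, and only the randomized layers are assumed to carry \(\epsilon\) parameters:

```python
import torch

def wrangan_invert(x, G, E, mu, sigma, recon_loss, alpha_reg=1e-4, steps=500, lr=1e-3):
    """Sketch of Algorithm 1: optimize the latent code w and the randomization eps,
    with generator weights theta = mu + sigma * eps. Regularizing ||eps||^2 penalizes
    deviations (theta - mu) more strongly where sigma is small."""
    w = E(x).detach().clone().requires_grad_(True)            # encoder initialization of w
    eps = {k: torch.full_like(v, 1e-4, requires_grad=True)    # small initial randomization
           for k, v in sigma.items()}
    opt = torch.optim.Adam([w, *eps.values()], lr=lr)
    for _ in range(steps):
        theta = {k: mu[k] + sigma[k] * eps[k] for k in eps}   # reparameterization trick
        x_hat = G(w, theta)                                   # generate with modified weights
        reg = sum((e ** 2).sum() for e in eps.values())
        loss = recon_loss(x_hat, x) + alpha_reg * reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach(), eps
```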
We have already illustrated how to perform inversion using appropri \begin{table} \begin{tabular}{|c|c|} \hline **Input**: pretrained StyleGAN 2 weights \(\theta_{G,0}\), dataset \(\hat{x}\) **Parameter**: batch size \(m\) **Output**: \(\mu_{\theta}\), \(\sigma_{\theta}\) \\ \hline Initialize \(\mu_{\theta}=\theta_{G,0}\) Initialize \(\sigma_{\theta}=1\) for randomized parameters for number of training iterations do Sample \(z^{(1)},...,z^{(m)}\sim N(0,1)\) Map to intermediate latent space \(w^{(i)}=f(z^{(i)})\) Sample \(\hat{x}^{(1)},...,\hat{x}^{(m)}\) from training dataset \(\hat{x}\) Sample \(\epsilon\sim N(0,1)\) and calculate \(\theta_{G}=\mu_{\theta}+\epsilon\sigma_{\theta}\) Update discriminator weights \(\theta_{D}\) minimizing: \[\frac{1}{m}\sum_{i=1}^{m}\mathcal{L}_{D}(D(\hat{x}^{(i)}),D(G(w^{(i)},\theta_{G })))\] Sample \(z^{(1)},...,z^{(m)}\sim N(0,1)\) Map to intermediate latent space \(w^{(i)}=f(z^{(i)})\) Sample \(\epsilon\sim N(0,1)\) and calculate \(\theta_{G}=\mu_{\theta}+\epsilon\sigma_{\theta}\) Update parameters \((\mu_{\theta},\sigma_{\theta})\) minimizing: \[\frac{1}{m}\sum_{i=1}^{m}\mathcal{L}_{g}(D(G(w^{(i)},\theta_{G })))\] endfor return\(\mu_{\theta}\),\(\sigma_{\theta}\) ``` **Algorithm 2** WRanGAN training algorithm. Figure 3: Dependence of MSE on number of randomized layers. \(N\) versus regularization coefficient. The lower the curve the better chosen number of layers. \begin{table} \begin{tabular}{|c|c|} \hline **Input**: real image \(\hat{x}\), generator parameters \(\mu_{\theta},\sigma_{\theta}\) **Parameter**: regularization coefficient \(\alpha_{\text{reg}}\) **Output**: latent code \(w\) and parameterized randomization \(\epsilon\) \\ Initialize \(w=E(x)\) by the output of encoder network Initialize \(\epsilon\) with small value (\(10^{-4}\)) **for** number of iterations **do** Set generator weights \(\theta_{G}\leftarrow\mu_{\theta}+\sigma_{\theta}\epsilon\) Update parameters \(w,\epsilon\) minimizing: \[\mathcal{L}(G(w,\theta_{G}),\hat{x})+\alpha_{\text{reg}}\|\epsilon\|_{2}^{2}\] endfor return\(w\),\(\epsilon\) \\ \hline \end{tabular} \end{table} Table 1: Quantitative comparison of memory cost on the number of randomized layers ate regularization coefficients, and here, we present how to obtain such coefficients. How many parameters to randomize?In [4], experiments were conducted to determine the most effective parameters to be changed in the generator. It was decided to limit the randomization to the last few convolutional layers, excluding the toRGB layers, and to keep the discriminator architecture unchanged. To determine the appropriate number of layers for randomization, a grid search was conducted over \(N=4,6,8\) and different equal regularization coefficients for the **Simple Weight Tune** method. The results of this search are presented in Figure 3, and the computational costs are shown in Table 1. It was determined that randomizing only the last \(N=6\) convolutional layers yielded the best results with a minimal increase in computational costs. How to train?To train the WRanGAN model, a pre-trained model was used to initialize the mean value of the model parameters \(\mu_{\theta}=\theta_{G,0}\). Standard deviation was then added to each randomized parameter with value equal one. The generator and discriminator were trained together to reach the global optima, as outlined in Algorithm 2. ## 4 Experiments This section presents the results of the evaluation of the proposed **WRanGAN** model. 
Below are presented the details of conducted experiments: datasets, baselines, and hyperparameters. Technical details. * **Models:** We used the StyleGAN 2 [13] model as a basis, with pre-trained models and base code for implementation taken from an open resource1. Footnote 1: [https://github.com/rosinality/stylegan2-pytorch](https://github.com/rosinality/stylegan2-pytorch) * **Datasets:** We trained using the Flickr-Faces-HQ Dataset (FFHQ) [12] with pictures resized to resolution \(256\)x\(256\), and LSUN Churches [28] with pictures center-cropped and resized to \(256\)x\(256\). We randomly sampled 1000 images from both datasets for testing. * **WRanGAN training details:** We used standard parameters for StyleGAN 2, and trained on 2 GPUs with a batch size of \(8\) for \(200k\) iterations. * **WRanGAN inversion details:** For the encoder \(E\) in Algorithm 1, we used the architecture proposed by [24] and trained with default parameters. We used the Adam optimizer with a learning rate of \(lr=10^{-3}\), and the number of iterations needed for convergence was set to \(500\). The randomization parameter was initialized with the value \(\epsilon=10^{-4}\), and we used a regularization coefficient of \(\alpha_{\rm reg}=10^{-4}\). \begin{table} \begin{tabular}{c|c|c|c c c c c} \hline \hline \multirow{2}{*}{**Domain**} & \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**LPIPS\(\downarrow\)**} & \multirow{2}{*}{**MS-SSIM\(\uparrow\)**} & \multirow{2}{*}{**GPU usage**} & \multirow{2}{*}{**Time**} \\ & & & & **VGG** & & **Alex** & **(MB)\(\downarrow\)** & **(s)\(\downarrow\)** \\ \hline \multirow{6}{*}{FFHQ} & \multirow{6}{*}{StyleGAN 2} & E4E & 0.062 & 0.389 & 0.235 & 0.605 & 2499 & 1.64 \\ & & Restyle & 0.035 & 0.335 & 0.154 & 0.72 & **2483** & **0.28** \\ & & SG2 W+ & 0.04 & 0.14 & 0.138 & 0.783 & 4295 & 97.9 \\ & & HyperStyle & 0.026 & 0.288 & 0.105 & 0.788 & 3583 & 0.31 \\ & & PTI & 0.024 & 0.293 & **0.06** & 0.776 & 3133 & 35.46 \\ \cline{2-8} & \multirow{2}{*}{WRanGAN} & WRanGAN & \multirow{2}{*}{**0.007**} & \multirow{2}{*}{**0.085**} & \multirow{2}{*}{0.083} & \multirow{2}{*}{**0.929**} & \multirow{2}{*}{2557} & \multirow{2}{*}{23.27} \\ & & inversion & & & & & \\ \hline \multirow{6}{*}{LSUN Church} & \multirow{6}{*}{StyleGAN 2} & E4E & 0.142 & 0.506 & 0.418 & 0.263 & 2499 & 1.64 \\ & & Restyle & 0.087 & 0.411 & 0.25 & 0.489 & **2483** & **0.28** \\ & & SG2 W+ & 0.107 & 0.225 & 0.235 & 0.543 & 4295 & 97.9 \\ \cline{1-1} & & PTI & 0.053 & 0.411 & **0.065** & 0.643 & 3133 & 47 \\ \cline{1-1} \cline{2-8} & \multirow{2}{*}{WRanGAN} & WRanGAN & \multirow{2}{*}{**0.033**} & \multirow{2}{*}{**0.177**} & \multirow{2}{*}{0.224} & \multirow{2}{*}{**0.782**} & \multirow{2}{*}{2557} & \multirow{2}{*}{23.27} \\ \cline{1-1} \cline{6-6} \cline{8-8} & & inversion & & & & & \\ \hline \end{tabular} \end{table} Table 2: Quantitative reconstruction results of **WRanGAN** model with corresponding method compared to StyleGAN 2 inversion approaches including encoder and optimization based. Assessment performed over several standard metrics, for each of them, the arrow identifies which values are better (lower \(\downarrow\) / higher \(\uparrow\)). The best results for each evaluated metric are highlighted in **bold**. Values in blue outline the cases that we outperform PTI. 
\begin{table} \begin{tabular}{c|c|c c c} \hline \hline **Domain** & **Model** & **FID** & **Precision** & **Recall** \\ \hline Human & StyleGAN 2 & **4.27** & **0.7** & 0.42 \\ Faces & WRanGAN & 5.61 & 0.65 & **0.45** \\ \hline LSUN & StyleGAN 2 & 4.3 & **0.61** & 0.37 \\ Church & WRanGAN & **3.57** & 0.55 & **0.42** \\ \hline \hline \end{tabular} \end{table} Table 3: **WRanGAN** model quality evaluation using the FID, Precision, and Recall metrics for two domains, FFHQ and LSUN Church. The best results for each domain and metric are highlighted in bold.

Experiments were conducted on 4 Tesla V100-SXM2 GPUs with 16 GB of memory.

### WRanGAN model evaluation

We evaluated the performance of our WRanGAN model using several metrics, namely FID, Precision, and Recall [16], and compared our results with those produced by the StyleGAN 2 model (see Table 3). WRanGAN shows an improvement in the Recall metric for both data domains, which suggests that the generator is more likely to reproduce particular real images. However, we observed a slight decrease in the Precision metric. For further details on the randomized parameters of the model and their effect on the generated images, please refer to Appendix A.

### Inversion quality assessment

To evaluate the quality of inversion, we considered encoder-based approaches such as e4e [24], ReStyle [3], and HyperStyle [4], as well as optimization-based approaches such as SG2 W+ [13] and PTI [21]. Standard metrics, including mean squared error (MSE), LPIPS [29] with the VGG and Alex feature networks, and MS-SSIM [27], were used for the assessment. As summarized in Table 2, the proposed WRanGAN model surpasses all other methods applied to StyleGAN 2 on most metrics. Not only does it achieve the lowest distortion, its computational efficiency is also much higher than that of PTI, since the tuning procedure requires optimizing 4 times fewer parameters. This translates to 600 megabytes less GPU memory required during image inversion, making it more practical to use. Additionally, the calculations are 1.5 and 2 times faster for the FFHQ and LSUN Church domains, respectively. The visualizations in Figure 4 and Figure 5 further illustrate how the improvement in reconstruction affects the image: the WRanGAN approach is able to reproduce unique details such as bangs, the outline of the eyes, and small church windows. More detailed visualizations and an investigation of the distribution of randomized parameters for real mapped images can be found in Appendices B and C, respectively.

Figure 4: Qualitative evaluation of **WRanGAN** inversion results compared to those produced by StyleGAN 2 using various approaches for the FFHQ domain. For each reconstruction a zoomed version is provided (interesting regions were cropped) so that the differences in detail are fully visible.

Figure 5: Qualitative evaluation of **WRanGAN** inversion results compared to those produced by StyleGAN 2 using various approaches for the LSUN Church domain. For each reconstruction a zoomed version is provided (interesting regions were cropped) so that the differences in detail are fully visible.

### Model corruption evaluation

Initially, we mentioned the effect of tuning the model parameters on the ability to generate realistic images. In this part, we assess the model corruption of tuned models using the FID and KID metrics. The comparison is performed against the two most efficient encoder- and optimization-based approaches, Restyle and **PTI**. The results for both domains are presented in Table 4.
The difference in FID and KID values between the proposed **WRanGAN** and the StyleGAN 2 inversion approaches does not exceed \(11\) for both domains, which is much smaller than \(24\), the quality drop demonstrated by the **Simple Weight Tune** regularization strategy. As a result, we conclude that **WRanGAN** preserves the ability to generate realistic images after tuning.

\begin{table} \begin{tabular}{c|c|c|c c} \hline **Domain** & **Model** & **Method** & **FID** & **KID** \\ \hline \multirow{3}{*}{FFHQ} & \multirow{2}{*}{StyleGAN 2} & Restyle & 244 & 0.25 \\ & & PTI & 222 & 0.225 \\ \cline{2-5} & WRanGAN & WRanGAN inversion & & \\ \hline \multirow{3}{*}{LSUN Church} & \multirow{2}{*}{StyleGAN 2} & Restyle & 139 & 0.128 \\ & & PTI & 133 & 0.134 \\ \cline{2-5} & WRanGAN & WRanGAN inversion & & \\ \hline \end{tabular} \end{table} Table 4: Model corruption evaluation. The two best methods for StyleGAN 2 were taken for comparison: **PTI** and Restyle. Lower values are better for each metric.

### Editing and interpolation quality assessment

We conducted an experiment to confirm that the **WRanGAN** model possesses the same excellent property as the StyleGAN 2 model: for any binary attribute, there exists a hyperplane in latent space such that all samples on the same side share the attribute [23]. To do this, we trained a classifier predicting the following attributes: Gender, Eyeglasses, Smile, Age, Open Mouth. We then constructed hyperplanes in the latent space corresponding to the selected attributes and evaluated their correctness, as shown in Table 5.

\begin{table} \begin{tabular}{c|c c} \hline **Attribute** & **StyleGAN 2** & **WRanGAN** \\ \hline **Gender** & 73.9 & **75.0** \\ **Eyeglasses** & 99.8 & **99.9** \\ **Smile** & 99.5 & **99.8** \\ **Age** & **99.5** & 99.4 \\ **Mouth open** & 98.2 & **98.5** \\ \hline \end{tabular} \end{table} Table 5: Classification accuracy (%) on separation boundaries in latent space with respect to different face attributes. The best results are highlighted in bold.

Figure 6: Qualitative evaluation of **WRanGAN** editing quality compared to the **PTI** approach applied to the StyleGAN 2 model.

Figure 7: Qualitative evaluation of **WRanGAN** interpolation quality compared to the **PTI** approach applied to the StyleGAN 2 model. Here \(\alpha\) denotes the interpolation step.

The results of our experiment demonstrate that the **WRanGAN** model has slightly superior performance compared to the basic StyleGAN 2 model. This is also evident in the visualization presented in Figure 6, where it is easy to notice that the presence of glasses in the original image significantly affected the editing of attributes in the case of the **PTI** method, whereas **WRanGAN** demonstrates excellent performance. This is also noticeable in the interpolation comparison presented in Figure 7. For more examples, please refer to Appendix D.

## 5 Conclusion

We have proposed a randomized version of the StyleGAN 2 model, dubbed WRanGAN, which is able to learn an appropriate scaling (standard deviation) for each parameter, defining the corresponding regularization coefficient. Our approach to GAN tuning using non-equal regularization coefficients demonstrates superior results in terms of distortion and computational efficiency compared to the most successful existing approach, pivotal tuning inversion. Moreover, it does not corrupt the model, allowing for image editing.
We also showed that in the latent space of a randomized model it is easy to construct a hyperplane corresponding to the standard image attributes for the FFHQ domain. Our method requires less memory per image in the inversion process, making it easy to parallelize calculations. Additionally, the method is only slightly dependent on the network architecture, enabling transfer to other architectures such as StyleGAN 3 [11].

## Ethical Statement

The considered approach allows one to reproduce a photo of a real person with high accuracy. As a result, users can edit photos of real people, which could enable malicious or illegal use.
2309.08407
Gravitational waves from axion wave production
We consider a scenario with axions/axion-like particles Chern-Simons gravity coupling, such that gravitational waves can be produced directly from axion wave parametric resonance in the early universe after inflation. This axion gravity term is less constrained compared to the well-searched axion photon coupling and can provide a direct and efficient production channel for gravitational waves. Such stochastic gravitational waves can be detected by either space/ground-based gravitational wave detectors or pulsar timing arrays for a broad range of axion masses and decay constants.
Mingqiu Li, Sichun Sun, Qi-Shu Yan, Zhijie Zhao
2023-09-15T14:08:53Z
http://arxiv.org/abs/2309.08407v2
# Gravitational waves from axion wave production

###### Abstract

We consider a scenario with an axion/axion-like-particle Chern-Simons gravity coupling, such that gravitational waves can be produced directly from axion wave tachyonic instability in the early universe after inflation. This axion gravity term is less constrained than the well-searched axion-photon coupling and can provide a direct and efficient production channel for gravitational waves. Such stochastic gravitational waves can be detected by either space/ground-based gravitational wave detectors or pulsar timing arrays for a broad range of axion masses and decay constants.

## I Introduction

Since the first observation of gravitational waves (GWs), we have acquired a new way to probe the current and early universe, with a great impact on the fields of astrophysics, cosmology, and even particle physics. Gravitational waves at different frequencies come from various sources, ranging from primordial gravitational waves at \(10^{-16}\) Hz[1; 2; 3], through the nanohertz (\(10^{-9}\) Hz) band[4; 5; 6; 7; 8], up to binary-system signals at \(10^{4}\) Hz. This whole spectrum of gravitational waves requires different detection schemes, from CMB polarization and pulsar timing arrays[9; 10; 11; 12; 13; 14] to interferometry[15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27], as well as some high-frequency proposals [28; 29; 30; 31; 32; 33; 34; 35].

According to cosmological and astronomical observations, around one-fourth of the universe's total energy budget is made of dark matter. Aside from the well-searched weakly interacting massive particles (WIMPs), the axion [36; 37; 38; 39; 40] is a promising dark matter candidate, as well as a natural solution to the 'Strong CP problem' [41; 42] in quantum chromodynamics (QCD)[43]. The axion was proposed as the Nambu-Goldstone boson of the spontaneously broken global U(1)PQ symmetry extension of the Standard Model (SM). When the universe cools down to the QCD scale, the axion acquires a tiny mass and becomes a pseudo-Nambu-Goldstone particle. After the QCD phase transition, the axion field begins to oscillate, and the energy density from this classical oscillation can play the role of dark matter[40]. In well-motivated scenarios such as the Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) [41; 44; 45] and the Kim-Shifman-Vainshtein-Zakharov (KSVZ) [46; 47] models, the QCD axion's characteristic axion-gluon coupling is generated via SU(3)c-charged fermion loops. In general, axion-like particles (ALPs) are light bosons that have an electromagnetic Chern-Simons (CS) type coupling but are not necessarily coupled to gluons. ALPs can also acquire an axion-graviton Chern-Simons coupling, which can be generated through heavy particle loops. The axion-graviton coupling can in turn induce an axion-photon coupling with Planck suppression [48]. ALPs are abundant in string-theory motivated models [49; 50], and they can generally have a much wider mass range as dark matter candidates. We will use 'axion' to denote both the QCD axion and ALPs.

Gravitational waves from axions have previously been considered via an axion-dark-gauge-boson coupling, in order to avoid the stringent constraints on axion-photon couplings. In the inflationary scenario where the axion is the inflaton, the axion-gauge-boson coupling can induce large primordial gravitational waves through tachyonic instability, e.g., in the so-called axion monodromy model [51; 52; 53; 54; 55; 56; 57].
If considering the mechanism that axion oscillates after inflation, tachyonic instability in a dark gauge field induced by an axion-like particle is a known source of dark matter and/or stochastic gravitational waves[58; 59; 60; 61; 62; 63; 64; 65; 66], and friction for relaxion[67; 68]. Here in this paper, we consider a less studied axion graviton Chern-Simons coupling in the early universe after inflation, such that this term can produce gravitational waves directly through axion rolling tachyonic instability. The effect of such coupling term has been studied in inflationary scenarios for primordial gravitational wave production, and binary merger of black holes [69]. However, for high-scale inflationary scenarios, the ghost issues in Chern-Simons gravity may become severe. In this work, the frequency of directly produced gravitational waves is \(10^{-9}\text{Hz}\sim 10^{-2}\text{Hz}\). Due to the large parameter space of axion mass, we can possibly observe such gravitational wave spectrum as low as PTA band[9; 10; 11; 12; 13] when \(m_{a}\sim 10^{-12}\text{eV}\), up to space interferometry 0.01 Hz [15; 16; 17; 18] when \(m_{a}\sim 1\text{eV}\). The gravitational wave spectrum here has a peak frequency and is spread out around one order of magnitude by the universe expansion. This paper is organized as follows. In section II we briefly review the axion evolution equations in the early universe with axion graviton Chern-Simons coupling and universe expansion. We study how the axions produce GWs in section III. In section IV, we summarize and discuss our result. ## II Model We begin with the action of Chern-Simons modified gravity with axion coupling: \[S=\int d^{4}x\sqrt{-g}\left(\frac{R}{16\pi G}+\frac{\alpha}{4}\phi R\tilde{R}+ \frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-V(\phi)\right) \tag{1}\] where \(R\) is the Ricci scalar, \(R\tilde{R}=\frac{1}{2}\epsilon^{\rho\sigma\alpha\beta}R^{\mu\nu}_{\alpha\beta }R_{\nu\mu\rho\sigma}\), and the potential of axion field is \[V(\phi)=m_{a}^{2}f^{2}\left[1-\cos\left(\frac{\phi}{f}\right)\right], \tag{2}\] where \(m_{a}\) is the mass of axions. If neglecting the backreaction of gravitational waves, the equation of motion of the axion field is [59] \[\frac{\ddot{\phi}}{f}+3H\frac{\dot{\phi}}{f}+m_{a}^{2}\sin\frac{\phi}{f}=0 \tag{3}\] While the Hubble parameter \(H\gg m_{a}\), \(\phi\) remains constant. When the universe cools down to \(H\approx m_{a}\), the axion field begins rolling toward the minimum of the potential and then oscillates. Assuming a radiation-dominated universe, the temperature when \(H=m_{a}\) is \[\begin{split} T_{*}&=\left(\frac{45}{4\pi^{3}} \right)^{1/4}g_{*}^{-1/4}m_{a}^{1/2}G^{-1/4}\\ &=2.7118\times 10^{4}\left(\frac{g_{*}}{100}\right)^{-1/4}\left( \frac{m_{a}}{\rm eV}\right)^{1/2}{\rm GeV}\end{split} \tag{4}\] for a wide range of axion mass, i.e., \(m_{a}\in[10^{-12}{\rm eV},1{\rm eV}]\), the radiation dominated assumption is consistent [59]. In the following study, to solve Eq.(3), we chose initial conditions \(\phi(t_{*})/f=\theta\sim\mathcal{O}(1),\dot{\phi}(t_{*})=0,H(t_{*})=m_{a}\). We consider a Friedmann-Robertson-Walker (FRW) Universe. The perturbed metric reads \[ds^{2}=-dt^{2}+a^{2}(t)\left(\delta_{ij}+h_{ij}(t,{\bf x})\right)dx^{i}dx^{j}, \tag{5}\] where the scale factor \(a=\sqrt{t/t_{*}}=\sqrt{2tm_{a}}\) for radiation dominated Universe. We have chosen \(a(t_{*})=1\) in the whole study. 
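As an aside, the background evolution of Eq. (3) in the radiation-dominated universe described above can be reproduced with a short Python sketch; the dimensionless variables, integration range, and tolerances are illustrative choices, not the authors' numerical setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

theta0 = 1.0                              # initial misalignment angle phi/f

def axion_background(tau, y):
    """Eq. (3) with x = phi/f and tau = m_a * t;
    H = 1/(2t) in a radiation-dominated universe, so H/m_a = 1/(2 tau)."""
    x, xdot = y
    return [xdot, -3.0 / (2.0 * tau) * xdot - np.sin(x)]

tau_star = 0.5                            # H(t_*) = m_a  =>  m_a t_* = 1/2
sol = solve_ivp(axion_background, (tau_star, 200.0), [theta0, 0.0],
                rtol=1e-8, atol=1e-10, dense_output=True)

# at late times phi/f decays roughly as a^{-3/2} times an oscillation,
# which is the envelope used below for the oscillating Chern-Simons mass
print(sol.sol(np.linspace(1.0, 200.0, 5))[0])
```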
In Fourier space,

\[h_{ij}(t,{\bf x})=\sum_{A=R,L}\int\frac{d^{3}k}{(2\pi)^{3}}h_{A}(t,k)e^{i\vec{k}\cdot\vec{x}}e_{ij}^{A} \tag{6}\]

where \(A=L,R\) labels the left-handed and right-handed polarization, respectively. The equations of motion for \(h_{A}\) can be derived from the action [70],

\[\begin{split}\ddot{h}_{A}+\left(3H+\frac{\dot{D}_{A}}{D_{A}}\right)\dot{h}_{A}+\frac{k^{2}}{a^{2}}h_{A}&=0,\\ \text{with}\quad D_{A}=1-\frac{\lambda_{A}k}{am_{cs}},\quad\lambda_{L}=-1,\lambda_{R}=1,\quad m_{cs}=(16\pi G\alpha\dot{\phi})^{-1}.\end{split} \tag{7}\]

For \(\frac{k}{a}>|m_{cs}|\), \(1/D_{A}\) diverges, which indicates that there are ghost modes [71]. In our study of the resonant amplification of GWs, to avoid the ghost modes, we take the regulator

\[\frac{k}{am_{cs}}\longrightarrow\frac{k}{am_{cs}}f\left(\frac{k}{a\Lambda}\right),\quad\text{with}\quad f(x)=\left\{\begin{array}{ll}0,&x>1,\\ 1,&x<1.\end{array}\right. \tag{8}\]

For \(t\gg t_{*}\), the numerical solution behaves as \(\dot{\phi}\sim\theta fA_{1}m_{a}a(t)^{-3/2}\sin(m_{a}t+\psi_{0})\). We then choose \(\Lambda=\left(16\pi G\alpha a(t)^{-3/2}\theta f\left(A_{1}m_{a}+\frac{A_{2}}{t}\right)\right)^{-1}\), where \(A_{1}=1.66415,A_{2}=-0.118543\) are determined numerically by requiring the cut-off scale to be smaller than the amplitude of the oscillating \(m_{cs}\), i.e., \(|m_{cs}|\geq\Lambda\). For physical wavenumbers \(k/a<\Lambda\), the equations of motion are almost unchanged; for \(k/a>\Lambda\), the regulator is equivalent to turning off the CS term, for simplicity. Technically, we use \(f(x)=(1+x^{20})^{-1}\). From the viewpoint that the model is an effective quantum field theory, for \(k/a>\Lambda\) the ghost appears and physics from higher-order terms should be considered. Fig. 1 shows where ghost modes appear: the blue region corresponds to \(k/a>|m_{cs}|\). Notice that \(m_{cs}\) oscillates with the axion velocity \(\dot{\phi}\). We have chosen \(\theta=1,\alpha fm_{a}^{2}=5\times 10^{37}\text{GeV}^{2}\).

## III Resonant amplification of GWs

The enhancement of GWs passing through an axion cloud has been studied in [70; 72; 48; 73]. Those studies mainly focus on a much later epoch, when \(H\ll m_{a}\) and the effects of the universe's expansion can be neglected. Their result shows that a resonant amplification of GWs happens when \(k=m_{a}/2\). However, when the Universe's expansion is taken into account, the picture becomes slightly different. Fig. 2 shows how the amplitudes of the GWs evolve for an example mode with \(k=10m_{a}\). \(h_{A}/h_{0}\) is the amplitude of the GWs normalized by its initial value. When \(k/a\gg m_{a}\), the amplitudes are almost unchanged. As time goes by, \(k/a\) decreases. When \(k/a\sim m_{a}/2\), resonant amplification occurs, and \(h_{L}\) increases exponentially. Notice that in our calculation the two modes evolve dramatically differently, which implies that the CS-modified gravity breaks parity symmetry. Notice also that the replacement \(\phi\rightarrow-\phi,\ L\leftrightarrow R\) does not change Eq. (7), which means that the behaviors of the two modes are interchanged if we instead use the initial condition \(\phi(t_{*})=-f\theta\). When \(a\) becomes sufficiently large, \(k/a\) becomes much smaller than \(m_{a}/2\); resonant amplification stops, and \(h_{A}\) decreases slowly as the Universe expands. For different values of \(k/m_{a}\), we numerically solve Eq. (7) and obtain the magnification of the GWs. The energy density scales as \(\Omega\propto h_{A}^{2}\).
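To make the resonant amplification concrete, the sketch below integrates Eq. (7) for a single comoving mode in the same dimensionless units, using the late-time envelope of \(\dot{\phi}\) quoted above. For brevity the smooth regulator \(f(x)=(1+x^{20})^{-1}\) is applied to the instantaneous \(k/(am_{cs})\) rather than to the envelope-based cutoff \(\Lambda\), and the dimensionless coupling `C` and wavenumber `kappa` are illustrative values, not the paper's benchmark points.

```python
import numpy as np
from scipy.integrate import solve_ivp

A1, psi0 = 1.66415, 0.0     # late-time fit of the axion velocity envelope
C, kappa = 0.5, 10.0        # C ~ 16*pi*G*alpha*theta*f*m_a^2*A1,  kappa = k/m_a

def a_of(tau):
    return np.sqrt(2.0 * tau)             # a = sqrt(t/t_*) with m_a t_* = 1/2

def D(tau, lam):
    a = a_of(tau)
    cs = C * a**-1.5 * np.sin(tau + psi0) * kappa / a   # ~ k/(a m_cs)
    reg = 1.0 / (1.0 + np.abs(cs)**20)                  # smooth regulator
    return 1.0 - lam * cs * reg

def rhs(tau, y, lam, d=1e-4):
    h, hdot = y
    H = 1.0 / (2.0 * tau)                               # H/m_a
    Ddot = (D(tau + d, lam) - D(tau - d, lam)) / (2 * d)
    return [hdot, -(3*H + Ddot / D(tau, lam)) * hdot - (kappa / a_of(tau))**2 * h]

for lam, name in [(-1, "h_L"), (+1, "h_R")]:
    sol = solve_ivp(rhs, (0.5, 400.0), [1.0, 0.0], args=(lam,),
                    rtol=1e-8, max_step=0.05)
    print(name, "final |h|/|h_0| ~", abs(sol.y[0, -1]))
```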
When \(a\gg 2k/m_{a}\), \(\Omega/\Omega_{0}\) tends to a constant, where \(\Omega_{0}\) is the energy density when the CS modification is absent. The results are shown in Fig. 3. For \(k\gg m_{a}\), the axion field \(\phi\) has already become much smaller by the time \(k/a=m_{a}/2\), thus the magnifications are small. For \(k<5m_{a}\), \(k/a\) varies too fast when the resonant amplification happens, which leaves less time for resonant amplification. The peak of the GW spectrum is at \(k\sim 6m_{a}\) for the parameters \(\theta=1,\alpha fm_{a}^{2}=5\times 10^{37}\text{GeV}^{2}\). As Fig. 1 shows, for \(k<3m_{a}\) we have \(\left|\frac{k}{am_{cs}}\right|>1\) when \(k/a=m_{a}/2\), which is where we turn on our regulator.

Figure 1: The blue region corresponds to \(k/a>|m_{cs}|\), and the red line is \(k/a=m_{a}/2\), where the resonance occurs, as we will discuss in the next section.

Inflation can generate a scale-invariant gravitational wave spectrum. The spectrum can be approximated by [74]

\[\begin{split}\Omega_{\text{gw}}h^{2}(\tau_{0},k)&\approx\frac{1}{24}\Omega_{\gamma}h^{2}\left(\frac{g_{*\rho,\text{hc}}}{2}\right)\left(\frac{g_{*s,\text{hc}}}{g_{*s,\text{fin}}}\right)^{-\frac{4}{3}}\mathcal{P}_{T}(k)\\ &\approx 1.29\times 10^{-17}\left(\frac{g_{*s,\text{fin}}}{3.931}\right)^{\frac{4}{3}}\left(\frac{g_{*\rho,\text{hc}}}{106.75}\right)\left(\frac{g_{*s,\text{hc}}}{106.75}\right)^{-\frac{4}{3}}\left(\frac{V_{\text{inf}}^{1/4}}{10^{16}\,\text{GeV}}\right)^{4},\end{split} \tag{9}\]

where \(V_{\text{inf}}^{1/4}\) is the energy scale of inflation. If axions exist, the GWs from inflation can be magnified at certain specific frequencies. We could also consider other sources as the seeds of the stochastic gravitational waves for our axion enhancement study; here, for simplicity, we take the simplest scale-invariant spectrum from inflation as an example.

Figure 3: Magnification of the energy density of GWs. The red line corresponds to \(\theta=1,\alpha fm_{a}^{2}=5\times 10^{37}\text{GeV}^{2}\), while the green line corresponds to \(\alpha fm_{a}^{2}=2.5\times 10^{37}\text{GeV}^{2}\).

Suppose the GWs induced by inflation are at the level \(h^{2}\Omega_{0}\sim 10^{-16}\)[74], which is well below the upper bound on the amount of radiation \(\int\Omega_{GW}(k)d\ln k\leq 5.6\times 10^{-6}\Delta N_{eff}\)[28], where \(\Delta N_{eff}\) is the number of extra neutrino species and roughly satisfies \(\Delta N_{eff}<0.2\). We show the energy density after the amplification by axions in Fig. 4. The shape of the energy density spectrum follows the magnification of Fig. 3, and the frequencies are

\[\begin{split} f&=\frac{k}{2\pi}\frac{a_{*}}{a_{0}}=\frac{k}{2\pi}\frac{T_{0}}{T_{*}}\left(\frac{g_{0}}{g_{*}}\right)^{1/3}\\ &\approx 7.125\times 10^{-4}\left(\frac{100}{g_{*}}\right)^{1/12}\left(\frac{k}{m_{a}}\right)\left(\frac{m_{a}}{\rm eV}\right)^{1/2}{\rm Hz}.\end{split} \tag{10}\]

We have used Eq. (4) and \(T_{0}=2.725\text{K},g_{0}=3.938\). The value of \(k/m_{a}\) can be read directly from Fig. 3. The parameters for Fig. 4 are shown in Table 1, where BP3 and BP4 are consistent with the QCD axion. Frame-dragging experiments give a limit on the CS gravity coupling of [69; 75] \(\kappa^{-1}\alpha\dot{\phi}<3000\text{km}\), or \(\alpha\Omega_{\phi}^{1/2}<5.25\times 10^{81}\text{GeV}^{-1}\), where \(\Omega_{\phi}\) is the energy fraction of axions today. The energy fraction of axions when the axion field began to oscillate is \(\Omega_{\phi}\sim 2.6\times 10^{-4}\left(\frac{f}{10^{17}\text{GeV}}\right)^{2}\), which is consistent with a radiation-dominated universe.
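For orientation, Eq. (10) can be wrapped in a small helper that maps a comoving wavenumber (in units of \(m_{a}\)) to today's gravitational wave frequency; the example inputs below are illustrative.

```python
def gw_frequency_today(k_over_ma, m_a_eV, g_star=100.0):
    """Redshifted GW frequency today in Hz, following Eq. (10)."""
    return 7.125e-4 * (100.0 / g_star) ** (1.0 / 12.0) * k_over_ma * m_a_eV ** 0.5

# the magnification in Fig. 3 peaks near k ~ 6 m_a
print(gw_frequency_today(6.0, 1.0))     # m_a = 1 eV      -> a few mHz (LISA band)
print(gw_frequency_today(6.0, 1e-12))   # m_a = 1e-12 eV  -> a few nHz (PTA band)
```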
The energy fraction of GWs at \(H=m_{a}\) is \(\Omega_{gw*}\simeq 2.78\times 10^{4}\left(\frac{g_{*}}{100}\right)^{1/3}\Omega_{gw}(T=2.7K)\). For \(\Omega_{gw}(T=2.7K)\sim 10^{-10}\), the energy density when \(H=m_{a}\) is about \(\Omega_{gw*}\sim 10^{-6}\). In our calculations, \(\int\Omega_{gw*}(f)d\ln(f)\ll\Omega_{\phi}\) is always maintained, thus we can safely neglect the backreaction of the GWs on the axion dynamics. When \(H\ll m_{a}\) and \(\ddot{a}/a\ll m_{a}^{2}\), Eq. (3) has the solution \(\phi(t)\approx\phi(t_{i})(a/a_{i})^{-3/2}\cos\left(m_{a}(t-t_{i})\right)\), thus the energy density \(\rho_{\phi}\propto a^{-3}\), which behaves like dark matter. To avoid overproducing dark matter, \(\phi\) should decay to other particles such as dark photons [59]. In our scenario, the axion decay width should fulfill \(\Gamma\ll m_{a}\) so that GWs are generated before the decay. In Fig. 4, the dashed part of each line denotes the region where \(\left|\frac{k}{am_{cs}}\right|>1\) when \(k/a=m_{a}/2\) and our regulator is turned on; our result may be unreliable in these regions. We find that, after the axion enhancement, the GWs can possibly be detected by Taiji [18], TianQin [17], and LISA [15] for axion masses \(m_{a}=10^{-3}\text{eV}\sim 1\text{eV}\), and by IPTA [12] and SKA [76; 77] for \(m_{a}=10^{-12}\text{eV}\).

## IV Discussion and Summary

We propose a novel generation mechanism for gravitational waves, as well as an efficient way to detect axions/ALPs. The tachyonic instability induced by the axion-graviton coupling is a previously less studied effect, and it is much less constrained than the axion-photon coupling. The Chern-Simons axion-graviton coupling is an interesting modification of Einstein gravity, an extension of the effective field theory of the Einstein-Hilbert action. This Chern-Simons gravity term can generate an axion-photon coupling at the loop level with Planck suppression. A large axion-graviton coupling can generate gravitational waves in the radiation-dominated era, much later than the inflationary and preheating epochs that are usually explored. The peak frequency of the generated GWs is red-shifted today and can be detected by different GW detection methods, from \(10^{-9}\) Hz up to \(10^{-2}\) Hz. We have also numerically studied the shape of the GW spectrum, without considering the backreaction, which can be a future direction. The upper bounds on this power spectrum are mostly observational, as we discussed earlier. The parameters we used in the figure are well below the bounds from \(\Delta N_{eff}\) in the radiation era, so this mechanism can also be a source of sizable dark radiation.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline – & \(m_{a}\) & \(f\) & \(\alpha\) & \(\theta\) \\ \hline BP1 (ALP) & 1 eV & \(10^{17}\) GeV & \(5\times 10^{38}\) GeV\({}^{-1}\) & 1 \\ \hline BP2 (ALP) & \(10^{-12}\) eV & \(10^{17}\) GeV & \(5\times 10^{62}\) GeV\({}^{-1}\) & 1 \\ \hline BP3 (QCD axion) & \(2\times 10^{-12}\) eV & \(3\times 10^{18}\) GeV & \(4.2\times 10^{60}\) GeV\({}^{-1}\) & 1 \\ \hline BP4 (QCD axion) & \(2\times 10^{-12}\) eV & \(3\times 10^{18}\) GeV & \(2.1\times 10^{60}\) GeV\({}^{-1}\) & 1 \\ \hline BP5 (ALP) & 1 eV & \(10^{16}\) GeV & \(2.5\times 10^{39}\) GeV\({}^{-1}\) & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters for the 5 benchmark points in Fig. 4. Note that BP3 and BP4 are consistent with the QCD axion.

Figure 4: GW spectra after the amplification by axions. The dashed part of each line denotes the region \(\left|\frac{k}{am_{cs}}\right|>1\) when \(k/a=m_{a}/2\), where our regulator is turned on. We plot 5 benchmark points with the parameters in Table 1 and show the sensitivity curves of Taiji [18], TianQin [17], and LISA [15], as well as IPTA [12] and SKA [76; 77].

###### Acknowledgements.

S. Sun is supported by the National Natural Science Foundation of China (No. 12105013). Q. Yan is supported by the Natural Science Foundation of China under Grants No. 11875260 and No. 12275143. Z. Zhao has been partially supported by a China and Germany Postdoctoral Exchange Program between the Office of China Postdoctoral Council (OCPC) and DESY.
2309.10248
What is the Best Automated Metric for Text to Motion Generation?
There is growing interest in generating skeleton-based human motions from natural language descriptions. While most efforts have focused on developing better neural architectures for this task, there has been no significant work on determining the proper evaluation metric. Human evaluation is the ultimate accuracy measure for this task, and automated metrics should correlate well with human quality judgments. Since descriptions are compatible with many motions, determining the right metric is critical for evaluating and designing effective generative models. This paper systematically studies which metrics best align with human evaluations and proposes new metrics that align even better. Our findings indicate that none of the metrics currently used for this task show even a moderate correlation with human judgments on a sample level. However, for assessing average model performance, commonly used metrics such as R-Precision and less-used coordinate errors show strong correlations. Additionally, several recently developed metrics are not recommended due to their low correlation compared to alternatives. We also introduce a novel metric based on a multimodal BERT-like model, MoBERT, which offers strongly human-correlated sample-level evaluations while maintaining near-perfect model-level correlation. Our results demonstrate that this new metric exhibits extensive benefits over all current alternatives.
Jordan Voas, Yili Wang, Qixing Huang, Raymond Mooney
2023-09-19T01:59:54Z
http://arxiv.org/abs/2309.10248v1
# What is the Best Automated Metric for Text to Motion Generation? ###### Abstract There is growing interest in generating skeleton-based human motions from natural language descriptions. While most efforts have focused on developing better neural architectures for this task, there has been no significant work on determining the proper evaluation metric. Human evaluation is the ultimate accuracy measure for this task, and automated metrics should correlate well with human quality judgments. Since descriptions are compatible with many motions, determining the right metric is critical for evaluating and designing effective generative models. This paper systematically studies which metrics best align with human evaluations and proposes new metrics that align even better. Our findings indicate that none of the metrics currently used for this task show even a moderate correlation with human judgments on a sample level. However, for assessing average model performance, commonly used metrics such as R-Precision and less-used coordinate errors show strong correlations. Additionally, several recently developed metrics are not recommended due to their low correlation compared to alternatives. We also introduce a novel metric based on a multimodal BERT-like model, MoBERT, which offers strongly human-correlated sample-level evaluations while maintaining near-perfect model-level correlation. Our results demonstrate that this new metric exhibits extensive benefits over all current alternatives. ## CCS Concepts * **Computing methodologies \(\rightarrow\) Procedural animation; Motion capture; Natural language processing; Natural language generation; Temporal reasoning; Spatial and physical reasoning; Model verification and validation; Human-centered computing \(\rightarrow\) Visualization design and evaluation methods.** ## Keywords Multi-modal, human evaluation ### ACM Reference Format: Jordan Voas, Yili Wang, Qixing Huang, and Raymond Mooney. 2023. What is the Best Automated Metric for Text to Motion Generation?. In _SIGGRAPH Asia 2023 Conference Papers (SA Conference Papers '23), December 12-15, 2023, Sydney, NSW, Australia_. ACM, New York, NY, USA, 11 pages. [https://doi.org/10.1145/3610548.3618185](https://doi.org/10.1145/3610548.3618185) ## 1. Introduction High-quality human motion generation in animation has a wide range of applications, from creating realistic CGI in cinema to enabling context-aware character movement in video games. The increasing interest in generating human motions from natural language descriptions (text-to-motion) is evident (Ahuja and Morency, 2019; Delmas et al., 2022; Ghosh et al., 2021; Guo et al., 2022; Lin et al., 2018; Punnakkal et al., 2021; Zhang et al., 2022). Natural language offers a convenient and expressive means for controlling generative models, similar to image (Ramesh et al., 2022) and video (Singer et al., 2022) generation. Users can specify the desired actions or poses they want the motion to exhibit, such as global transitions like running, jumping, and walking, or localized actions like throwing or kicking. They may also indicate concurrent sub-motions or sequential motions. The generated motion sequence should accurately match the prompt while appearing natural. Determining the best-automated metric for human motion generation from natural language prompts is crucial for developing effective models. Although human judgment is considered the gold standard, comparing large sample sizes is time-consuming and expensive. 
Stochasticity in recent models adds to this challenge, necessitating extensive repetitions for accurate results. Our objective is to identify the best automated metric for evaluating language-conditioned human motion generations, with "best" referring to the metric most closely correlated with human judgments. While various automated metrics have been proposed (Ahuja and Morency, 2019; Ghosh et al., 2021; Guo et al., 2022) and some works have conducted comparative human evaluations (Guo et al., 2022; Petrovich et al., 2022), none have directly addressed this question. Developing appropriate automated metrics correlated with human judgments has been vital in fields such as machine translation (Papineni et al., 2002; Zhang et al., 2019), and we believe it is essential for advancing text-to-motion methods. To complement existing metrics, we propose novel ones that improve correlation with human judgment while being differentiable and capable of enhancing optimization when integrated into training losses. One novel metric, a multimodal BERT-like model MoBERT, offers sample level evaluation scores with significantly improved human judgment correlations. Multiple distinct aspects should be considered when assessing the quality of generated human motions. We evaluate human motion quality by focusing on the following: * **Naturalness**: How realistic is the motion to a viewer? Unnatural motions exhibit inhuman or improbable poses or display global transitions without appropriate actions. * **Faithfulness**: How well does the generated motion align with the natural language prompt? Unfaithful motions will omit key components or include irrelevant ones. Our main contributions are: * A dataset of motion-text pairs with human ratings of _Naturalness_ and _Faithfulness_ for evaluating automated metrics. * A critical evaluation of existing text-to-motion automated metrics based on correlation with human judgments. * The development of novel high-performing automated metrics, including MoBERT, offering the first strongly human-correlated evaluation metric for this task. We also discuss how MoBERT addresses limitations of existing metrics, advancing future architecture comparison and development. 1 Footnote 1: Our metric evaluation code and collected human judgment dataset are included as supplemental material to this work. Our novel evaluator model, MoBERT, is available at [https://github.com/jroost55/MoBERT](https://github.com/jroost55/MoBERT). ## 2. Related Works We review prior research on human motion generation, which includes both unconconditioned and conditioned generation, and discuss the evaluation metrics used in previous studies. ### Human Motion Generation Early unconditioned human motion generation approaches employed statistical generative models (Ikemoto et al., 2009; Mukai and Kuriyama, 2005), while more recent models have adopted deep learning techniques. Some studies have applied Variational Autoencoder (VAE) models (Kingma and Welling, 2013) for motion forecasting based on historical fragments (Aliakbarian et al., 2020; Ling et al., 2020; Rempe et al., 2021; Tulyakov et al., 2017). Others have used Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) to enhance the quality of generations (Barsoum et al., 2017). Normalization Flow Networks have also been explored (Henter et al., 2020). The majority of these methods employ joint-based frameworks, utilizing variants of the SMPL (Loper et al., 2015) body model, which represents the body as a kinematic tree of connected segments. 
For conditioned motion generation, various types of conditioning exist. Some studies have conditioned on fixed action categories, which simplifies the task compared to natural language conditioning but limits diversity and controllability. (Guo et al., 2020) employs a recurrent conditional VAE, while (Petrovich et al., 2021) uses a category-conditioned VAE with Transformers (Vaswani et al., 2017). Natural language conditioning allows for fine-grained motion control, enabling temporal descriptions and specification of individual body parts. Early efforts utilized a Seq2Seq approach (Lin et al., 2019). et al., 2018). Other studies learned a joint embedding projection for both modalities [Ahuja and Morency, 2019; Ghosh et al., 2021] and generated motions using a decoder. Some research applied auto-regressive methods [Guo et al., 2022a], encoding text and generating motion frames sequentially. Recent approaches, such as [Petrovich et al., 2022], use stochastic for diverse generations. Others employed diffusion-based models [Kim et al., 2022][Zhang et al., 2022][Tevet et al., 2022][Wei et al., 2023][Chen et al., 2022][Shafir et al., 2023][Zhang et al., 2023a][Han et al., 2023]. Recent models have taken inspiration from GPT-like LLM's through learned motion vocabularies and have competed with diffusion methods for SOTA performance [Zhang et al., 2023b][Jiang et al., 2023][Zhou and Wang, 2022][Zhang et al., 2023c]. Related tasks have also been investigated, such as [Li et al., 2020] or [Tseng et al., 2022], which conditions motion generation on music. Some models treat the task as reversible, captioning motions and generating them from language prompts [Guo et al., 2022b]. Others generate stylized character meshes to pair with the generated motions, conditioned on language prompt pairs [Hong et al., 2022; Youwang et al., 2022]. Adjacent efforts have focused on scene or motion path-based conditioning, allowing for high-quality animation of character movements along specific paths in an environment [Holden et al., 2017][Ling et al., 2020b][Huang et al., 2023]. ### Metrics for Automated Evaluation of Human Motions Various metrics have been used to evaluate text-to-motion. [Ahuja and Morency, 2019] employed Average Position Error (APE) and pioneered the practice of dividing joints into sub-groups for different versions of APE. [Ghosh et al., 2021] introduced Average Variance Error and also considered versions dependent on which joints (root versus all) are being used and whether global trajectories are included. [Petrovich et al., 2022] and [Kim et al., 2022] adopted similar methods, but recent works have moved away from these metrics despite no study establishing them as poor performers. [Guo et al., 2022a] developed a series of metrics based on their previous work for category-conditioned motion generation, advocating for Frechet Inception Distance (FID) [Heusel et al., 2017], which is commonly used in image generation and measures output distribution differences between datasets. [Guo et al., 2022a] also included R Precision, a metric based on retrieval rates of samples from batches using embedded distances, metrics to evaluate diversity, as well as one measuring the distance of co-embedding in each modality. These metrics have become standard, used by [Guo et al., 2022b; Kim et al., 2022; Tevet et al., 2022; Zhang et al., 2022]. 
These metrics rely on a text and motion co-encoder, so proving the effectiveness of the encoder is crucial for these metrics if they are to be used for judging model performance. [Yuan et al., 2022] expanded these metrics to measure factors of physical motion plausibility. The GENEA Challenge [Kucherenko et al., 2021] provides a collective assessment of co-speech motion generation methods through standardized human evaluations. It divides human judgments into _Human-likeness_ and _Appropriateness_, corresponding to our _Naturalness_ and _Faithfulness_. Recent findings by [Yoon et al., 2022] indicate that current methods generate natural motions at or above rates for baseline captures but underperform in faithfulness. While not directly applicable to text-to-motion, this research provides valuable data for understanding the performance of current methods and guiding future work in the area, including novel metrics. ## 3. Dataset Collection ### Baseline Models Evaluated We evaluate four implementations to assess a range of motion qualities and focus on issues relevant to top-performing models: [Guo et al., 2022a], TM2T [Guo et al., 2022b], MotionDiffuse [Zhang et al., 2022], and MDM [Tevet et al., 2022]. These models, trained on the HumanML3D dataset [Guo et al., 2022a], support 22 joint SMPL body models [Loper et al., 2015], enabling consistent animation methods for human ratings. We also include reference motions from HumanML3D as a baseline for non-reference evaluation metrics. ### Motion Prompt Sample Collection We sourced motion prompts from the HumanML3D test set. To ensure diverse and representative prompts, we encoded them using the RoBERTa language model's CLS outputs [Liu et al., 2019]. The embeddings were projected onto a low-dimensional space and we randomly sample from the resulting dataset's distribution, taking the nearest unsampled entry, to obtain 400 unique sample prompts. These prompts generated a dataset of 2000 motions, with 400 motions for each of the five baseline models (including HumanML3D). For models generating fixed-length motions, we used a length of 120 motion frames. All models were generated at the 20 Hz frequency used in HumanML3D. ### Motion Visualization Recent studies [Guo et al., 2022a; Petrovich et al., 2022] utilized stick figure renderings for evaluation, but this approach has limitations. Evaluating _Naturalness_ using stick figures can be challenging, as they are not relatable to raters. Moreover, they often lacked realistic environments, such as walls, floors, lighting, and textures. To address these limitations, we created high-quality renders using Blender [Community, 2018], focusing on environmental details and camera movements for natural motion perception (Figure 1). ### Human Quality Ratings Collection We collected human quality ratings using Amazon Mechanical Turk and a custom UI. To ensure quality, we implemented qualification requirements, in-tool checks, and post-quality criteria. We hand-picked 25 motion-text pairs from the 2000 motion samples we generated and used them as gold test questions2. The remaining annotations were divided into 20-pair batches, each containing five randomly placed gold test samples. We collected three ratings per sample and discarded batches that failed qualification checks. Footnote 2: Gold test questions ground truth labels were judged by the Authors. Motions for which the ratings were deemed to be overly subjective were not included in the gold test set. 
Ratings were presented as natural language descriptions corresponding to Likert Scale ratings (0 to 4). Annotators had access to a tooltip with detailed descriptions for each rating level during the task, all shown in Figures 3, 4, and 5 of the supplement. Ratings were rejected if more than two of the five test questions deviated by more than one from the "correct" answer. This allowed for subjectivity, missed details, and slight rating scale understanding differences. Significant deviations in rating scale understanding or guessing would pass a single question occasionally, but over the ten independent ratings would be detected with a high likelihood. In-tool quality checks required watching the entire video before progressing and capped the rate of progression to 12 seconds per sample. These measures aimed to prevent rushing and encourage thoughtfulness. Qualification requirements included residing in the U.S., completing over 1000 hits, and a minimum 98% acceptance rate. Quality checks were disclosed in the task instructions. We paid $1.25 per HIT, equating to at least $12 per hour. We removed samples with less than three ratings for all models, resulting in 1400 rated motion-text pairs (280 distinct prompts for each baseline model). Averaging the three ratings provided final _Naturalness_ and _Faithfulness_ values. We show in Figure 6 the dataset's distribution to be generally normal, while Table 2 shows high inter-annotator agreement (Krippendorff's Alpha) was obtained. ## 4. Evaluated Metrics We evaluate most automated metrics from recent works as well as new ones. We assess each metric's correlation with samples on both individual and model levels, whenever possible. Sample level correlations are computed on individual sample scores across baselines, reflecting the metric's capability to evaluate individual generations. Model-level correlations are determined using the mean metric score for all samples generated by a specific baseline model, which are then correlated with the mean human rating for the corresponding samples. This assesses how well the metric can judge model performance ranking. These levels can be distinct since metrics with outlier failures may negatively impact sample-level evaluation but have reduced effects when averaged over many samples. To calculate FID, R-Precision, and Multimodal Distance the motion features must be projected into an embedding space using an encoder. The encoder used was developed by (Guo et al., 2022) and is standard for these metrics. ### Existing Metrics #### 4.1.1. Coordinate Error (CE) Metrics Average Error (AE), also known as Average Position Error (APE) when applied to joint positions (Ahuja and Morency, 2019), and Average Variance Error (AVE) (Ghosh et al., 2021) are reference-based metrics employed in early works but have become less common recently. They calculate the mean L2 errors between reference and generated values, either absolute or as variance across frames, for each joint in the motion. We refer to these as coordinate error (CE) metrics, defined as: \[AE=\frac{1}{JT}\sum_{j\in J}\sum_{t\in T}\|X_{t}[j]-\hat{X}_{t}[j]\|_{2} \tag{1}\] \[\sigma[j]=\frac{1}{T-1}\sum_{t\in T}(X_{t}[j]-\hat{X}_{t}[j])^{2} \tag{2}\] \[AVE=\frac{1}{J}\sum_{j\in J}\|\sigma[j]-\hat{\sigma}_{t}[j]\|_{2} \tag{3}\] Where \(j\) represents a joint from all 22 joints \(J\), and \(t\) denotes a motion frame from the motion sequence \(T\). We matched frame lengths for reference and generated motions by clipping the longer one. 
We investigate CE metrics on positional values and their variations on positional derivatives, such as velocity and acceleration, calculated using frame-wise differences. Additionally, we evaluate these metrics on combinations of position and its derivatives. Similar to (Ghosh et al., 2021), we consider three joint groupings for CE metrics: root only, all joints excluding the root (Joint), and all joints (Pose). Prior works (Ahuja and Morency, 2019; Ghosh et al., 2021) suggested that AE on the root joint best aligns with quality. We hypothesize that this effect might stem from scaling issues when the root translations are included in combined calculations with other joints, causing their errors to dominate the metric. To test this, we explore potential root joint scaling factors, altering their transitions contribution to the metric's final score for the mean. We also examined the impact of scaling factors on each component when calculating combined position-velocity (PV) or position-velocity-acceleration (PVA) CE. Component-based scaling acts as a weighted average, with scaling factors increasing or decreasing the component errors, while root scaling adjusts the effects of root translation on all joint positions. #### 4.1.2. Frechet Inception Distance (FID) The Frechet Inception Distance (FID) (Heusel et al., 2017) is a widely used metric for generative tasks, which measures the alignment between two distributions. To compute FID, one must first obtain the mean and variance of each distribution from a large sample size. In generative tasks, these typically correspond to the reference samples (a valid distribution) and the generative model samples. A lower FID indicates better alignment between the generative and reference distributions. FID is calculated as follows for distributions \(D_{1}\) and \(D_{2}\): \[FID(D_{1},D_{2})=|\mu_{1}-\mu_{2}|+tr(\Sigma_{1}+\Sigma_{2}-2(\Sigma_{1}\Sigma_ {2})^{\frac{1}{2}}) \tag{4}\] As FID is only accurate with large sample sizes, we report correlations for FID at the model level only and do not report correlation scores for individual samples. #### 4.1.3. R-Precision R-Precision is a distance-based metric that measures the rate of correct motion-prompt pair matchings from a batch of random samples. Both motions and prompts are projected into a co-embedding space, and Euclidean Distance calculations are used to rank pair alignments. Scores of one are received if the correct matching is made within a rank threshold (Retrieval Allowance), and zero otherwise. Averaged over numerous samples, this provides a precision of retrieval metric. Higher Retrieval Allowance thresholds yield higher R-Precision scores, as they are more forgiving of imperfect embedding spaces and account for multiple motions described by the same prompt randomly being included in the batch. R-Precision scores for thresholds of 1-3 are commonly reported. We analyze the correlation for R-Precision scores with thresholds of 1-20 and hold the batch size to 32, following common practice (Guo et al., 2022). #### 4.1.4. Multimodal Distance This metric measures the distance between the generated motion embedding and the co-embedding of the prompt used for generation. When the two encoders are well-aligned in the embedding space, low scores suggest motions closely matching the prompt, while high scores indicate significant deviations in features (Guo et al., 2022). 
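For reference, the Fréchet distance of Eq. (4) can be computed as in the following sketch, which uses the standard form \(\|\mu_{1}-\mu_{2}\|^{2}+\mathrm{tr}(\Sigma_{1}+\Sigma_{2}-2(\Sigma_{1}\Sigma_{2})^{1/2})\); in practice the embeddings would come from the motion encoder of (Guo et al., 2022a), whereas random arrays stand in for them here.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(emb_a, emb_b):
    """FID (Eq. 4) between two embedding sets of shape (N, d)."""
    mu1, mu2 = emb_a.mean(axis=0), emb_b.mean(axis=0)
    s1 = np.cov(emb_a, rowvar=False)
    s2 = np.cov(emb_b, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):               # discard small imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean))

# placeholder embeddings standing in for the motion-encoder features
ref = np.random.randn(1000, 512)
gen = np.random.randn(1000, 512) + 0.05
print(frechet_distance(ref, gen))
```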
### MoBERT: Multimodal Transformer Encoder Evaluator Our novel evaluation method, MoBERT, is inspired by past learned metrics such as CLIPScore (Hessel et al., 2022), that score the alignment between a multimodal pair. However, MoBERT distinguishes itself by its ability to evaluate both modalities using a shared Transformer Encoder (Vaswani et al., 2017) through a multimodal sequence embedding. This approach, as shown in Figure 2, employs the attention mechanism of the Transformer to capture detailed relationships between the motion chunks and textual tokens. Compared to CLIPScore, which uses separate encoders for each modality and combines the two modalities using cosine similarity, MoBERT's single Encoder approach allows for a richer understanding of the data. The Transformer Encoder's attention mechanism can learn to consider features across both modalities simultaneously, potentially capturing nuanced relationships between them that might be missed in a separate encoding scheme. In particular, this methodology allows MoBERT to consider the shared temporal aspects of motions and text prior to being collapsed to a single vector representation. This approach allows for more accurate prediction of correct and incorrect text pairings, allowing MoBERT to potentially outperform methods following CLIPScore's approach. #### 4.2.1. Encoding Motion Information To better contextualize motion in our model, we preprocess our \(N\times 22\times 3\) motions into an \(N\times 263\) representation following the approach in (Guo et al., 2022). This involves extracting motion transformations, such as root joint global transitions and rotations, to handle shifts in reference frames, as well as the linear velocities of each joint frame-to-frame and foot contact thresholding for a binary signal of foot-ground contact. To utilize frame-to-frame motion information and mitigate redundancy in the motion domain, we downsample encodings by chunking consecutive frames into frame chunks before converting them into embeddings. Our dataset motions span up to 200 frames, processed at 20 Hz. We group these into 14-frame chunks, as 0.7 seconds of motion information offers adequate encoding and information differentiation. To account for the simplicity of our chunking algorithm, we apply an overlap factor of 4 frames, duplicating overlapped frames in consecutive motion chunks. #### 4.2.2. Multimodal Tokenization Process For encoding text, we utilize a BPE (Gage, 1994) vocabulary and learned embeddings. We generate sequence embeddings from the textual and motion processes and merge them into a single sequence (Figure 2). We incorporate special tokens for CLS, start indicators, and padding embeddings. With short one or two-sentence descriptions and motions limited to a chunk length of 20, we train using a max context size of 64. Learned segment and positional tokens are added to inputs. #### 4.2.3. Training Process We used the HumanML3D dataset as the basis for our model's training. The model is trained through the task of **Alignment prediction** using Binary Cross Entropy loss. This task involves predicting a binary label that indicates whether a given motion corresponds to a specific textual description. For each motion-text pair in our training dataset, we randomly selected a contrastive textual description to serve as a negative label example. We then evaluate both valid and contrastive pairings with the model, resulting in alignment probability judgments. 
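Before turning to the output head, the frame-chunk tokenization of Sections 4.2.1-4.2.2 can be sketched as follows; the module name, embedding width, and the simple reshape-plus-linear projection are assumptions for illustration (the released MoBERT code may differ), but the 14-frame chunks with a 4-frame overlap follow the text.

```python
import torch
import torch.nn as nn

class MotionChunker(nn.Module):
    """Groups a (T, 263) motion into overlapping 14-frame chunks
    (4 frames of overlap -> stride of 10) and projects each chunk to d_model.
    Assumes T >= chunk_len; shorter motions would need padding."""
    def __init__(self, feat_dim=263, chunk_len=14, overlap=4, d_model=512):
        super().__init__()
        self.chunk_len, self.stride = chunk_len, chunk_len - overlap
        self.proj = nn.Linear(chunk_len * feat_dim, d_model)

    def forward(self, motion):                      # motion: (T, 263)
        T = motion.shape[0]
        starts = range(0, T - self.chunk_len + 1, self.stride)
        chunks = [motion[s:s + self.chunk_len].reshape(-1) for s in starts]
        return self.proj(torch.stack(chunks))       # (num_chunks, d_model)

chunker = MotionChunker()
motion = torch.randn(120, 263)                      # 6 s of motion at 20 Hz
print(chunker(motion).shape)                        # -> torch.Size([11, 512])
```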
We employed a compact MLP model over the CLS output embeddings, terminating in a sigmoid activation, to obtain an output alignment probability. A Binary Cross Entropy loss is used to encourage the model to predict alignment labels for valid pairings and anti-alignment labels for incorrect pairings, as shown in Equations 5 and 6.

\[H(q)=-\frac{1}{N}\sum_{i=1}^{N}y_{q}(i)\cdot\log(p(y_{q}(i),q))+(1-y_{q}(i))\cdot\log(1-p(y_{q}(i),q)) \tag{5}\]

\[\mathcal{L}_{1}=H(V)+H(R) \tag{6}\]

Here \(N\) is the number of motions in a batch, \(y_{q}(i)\) is the correct binary label for sample \(i\) given text grouping \(q\) (valid or contrastive), and \(p\) is the predicted alignment probability. \(V\) is the set of valid textual descriptions and \(R\) is the set of random contrastive descriptions. We found that this process could still present a difficult optimization landscape: the model would often predict a single label for all samples, minimizing the loss on one pairing despite increased losses on the other. To promote balanced predictions for each label, we achieved better results with the L2-balanced loss shown in Equation 7.

\[\mathcal{L}_{2}=\sqrt{H(V)^{2}+H(R)^{2}} \tag{7}\]

Additional tasks, in a multi-task learning framework, were trialed but did not improve performance and were not included in the version of MoBERT we report in this work.

_Improving Contrastive Examples._ The HumanML3D dataset provides low diversity of descriptions, with many being very similar. Further, motions can be described in multiple ways, both of which cause random contrastive textual samples to provide low-quality guidance. To address this, we used Sentence Transformer similarity scores to weight contrastive training examples and adjust our loss functions accordingly. Inverse similarity scores were applied as weights to the loss function, down-weighting similar descriptions to reduce label confusion. We employed the top-performing Huggingface "all-mpnet-base-v2" implementation. The contrastive loss was rescaled by the weights to maintain a consistent magnitude with the valid loss. The final loss function is shown in Equation 8, where \(\alpha\) represents the similarity score produced by the Sentence Transformer model, confined to \([0,1]\).

\[\mathcal{L}_{f}=\sqrt{H(V)^{2}+\left(\frac{(1-\alpha)H(R)}{\sum_{i}^{N}(1-\alpha_{i})}\right)^{2}} \tag{8}\]

Figure 2. Our MoBERT architecture and process flow. Green items represent inputs, white items indicate intermediate steps, red items denote output/losses, and blue items contain learned model parameters.

_Model Evaluation Process._ We assess the correlation of our baseline models' raw Alignment Probability scores from our training process. Since this data lacks human rating guidance, we also test our model's performance when trained on a small set of human judgment data. We do this by discarding the output layers of our model, using an aggregation of output embeddings, and fitting a lightweight SVR or Linear Regression layer to predict human judgments. The best performance is achieved using an RBF kernel SVR, with a Ridge regressor being the best fully differentiable option. The scikit-learn Python package is used for regression training, and hyperparameters are reported in the supplemental materials. To avoid overfitting to the small human judgment dataset, we apply ten-fold cross-validation, fitting regressors on 90% of the dataset's samples to predict the remaining portion. These cross-validated predictions are collected, reordered, and Pearson's correlation is calculated against the human judgment ratings.
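One possible PyTorch realization of the balanced, similarity-weighted objective of Equations 7-8 is sketched below; the per-sample normalization of the contrastive term and the variable names are assumptions for illustration, not the released training code.

```python
import torch
import torch.nn.functional as F

def mobert_alignment_loss(p_valid, p_contrastive, sim, eps=1e-8):
    """Balanced BCE over valid and contrastive pairings (Eq. 7), with the
    contrastive samples down-weighted by their Sentence-Transformer
    similarity to the valid caption (Eq. 8). All inputs are 1-D tensors;
    probabilities lie in (0, 1) and similarities in [0, 1]."""
    h_valid = F.binary_cross_entropy(p_valid, torch.ones_like(p_valid))
    w = 1.0 - sim                                    # dissimilar captions count more
    bce = F.binary_cross_entropy(p_contrastive,
                                 torch.zeros_like(p_contrastive),
                                 reduction="none")
    h_contrastive = (w * bce).sum() / (w.sum() + eps)
    return torch.sqrt(h_valid ** 2 + h_contrastive ** 2)

p_pos = torch.rand(8) * 0.98 + 0.01                  # placeholder predictions
p_neg = torch.rand(8) * 0.98 + 0.01
sim = torch.rand(8)
print(mobert_alignment_loss(p_pos, p_neg, sim))
```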
## 5. Results Analysis This section highlights the key findings from our evaluation. We employed Pearson's Correlation Coefficient (Sedgwick, 2012) to correlate metrics with human judgments, measuring the linear relationship between metrics as most of our data is interval rather than ordinal. We present model and sample level correlations between _Faithfulness_ and _Naturalness_ in Table 3. All values are uncorrected, and negative correlations are expected for certain metrics (FID or CE) since our human judgment ratings suggest better outcomes with opposing directions. Weak P-values are observed for many reported correlations, which is anticipated as they were calculated (for model level results) based on only five samples. Our strongly-performing metrics achieved P-Values near 0.05 at the model level, while our best-performing sample-level metrics (Pearson's of 0.2 or above) had near zero P-Values. ### Coordinate Error Metrics Results The primary CE-metric results are presented in Table 1 with further details in Figures 8 and 9. Despite relying on only a single reference, CE metrics show weak but significant correlations with human judgments for both _Faithfulness_ and _Naturalness_ at the sample level. Performance largely depends on non-Root transitions, with Joint POS AE and Joint POS AVE outperforming pure Root-based metrics. Root scaling does not surpass Joint metrics, and our derivative-based methods do worse than positional ones. Combining components only achieves results comparable to Joint POS-based metrics. Notably, AE performs better than AVE at the sample level with a significant margin (0.1 Pearson's). At the model level, CE-based metrics strongly correlate with human judgments. Root-only traditional AE metrics achieve nearly 0.75 Pearson's, while Root AVE metrics surpass AE with approximately 0.91 Pearson's. Interestingly, Joint versions are unreliable on their own at the model level, suggesting that the main components of model evaluation can be derived from Root transitions alone. This supports similar claims by (Ghosh et al., 2021). Root scaling enhances both metrics, with AVE nearing perfect correlation. Utilizing velocity derivatives benefits AE at the model level, and combining positions, velocity, and/or acceleration for both AVE and AE yields versions with greater than 0.99 Pearson's (Figure 9). #### 5.1.1. Root Scaling Exploration We provide visualizations with scaling factors in Figures 10 and 11 to investigate the effects of root scaling on Pose CE metrics. Consistent with previous observations, model-level correlations improve (i.e., become more negatively correlated) when additional weight is placed on Root transitions. PV and PVA AE are the only versions that do not exhibit this trend. Alternatively, overemphasizing Root transitions significantly degrades performance at the sample level. ### FID, R-Precision, and Multimodal Distance Results Results for FID, R-Precision, and Multimodal Distance are also shown in Table 1, with additional detail for R-Precision across various Retrieval Thresholds in Figure 7. We examine FID only at the model level as it requires distributional statistics over multiple samples, preventing sample-level calculation. We present results for R-Precision at the sample level, but R-Precision provides only binary values at this level and so it is poorly suited for sample-level comparisons with Likert ratings unless averaged over multiple samples. 
Multimodal Distance scored near zero at the sample level so none of these metrics provide sample-level alternatives to CE metrics. Regarding model-level results, FID achieves acceptable results for _Faithfulness_ with 0.71 Pearson's but significantly underperforms for _Naturalness_. Given the weak correlation with _Naturalness_ and model-level-only comparison, P-Values are notably weak. While these results are poor, it is possible our samples may provide an unfavorable setting for FID, or may improve with more samples. R-Precision demonstrates substantial correlations for both human quality judgments, approaching 0.8 Pearson's with standard settings. Our results suggest current Retrieval Thresholds are sub-optimally set, with thresholds of 4 and 5 being marginally better, and then declining at higher values. Since R-Precision and FID share an embedding space, strong R-Precision results may indicate that FID's poor performance is not due to sample selection. Multimodal Distance is only weakly correlated with human quality judgments. The results indicate that R-Precision, and possibly FID, are suitably correlated with human judgments. However, these metrics are less correlated than the CE metrics they replaced, and they preclude single-sample analysis, relying on many samples. Even if these metrics improved with larger sample sizes, an uncertain possibility, they would require substantial enhancements to match even traditional CE metrics such as Root POS AVE. ### MoBERT Results for our novel learned metric are shown in Table 1, highlighting its performance against the best alternative metrics at the sample and model level. We observe that MoBERT substantially outperforms the best alternatives at both levels. The alignment probability outputs, without human judgment supervision, achieve a sample-level correlation of 0.488 for _Faithfulness_, up from a previous best of 0.208. As expected, the correlation with _Naturalness_ is significantly weaker but still surpasses all other sample level correlations demonstrated by the baselines. Similarly strong results are observed for model-level performance. Using a learned regression model over the output features further improves the results, highlighting the benefits of training on a small amount (approximately 1260 samples) of human-judgment. Our sample level correlations for _Faithfulness_ and _Naturalness_ increase to 0.624 and 0.528, respectively, reaching the strongly-correlated range for _Faithfulness_ when using the SVR regression layer. Moreover, our model achieves near-perfect model-level correlations, verifying that its ability to signify improved model performance is highly reliable. We run additional experiments exploring MoBERTs ability to act as a text-free Naturalness evaluator in the supplemental materials. ### Discussion and Future Work Our findings underscore CE metrics as the most reliable baseline metric, demonstrating strong model-level performance supported by sample-level results. With the application of root/component scaling, CE metrics reached near-perfect model-level correlations, highlighting their significance when compared with newer metrics that showed weaker performance in our study. Although R-Precision and FID demonstrate some correlation with human judgments, their relative significance should be evaluated in context. R-Precision reveals a solid correlation, yet fall short compared with CE metrics and should be considered supplemental. 
FID, while showing acceptable correlations with _Faithfulness_ and some correlation with _Naturalness_, should be used with caution in consideration of its potential to improve with more samples, but not prioritized over more consistent metrics. We recommend against the use of Multimodal Distance due to its consistently weak correlations. MoBERT significantly outperforms all competitors, presenting the first metric with robust model-level and sample-level performance. This metric also avoids reliance on any reference motions for evaluation, making it usable in more situations and alleviating concerns about the one-to-many nature of this task. Additionally, it is fully differentiable and could be used as a training objective for generative models in order to further enhance performance. We recommend future evaluations employ our MoBERT evaluator alongside metrics such as R-Precision 1-5, FID, Pose POS AVE, and Root PV AE when assessing text-to-motion generation. Figure 11 can help determine optimal root scalings for Pose POS AVE. #### 5.4.1. MoBERT Out-of-Distribution (OOD) Robustness MoBERT was pretrained exclusively on the HumanML3D dataset. Even though the regression versions are trained to fit human judgments using moderately OOD data produced by various generative models, these models were trained to emulate the HumanML3D data. The human judgment fine-tuning potentially learns to harness the most reliable MoBERT output features. These features, inferred from the distinct distributions produced by motion generation models, suggest a potential for MoBERT to withstand OOD scenarios. However, without a substantially OOD dataset, aligned to the 22-joint SMPL body model of HumanML3D, and coupled with human judgments this remains speculative. Low diversity in our training also may result in our vocabulary not being well covered for infrequent tokens. To enhance MoBERT's adaptability, future efforts could retrain the regression versions with a growing, diverse dataset of human judgments as they are collected. This could enable MoBERT to better accommodate various motion types, textual inputs, or evolving concepts of **Naturalness** and **Faithfulness**. Nonetheless, when adapting MoBERT to OOD data, assessing its performance against relevant human judgments is recommended. \begin{table} \begin{tabular}{||l|c|c|c|c||} \hline & \multicolumn{2}{c|}{Model Level} & \multicolumn{2}{c||}{Sample Level} \\ \cline{2-5} \multicolumn{1}{c|}{Metric} & \multicolumn{1}{c|}{Faithfulness} & Naturalness & Faithfulness & Naturalness \\ \hline \hline Root AVE & \(\downarrow\) & -0.926 & -0.908 & -0.013 & 0.007 \\ Root AE & \(\downarrow\) & -0.715 & -0.743 & -0.033 & 0.037 \\ Joint AVE & \(\downarrow\) & -0.260 & -0.344 & -0.178 & -0.185 \\ Joint AE & \(\downarrow\) & -0.120 & -0.227 & -0.208 & -0.245 \\ \hline Multimodal Distance & \(\downarrow\) & -0.212 & -0.299 & 0.025 & 0.014 \\ R-Precision & \(\uparrow\) & 0.816 & 0.756 & 0.036 & 0.042 \\ FID & \(\downarrow\) & -0.714 & -0.269 & - & - \\ \hline MoBERT Score (Alignment Probability) & \(\uparrow\) & **0.991** & 0.841 & 0.488 & 0.324 \\ MoBERT Score (SVR Regression)\({}^{*}\) & \(\uparrow\) & 0.962 & **0.986** & **0.624** & **0.528** \\ MoBERT Score (Linear Regression)\({}^{*}\) & \(\uparrow\) & 0.951 & 0.975 & 0.608 & 0.515 \\ \hline \end{tabular} \end{table} Table 1. Pearson correlations with human judgments calculated for several existing metrics and our MoBERT model. The best-performing metric in each category is bolded. 
Models with (*) were judged through 10-fold cross-validation. R-Precision scores reported used the best settings identified (2 for sample level, 5 for model level). Arrows next to metrics indicate whether negative (\(\downarrow\)) or positive (\(\uparrow\)) correlation is expected. ## 6. Conclusions In this study, we compiled a dataset of human motions generated by recent text-to-motion models, accompanied by human quality assessments. By analyzing existing and newly proposed evaluation metrics, we identified those that best correlate with human judgments. R-Precision is a reliable metric for evaluating model quality, but traditional CE metrics and our novel versions with root and component scaling perform equally well or even better, suggesting that R-Precision should not be relied upon as the sole metric. Some newer metrics that have replaced CE metrics in some publications demonstrated suboptimal or even poor performance. Our novel proposed MoBERT evaluator significantly outperforms all competitors, offering a reliable metric at all levels while being reference free. However, efforts to enhance encoder quality or develop novel metrics to improve sample-level evaluations are further encouraged as well as continued human studies whenever possible. ### Limitations Our dataset with 1400 motion annotations is fairly small for automated evaluation and covers only a small fraction of the HumanML3D test set. Although our study presents strong findings for model-level averages, it includes only five models, making model-level correlations potentially vulnerable to chance. Our interannotator agreement is high, but all human annotation has the potential to introduce biases and noise. We used a single instruction for annotation and alternative instructions might yield different results. As motion generation techniques continue to advance, the samples used in our study may not accurately represent error distributions in future improved models, potentially affecting the determination of the best metric. Despite the strong correlations observed between some metrics and human judgments, independent human evaluations remain crucial for comparing model performance. ### Acknowledgements This research was partially supported by NSF NRI Grant IIS-1925082 and NSF IS-2047677 as well as funding from Wormpex AI Research.
2309.03552
Evaluating Microservice Organizational Coupling based on Cross-service Contribution
For traditional modular software systems, "high cohesion, low coupling" is a recommended setting while it remains so for microservice architectures. However, coupling phenomena commonly exist therein which are caused by cross-service calls and dependencies. In addition, it is noticeable that teams for microservice projects can also suffer from high coupling issues in terms of their cross-service contribution, which can inevitably result in technical debt and high managerial costs. Such organizational coupling needs to be detected and mitigated in time to prevent future losses. Therefore, this paper proposes an automatable approach to evaluate the organizational couple by investigating the microservice ownership and cross-service contribution.
Xiaozhou Li, Dario Amoroso dAragona, Davide Taibi
2023-09-07T08:19:45Z
http://arxiv.org/abs/2309.03552v1
# Evaluating Microservice Organizational Coupling based on Cross-service Contribution ###### Abstract For traditional modular software systems, "high cohesion, low coupling" is a recommended setting while it remains so for microservice architectures. However, coupling phenomena commonly exist therein which are caused by cross-service calls and dependencies. In addition, it is noticeable that teams for microservice projects can also suffer from high coupling issues in terms of their cross-service contribution, which can inevitably result in technical debt and high managerial costs. Such organizational coupling needs to be detected and mitigated in time to prevent future losses. Therefore, this paper proposes an automatable approach to evaluate the organizational couple by investigating the microservice ownership and cross-service contribution. Keywords:Microservice Organizational Coupling Service Ownership Cross-service contribution. ## 1 Introduction Together with the advance of software engineering theories and practice, modularization has long been considered a mechanism to enhance a system's flexibility and comprehensibility system as well as its development efficiency [24]. Meanwhile, coupling and cohesion are the two critical concepts for modularized systems that characterize the interdependence amongst the modules when one well-recognized software design principle is "high cohesion low coupling". Especially, microservice, as one of the most dominantly popular modularized architectures for cloud-native systems, is also required to comply with the principle in order to guarantee the architecture quality [31]. Coupling is a common issue for microservice systems when many recent studies have proposed definitions as well as methods to identify and evaluate different types of coupling therein [12, 32]. Despite the importance of the issue, limited studies have been conducted on handling the couplings in microservice architecture. For example, Zhong et al. propose the Microservice Coupling Index (MCI) based on relative measurement theory which measures the dependence of the target microservices relative to the possible couplings between them [32]. d'Aragona et al. propose to use commit data as a metric to statically calculate logical coupling between microservices and validate the existence of such couplings in a large number of open-source microservices projects [12]. Specially, these studies propose microservice couplings from dynamic analysis or static analysis perspectives, as well as the temporal and deployment perspectives [31]. However, limited studies have taken into account the couplings on the organizational level, though organization-related issues are usually as important as technology issues if not more so [26]. For large software projects, properly structured organization shall contribute to effective collaboration with reduced communication, which is critical for the project's success [8]. For microservice-based projects, stakeholders shall be aware of and able to handle critical organizational issues, e.g., coupling, for the migration from monolith to microservices [23]. As microservice promotes and benefits from "strong module boundaries", the communication structure of the organization building it shall mirror such structure with the boundaries [10, 13]. After all, a module, in many contexts, is considered more than just a subprogram but rather a responsibility assignment [24]. 
It implies that the organizational structure of microservice projects shall also establish boundaries amongst different teams where developers within a team shall closely collaborate (i.e., high cohesion) while developers across teams shall be highly independent (i.e., low coupling). Therefore, it is not surprising that the notion of "One Microservice per Developer" has been promoted by many practitioners and companies [3, 29, 28, 11]. Though several studies have investigated microservice projects' organizational structure [4, 20], studies on the coupling of microservices in terms of their organization structures are still limited. Therefore, in this study, we propose the metric to assess the coupling on organizational structure level for microservice projects, named _organizational coupling_. A prerequisite step of evaluating such coupling is to identify the team for each microservice of the target project. Therefore, the degree to which two microservices are coupled in terms of developers' "cross-boundaries" contribution can be determined by that of those developers simultaneously belonging to both teams. To such an end, our work here can answer the following research question (RQ): _How to evaluate the organizational coupling between microservices in terms of cross-service contribution?_ The remainder of this paper is organized as follows. Section 2 introduces the related studies regarding coupling in microservice and microservice organizational structure. Section 3 presents the method to evaluate the organizational coupling between microservices. Section 4 uses a case study to validate the method in terms of its operationality. Section 5 provides a discussion on the implication, limitations, and future work when Section 6 concludes the paper. ## 2 Related Work The organizational structure of software projects has long been a critical factor determining the projects' success [8]. Many studies have proposed approaches to analyze or improve software projects' organizational structure. Nagappan et al. propose a metric scheme to quantify organizational complexity regarding the product development process checking if the metrics impact failure-proneness where the level of organizational code ownership is a key metric[22]. Mockus studies the relationship between developer-centric measures of organizational change and the probability of customer-reported defects with the results showing organizational change is associated with lower software quality [21]. Isern et al. investigate the popular agent-oriented methodologies in terms of their support and possibilities for modeling organizational structures with different levels of complexity [18]. Regarding the organizational structure of microservice projects, Li et al. propose an approach using social network analysis (SNA) to reconstruct the organizational structure of microservice-based software projects in terms of contributor collaboration [20]. d'Aragona et al. investigate the application of the "one microservice per developer" principle in OSS microservice projects and propose an approach of using exploratory factor analysis (EFA) to establish the different team specialty profiles [11]. Ashraf et al. conduct an empirical study and find that developer communities change considerably through projects' lifetime and that their alignment with the pre-defined microservice (or subsystem) teams is mostly low [2]. On the other hand, many studies have proposed methods to measure the coupling between software modules. Allen et al. 
propose related information theory-based measures of coupling and cohesion of a module based on the properties proposed by Briand et al. [7, 1]. Poshyvanyk and Marcus also propose a new set of coupling measures for object-oriented systems, named conceptual coupling, based on the semantic information shared between elements of the source code [27]. Other methods are also proposed to measure the coupling between packages or classes [16, 17]. All such coupling metrics and proposed measuring methods focus on the dependency relations within the source code without considering the connections among developers or latent teams. Regarding the coupling in microservice-based systems, Zhong et al. propose the Microservice Coupling Index (MCI) derived from relative measurement theory, which measures how coupled the microservices are relative to the possible couplings between them [32]. Pedraza-Coello and Valdes-Souto propose a method to measure the coupling between microservices in early phases based on COSMIC method concepts regarding the data movements in functional processes [25]. d'Aragona et al. propose a metric to statically calculate logical coupling between microservices based on commits [12]. Though these studies have addressed the issue of coupling in microservice-based systems, few have yet considered the couplings on the organizational level.

## 3 Organizational Coupling

Here we introduce the concept of organizational coupling and the methodology to evaluate the organizational coupling in any particular microservice-based system, which answers the proposed research question.

### Identify Microservice Teams

As the initial step of evaluating the coupling between microservice teams, it is necessary to have a method to identify the team for each microservice in the target project. To do so, we adapt the method proposed by Bird et al. regarding the ownership profile of a particular software component [6], which, herein, is the microservice. Let \(M\) be the target microservice of a particular software product where file set \(F\) is identified in the folder (or the repository) of \(M\) located in the project. Thus, we see all the contributors that have committed to any of those files as establishing the team of microservice \(M\), denoted as \(T_{M}\). Herein, we calculate the quantified contribution of any developer \(D\in T_{M}\) to \(M\) as the sum of all the numbers of changes to each file \(f\in F\). Furthermore, we calculate \(D\)'s proportion of ownership (or ownership) of \(M\) as the ratio of the number of commit changes that \(D\) has made relative to the total number of commit changes (in terms of lines of code) for \(M\). To be noted, compared to the original study of Bird et al. [6], we use the number of commit changes instead of the number of commits due to the consideration that commits vary largely between one and another in terms of the exerted effort from the developers. Therefore, based on the calculated ownership proportion of each \(D\in T_{M}\), we see the developer(s) who has the highest proportion of ownership for \(M\) as the _Teamleader(s)_. According to the definitions by Bird et al. [6], we also define 1) _Major_ contributors as the developers whose contribution reaches at least the 5% proportion level, and 2) _Minor_ contributors as the developers whose contribution remains below the 5% proportion level. An example of a microservice team with ownership proportion is shown in Figure 1.

Figure 1: Ownership Proportion Example
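To make the ownership computation above concrete, here is a minimal sketch assuming the commit history of a microservice has already been flattened into (developer, changed-lines) records, one per file change; the record format, threshold parameter, and function name are illustrative assumptions rather than the exact tooling used in this study.

```python
from collections import defaultdict

def ownership_profile(commit_records, major_threshold=0.05):
    """Ownership proportions for one microservice M.

    commit_records: iterable of (developer, changed_lines) tuples, one per
                    file change in commits touching the files of M.
    Returns the ownership map plus teamleader(s), major and minor contributors.
    """
    contribution = defaultdict(int)
    for developer, changed_lines in commit_records:
        # Contribution is measured in commit changes (lines), not commit counts.
        contribution[developer] += changed_lines

    total = sum(contribution.values())
    ownership = {dev: changes / total for dev, changes in contribution.items()}

    top_share = max(ownership.values())
    teamleaders = [d for d, share in ownership.items() if share == top_share]
    major = [d for d, share in ownership.items() if share >= major_threshold]
    minor = [d for d, share in ownership.items() if share < major_threshold]
    return ownership, teamleaders, major, minor
```

Running this per repository yields the team \(T_{M}\) and ownership proportions of the kind illustrated in Figure 1.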
In this way, for each microservice in a given microservice-based architecture, based on the commit data, we can identify the team, i.e., all the developers who have contributed to it, and the ownership proportion of each developer, i.e., the ratio of his/her contribution to that of the whole team.

### Contribution Switch as Weight

Herein, we also take into account the phenomenon of the developer's contribution switch as an important factor influencing the organizational coupling between microservices. On the organizational level, we consider two individual microservices (as well as their teams) to be more heavily coupled when the developers from either team more frequently commit to the other. Given two microservices \(M_{a}\) and \(M_{b}\), assume a developer \(D\in T_{M_{a}}\) or \(D\in T_{M_{b}}\) whose _contribution switch weight_ between these two microservices is denoted as \(S_{D}(M_{a},M_{b})\). Therefore, whenever \(D\) commits to \(M_{a}\) and then commits to \(M_{b}\) afterward (e.g., Commit 1 and 2 in Figure 2), we consider such an incidence as a _contribution switch_ of developer \(D\) from \(M_{a}\) to \(M_{b}\). Similarly, developer \(D\) also switches from \(M_{b}\) back to \(M_{a}\) via Commit 3 shown in Figure 2. To be noted, herein we only take into account the sequential relation of the commit series without considering the time intervals in between. Therefore, given the sequence of commits of \(D\) in terms of \(M_{a}\) and \(M_{b}\), we can simply count the number of contribution switches therein. In addition, regarding the situation of logically coupled commits [12] where both microservices are changed in a single commit (e.g., Commit 3 and 4), we consider this situation as two contribution switches. To generalize, given the previously described situation where \(k\) contribution switches are performed by \(D\) between \(M_{a}\) and \(M_{b}\) while \(D\) has in total \(n\) commits for both microservices, the contribution switch weight can be calculated as follows. \[S_{D}(M_{a},M_{b})=\frac{k}{2\times(n-1)} \tag{1}\] Taking Fig. 2 as an example where \(n=8\), since we can observe eight contribution switches (i.e., \(k=8\)), \(S_{D}(M_{1},M_{2})=8/(2\times(8-1))=0.571\). Considering the situation where every commit from the developer changes both microservices (i.e., logical coupling [12]), \(S_{D}(M_{1},M_{2})=14/(2\times(8-1))=1\). On the contrary, when the developer only contributes to one microservice, \(S_{D}(M_{1},M_{2})=0/(2\times(8-1))=0\). It means the two microservices are not coupled in terms of the contribution of \(D\) on the organizational level. Therefore, we can easily conclude that \(S_{D}(M_{a},M_{b})\in[0,1]\). To be noted, the contribution switch weight only serves to weight the organizational coupling in terms of individual developers.

### Measure Organizational Coupling

Given any two microservices \(M_{a}\) and \(M_{b}\), \(T_{M_{a}}\) and \(T_{M_{b}}\) are the teams for each microservice respectively, which are identified by the method proposed in Section 3.1. Therein, we can simply identify the \(p\) developers who have contributed to both microservices, denoted as \(T_{(M_{a}\cap M_{b})}=\{D_{1},D_{2},...D_{p}\}\). For any particular developer \(D_{i}\in T_{(M_{a}\cap M_{b})}\), all the commits he/she has conducted in temporal sequence are denoted as \(C_{D_{i}}\). For each \(c\in C_{D_{i}}\), we can identify which microservice it commits to.
Therefore, by finding the ones that are committed to \(M_{a}\) or \(M_{b}\) or both, we obtain a sub-sequence of commits, denoted as \(C_{D_{i}}(M_{a},M_{b})\). Such a commit sequence can be depicted as a figure similar to Fig. 2, where all the contribution switches can be identified and the contribution switch weight, \(S_{D_{i}}(M_{a},M_{b})\), calculated based on the method described in Section 3.2. To investigate the coupled contribution of \(D_{i}\) on \(M_{a}\) and \(M_{b}\), we adopt the harmonic mean of \(D_{i}\)'s contribution to them, considering that the more equally any developer commits to multiple microservices, the more organizationally coupled the two microservices are regarding this developer's contribution. Let \(\{ca_{1},ca_{2},...ca_{m}\}\) be the corresponding contribution value sequence for the \(m\) commits in \(C_{D_{i}}(M_{a})\) while \(\{cb_{1},cb_{2},...cb_{n}\}\) be that for the \(n\) commits in \(C_{D_{i}}(M_{b})\). Herein, the contribution value of each commit is calculated as the sum of the numbers of changes to each file in the target microservice. Let \(OC(D_{i},M_{a},M_{b})\) be the organizational coupling (OC) caused by developer \(D_{i}\)'s cross-service contribution on microservices \(M_{a}\) and \(M_{b}\); we can calculate \(OC(D_{i},M_{a},M_{b})\) as follows.

Figure 2: Contribution Switch between Microservices

\[OC(D_{i},M_{a},M_{b})=\Big{(}\frac{2\sum_{j=1}^{m}ca_{j}\sum_{k=1}^{n}cb_{k}}{\sum_{j=1}^{m}ca_{j}+\sum_{k=1}^{n}cb_{k}}\Big{)}\times S_{D_{i}}(M_{a},M_{b}) \tag{2}\]

Thus, the overall organizational coupling between \(M_{a}\) and \(M_{b}\), denoted as \(OC(M_{a},M_{b})\), can be calculated as follows.

\[OC(M_{a},M_{b})=\sum_{i=1}^{p}OC(D_{i},M_{a},M_{b})=\sum_{i=1}^{p}\Big{[}\Big{(}\frac{2\sum_{j=1}^{m}ca_{j}\sum_{k=1}^{n}cb_{k}}{\sum_{j=1}^{m}ca_{j}+\sum_{k=1}^{n}cb_{k}}\Big{)}\times S_{D_{i}}(M_{a},M_{b})\Big{]} \tag{3}\]

## 4 Case Study

In this study, we demonstrate the applicability of the proposed organizational coupling evaluation method with a case study. We select _Spinnaker_3, a microservice-based application management and deployment system supporting software change releases. Spinnaker is an open-source, multi-cloud continuous delivery platform that combines a powerful and flexible pipeline management system with integrations to the major cloud providers. Herein, we use the Spinnaker project as a proof-of-concept to demonstrate and validate how to identify and evaluate the organizational coupling within microservice-based systems. Footnote 3: [https://spinnaker.io/](https://spinnaker.io/) Footnote 4: [https://spinnaker.io/docs/reference/architecture/microservices-overview/](https://spinnaker.io/docs/reference/architecture/microservices-overview/)

### Data Collection

Spinnaker contains 12 independent microservices4. The dependencies of the microservices are shown in Figure 3. Footnote 5: [https://github.com/stpinaker](https://github.com/stpinaker) The 12 microservices include _CloudDriver_, _Deck_, _Echo_, _Fiat_, _Front50_, _Gate_, _Halyard_, _Igor_, _Kayenta_, _Keel_, _Orca_, and _Rosco_. The detailed functionality and responsibility of each microservice are introduced in the Spinnaker official documentation as well as its GitHub repositories5. To be noted, different from other popular microservice-based projects, e.g., eShopOnContainers6, the Spinnaker project is organized as a polyrepo architecture instead of a monorepo [9]. Therefore, we shall gather data from the 12 corresponding repositories of the project.
By using the GitHub REST API7, we are able to collect all the commit data for the target 12 microservices of the Spinnaker project. We collect 43,654 commits from all 12 microservice repositories between 2012-03-18 and 2023-07-06. 801 different developers contributed to all these commits with 241,828 file changes. Footnote 7: [https://docs.github.com/en/rest?apiVersion=2022-11-28](https://docs.github.com/en/rest?apiVersion=2022-11-28) The distributions of 1) the number of commits for each microservice and 2) the number of different developers for each microservice are shown in Figure 4 and Figure 5. To be noted, in the original dataset, for each individual commit the contributor is identified by _author_email_. However, considering the situation where multiple emails can belong to the same user, e.g., [email protected]_ and [email protected]_, we preprocess the author identity by dropping the email extension and combining such accounts.

Figure 3: Spinnaker Architecture Overview

Figure 4: Number of Commits for each Microservice

### Results

#### 4.2.1 Identify Microservice Teams

Firstly, we identify the developer team for each microservice using the method introduced in Section 3.1. Due to the fact that the Spinnaker project is structured as a poly-repo, each microservice is an independent repository. Therefore, the team of each microservice shall contain all the contributors of that repository, which is comparatively easier to identify compared to mono-repo projects, e.g., eShopOnContainer. The number of developers in each microservice team is shown in Figure 5.

Figure 5: Number of Developers for each Microservice

In addition, we can also further specify the team structure of each microservice team by identifying the teamleaders, major contributors and minor contributors of each team. Figure 6 shows the Top 20 contributors of each microservice team in terms of their ownership proportion.

Figure 6: Ownership Proportion of Spinnaker Microservices

It is easy to observe that all microservice teams have at least one teamleader and one major contributor. Meanwhile, no team has more than six major contributors (including the Teamleader). Specifically, we list the teamleader and major contributors of each microservice team in Table 1. Considering privacy reasons, we only show the first four letters of each contributor's identity. We can observe that the majority of the teamleaders have 20% - 30% ownership proportion. The teamleader of the Halyard microservice has the highest ownership of the service (52.39%) while the teamleader of the CloudDriver microservice has the lowest (7.72%). Meanwhile, we can also observe that the majority of the microservice teams have one clear teamleader whose ownership proportion is at least 5% higher than that of the second major contributor. Five microservice teams have at least two contributors who share a similar ownership proportion. Moreover, it is also noticeable that developer _duftxxx_ is the teamleader of both the _Kayenta_ service and the _Rosco_ service. He/she is also a major contributor of the _CloudDriver_ service. Meanwhile, 11 out of the 12 microservice teamleaders are also a major contributor of at least one other team.

#### 4.2.2 Organizational Couplings and Evolution

With the team of each microservice identified, we can then calculate the organizational coupling between each pair of them by adopting the method introduced in Section 3.3. A minimal sketch of this pairwise computation is given below.
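The following sketch combines Equations (1)-(3). The input format (a time-ordered list of (developer, services_changed, lines_changed) commit records) and the function names are illustrative assumptions, not the exact scripts used for this study; the per-commit line count is also simplified to a single number shared by all touched services.

```python
from collections import defaultdict

def switch_weight(touched_seq, a, b):
    """Contribution switch weight S_D(a, b) of Equation (1).

    touched_seq: time-ordered list of sets, each holding the services among
                 {a, b} changed by one commit of a single developer.
    """
    n = len(touched_seq)
    if n < 2:
        return 0.0
    k = 0
    for prev, cur in zip(touched_seq, touched_seq[1:]):
        if a in cur and b in cur:
            k += 2          # logically coupled commit counts as two switches
        elif (a in prev and b in cur) or (b in prev and a in cur):
            k += 1          # plain switch from one service to the other
    return k / (2 * (n - 1))

def organizational_coupling(commit_log, a, b):
    """OC(a, b) of Equations (2)-(3): harmonic mean of each common developer's
    contribution to both services, weighted by the switch weight and summed."""
    per_dev = defaultdict(lambda: {"seq": [], a: 0, b: 0})
    for developer, services_changed, lines_changed in commit_log:
        touched = services_changed & {a, b}
        if not touched:
            continue
        record = per_dev[developer]
        record["seq"].append(touched)
        for service in touched:
            # Simplification: a full implementation would count the changed
            # lines separately for each touched service.
            record[service] += lines_changed
    oc = 0.0
    for developer, record in per_dev.items():
        ca, cb = record[a], record[b]
        if ca == 0 or cb == 0:
            continue        # developer did not contribute to both services
        harmonic_mean = 2 * ca * cb / (ca + cb)
        oc += harmonic_mean * switch_weight(record["seq"], a, b)
    return oc
```

Evaluating this for every pair of services yields a coupling matrix of the kind visualized as heatmaps in Figures 7 and 8.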
According to the version log of Spinnaker8, the latest stable version (1.30.2) was released on June 1st, 2023. We select all the commit data until this date and calculate all the organizational couplings. We set the coloring criteria as: 1) Red (Very Highly Coupled): \(OC\geq 10,000\); 2) Orange (Highly Coupled): \(1,000\leq OC<10,000\); 3) Yellow (Loosely Coupled): \(100\leq OC<1,000\); 4) Green (Very Loosely Coupled): \(OC<100\). The results are shown in Figure 7.

Figure 7: Organizational Coupling between Services (Version 1.30.2)

We can observe that the majority of the 12 microservices of Spinnaker are at least highly coupled in terms of developers' cross-service contribution. The most severely high coupling is the one between the _Orca_ service and the _CloudDriver_ service (84837.13). Meanwhile, both these two services are also heavily coupled with all other services. On the contrary, _Keel_ is loosely coupled with several services, including _Deck_, _Halyard_, _Kayenta_ and _Rosco_, while _Kayenta_ and _Fiat_ are also loosely coupled. Such a phenomenon likely results from the fact that the Spinnaker project had its initial release in late 2015 with the earliest service repository created in May 2014. It is only reasonable that in the early development phase, a limited number of developers were heavily involved in nearly all the services. Therefore, we can also investigate the changes in such organizational coupling between services through the project timeline. The first stable version (Version 1.0.0) of Spinnaker was released on June 5th, 2017. We select six commit datasets of six consecutive years from 2017-06-05 to 2023-06-05. By adopting the same method for each dataset, we can obtain six different heatmaps regarding the organizational coupling between services in each specific one-year period and observe the changes (shown in Figure 8).

Figure 8: Evolution of Organizational Coupling between Services

Observing the service organizational coupling from 2017-06-05 to 2018-06-05, we find that _Kayenta_ and _Keel_ are very loosely coupled with the majority of the others. The reason is likely that _Kayenta_ was created in January 2017 and _Keel_ in October 2017. _CloudDriver_ and _Orca_ are still highly coupled with many other services. Thereafter, the organizational coupling among nearly all services increased from 2018-06-05 to 2019-06-05. However, from 2019-06-05 to 2020-06-05, we can observe a decrease of the coupling amongst all services except _Kayenta_ and _Keel_, whose coupling with other services still increased. It implies that there were still developers from other service teams contributing to these two newly established services. From 2020-06-05, we can easily observe that the organizational coupling among all services decreases in the last three years.

## 5 Discussion

In this study, we propose the organizational coupling between microservices as a measure to evaluate how much any two microservices are coupled by the cross-service contribution behaviors of the developers. Such coupling is also damaging to the quality of the microservice architecture because spontaneous and unregulated cross-service contributions will inevitably result in an increase in unnecessary communication costs, mismatch between developers and code, and risks of deteriorating system architecture [8, 10, 14]. Here we define the organizational coupling of two different microservices as the degree to which the developers cross-contribute between them.
The method of evaluating the organizational coupling between two given microservices includes three steps: 1) identifying the contributor team of each microservice and finding the developers who contribute to both; 2) calculating the contribution switch of each common developer and using it as the weight on his/her mean contribution to both microservices; 3) summing all the common developers' weighted cross-service contribution to both microservices as the organizational coupling value. This answers the research question. When considering the organizational coupling between microservices, we consider the switching behavior of the developers to be a key factor. The reason is that people need to stop thinking about one task in order to fully transition their attention and perform well on another [19]. Therefore, the more frequently developers switch between different microservices, the more difficult it is for them to concentrate and perform well on any of them. Thus, it is reasonable to consider two microservices organizationally coupled when developers contribute across them, as such switching behaviors can influence the quality of both. However, the current calculation takes this factor into account as a proof-of-concept rather than accurately calculating the values. So herein, we simply conceptualize the contribution switch as the number of switches between two services within a given time, without considering the timespan between the switches. Furthermore, we shall also consider other factors when calculating the contribution switch, e.g., microservice priority [5], project roles [15], and so on. For future work, we shall continue to enrich the concept of organizational coupling by taking into account more factors as parameters. On the other hand, strategies and mechanisms to monitor and handle such organizational couplings are also required in order to continue promoting the principle of "one microservice per developer". For example, we can adopt time series to monitor the changes in organizational coupling networks together with anomaly detection techniques to identify severe coupling whenever it occurs [30]. Furthermore, we shall also investigate techniques to reduce organizational coupling by encouraging developers to reduce their contribution switching frequency or the number of developers involved.

## 6 Conclusion

In this study, we propose the concept of organizational coupling as a measure to evaluate how much any two microservices are coupled by the cross-service contribution behaviors of the developers. Such organizational coupling needs to be detected and mitigated in time to prevent future losses. Therefore, we also propose an automatable approach to evaluate the organizational coupling by investigating the microservice ownership and cross-service contribution, and we validate its usefulness with a case study. Organizational coupling is a critical issue for microservice-based systems on the organizational structural level. Such issues can have a potential impact on the deterioration of the system architecture, which needs to be detected and addressed in time.
2309.13999
Complex and real valued solutions for fractoinal Helmholtz equation
In this paper, we are concerned with the limiting absorption principle for the fractional Helmholtz equation, By establishing the boundedness estimate for the resolvent of fractional Helmholtz operator, we obtain the nontrivial Lq(Rn) complex valued solutions for (0.1). By setting up a dual variational framework, we also obtain the real valued solutions for (0.1) via a non-vanishing principle.
Zifei Shen, Shuijin Zhang
2023-09-25T10:05:39Z
http://arxiv.org/abs/2309.13999v2
# Complex and real valued solutions for fractional Helmholtz equation

###### Abstract.

In this paper, we are concerned with the limiting absorption principle for the fractional Helmholtz equation \[(-\Delta)^{s}u-\lambda u=f(x,u),\ \ \text{in}\ \ \mathbb{R}^{n}, \tag{0.1}\] where \(n\geq 3\), \(0<\lambda<+\infty\) and \(\frac{n}{n+1}<s<\frac{n}{2}\) are two real parameters. By establishing the boundedness estimate for the resolvent of the fractional Helmholtz operator, we obtain nontrivial \(L^{q}(\mathbb{R}^{n})\) complex valued solutions for (0.1). By setting up a dual variational framework, we also obtain real valued solutions for (0.1) via a non-vanishing principle.

\({}^{*}\) Corresponding author, e-mail: [email protected]; Zifei Shen: [email protected] \({}^{1}\) Department of Mathematics, Zhejiang Normal University, 321000, Jinhua, China. 2010 _Mathematics Subject Classification._ 35J15, 45E10, 45G05. _Key words and phrases._ Existence; Fractional Helmholtz equation; Ginzburg-Landau equation; Limiting absorption principle.

## 1. Introduction

In this paper we study the fractional Helmholtz equation \[(-\Delta)^{s}u-\lambda u=f(x,u),\ \ \text{in}\ \ \mathbb{R}^{n}, \tag{1.1}\] where \(n\geq 3\), \(0<\lambda<+\infty\) and \(\frac{n}{n+1}<s<\frac{n}{2}\) are two real parameters. The proof of the known resolvent estimate (1.9) for the fractional Helmholtz operator mainly relies on Stein's oscillatory integral theorem, see [26, Lemma 2.4], so the exponent in (1.9) may not be optimal. By reviving the method of Gutierrez [22], we obtain the following boundedness estimate for the resolvent operator \(((-\Delta)^{s}-\lambda)^{-1}\). More precisely, defining \(\mathcal{R}^{s}_{\lambda,\varepsilon}f=((-\Delta)^{s}-(\lambda+i\varepsilon))^{-1}f\), we have the main theorem as follows.

**Theorem 1.1**.: _Let \(n\geq 3\), \(\frac{n}{n+1}\leq s<\frac{n}{2}\), and let \(1<p<q<\infty\) be Lebesgue exponents satisfying_ \[\frac{2}{n+1}\leq\frac{1}{p}-\frac{1}{q}\leq\frac{2s}{n},\ \frac{1}{p}>\frac{n+1}{2n},\ \frac{1}{q}<\frac{n-1}{2n}, \tag{1.10}\] _then there is a uniform constant \(C_{p,q}<\infty\) such that for any \(\varepsilon>0\)_ \[||u_{\varepsilon}||_{L^{q}(\mathbb{R}^{n})}=C_{p,q}\lambda^{\frac{n}{2n}(\frac{1}{p}-\frac{1}{q}-1)}||\mathcal{R}^{s}_{\lambda,\varepsilon}f||_{L^{q}(\mathbb{R}^{n})}\leq||f||_{L^{p}(\mathbb{R}^{n})}. \tag{1.11}\] _On the other hand, if \(0<s<\frac{n}{n+1}\), then no such uniform estimates exist.
Particularly, there exists a linear operator \(\mathcal{R}^{s}_{\lambda}:\mathcal{S}\longrightarrow\mathcal{S}^{\prime}\) given by_ \[\langle\mathcal{R}^{s}_{\lambda}f,g\rangle=\lim_{\varepsilon\longrightarrow 0 }\int_{\mathbb{R}^{N}}[\mathcal{R}^{s}_{\lambda,\varepsilon}f](x)g(x)dx,\ \ \text{for all}\ f,g\in\mathcal{S}. \tag{1.12}\] Furthermore, define \(D^{s}_{x}u_{\varepsilon}\) by \(\widehat{D^{s}_{x}u_{\varepsilon}}(\xi)=|\xi|^{s}\widehat{u_{\varepsilon}}(\xi)\), we then obtain the local \(L^{2}(\mathbb{R}^{n})\)-estimate for \(u_{\varepsilon}\) and \(D^{s}_{x}u_{\varepsilon}\). **Theorem 1.2**.: _Assume that \(\frac{n}{n+1}\leq s<\frac{n}{2}\). Let \(u_{\varepsilon}=\mathcal{R}^{s}_{\lambda,\varepsilon}f\), then there exists a constant \(C\), indepenent of \(\lambda\) and \(\varepsilon\), such that_ \[\sup_{x_{0},R\geq 1/\sqrt{\lambda}}\Big{(}\frac{1}{R}\int_{B(x_{0},R)}|u_{ \varepsilon}(x)|^{2}dx\Big{)}^{1/2}\leq C\lambda^{\frac{n}{2}(\frac{1}{p}- \frac{1}{2})-\frac{3}{2}}||f||_{L^{p}(\mathbb{R}^{n})}, \tag{1.13}\] _whenever \(\frac{1}{n+1}\leq\frac{1}{p}-\frac{1}{2}<\frac{\varepsilon}{2}\) for \(n=3,4\) or \(\frac{1}{n+1}\leq\frac{1}{p}-\frac{1}{2}<\frac{\varepsilon}{n}\) for \(n\geq 5\). Moreover,_ \[\sup_{x_{0},R\geq 1/\sqrt{\lambda}}\Big{(}\frac{1}{R}\int_{B(x_{0},R)}|D^{s}_{x}u _{\varepsilon}(x)|^{2}dx\Big{)}^{1/2}\leq C\lambda^{\frac{n}{2}(\frac{1}{p}- \frac{1}{2})-\frac{1}{4}}||f||_{L^{p}(\mathbb{R}^{n})}, \tag{1.14}\] _whenever \(\frac{1}{n+1}\leq\frac{1}{p}-\frac{1}{2}<\frac{\varepsilon}{n}\) for \(n\geq 3\)._ Combining these estimates and the limiting absorption principle, we obtain the solutions of \((-\Delta)^{s}u+\lambda u=f\) characterized by the Sommerfeld radiation condition \[\lim_{R\longrightarrow\infty}\int_{B_{R}}|D^{s}_{x}u-iu\widehat{x}|^{2}dx=0, \tag{1.15}\] where \(\widehat{x}=\frac{x}{|x|}\). Moreover, based on these priori estimates for resolvent, we can obtain the complex solutions for (1.1). **Theorem 1.3**.: _(i) Let \(n\geq 3\), \(\frac{n}{n+1}\leq s<\frac{n+1}{4}\) and \(ns-n-s>0\). Moreover, let_ \[\begin{cases}\frac{n(t-1)}{2s}<q<\frac{2nt}{n+1},&\text{if}\ \ \frac{(n+1)^{2}}{(n-1)^{2}}<t<\frac{n+1}{n+1-4s},\\ \frac{n(t-1)}{2s}<q<\frac{(n+1)(t-1)}{2},&\text{if}\ \ \frac{(n-1+4s)}{(n-1)}<t< \frac{(n+1)^{2}}{(n-1)^{2}},\\ \frac{2n}{n-1}<q<\frac{(n+1)(t-1)}{2},&\text{if}\ \ \frac{n^{2}+4n-1}{n^{2}-1}<t< \frac{n-1+4s}{n-1};\end{cases} \tag{1.16}\] _(ii) Let \(n\geq 3\), \(\frac{n}{n+1}\leq s<\frac{n+1}{4}\) and \(ns-n-s<0\). Moreover, let_ \[\begin{cases}\frac{n(t-1)}{2s}<q<\frac{2nt}{n+1},&\text{if}\ \ \frac{n-1+4s}{n-1}<t<\frac{n+1}{n+1-4s},\\ \frac{2n}{n-1}<q<\frac{2nt}{n+1},&\text{if}\ \ \frac{(n+1)^{2}}{(n-1)^{2}}<t< \frac{n-1+4s}{n-1},\\ \frac{2n}{n-1}<q<\frac{(n+1)(t-1)}{2},&\text{if}\ \ \frac{n^{2}+4n-1}{n^{2}-1}<t< \frac{(n+1)^{2}}{(n-1)^{2}};\end{cases} \tag{1.17}\] _(iii) Let \(n=3,4\), \(\frac{n+1}{4}<s<\frac{2n^{2}}{(n+1)^{2}}\). Moreover, let_ \[\begin{cases}\frac{n(t-1)}{2s}<q<\frac{2nt}{n+1},&\quad\text{if}\ \ \frac{(n+1)^{2}}{(n-1)^{2}}<t<+\infty,\\ \frac{n(t-1)}{2s}<q<\frac{(n+1)}{t-1},&\quad\text{if}\ \ \frac{n}{n-2s}<t< \frac{(n+1)^{2}}{(n-1)^{2}},\\ t<q<\frac{(n+1)(t-1)}{2},&\quad\text{if}\ \ \frac{2n}{n-1}<t<\frac{n}{n-2s},\\ \frac{2n}{n-1}<q<\frac{(n+1)(t-1)}{2},&\quad\text{if}\ \ \frac{n^{2}+4n-1}{n^{2}-1}<t< \frac{2n}{n-1};\end{cases} \tag{1.18}\] _(iv) Let \(n=3,4\), \(\frac{2n^{2}}{(n+1)^{2}}\leq s<\frac{n}{2}\). 
Moreover, let_ \[\begin{cases}\frac{n(t-1)}{2s}<q<\frac{2nt}{n+1},&\quad\text{if}\ \ \frac{n}{n-2s}<t<+\infty,\\ t<q<\frac{2nt}{n+1},&\quad\text{if}\ \ \frac{(n+1)^{2}}{(n-1)^{2}}<t< \frac{n}{n-2s},\\ t<q<\frac{(n+1)(t-1)}{2},&\quad\text{if}\ \ \frac{2n}{n-1}<t<\frac{(n+1)^{2}}{(n-1)^{2}},\\ \frac{2n}{n-1}<q<\frac{(n+1)(t-1)}{2},&\quad\text{if}\ \ \frac{n^{2}+4n-1}{n^{2}-1}<t< \frac{2n}{n-1};\end{cases} \tag{1.19}\] _(v) Let \(n\geq 5\), \(\frac{n+1}{4}<s<\frac{n}{2}\). Moreover, let_ \[\begin{cases}\frac{n(t-1)}{2s}<q<\frac{2nt}{n+1},&\quad\text{if}\ \ \frac{n}{n-2s}<t<+\infty,\\ t<q<\frac{2nt}{n+1},&\quad\text{if}\ \ \frac{2n}{n-1}<t<\frac{n}{n-2s},\\ \frac{2n}{n-1}<q<\frac{2nt}{n+1},&\quad\text{if}\ \ \frac{(n+1)^{2}}{(n-1)^{2}}<t< \frac{2n}{n-1},\\ \frac{2n}{n-1}<q<\frac{(n+1)(t-1)}{2},&\quad\text{if}\ \ \frac{n^{2}+4n-1}{n^{2}-1}<t< \frac{(n+1)^{2}}{(n-1)^{2}}.\end{cases} \tag{1.20}\] _Assume that \(f(x,u)=|u|^{t-1}u\), then for any given \(\varphi\in L^{q}(\mathbb{R}^{n})\) of the homogeneous Helmholtz equation \((-\Delta)^{s}\varphi-\lambda\varphi=0\) with \(||\varphi||_{L^{q}(\mathbb{R}^{n})}\leq\varepsilon\), there exists \(a=a(||\varphi||_{L^{q}(\mathbb{R}^{n})})\) such that (1.1) has a unique solution \(u=\mathcal{R}^{s}_{\lambda}(|u|^{t-1}u)+\varphi\in L^{q}(\mathbb{R}^{n})\) satisfying_ \[u\in B_{a}(L^{q}(\mathbb{R}^{n}))=\{u:\mathbb{R}^{n}\longrightarrow\mathbb{C} |\ ||u||_{L^{q}(\mathbb{R}^{n})}\leq a\}.\] **Remark 1.4**.: _Indeed, the fractional Helmholtz equation are very closed to the fractional Ginzburg-Landau equations and fractional Allen-Cahn equtaion, see [33, 34] and the references therein. Therefore, by the similar analysis as in Gutierrez [22] and the boundedness estimate in Ma [28], some existence result can also be obtained for the fractional Ginzburg-Landau equations._ To remove the smallness condition \(||\varphi||_{L^{q}(\mathbb{R}^{n})}\leq\varepsilon\), we also establish the following boundedness estimate for resolvent operator \(((-\Delta)^{s}-\lambda)^{-1}\). **Theorem 1.5**.: _Let \(n\geq 3\), \(1\leq s\leq\frac{n}{2}\), \(\alpha>\frac{n+1}{2}\), and \(\tau(\alpha)\) be defined by_ \[\tau(\alpha)=\begin{cases}\alpha-\frac{n+1}{2},&\text{if}\ \ \frac{n+1}{2}<\alpha<n,\\ \frac{n+1}{2},&\text{if}\ \alpha\geq n.\end{cases} \tag{1.21}\] _Then we have_ \[\kappa_{\alpha}:=\sup\{||\mathcal{R}^{s}_{\lambda}f||_{L^{\infty}_{\tau( \alpha)}(\mathbb{R}^{n})}:f\in L^{\infty}_{\alpha}(\mathbb{R}^{n}),||f||_{L^{ \infty}_{\alpha}(\mathbb{R}^{n})}=1\}<\infty. \tag{1.22}\] _So \(\mathcal{R}^{s}_{\lambda}\) defines a bounded linear map \(L^{\infty}_{\alpha}(\mathbb{R}^{n})\longrightarrow L^{\infty}_{\tau(\alpha)}( \mathbb{R}^{n})\). Moreover, the resolvent operator defines a compact linear map \(\mathcal{R}^{s}_{\lambda}:L^{\infty}_{\alpha}(\mathbb{R}^{n})\longrightarrow L^ {\infty}(\mathbb{R}^{n})\)._ Correspondingly, we have the following existence results for (1.1). **Theorem 1.6**.: _Let \(n\geq 3\), \(1\leq s\leq\frac{n}{2}\). For some \(\alpha>\frac{n+1}{2}\), let \(f:\mathbb{R}^{n}\times\mathbb{C}\longrightarrow\mathbb{C}\) be a continuous function satisfying_ \[\sup_{|u|\leq M,x\in\mathbb{R}^{n}}\langle x\rangle^{\alpha}|f(x,u)|<\infty \ \ \text{for all}\ M>0. \tag{1.23}\] _Moreover, suppose that the nonlinearity is of the form \(f(x,u)\leq Q(x)|u|+b(x)\) with \(Q,b\in L^{\infty}_{\alpha}(\mathbb{R}^{n},\mathbb{R})\) and \(||Q||_{L^{\infty}_{\alpha}(\mathbb{R}^{n})}\). 
Then, for any given solution \(\varphi\in L^{\infty}(\mathbb{R}^{n})\) of the homogeneous Helmholtz equation \((-\Delta)^{s}\varphi+\lambda\varphi=0\), (1.1) admits a solution \(u=\mathcal{R}^{s}_{\lambda}(f(x,u))+\varphi\in L^{\infty}(\mathbb{R}^{n})\). Particulary, if \(f(x,u)\) satisfies the Lipschitz condition_ \[l_{\alpha}:=\sup\Bigl{\{}\langle x\rangle^{\alpha}\Big{|}\frac{f(x,u)-f(x,v)} {u-v}\Big{|}:u,v\in\mathbb{R},x\in\mathbb{R}^{n}\Bigr{\}}\leq\frac{1}{\kappa_ {\alpha}}, \tag{1.24}\] _then the solution is unique._ **Remark 1.7**.: _The case (\(f_{1}\)) in [9] is more complicate, someone need more nonexistence result for fractional Helmholtz equation to deal with it._ As for equation (1.1) with superlinear term, we can obtain the similar result as Theorem 1.3. **Theorem 1.8**.: _Let \(n\geq 3\), \(1\leq s\leq\frac{n}{2}\). For some \(\alpha>\frac{n+1}{2}\), let \(f(x,u):\mathbb{R}^{n}\times\mathbb{C}\longrightarrow\mathbb{C}\) be a continuous function satisfies (1.23). Suppose that \(f(x,\cdot)\) is real differentiable for every \(x\in\mathbb{R}^{n}\) and \(f^{\prime}:=\partial_{u}f:\mathbb{R}^{n}\times\mathbb{C}\longrightarrow \mathcal{L}_{\mathbb{R}}(\mathbb{C},\mathbb{C})\) is a continuous function satisfying_ \[\sup_{|u|\leq M,x\in\mathbb{R}^{n}}\langle x\rangle||f^{\prime}(x,u)||_{ \mathcal{L}_{\mathbb{R}}(\mathbb{C},\mathbb{C})}<\infty. \tag{1.25}\] _Moreover, suppose that \(f(x,0)=0\) and \(f^{\prime}(x,0)=0\in\mathcal{L}_{\mathbb{R}}(\mathbb{C},\mathbb{C})\) for all \(x\in\mathbb{R}^{n}\). Then there exists open neighborhoods \(U,V\subset L^{\infty}(\mathbb{R}^{n})\) of zero with the property that for every \(\varphi\in V\) there exists a unique solution \(u=u_{\varphi}\in U\) of (1.1). Moreover, the map \(V\longrightarrow U\), \(u\longrightarrow u_{\varphi}\) is of class \(C^{1}\)._ Set \(f(x,u)=Q(x)|u|^{p-2}u\) with \(p>2\) and \(Q\in L^{\infty}_{\alpha}(\mathbb{R}^{n})\) for some \(\alpha>\frac{n+1}{2}\), then \(f(x,u)\) is a special example that satisfying the conditions of Theorem 1.8. Therefore, for given \(\varphi\in L^{\infty}(\mathbb{R}^{n})\), there exists \(\epsilon>0\) and a unique local branch \((\epsilon,\epsilon)\longrightarrow L^{\infty}(\mathbb{R}^{n}),\lambda \longrightarrow u_{\lambda}\) of solutions the equation \[u=\mathcal{R}^{s}_{\lambda}(Q|u|^{p-2}u)+\lambda\varphi\quad\text{in }L^{ \infty}(\mathbb{R}^{n}). \tag{1.26}\] **Remark 1.9**.: _To establish the existence of a global continuation of this local branch, someone also need the nonexistence for fractional Helmholtz equation, see the classical case in [9, Section 4]. Even though one may assume that the stronger condition, that is \(Q\in L^{\infty}_{c}(\mathbb{R}^{n},\mathbb{R})\setminus\{0\}\) with some control on its diameters, the specific form of the Green function for fractional Helmholtz operator is not clear._ We would mention that some real valued solutions \(u=\operatorname{Re}(\mathcal{R}_{\lambda}f)\) for (1.2) haven also been detected by many authors. Since the real-valued solutions is only a real part of convolution integral, it easily follows that \(u=0\) is an isolated solution of (1.2) in \(L^{p}(\mathbb{R}^{n})\), and thus the nontrivial solutions cannot be found by a contraction mapping argument. Therefore, based on the boundedness estimate of the resolvent operator \(\mathcal{R}_{\lambda}\) with \(\frac{2(n+1)}{n-1}\leq p\leq\frac{2n}{n-2}\), Evequoz and Weth [13] (see [15] for n=2) set up a dual variational framework for (1.2). 
Correspondingly, the nontrivial real-valued solutions of equation (1.2) with \(f(x,u)=Q(x)|u|^{p-2}u\) are detected via the mountain pass argument, where \(Q(x)\) is a periodic or decay weight function, see also [14, 16, 17, 19, 29, 32] for the other cases. Specially, Evequoz and Weth [18] obtained the positive solution for (1.26). And by setting \(h(\xi)=\overline{h(-\xi)}\) in (1.5), Mandel [29] revived Gutierrez' fixed point approach and detected the continua of small real-valued solutions of (1.2) for a larger class of nonlinearities. In this paper, we also consider the real valued solutions for (1.1). **Theorem 1.10**.: _Let \(n\geq 3\), \(\frac{n}{n+1}<s<\frac{n}{2}\), \(\frac{2(n+1)}{n-1}<p<\frac{2n}{n-2s}\), and let \(Q\in L^{\infty}(\mathbb{R}^{n})\), \(Q\geq 0\), \(Q\not\equiv 0\) satisfy \(\lim\limits_{|x|\longrightarrow\infty}Q(x)=0\). Then problem \(u=\operatorname{Re}(\mathcal{R}^{s}_{\lambda}(Q(x)|u|^{p-2}u))\) admits a sequence of pairs \(\pm u_{n}\) of solutions such that \(u_{n}\in L^{p}(\mathbb{R}^{n})\) with \(||u_{n}||_{L^{p}(\mathbb{R}^{n})}\longrightarrow\infty\) as \(n\longrightarrow\infty\). Particularly, \(u_{n}\in W^{2s,q}(\mathbb{R}^{n})\cap\mathcal{C}^{1,\alpha}(\mathbb{R}^{n})\) for \(q\in[p,\infty)\) and \(\alpha\in(0,1)\)._ **Theorem 1.11**.: _Let \(n\geq 3\), \(\frac{n}{n+1}<s<\frac{n}{2}\), \(\frac{2(n+1)}{n-1}<p<\frac{2n}{n-2s}\), and let \(Q\in L^{\infty}(\mathbb{R}^{n})\), \(Q\geq 0\), \(Q\not\equiv 0\) be \(\mathbb{Z}^{n}\)-periodic. Then problem \(u=\operatorname{Re}(\mathcal{R}^{s}_{\lambda}(Q(x)|u|^{p-2}u))\) admits a nontrivial solution such that \(u\in L^{p}(\mathbb{R}^{n})\). Particularly, \(u_{n}\in W^{2s,q}(\mathbb{R}^{n})\cap\mathcal{C}^{1,\alpha}(\mathbb{R}^{n})\) for \(q\in[p,\infty)\) and \(\alpha\in(0,1)\)._ **Remark 1.12**.: _The far filed estimate for the real valued solutions need more information of the specific form on the asymptotic expansions for the Green function of fractional Helmholtz operator._ Let us now briefly explain our approach and the organization of the paper. In Sections 2, we first derive the limiting absorption principle for fractional Helmholtz operator, i.e., the \(L^{p}(\mathbb{R}^{n})\) and \(L^{\infty}(\mathbb{R}^{n})\) estimate for resolvent. In Section 3, we prove the existence of the complex valued solutions for (1.1). In Section 4, we derive a nonvanishing property related to the resolvent which is a key ingredient in the proof of Theorem 10. In Section 5, we lift the regularity of the solutions for a priori equation, and then we obtain some compactness for the resolvent operator. With the help of this property, we set up a dual variational framework for the problem \(u=\operatorname{Re}(\mathcal{R}^{s}_{\lambda}f)\). In section 6, we obtain the existence of real valued solutions for (1.1). ## 2. Limiting absorption principle for fractional Helmholtz operator Since the resolvent estimate for fractional Hemholtz operator is almost similar to the classical Helmholtz operator, hence in the first subsection, we recall and compare the delicate difference between the resolvent estimate in Gutierrez [22] and the integral estimate in Chen, Evequoz and Weth [9]. ### Green function for Helmholtz operator Let \(\lambda,\varepsilon>0\). Then the operator \(-\Delta-(\lambda+i\varepsilon):H^{2}(\mathbb{R}^{n})\subset L^{2}(\mathbb{R}^ {n})\longrightarrow L^{2}(\mathbb{R}^{n})\) is an isomorphism. 
Moreover, for any \(f\) from the Schwartz space \(\mathcal{S}\) its inverse is given by \[\mathcal{R}_{\lambda,\varepsilon}f(x):=[-\Delta-(\lambda+i\varepsilon)]^{-1}f( x)=(2\pi)^{-\frac{N}{2}}\int_{\mathbb{R}^{n}}e^{ix\cdot\xi}\frac{\widehat{f}(\xi)}{| \xi|^{2}-(\lambda+i\varepsilon)}d\xi.\] According to the Limiting Absorption Principle of Gutierrez [22] (see also [21]) that there exists a linear operator \(\mathcal{R}_{\lambda}:\mathcal{S}\longrightarrow\mathcal{S}^{\prime}\) given by \[\langle\mathcal{R}_{\lambda}f,g\rangle:=\lim_{\varepsilon\longrightarrow 0}\int_{\mathbb{R}^{n}}[\mathcal{R}_{\lambda, \varepsilon}f](x)g(x)dx=\int_{\mathbb{R}^{n}}[\Phi_{\lambda}\ast f](x)g(x)dx\ \ \text{for}\ f,g\in\mathcal{S}\] with \[\Phi_{\lambda}(x):=(2\pi)^{-\frac{n}{2}}\mathcal{F}^{-1}((|\xi|^{2}-\lambda-i0 )^{-1})(x)=\lambda^{\frac{n-1}{2}}\Phi_{1}(\sqrt{\lambda}x)=\frac{i}{4}(\frac {\lambda}{4\pi^{2}|x|^{2}})^{\frac{2-n}{4}}H^{(1)}_{\frac{n-2}{2}}(\sqrt{ \lambda}|x|)\] for \(x\in\mathbb{R}^{n}\setminus\{0\}\), where \(H^{(1)}_{\frac{n-2}{2}}\) is the Hankel function of the first kind of order \(\frac{n-2}{2}\). Here we use the notation form [21], which also allows us briefly write \[\mathcal{R}_{\lambda}f:=\mathcal{F}^{-1}\left((|\xi|^{2}-\lambda-i0)^{-1} \widehat{f}\right)\ \ \text{for}\ f\in\mathcal{S}.\] For \(H^{(1)}_{\frac{n-2}{2}}\) we have the asymptotic expansions \[H^{(1)}_{\frac{n-2}{2}}(s)=\begin{cases}\sqrt{\frac{2}{\pi s}}e^{i(s-\frac{n- 1}{4}\pi)}[1+O(s^{-1})],&\text{as}\ \ s\longrightarrow\infty,\\ -\frac{i\Gamma(\frac{n-2}{2})}{\pi}\Big{(}\frac{2}{s}\Big{)}^{\frac{N-2}{2}}[1 +O(s)],&\text{as}\ \ s\longrightarrow 0^{+},\end{cases}\] (see e.g. [27, Formulas (5.16.3)]), so there exists a constant \(C_{0}>0\) such that \[|\Phi_{\lambda}(x)|\leq\begin{cases}C_{0}{\rm max}\{|x|^{2-n},|x|^{\frac{1-n}{2} }\}\ \ {\rm for}\ \ x\in\mathbb{R}^{n}\setminus\{0\},&N\geq 3,\\ C_{0}{\rm min}\{1+|{\rm log}\ |x||,|x|^{-\frac{1}{2}}\}\ \ {\rm for}\ \ x\in\mathbb{R}^{n} \setminus\{0\},&N=2.\end{cases} \tag{2.1}\] As we can see, \(\Phi_{\lambda}\) is a Green function of Helmholtz operator but without the uniform bounded estimate. Actually, let \(z\in\mathbb{C}\), and let \(\Phi_{z}(x)\) be the Fourier transform of the multipliers \(m_{\lambda}=\frac{1}{|\xi|^{2}+z}\), then, as \(arg\ z\notin[-\frac{\pi}{2},\frac{\pi}{2}]\), some exponential decay properties of \(H^{1}_{\frac{N-2}{2}}(\sqrt{|\xi|^{2}z})\) is not uniform, see more details in Kenig, Ruiz, Sogge [26]. Based on these essential properties of the Green function, an oscillatory integral theorem of Stein [40] has been used to deal with this problem, see [26]. This method also be used in the case of fractional Helmholtz operator, see Huang, Yao and Zheng [25]. However, these result may be not the optimal. Hence, a cut-off skill and harmonic analysis method has been proposed by Gutierrez [22]. As a consequence, a more precisely version of the resolvent estimate is established. This method also be used to establish the nonvanishing lemma in Evequoz and Weth [13]. While, in the paper of Chen, Evequoz and Weth, they assumed that the nonlinearities \(f(x)\) belong to a stronger integrable space \(L^{\infty}_{\alpha}(\mathbb{R}^{n})\), that is some functions satisfying decay condition, and then they obtained the bounded estimate for the resolvent with the help of the weight term \(\langle x\rangle^{\alpha}=(1+|x|^{2})^{\frac{\alpha}{2}}\). 
### \(L^{p}(\mathbb{R}^{n})\) estimate for resolvent of fractional Helmholtz operator Follow the idea of Gutierrez [22], we give the proof of Theorem 1.1. **Proof of Theorem 1.1.** For any \(f(x)\in\mathcal{S}\), the Schwartz space, by using the Fourier transform, the solution \(u_{\varepsilon}=\mathcal{R}^{s}_{\lambda,\varepsilon}f\) then can be written as (here we drop the subscript \(\varepsilon\) from notation) \[u(x)=c\int_{\mathbb{R}^{n}}e^{ix\cdot\xi}\frac{1}{|\xi|^{2s}-(\lambda+i \varepsilon)}\widehat{f(\xi)}d\xi. \tag{2.2}\] Since the problem is invariant under dilation and rotation we restrict ourselves to the case \(\lambda=1\) and \[{\rm supp}\widehat{f}\subseteq\{\xi=(\overline{\xi},\xi_{n})\in\mathbb{R}^{n- 1}\times\mathbb{R}:|\overline{\xi}|<\xi_{n}/6,\xi_{n}>0\}. \tag{2.3}\] Take a radial cut-off function \(\phi\in C^{\infty}_{c}(\mathbb{R})\), with \({\rm supp}\phi\subseteq[0,3/4]\), \(\phi(t)=1\) if \(t\in[0,5/8]\), and \(0\leq\phi\leq 1\). Then, we can split the multiplier \(m(\xi)=(|\xi|^{2s}-(1+i\varepsilon))^{-1}\) in (2.2) into multiplier \(m_{i}\), \(i=1,2,3\), in the following way \[m_{1}(\xi)=\phi(|\xi|)m(\xi),\ \ m_{2}(\xi)=(1-\phi(|\xi|/2))m(\xi),\ \ m_{3}=m-(m_{1}+m_{2}). \tag{2.4}\] Let \(u_{i}\) such that \(\widehat{u_{i}}(\xi)=m_{i}(\xi)\widehat{f}(\xi)\) or, equivalently, \(u_{i}=M_{i}*f\) where \(M_{i}\) denotes the Fourier transform of \(m_{i}\) (i.e. \(\widehat{M_{i}}=m_{i}\)), it suffices to prove inequality (1.11) for each \(u_{i}\). Firstly, it is easy to check that function \(u_{i}\), \(i=1,2\), are pointwise majorized by the Bessel potential \(J^{2s}f=(I-\Delta)^{s}f\). Hence the Fractional Integral Theorem yields \[||u_{i}||_{L^{q}(\mathbb{R}^{n})}=||M_{i}*f||_{L^{q}(\mathbb{R}^{n})}\leq C|| J^{2s}f||_{L^{q}(\mathbb{R}^{n})}\leq C||f||_{L^{p}(\mathbb{R}^{n})}, \tag{2.5}\] when \(0\leq\frac{1}{p}-\frac{1}{q}\leq\frac{2s}{n}\), \(q\neq\infty\) and \(p\neq 1\). It remains \(u_{3}\) to be estimated. Without loss of generality, we assume that \({\rm supp}\widehat{f}\) is contained in a neighbourhood of the support of \(m_{3}\), i.e. \(||\xi|-1|<1/2\). We then claim that \[|M_{3}(x)|\leq\frac{C}{(1+|x|)^{\frac{n-1}{2}}},\ \ a.e.\ x\in\mathbb{R}^{n},\ \varepsilon>0. \tag{2.6}\] Indeed, from the definition of \(M_{3}\) (i.e. \(\widehat{M_{3}}=m_{3}\)) we can write \[M_{3}(x)=M_{3}(\overline{x},x_{n})=\int_{\mathbb{R}^{N-1}}e^{\overline{x}\cdot \overline{\xi}}\Big{(}\int_{\mathbb{R}}e^{ix_{n}\xi_{n}}\frac{\psi(\xi)}{|\xi| ^{2s}-(1+i\varepsilon)}d\xi_{n}\Big{)}d\overline{\xi}, \tag{2.7}\] where \(\psi(\xi)=\phi(|\xi|/2)-\phi(|\xi|)\) is a compactly supported smooth function and \[{\rm supp}\psi\subset\{(\overline{\xi},\xi_{n}):|\overline{\xi}|<1/4,3/8<\xi_{n }<3/2\}. \tag{2.8}\] Using the change of variables \(\eta_{n}=\xi_{n}-(1-|\overline{\xi}|^{2})^{1/2}\), i.e. 
\(\xi_{n}=\eta_{n}+(1-|\overline{\xi}|^{2})^{1/2}\) in the above identity, the kernel \(M_{3}\) can be rewritten as \[M_{3}(x)=M_{3}(\overline{x},x_{n})=\int_{\mathbb{R}^{n-1}}e^{i \overline{x}\cdot\overline{\xi}}\Big{(}\int_{\mathbb{R}}e^{ix_{n}\xi_{n}} \frac{\psi(\xi)}{|\xi|^{2s}-(1+i\varepsilon)}d\xi_{n}\Big{)}d\overline{\xi}\] \[=\int_{\mathbb{R}^{n-1}}e^{i\overline{x}\cdot\overline{\xi}}\Big{(} \int_{\mathbb{R}}e^{ix_{n}\eta_{n}}\frac{\psi(\overline{\xi},\xi_{n})}{|\xi|^{2 s}-(1+i\varepsilon)}d\xi_{n}\Big{)}d\overline{\xi}\] \[=\int_{\mathbb{R}^{n-1}}e^{i\overline{x}\cdot\overline{\xi}}\Big{(} \int_{\mathbb{R}}e^{ix_{n}(\eta_{n}+(1-|\overline{\xi}|^{2})^{1/2})}\frac{ \psi(\overline{\xi},\eta_{n}+(1-|\overline{\xi}|^{2})^{1/2})}{|\xi|^{2s}-(1+ i\varepsilon)}d(\eta_{n}+(1-|\overline{\xi}|^{2})^{1/2})\Big{)}d\overline{\xi}\] \[=\int_{\mathbb{R}^{n-1}}e^{i\overline{x}\cdot\overline{\xi}+ix_{n }(1-|\overline{\xi}|^{2})^{1/2}}\Big{(}\int_{\mathbb{R}}e^{ix_{n}\eta_{n}}\frac {\psi(\overline{\xi},\eta_{n}+(1-|\overline{\xi}|^{2})^{1/2})}{|\xi|^{2}+(\eta _{n}+(1-|\overline{\xi}|^{2})^{1/2})^{2}]^{s}-(1+i\varepsilon)}d(\eta_{n}+(1-| \overline{\xi}|^{2})^{1/2})\Big{)}d\overline{\xi}\] \[=\int_{\mathbb{R}^{n-1}}e^{i\overline{x}\cdot\overline{\xi}+ix_{n }(1-|\overline{\xi}|^{2})^{1/2}}\Big{(}\int_{\mathbb{R}}e^{ix_{n}\eta_{n}}\frac {\psi(\overline{\xi},\eta_{n}+(1-|\overline{\xi}|^{2})^{1/2})}{[\eta_{n}^{2}+ 2\eta_{n}(1-|\overline{\xi}|^{2})^{1/2}+1]^{s}-(1+i\varepsilon)}d(\eta_{n}+(1- |\overline{\xi}|^{2})^{1/2})\Big{)}d\overline{\xi}\] \[=\int_{\mathbb{R}^{n-1}}e^{i\overline{x}\cdot\overline{\xi}+ix_{n }(1-|\overline{\xi}|^{2})^{1/2}}\Big{(}\int_{\eta_{n}}e^{ix_{n}\eta_{n}}\frac {\widetilde{\psi}_{\varepsilon}(\overline{\xi},\eta_{n})}{\eta_{n}^{s}}d\eta_ {n}\Big{)}d\overline{\xi}\] \[=\int_{\mathbb{R}^{n-1}}e^{i\overline{x}\cdot\overline{\xi}+ix_{n }(1-|\overline{\xi}|^{2})^{1/2}}\gamma(\overline{\xi},x_{n})d\overline{\xi}, \tag{2.9}\] where \[\gamma(\overline{\xi},x_{n})=\int_{\eta_{n}}e^{ix_{n}\eta_{n}}\frac{\widetilde {\psi}_{\varepsilon}(\overline{\xi},\eta_{n})}{\eta_{n}^{s}}d\eta_{n} \tag{2.10}\] and \(\widetilde{\psi}_{\varepsilon}(\overline{\xi},\eta_{n})\) is a Schwartz function with uniform estimates in \(\overline{\xi}\) and \(\varepsilon\), \(\gamma(\overline{\xi},x_{n})\in\mathcal{C}_{c}^{\infty}(\mathbb{R}^{N-1})\) with support and uniform estimates in the set of \(\{\xi:|\overline{\xi}|<\frac{1}{4}\}\). Denote the phase function of \(M_{3}\) by \(\sigma(\xi)=\overline{x}\cdot\overline{\xi}+x_{n}(1-|\overline{\xi}|^{2})^{1/2}\), then it is easy to check that \(\sigma(\xi)\) has no critical points in the support of \(\gamma(\overline{\xi},x_{n})\) when \(|\overline{x}|\leq|x_{n}|/3\). Therefore, it suffices consider points \(x=(\overline{x},x_{n})\) with \(|\overline{x}|\leq|x_{n}|/3\). Notice that \(1-|\overline{\xi}|^{2}>c>0\) if \(\overline{\xi}\in\mathrm{supp}\gamma(\cdot,t)\), and so \[\det(\partial_{ij}^{2}\sigma)\geq|x_{n}|^{n-1},\ \ \forall\xi\in\mathrm{supp} \gamma(\cdot,t). \tag{2.11}\] Hence, by the stationary phase lemma, see [24, Theorem 7.7.17], we are lead to \[|M_{3}(x)|\leq C(1+|x_{n}|)^{-(n-1)/2},\ \ \forall\ \varepsilon>0, \tag{2.12}\] and consequently, our claims hold. Follow the similar proof in [22, Theorem 6], for \(\frac{1}{p}-\frac{1}{q}\geq\frac{1}{n+1}\), \(\frac{1}{q}<\frac{n-1}{2n}\), and \(\frac{1}{p}>\frac{n+1}{2n}\), we obtain that \[||u_{3}||_{L^{q}(\mathbb{R}^{n})}=||M_{3}*f||_{L^{q}(\mathbb{R}^{n})}\leq C||f|| _{L^{p}(\mathbb{R}^{N})}. 
\tag{2.13}\] Together with the estimate in (2.5), we obtain the estimate (1.11). On the other hand, since the above estimates for \(M_{1},M_{2}\) and \(M_{3}\) are uniform with respect to \(\varepsilon\), hence we have the limiting absorption principle for \(\mathcal{R}_{\lambda,\varepsilon}^{s}\), that is (1.12). The proof of Theorem 1.2 follows the same method in [22, Theorem 8]. **Proof of Theorem 1.2.** Firstly, we prove inequality (1.14). Using the Fourier transform, we can write \[D_{x}^{s}u(x)=c\int_{\mathbb{R}^{n}}e^{ix\cdot\xi}\frac{\xi^{s}}{|\xi|^{2s}-( \lambda+i\varepsilon)}\widehat{f}(\xi)d\xi. \tag{2.14}\] Since the problem is invariant under dilations, rotations and translations we restrict ourselves to the case \(x_{0}=0,\tau=1\) and \[\mathrm{supp}\widehat{f}\subset\{\xi=(\overline{\xi},\xi_{n})\in\mathbb{R}^{n-1 }\times\mathbb{R}:|\overline{\xi}|<\xi_{n}/6,\xi_{n}>0\}. \tag{2.15}\] Let \(\phi\) be a radial cut off function such that \(\phi\in\mathcal{C}_{c}^{\infty}(\mathbb{R})\) with \(\text{supp}\phi\subset[0,3/4]\), \(\phi(t)=1\) if \(t\in[0,5/8]\), and \(0\leq\phi\leq 1\). Define \(f_{i}\), \(i=1,2,3\) by \[\widehat{f}_{1}(\xi)=\phi(|\xi|)\widehat{f}(\xi),\ \ \widehat{f}_{2}(\xi)=(1-\phi(| \xi|/2))\widehat{f}(\xi),\ \ \widehat{f}_{3}=\widehat{f}-(\widehat{f}_{1}+\widehat{f}_{2}). \tag{2.16}\] We denote by \(u_{i}\) the solution of the equation corresponding to the inhomogeneous term \(F_{i}\). Then, it suffices to prove inequality (1.14) for each \(D_{x}^{s}u_{i}\). It is easy to see that \(D_{x}^{s}u_{1}\) and \(D_{x}^{s}u_{2}\) are pointwise bounded by \(J^{s/2}f_{1}\) and \(J^{s/2}f_{2}\). Hence, by Holder's inequality, Minkowski Integral Inequality, and the known estimates for the Bessel potentials, we are lead to \[\begin{split}\Big{(}\frac{1}{R}\int_{B_{R}}|D_{x}^{s}u_{i}|^{2} dx\Big{)}^{1/2}&=\Big{(}\frac{1}{R}\int_{\mathbb{R}^{n}}\chi_{B_{ \kappa}}|D_{x}^{s}u_{i}|^{2}dx\Big{)}^{1/2}\\ &\leq\Big{(}\frac{1}{R}|B_{R}|^{1-\frac{1}{q}}|||D_{x}^{s}u_{i}|^{ 2}||_{L^{q}(\mathbb{R}^{n})}\Big{)}^{1/2}\\ &\leq cR^{\frac{q}{2}(1-\frac{1}{q})-\frac{1}{2}}||J^{s}f_{i}||_{ L^{2q}(\mathbb{R}^{n})}\\ &\leq cR^{\frac{q}{2}(1-\frac{1}{q})-\frac{1}{2}}||f_{i}||_{L^{p} (\mathbb{R}^{n})}\\ &\leq cR^{\frac{q}{2}(1-\frac{1}{q})-\frac{1}{2}}||f_{i}||_{L^{p} (\mathbb{R}^{n})}\leq c||f_{i}||_{L^{p}(\mathbb{R}^{n})},\end{split} \tag{2.17}\] where \(0\leq\frac{1}{p}-\frac{1}{2q}\leq\frac{s}{n}\), \(\frac{1}{q}\geq 1-\frac{1}{n}\) and \(i=1,2\). The estimate for \(D_{x}^{i}u_{3}\) can follow the same line and argument as those given in [39, Theorem 3.1], so that we conclude that \[\Big{(}\frac{1}{R}\int_{B_{R}}|D_{x}^{s}u_{3}(x)|^{2}dx\Big{)}^{1/2}\leq c||f ||_{p}, \tag{2.18}\] for \(p\) such that \(\frac{1}{p}-\frac{1}{p^{\prime}}\geq 1-(n-1)(\frac{1}{p}-\frac{1}{2})\). As a consequence, we obtain the estimate of (1.14). The proof of (1.13) follows the same lines as the proof of (1.14), it is suffices to use the fact that \(u_{i}\) is pointwise bounded by the Bessel potential \(J^{2s}f_{i}\) when \(i=1,2\). ### \(L^{\infty}(\mathbb{R}^{n})\) estimate for resolvent of fractional Helmholtz operator **Proof of Theorem 1.5.** Denote by \(K(x)\) the Schwartz kernel of the resolvent \([(-\Delta)^{s}-(1+i\varepsilon)]^{-1}\), it follows from the proof of Theorem 1.4. 
in [25] that \[|K(x)|\leq\begin{cases}C|x|^{2s-n},&0<|x|\leq 1,\\ C|x|^{\frac{1-n}{2}},&|x|>1,\end{cases} \tag{2.19}\] and \[|\nabla K(x)|\leq\begin{cases}c|x|^{2s-1-n},&0<|x|\leq 1,\\ c|x|^{\frac{1-n}{2}},&|x|>1.\end{cases}\] Then, we claim that \[|||K|*f||_{L^{\infty}_{\tau(\alpha)}(\mathbb{R}^{n})}\leq C||f||_{L^{\infty}_ {\alpha}(\mathbb{R}^{n})},\ \ \ |||\nabla K|*f||_{L^{\infty}_{\tau(\alpha)}(\mathbb{R}^{n})}\leq C||f||_{L^{ \infty}_{\alpha}(\mathbb{R}^{n})}. \tag{2.20}\] Indeed, we can calculate that \[\begin{split}|(K*f)(x)|&\leq\int_{\mathbb{R}^{n}}||K(y)||f (x-y)|dy\\ &\leq C||f||_{L^{\infty}_{\alpha}(\mathbb{R}^{n})}\Big{(}\int_{B_{1 }(0)}|y|^{2s-n}\langle x-y\rangle^{-\alpha}dy+\int_{\mathbb{R}^{n}\setminus B_{ 1}(0)}|y|^{\frac{1-n}{2}}\langle x-y\rangle^{-\alpha}dy\Big{)}.\end{split} \tag{2.21}\] For \(|x|\leq 4\), it is easy to see that \[\begin{split}|(|K|*f)(x)|&\leq C||f||_{L^{\infty}_{ \alpha}(\mathbb{R}^{n})}\Big{(}\int_{B_{1}(0)}|y|^{2s-n}dy+\int_{\mathbb{R}^{n} \setminus B_{1}(0)}|y|^{\frac{1-n}{2}-\alpha}dy\Big{)}\\ &\leq C||f||_{L^{\infty}_{\alpha}(\mathbb{R}^{n})}\Big{(}\int_{0} ^{1}|r|^{2s-1}dr+\int_{1}^{\infty}|r|^{\frac{1-n}{2}-\alpha+n-1}dr\Big{)}\\ &\leq C||f||_{L^{\infty}_{\alpha}(\mathbb{R}^{n})},\end{split} \tag{2.22}\] where we use that fact that \(1\leq s\leq\frac{n}{2}\) and \(\alpha>\frac{n+1}{2}>\frac{n-1}{2}\). Similarly, we have \[|(|\nabla K|*f)(x)|\leq C||f||_{L^{\infty}_{\alpha}(\mathbb{R}^{n})}\Big{(}\int_ {0}^{1}|r|^{2s-2}dr+\int_{1}^{\infty}|r|^{\frac{1-n}{2}-\alpha+n-1}dr\Big{)} \leq C||f||_{L^{\infty}_{\alpha}(\mathbb{R}^{n})}. \tag{2.23}\] For \(|x|>4\), we have \[I_{1}=\int_{B_{1}(0)}|y|^{2s-n}\langle x-y\rangle^{-\alpha}dy\leq C|x|^{- \alpha}. \tag{2.24}\] While for the estimate outside \(B_{1}(0)\), it has been computation in the proof of [9, Lemma 2.1], that is \[I_{2}:= \int_{B_{\frac{|x|}{2}}(0)\setminus B_{1}(0)}|y|^{\frac{1-n}{2}} \langle x-y\rangle^{-\alpha}dy\leq C|x|^{-\alpha+\frac{n+1}{2}};\] \[I_{3}:= \int_{B_{\frac{|x|}{2}}(x)}|y|^{\frac{1-n}{2}}\langle x-y\rangle^ {-\alpha}dy\leq C|x|^{-\tau(\alpha)};\] \[I_{4}:= \int_{\mathbb{R}^{n}\setminus\left(B_{\frac{|x|}{2}}(0)\cup B_{ \frac{|x|}{2}}(x)\right)}|z|^{\frac{1-n}{2}}\langle x-y\rangle^{-\alpha}dy\leq C |x|^{-\alpha+\frac{n+1}{2}}. \tag{2.25}\] Since \(-\tau(\alpha)\geq\max\{-\frac{n-1}{2},-\alpha,-\alpha+\frac{n+1}{2}\}\), one may combine these estimates with (2.22) to obtain the first statement of our claim. Moreover, for \(|x|>4\), we also have \[\widetilde{I}_{1}=\int_{B_{1}(0)}|y|^{2s-1-n}\langle x-y\rangle^{-\alpha}dy \leq C|x|^{-\alpha}\leq C\langle x\rangle^{-\alpha}. \tag{2.26}\] Therefore, we find by (2.23) that \[|(|\nabla K|*f)(x)|\leq C||f||_{L^{\infty}_{\alpha}}(\widetilde{I}_{1}+I_{2}+ I_{3}+I_{4})\leq C\langle x\rangle^{-\tau(\alpha)}||f||_{L^{\infty}_{\alpha}( \mathbb{R}^{n})},\ \ \text{for all }x\in\mathbb{R}^{n}. \tag{2.27}\] This implies that the second statement of our claim also holds. What's more, follow the same line in the proof of Proposition 1.1 in [9], the resolvent operator \(\mathcal{R}^{s}_{\lambda}\) is a compact linear map. ## 3. Complex valued solutions for fractional Helmholtz equation **Proof of Theorem 1.3.** Define the operator \(\mathcal{R}^{s}_{\lambda,\varphi}\) by \[\mathcal{R}^{s}_{\lambda,\varphi}(u)(x)=\varphi(x)+\mathcal{R}^{s}_{\lambda}(|u |^{t-1}u)(x). 
\tag{3.1}\] We now show that for an appropriate \(a>0\) the map \(\mathcal{R}^{s}_{\lambda,\varphi}:B_{a}(L^{q}(\mathbb{R}^{n}))\longrightarrow B _{a}(L^{q}(\mathbb{R}^{n}))\) is a contraction, where \(q\) satisfies the assumptions in \((i)-(v)\). From Theorem 1.1, for any exponent \(q\) satisfies the assumptions in \((i)-(v)\), there exist \(p=\frac{q}{t}\) such that \[\begin{split}||\mathcal{R}^{s}_{\lambda,\varphi}(u)||_{L^{q}( \mathbb{R}^{n})}&\leq||\varphi||_{L^{q}(\mathbb{R}^{n})}+|| \mathcal{R}^{s}_{\lambda}(|u|^{t-1}u)||_{L^{q}(\mathbb{R}^{n})}\\ &\leq C||\varphi||_{L^{q}(\mathbb{R}^{n})}+||u|^{t-1}u||_{L^{p}( \mathbb{R}^{n})}\\ &\leq C(\varepsilon+a^{t})<a,\ \ \ \forall u\in B_{a}(L^{q}(\mathbb{R}^{n})).\end{split} \tag{3.2}\] Moreover, due to the linearity of operator \(\mathcal{R}^{s}_{\lambda}\), and the estimates in Theorem 1.1, with the same indices \(p\) and \(q\), we obtain the following chain of inequalities \[\begin{split}||\mathcal{R}^{s}_{\lambda,\varphi}(u)-\mathcal{R}^{ s}_{\lambda,\varphi}(w)||_{L^{q}(\mathbb{R}^{n})}&=|| \mathcal{R}^{s}(|u|^{t-1}u-|w|^{t-1}w)||_{L^{q}(\mathbb{R}^{n})}\\ &\leq C||u|^{t-1}u-|w|^{t-1}w||_{L^{p}(\mathbb{R}^{n})}\\ &\leq C||(|u|^{t-1}+|w|^{t-1})|u-w||_{L^{p}(\mathbb{R}^{n})}\\ &\leq C(||u||^{t-1}_{L^{q}(\mathbb{R}^{n})}+||w||^{t-1}_{L^{q}( \mathbb{R}^{n})})||u-w||_{L^{q}(\mathbb{R}^{n})}\\ &\leq Ca^{t-1}||u-w||_{L^{q}(\mathbb{R}^{n})}<||u-w||_{L^{q}( \mathbb{R}^{n})},\end{split} \tag{3.3}\] where we use the Holder's inequality in obtaining the third inequality. Therefore, the map \(\mathcal{R}^{s}_{\lambda,\varphi}\) is a contraction in \(B_{a}(L^{q}(\mathbb{R}^{n}))\), and consequently, there exists a unique \(u\in B_{a}(L^{4}(\mathbb{R}^{n}))\) which satisfies \(u=\mathcal{R}^{s}_{\lambda,\varphi}(|u|^{t-1}u)+\varphi\), that is a solution of (1.1). **Proof of Theorem 1.6.** Let \(\varphi\in X:=L^{\infty}(\mathbb{R}^{n})\). We write (1.1) as a fixed point equation \[u=\mathcal{A}(u)\ \ \text{in}\ X \tag{3.4}\] with the nonlinear operator \[\mathcal{A}:X\longrightarrow X,\ \ \mathcal{A}[w]=\mathcal{R}^{s}_{\lambda}(N_{f}(w ))+\varphi, \tag{3.5}\] where \(N_{f}(u)(x)=f(x,u(x))\) is a superposition operator. Since \(\alpha>\frac{n+1}{2}\), we may fix \(\alpha^{\prime}\in(\frac{n+1}{2},\alpha)\). By Lemma 3.1 in [9], the nonlinear operator \(N_{f}:X\longrightarrow L^{\infty}_{\alpha^{\prime}}(\mathbb{R}^{n})\) is well defined and continuous. Moreover, \(\mathcal{R}^{s}_{\lambda}:L^{\infty}_{\alpha}(\mathbb{R}^{n})\longrightarrow X\) is compact by Theorem 1.5. Consequently, \(\mathcal{A}\) is a compact and continuous operator. Moreover, let \(\mathcal{F}\subset L^{\infty}(\mathbb{R}^{n})\) be a set of function \(u\) which solve the equation \[u=\mu(\mathcal{R}^{s}_{\lambda}N_{f}(u)+\varphi)\ \ \text{for some}\ \mu\in[0,1]. \tag{3.6}\] Then, we easily deduce that \(\mathcal{F}\) is bounded in \(L^{\infty}(\mathbb{R}^{n})\), see the same analysis in the proof of Proposition 5.1 in [9]. Hence a Schaefer's fixed point theorem [11, Chapter 9.2.2] implies that \(\mathcal{A}\) has a fixed point. Furthermore, the Lipschitz condition (1.24) implies that \[||\mathcal{A}(u)-\mathcal{A}(v)||_{X}=||\mathcal{R}^{s}_{\lambda}(N_{f}(u)-N_ {f}(v))||\leq\kappa_{\alpha}||N_{f}(u)-N_{f}(v)||_{L^{\infty}_{\alpha}}\leq \kappa_{\alpha}l_{\alpha}||u-v||_{X}, \tag{3.7}\] with \(\kappa_{\alpha}l_{\alpha}<1\). Hence \(\mathcal{A}\) is a contraction, and thus it has a unique fixed point in \(X\). 
**Proof of Theorem 1.8.** Let \(X=L^{\infty}(\mathbb{R}^{n})\), consider the nonlinear operator \(\mathcal{B}:X\longrightarrow X\), \(\mathcal{B}(u):=u-\mathcal{R}^{s}_{\lambda}N_{f}(u)\). Since \(N_{f}(0)=0\), then \(\mathcal{B}(0)=0\). Moreover, since \(f(x,u)\) is continuous and differentiable, then \(N_{f}:X\longrightarrow L^{\infty}_{\alpha^{\prime}}\) is also differentiable, see the specific proof in Lemma 3.2 in [9]. Therefore, \(\mathcal{B}\) is a differentiable map. Moreover, by the assumption on \(f(x,u)\), we have \[\mathcal{B}^{\prime}(0)=id-\mathcal{R}^{s}_{\lambda}N^{\prime}_{f}(0)=id\in \mathcal{L}_{\mathbb{R}}(X,X). \tag{3.8}\] Consequently, \(\mathcal{B}\) is a diffeomorphism between open neighborhoods \(U,V\subset X\) of zero, and this implies that our claim. ## 4. The nonvanishing property As we introduced before, we use the dual variational methods rather than the contraction mapping argument to detect the real valued solutions for (1.1), some compactness problem will arise naturally, hence, it is necessary to establish a nonvanishing property for resolvent. **Lemma 4.1**.: _Let \(n\geq 3\), \(\frac{n}{n+1}<s<\frac{n}{2}\) and \(\frac{2(n+1)}{n-1}<p<\frac{2n}{n-2s}\). Moreover, let \((v_{n})_{n}\subset L^{p^{\prime}}(\mathbb{R}^{n})\) be a bounded sequence satisfying \(\limsup_{n\longrightarrow\infty}|\int_{\mathbb{R}^{n}}v_{n}\mathcal{R}^{s}_{ \lambda}v_{n}|dx>0\). Then there exists \(R>0\), \(\zeta>0\) and a sequence \((x_{n})_{n}\subset\mathbb{R}^{n}\) such that, up to a subsequence,_ \[\int_{B_{R}(x_{n})}|v_{n}|^{p^{\prime}}dx\geq\zeta\ \ \text{for all}\ n. \tag{4.1}\] The proof is similar to the classical case in [13, Section 3], and we use a cut-off skill that revive the method of Gutierrez in [22]. Fix \(\psi\in\mathcal{S}\) such that \(\widehat{\psi}\in\mathcal{C}^{\infty}(\mathbb{R}^{n})\) is radial, \(0\leq\widehat{\psi}\leq 1\), \(\widehat{\psi}(\xi)=1\) for \(||\xi|-1|\leq\frac{1}{6}\) and \(\widehat{\psi}(\xi)=0\) for \(||\xi|-1|\geq\frac{1}{4}\). We then write \(K=K_{1}+K_{2}\) with \[K_{1}:=\psi*K,\quad K_{2}=K-K_{1}. \tag{4.2}\] From (2.19) it then follows, by making \(C_{0}>0\) large necessary, that \(K_{1}\in\mathcal{C}^{\infty}(\mathbb{R}^{n})\) and \[|K_{1}(x)|\leq C_{0}(1+|x|)^{\frac{1-n}{2}}\quad\text{ for }x\in\mathbb{R}^{n}. \tag{4.3}\] This is particular implies that \(|K_{2}(x)|=|[K-K_{1}](x)|\leq 2C_{0}|x|^{2s-n}\) for \(|x|\leq 1\). Moreover, since \(\widehat{K_{2}}=(1-\widehat{\psi})\widehat{K}\) and \(\widehat{K}=(|\xi|^{2s}-1-i0)^{-1}\), we have \(\widehat{K_{2}}\in\mathcal{C}^{\infty}(\mathbb{R}^{n})\) with \(\widehat{K_{2}}(\xi)=(|\xi|^{2s}-1)^{-1}\) for \(|\xi|\geq\frac{5}{4}\). This implies that \(\partial^{\gamma}\widehat{K_{2}}\in L^{1}(\mathbb{R}^{n})\) for all \(\gamma\) such that \(|\gamma|>n-2s\), which gives \(|K_{2}(x)|\leq\kappa_{\beta}|x|^{-\beta}\), \(x\in\mathbb{R}^{n}\) for all \(\beta>n-2s\) with some constant \(\kappa_{\beta}>0\). In particular, by making \(C_{0}>0\) large if necessary, we have \[|K_{2}(x)|\leq C_{0}\text{min}\{|x|^{2s-n},|x|^{-n}\}\quad\text{for }x\in \mathbb{R}^{n}\setminus\{0\}. \tag{4.4}\] Proof.: For \(2<p<\frac{2n}{n-2s}\), we claim that for any bounded sequence \((v_{n})_{n}\in L^{p^{\prime}}(\mathbb{R}^{n})\) such that \[\lim_{n\longrightarrow\infty}\Big{(}\sup_{y\in\mathbb{R}^{n}}\int_{B_{\rho}(y )}|v_{n}|^{p^{\prime}}dx\Big{)}=0\quad\text{for all }\rho>0, \tag{4.5}\] we have \[\int_{\mathbb{R}^{n}}v_{n}[K_{2}*v_{n}]dx\longrightarrow 0\quad\text{as }n \longrightarrow\infty. 
\tag{4.6}\] Indeed, setting \(A_{R}:=\{x\in\mathbb{R}^{n}:\frac{1}{R}\leq|x|\leq R\}\) and \(D_{R}:=\mathbb{R}^{n}\setminus A_{R}\) for \(R>1\), we derive from (4.4) that \[||K_{2}||_{L^{\frac{p}{2}}(D_{R})}\longrightarrow 0\quad\text{as }R \longrightarrow\infty, \tag{4.7}\] since \(1<\frac{p}{2}<\frac{n}{n-2s}\). Hence, by Young's inequality, \[\sup_{n\in\mathbb{N}}\big{|}\int_{\mathbb{R}^{n}}v_{n}[(1_{D_{R}}K_{2})*v_{n} ]dx\big{|}\leq||K_{1}||_{L^{\frac{p}{2}}(D_{R})}\sup_{x\in\mathbb{N}}||v_{n}| |^{2}_{L^{p^{\prime}}(\mathbb{R}^{n})}\longrightarrow 0\ \text{ as }R \longrightarrow\infty. \tag{4.8}\] On the other hand, decomposing \(\mathbb{R}^{n}\) into disjoint \(N-\)cubes \(\{Q_{l}\}_{l\in\mathbb{N}}\) of side length \(R\), and considering for each \(l\) the \(N-\)cube \(Q_{l}^{\prime}\) with the same center as \(Q_{l}\) but with side length \(3R\), we find, \[\begin{split}\Big{|}\int_{\mathbb{R}^{n}}v_{n}[(1_{A_{R}}K_{2})*v _{n}]dx\Big{|}&\leq\sum_{l=1}^{\infty}\int_{Q_{l}}\Big{(}\int_{ \frac{1}{R}<|x-y|<R}|K_{2}(x-y)||v_{n}(x)||v_{n}(y)|dx\Big{)}dx\\ &\leq CR^{n-2s}\sum_{l=1}^{\infty}\int_{Q_{l}}\int_{Q_{l}}\Big{(} \int_{Q_{l}}|v_{n}(x)||v_{n}(y)|dy\Big{)}dx\\ &\leq CR^{n-2s+\frac{2n}{p}}\sum_{l=1}^{\infty}\int_{Q_{l}}\Big{(} \int_{Q_{l}}|v_{n}(x)|^{p^{\prime}}dx\Big{)}^{\frac{2}{p^{\prime}}}\\ &\leq CR^{n-2s+\frac{2n}{p}}\big{[}\sup_{t\in\mathbb{N}}\int_{Q_{ l}}|v_{n}|^{p^{\prime}}dx\big{]}^{\frac{2}{p^{\prime}}-1}\sum_{l=1}^{\infty}\int_{Q_{l} ^{\prime}}|v_{n}(x)|^{p^{\prime}}dx\\ &\leq CR^{n-2s+\frac{2n}{p}}\big{[}\sup_{y\in\mathbb{R}^{n}}\int_ {y\in\mathbb{R}^{n}}|v_{n}|^{p^{\prime}}dx\big{]}^{\frac{2}{p^{\prime}}-1}3^{ N}||v_{n}||^{p^{\prime}}_{p^{\prime}}.\end{split} \tag{4.9}\] Hence, we also have \[\lim_{n\longrightarrow\infty}\int_{\mathbb{R}^{n}}v_{n}[(1_{A_{R}}K_{2})*v_{n} ]dx=0\quad\text{for every }\ R>0. \tag{4.10}\] Combine (4.8) and (4.10), our claim holds. For \(p>\frac{2(n+1)}{n-1}\), we also claim that for any bounded sequence \((v_{n})_{n}\in L^{p^{\prime}}(\mathbb{R}^{n})\) such that \[\lim_{n\longrightarrow\infty}\Big{(}\sup_{y\in\mathbb{R}^{n}}\int_{B_{\rho}(y )}|v_{n}|^{p^{\prime}}dx\Big{)}=0\quad\text{for all }\rho>0, \tag{4.11}\] we have \[\int_{\mathbb{R}^{n}}v_{n}[K_{1}*v_{n}]dx\longrightarrow 0\quad\text{as }n \longrightarrow\infty. \tag{4.12}\] Indeed, since \(|K_{1}(x)|\leq C_{0}(1+|x|)^{\frac{1-n}{2}}\), this proof follows the same line in Proposition 3.3 and Lemma 3.4 in [13]. ## 5. Variational Setting In order to set up the variational framework, we begin with some preliminary work. ### Regularity Lemma As we can see, the \(L^{q}\) complex valued solutions can be obtained by a bounded estimate for resolvent. Actually, these solutions also have a local strong regularity. **Lemma 5.1**.: _Let \(n\geq 3\), \(\frac{n}{n+1}<s<\frac{n}{2}\), \(\frac{2(n+1)}{n-1}<p<\frac{2n}{n-2s}\) and let \(f\in L^{p^{\prime}}(\mathbb{R}^{n})\). Then \(u=\mathcal{R}^{s}_{\lambda}f\in W^{2s,p^{\prime}}_{\mathrm{loc}}(\mathbb{R}^{ n})\cap L^{p}(\mathbb{R}^{n})\) is a strong solution of \((-\Delta)^{s}u-u=f\) in \(\mathbb{R}^{n}\). Moreover, for every \(r>0\), there exists a constant \(C>0\) depending on \(r,s,n,p\), such that for all \(x_{0}\in\mathbb{R}^{n}\),_ \[||u||_{W^{2s,p^{\prime}}(B_{r}(x_{0}))}\leq C\big{(}||u||_{L^{p}(\mathbb{R}^{ n})}+||f||_{L^{p^{\prime}}(\mathbb{R}^{n})}\big{)}. 
\tag{5.1}\] _Furthermore,_ _(i) If_ \(f\in L^{p^{\prime}}(\mathbb{R}^{n})\cap L^{q}_{\mathrm{loc}}(\mathbb{R}^{n})\) _and_ \(u\in L^{q}_{\mathrm{loc}}(\mathbb{R}^{n})\) _for some_ \(q\in(1,\infty)\)_, then_ \(u\in W^{2s,q}_{\mathrm{loc}}(\mathbb{R}^{n})\)_, and for every_ \(r>0\)_, there exists a constant_ \(D>0\) _depending only on_ \(r,n,p,q\) _such that_ \[||u||_{W^{2s,q}(B_{r}(x_{0}))}\leq D\big{(}||u||_{L^{q}(B_{2r}(x_{0}))}+||f||_ {L^{q}(B_{2r}(x_{0}))}\big{)} \tag{5.2}\] _for every_ \(x_{0}\in\mathbb{R}^{n}\)_._ _(ii) If_ \(f\in L^{p^{\prime}}(\mathbb{R}^{n})\cap L^{q}(\mathbb{R}^{n})\) _and_ \(u\in L^{q}(\mathbb{R}^{n})\) _for some_ \(q\in(1,\infty)\)_, then_ \(u\in W^{2s,q}(\mathbb{R}^{n})\)_._ Proof.: Firstly, we claim that, for any \(f\in L^{p^{\prime}}(\mathbb{R}^{n})\), the equation \((-\Delta)^{s}u-u=f\) holds in the sense of distributional. Indeed, assume that \(f\in\mathcal{S}\), then \(\mathcal{R}^{s}_{\lambda}f\in\mathcal{S}^{\prime}\) is given by \[\langle\mathcal{R}^{s}_{\lambda}f,\varphi\rangle=\lim_{\varepsilon\longrightarrow 0 ^{+}}\int_{\mathbb{R}^{n}}\varphi(x)\mathbb{F}^{-1}\big{(}\frac{\widehat{f}( \cdot)}{|\cdot|^{2s}-1-i\varepsilon}\big{)}dx=\lim_{\varepsilon\longrightarrow 0 ^{+}}\int_{\mathbb{R}^{n}}\frac{\tilde{\varphi}\widehat{f}(\xi)}{|\xi|^{2s}-1- i\varepsilon}d\xi \tag{5.3}\] for all \(\varphi\in\mathcal{S}\), where \(\tilde{\varphi}\) is an abbreviation for \(\mathbb{F}^{-1}(\varphi)\). On the other hand, since \(\big{|}\frac{i\varepsilon}{|\xi|^{2s}-1-i\varepsilon}\big{|}\leq 1\) for every \(\xi\in\mathbb{R}^{n}\) and \(\varepsilon>0\), and \(\lim_{\varepsilon\longrightarrow 0^{+}}\frac{i\varepsilon}{|\xi|^{2s}-1-i \varepsilon}=0\) for \(\xi\in\mathbb{R}^{n}\) with \(|\xi|\neq 1\), it then follows from Lebesgue's Theorem that \[\lim_{\varepsilon\longrightarrow 0^{+}}\int_{\mathbb{R}^{n}}\frac{i\varepsilon}{| \xi|^{2s}-1-i\varepsilon}g(\xi)d\xi=0\quad\text{ for every }g\in\mathcal{S}. \tag{5.4}\] Hence, setting \(u=\mathcal{R}^{s}_{\lambda}f\), we obtain for every \(\varphi\in\mathcal{S}\) \[\langle(-\Delta)^{s}u-u,\varphi\rangle=\langle\mathcal{R}^{s}_{ \lambda}f,(-\Delta)^{s}\varphi-\varphi\rangle =\lim_{\varepsilon\longrightarrow 0^{+}}\int_{\mathbb{R}^{n}}\frac{ \widehat{f}(\xi)\tilde{\varphi}(\xi)(|\xi|^{2s}-1)}{|\xi|^{2s}-1-\mathrm{i} \varepsilon}d\xi\] \[=\lim_{\varepsilon\longrightarrow 0^{+}}\int_{\mathbb{R}^{n}}\frac{ \widehat{f}(\xi)\tilde{\varphi}(\xi)(|\xi|^{2s}-1-\mathrm{i}\varepsilon)}{| \xi|^{2s}-1-i\varepsilon}d\xi=\mathcal{R}^{s}_{\lambda}f(x)\varphi(x)dx= \langle f,\varphi\rangle. \tag{5.5}\] This implies that the equation \((-\Delta)^{s}u-u=f\) holds in the distributional sense for any \(f\in\mathcal{S}\). Now let \(f\in L^{p^{\prime}}(\mathbb{R}^{n})\) and consider a sequence \((f_{n})_{n}\subset\mathcal{S}\) with \(||f_{n}-f||_{L^{p^{\prime}}(\mathbb{R}^{n})}\longrightarrow 0\) as \(n\longrightarrow\infty\). Then \(u_{n}=\mathcal{R}^{s}_{\lambda}f_{n}\) solves \((-\Delta)^{s}u_{n}-u_{n}=f_{n}\) in the distributional sense, and \(u_{n}\longrightarrow u\) in \(L^{p}(\mathbb{R}^{n})\) by Theorem 1.1. Consequently, \((-\Delta)^{s}u_{n}-u_{n}=\longrightarrow f\) and \(u_{n}\longrightarrow u\) in \(\mathcal{S}^{\prime}\) as \(n\longrightarrow\infty\), this proves our claim. 
Secondly, taking \(x_{0}\in\mathbb{R}^{n}\), \(r>0\) and considering the mollification \((u_{\varepsilon})_{\varepsilon>0}\) of \(u=\mathcal{R}^{s}_{\lambda}f\), i.e., \(u_{\varepsilon}:=\rho_{\varepsilon}*u\) where \(\rho_{\varepsilon}(x)=\varepsilon^{-N}\rho(\frac{\varepsilon}{\varepsilon})\), \(x\in\mathbb{R}^{n}\) for some function \(\rho\in\mathcal{C}^{\infty}_{c}(\mathbb{R}^{n})\) satisfying \(\rho(x)\geq 0\) for all \(x\in\mathbb{R}^{n}\), \(\text{supp}(\rho)\subset B_{1}\) and \(\int_{\mathbb{R}^{n}}\rho dx=1\). Since \(u\in L^{p}(\mathbb{R}^{n})\), we deduce that \(u\in L^{p}\left(B_{r}(x_{0})\right)\) and consequently, \(u_{\varepsilon}\longrightarrow u\) in \(L^{p^{\prime}}(B_{r}(x_{0}))\) as \(\varepsilon\longrightarrow 0^{+}\). Similarly, considering the mollification \((f_{\varepsilon})_{\varepsilon>0}\) of \(f\), we see that \(f_{\varepsilon}\longrightarrow f\) in \(L^{p^{\prime}}(\mathbb{R}^{n})\) and therefore also in \(L^{p^{\prime}}(B_{r}(x_{0}))\), as \(\varepsilon\longrightarrow 0^{+}\). Based on the properties of the mollification of \(L^{p}-\) functions and of tempered distributions, with respect to differential operators with constant coefficients (see [[38]]), we yield that \[(-\Delta)^{s}u_{\varepsilon}-u_{\varepsilon}=(-\Delta)^{s}(u*\rho_{\varepsilon})-( u*\rho_{\varepsilon})=((-\Delta)^{s}u-u)*\rho_{\varepsilon}=f*\rho_{\varepsilon}=f_{ \varepsilon}\ \ \text{in}\ \mathbb{R}^{n}. \tag{5.6}\] Therefore, the local regularity theory (see [3, Theorem 1.4]) for fractional elliptic equation shows that the existence, for all \(r>0\), of some constant \(C>0\), depending only on \(r,p\) and \(N\), such that \[||u_{\varepsilon}||_{W^{2s,p^{\prime}}(B_{r}(x_{0}))}\leq C\big{(}||u_{ \varepsilon}||_{L^{p^{\prime}}(B_{2r}(x_{0}))}+||f_{\varepsilon}||_{L^{p^{ \prime}}(B_{2r}(x_{0}))}\big{)}\quad\text{for all }\varepsilon>0. \tag{5.7}\] Then choosing some sequence \((\varepsilon_{n})_{n}\subset(0,\infty)\) such that \(\varepsilon_{n}\longrightarrow 0\) as \(n\longrightarrow\infty\) and replacing \(u_{\varepsilon}\) by \(u_{\varepsilon_{n}}-u_{\varepsilon_{m}}\) in (5.7) gives that \((u_{\varepsilon_{n}})_{n}\) is a Cauchy sequence in \(W^{2s,p^{\prime}}(B_{r}(x_{0}))\) and therefore, there exists \(w\in W^{2,p^{\prime}}(B_{r}(x_{0}))\) such that \(u_{\varepsilon_{n}}\longrightarrow w\) in \(W^{2,p^{\prime}}(B_{r}(x_{0}))\) as \(n\longrightarrow\infty\). This also implies that \(u_{\varepsilon_{n}}\longrightarrow w\) in \(L^{p^{\prime}}(B_{r}(x_{0}))\) Moreover, since \(w=u\) a.e. in \(B_{r}(x_{0})\), it follows that \(u\in W^{2s,p^{\prime}}(B_{r}(x_{0}))\) and \(u\) solves the equation \((-\Delta)^{s}u-u=f\) almost everywhere in \(B_{r}(x_{0})\). Furthermore, (5.7) gives \[\begin{split}||u||_{W^{2s,p^{\prime}}(B_{r}(x_{0}))}& \leq C\big{(}||u||_{L^{p^{\prime}}(B_{2r}(x_{0}))}+||f||_{L^{p^{ \prime}}(B_{2r}(x_{0}))}\big{)}\\ &\leq\tilde{C}\big{(}||u||_{L^{p}(\mathbb{R}^{n})}+||f||_{L^{p^{ \prime}}(\mathbb{R}^{n})}\big{)},\end{split} \tag{5.8}\] where \(\tilde{C}=C\text{max}\{1,[\omega_{n}(2r)^{n}]^{\frac{p-2}{p}}\}\) and \(\omega_{n}\) denotes the volume of the unit ball in \(\mathbb{R}^{n}\). Since \(r>0\) and \(x_{0}\in\mathbb{R}^{n}\) were arbitrarily chosen, it follows that \(u\in W^{2s,p^{\prime}}_{\text{loc}}(\mathbb{R}^{n})\) is a strong solution of \((-\Delta)^{s}u-u=f\) and, for every \(r>0\), there exists a constant \(\widetilde{C}>0\) depending only on \(r,n,p\) such that (5.1) holds for all \(x_{0}\in\mathbb{R}^{n}\). 
The proof of (ii) follows the same line in [13, Proposition A.1] and we omit it here. While the claim in (iii) follows a global fractional Calderon-Zygmund estimate, which has been presented in Abdellaoui, Fernandez, Leonori and Younes [1]. ### Local Compactness for Resolvent Being interested in the real-valued solutions of (1.1), all function spaces are assumed to consist of real-valued functions and we write \(\Psi_{\lambda}^{s}:=\text{Re }K(x)\) for the real part of the Schwartz kernel \(K(x)\) of the resolvent. By Theorem 1.1, we know that the linear operator \(\mathbf{R}:L^{p^{\prime}}(\mathbb{R}^{n})\longrightarrow L^{p}(\mathbb{R}^{n})\), \(\mathbf{R}(v):=\Psi_{\lambda}^{s}*v\) is bounded. Setting \(v=Q^{\frac{1}{p^{\prime}}}|u|^{p-2}u\), we are led to consider the equation \[|v|^{p^{\prime}-2}v=Q^{\frac{1}{p}}[\Psi_{\lambda}^{s}*(Q^{\frac{1}{p}}v)] \quad\text{in }\mathbb{R}^{n}. \tag{5.9}\] To set up a dual variational framework for (1.1), it is necessary for us to study the operator \(Q^{\frac{1}{p}}[\Psi_{\lambda}^{s}*(Q^{\frac{1}{p}}v)]\) on the right-hand side of (5.9). Note \(\mathbf{K}_{p}=Q^{\frac{1}{p}}[\Psi_{\lambda}^{s}*(Q^{\frac{1}{p}}v)]\), we have the following local compactness lemma for \(\mathbf{K}_{p}\). **Lemma 5.2**.: _Let \(n\geq 3\), \(\frac{n}{n+1}<s<\frac{n}{2}\), \(\frac{2(n+1)}{n-1}\leq p\leq\frac{2n}{n-2s}\) and consider \(Q\in L^{\infty}(\mathbb{R}^{n})\), satisfying \(Q(x)\geq 0\) for a.e. \(x\in\mathbb{R}^{n}\). Then \(\mathbf{K}_{p}\) is symmetric in the sense that \(\int_{\mathbb{R}^{n}}w\mathbf{K}_{p}(v)=\int_{\mathbb{R}^{n}}v\mathbf{K}_{p}(w)\) for all \(v,w\in L^{p^{\prime}}(\mathbb{R}^{n})\). Moreover, if \(p<\frac{2n}{n-2s}\), we have (i) For any bounded and measurable set \(B\subset\mathbb{R}^{n}\), the operator \(1_{B}\mathbf{K}_{p}\) is compact. Here \(1_{B}\) denote the characteristic function of the set \(B\). (ii) If in addition, \(\operatorname*{ess}\sup_{|x|\geq R}Q(x)\longrightarrow 0\) as \(R\longrightarrow\infty\), then \(\mathbf{K}_{p}\) itself is compact._ Proof.: By the argument in Lemma 5.1, we know that \(u\in\operatorname{W}^{2s,p^{\prime}}_{\text{loc}}(\mathbb{R}^{n})\cap L^{p}( \mathbb{R}^{n})\) is a strong solution of \((-\Delta)^{s}u-\lambda u=f\) in \(\mathbb{R}^{N}\) for any \(f\in L^{p^{\prime}}(\mathbb{R}^{n})\). Furthermore, since \(p<\frac{2n}{n-2s}\), we then have the compact embedding \(W^{2s,p^{\prime}}(B)\hookrightarrow L^{\frac{nR^{\prime}}{n-2s^{\prime}}}(B) \hookrightarrow L^{p}(B)\). Plugging this fact into the proof of [13, Lemma 4.1], we deduce that the operator \(1_{B}\mathbf{K}_{p}\) is compact. And the remains follows the lines of the proof of Lemma 4.1 in [13]. ### Mountain Pass Structure Now, let us consider the energy functional \[\begin{split} J(v)&=\frac{1}{p^{\prime}}\int_{ \mathbb{R}^{n}}|v|^{p^{\prime}}dx-\frac{1}{2}\int_{\mathbb{R}^{n}}Q(x)^{ \frac{1}{p}}v(x)\mathbf{R}_{\lambda}^{s}(Q^{\frac{1}{p}}v)(x)dx\\ &=\frac{1}{p^{\prime}}||v||_{p^{\prime}}^{p^{\prime}}-\frac{1}{2} \int_{\mathbb{R}^{n}}v\mathbf{K}_{p}(v)dx\end{split} \tag{5.10}\] for \(v\in L^{p^{\prime}}(\mathbb{R}^{n})\). By the symmetric properties of the operator \(\mathbf{K}_{p}\), we easily deduce that \(J\in\mathcal{C}^{1}(L^{p^{\prime}}(\mathbb{R}^{n}),\mathbb{R})\) with \[J^{\prime}(v)w=\int_{\mathbb{R}^{n}}\big{(}|v|^{p^{\prime}-2}v-\mathbf{K}_{p}(v )\big{)}w\ dx\quad\text{for all }v,w\in L^{p^{\prime}}(\mathbb{R}^{n}). \tag{5.11}\] Moreover, the functional \(J\) has the so-called mountain pass geometry. 
**Lemma 5.3**.: _Let \(n\geq 3\), \(\frac{n}{n+1}<s<\frac{n}{2}\) and \(\frac{2(N+1)}{N-1}\leq p\leq\frac{2n}{n-2s}\), consider \(Q\in L^{\infty}(\mathbb{R}^{n})\) and \(Q\not\equiv 0\), such that \(Q(x)\geq 0\) for a.e. \(x\in\mathbb{R}^{n}\). (i) There exists \(\delta>0\) and \(0<\rho<1\) such that \(J(v)\geq\delta>0\) for all \(v\in L^{p^{\prime}}(\mathbb{R}^{n})\) with \(||v||_{p^{\prime}}=\rho\)._ _(ii) There is_ \(v_{0}\in L^{p^{\prime}}(\mathbb{R}^{n})\) _such that_ \(||v_{0}||_{L^{p^{\prime}}(\mathbb{R}^{n})}>1\) _and_ \(J(v_{0})<0\)_._ _(iii) Every Palais-Smale sequence for_ \(J\) _is bounded in_ \(L^{p^{\prime}}(\mathbb{R}^{n})\)_._ Proof.: (i) By the boundedness of operator \(\mathbf{R}\), there exists some constant \(C>0\) such that \(||\mathbf{K}_{p}(v)||_{L^{p}(\mathbb{R}^{n})}\leq C||v||_{L^{p^{\prime}}( \mathbb{R}^{n})}\) for all \(v\in L^{p^{\prime}}(\mathbb{R}^{n})\). Hence, if \(||v||_{L^{p^{\prime}}(\mathbb{R}^{n})}=\rho\), we obtain \[\begin{split} J(v)&=\frac{1}{p^{\prime}}\rho^{p^{ \prime}}-\frac{\rho}{2}||\mathbf{K}_{p}(v)||_{L^{p}(\mathbb{R}^{n})}\geq\frac{ 1}{p^{\prime}}\rho^{p^{\prime}}-\frac{C}{2}\rho^{2}>0,\end{split} \tag{5.12}\] where we use the fact of \(p^{\prime}<2\) and we chose \(\rho>0\) small enough. (ii) From the representation formulations in [25], it follows that there exists \(r>0\) such that \(\Psi_{\lambda}^{s}>0\) for all \(x\in B_{2R}(0)\). Moreover, since \(Q\geq 0\) a.e. on \(\mathbb{R}^{n}\) and \(Q\not\equiv 0\), the metric density of the set \(\omega_{Q}:=\{x\in\mathbb{R}^{n}:Q(x)\geq 0\}\) is \(1\) for almost every point from this set. Therefore, there exists \(x_{0}\in\mathbb{R}^{n}\) and \(0<\rho<r\) such that \(\omega_{Q}\cap B_{\frac{p}{2}(x_{0})}\) has positive measure. Choosing \(z\in\mathcal{C}_{c}^{\infty}(\mathbb{R}^{n})\) with \(\text{supp }z\subset B_{\rho}(x_{0})\), \(0\leq z\leq 1\) in \(\mathbb{R}^{n}\) and \(z=1\) in \(B_{\frac{p}{2}}(x_{0})\), the definition of \(\mathbf{K}_{p}\) implies that \[\begin{split}\int_{\mathbb{R}^{n}}z\mathbf{K}_{p}zdx& =\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}Q(x)^{\frac{1}{p}}z(x )\Psi_{\lambda}^{s}(x-y)Q(y)^{\frac{1}{p}}z(y)dydx\\ &\geq\int_{B_{\frac{p}{2}(x_{0})}}\int_{B_{\frac{p}{2}(x_{0})}} \Psi_{\lambda}^{s}(x-y)Q(x)^{\frac{1}{p}}Q(y)^{\frac{1}{p}}dxdy>0.\end{split} \tag{5.13}\] Hence taking \(t>0\) large enough, we obtain \[\begin{split} J(tz)&=\frac{t^{p^{\prime}}}{p^{\prime }}\int_{\mathbb{R}^{n}}|z|^{p^{\prime}}dx-\frac{t^{2}}{2}\int_{\mathbb{R}^{n}} z\mathbf{K}_{p}zdx\\ &=t^{2}\Big{(}\frac{1}{p^{\prime}t^{2-p^{\prime}}}\int_{\mathbb{R }^{n}}|z|^{p^{\prime}}dx-\frac{1}{2}\int_{\mathbb{R}^{n}}z\mathbf{K}_{p}zdx \Big{)}<0.\end{split} \tag{5.14}\] (iii) For every Palais-Smale sequence \((v_{n})_{n}\subset L^{p^{\prime}}(\mathbb{R}^{n})\), we have \[\begin{split}+\infty&>\sup_{n}|J(v_{n})|\geq J(v_{n} )=(\frac{1}{p^{\prime}}-\frac{1}{2})||v_{n}||_{p^{\prime}}^{p^{\prime}}+\frac{ 1}{2}J^{\prime}(v_{n})v_{n}\\ &\geq(\frac{1}{p^{\prime}}-\frac{1}{2})||v_{n}||_{p^{\prime}}^{p^ {\prime}}-\frac{1}{2}||J^{\prime}(v_{n})||_{L^{p^{\prime}}(\mathbb{R}^{n})^{ \ast}}||v_{n}||_{L^{p^{\prime}}(\mathbb{R}^{n})}\\ &\geq(\frac{1}{p^{\prime}}-\frac{1}{2})||v_{n}||_{p^{\prime}}^{p^ {\prime}}\ \ \text{as }n\longrightarrow\infty.\end{split} \tag{5.15}\] This implies \((v_{n})_{n}\) is bounded since \(1<p^{\prime}<2\). ## 6. 
Real solutions for fractional Helmholtz equation ### Existence of solutions in the compact case For the case \(Q(x)\longrightarrow 0\) as \(|x|\longrightarrow\infty\), we shall prove the existence of infinitely many pairs \(\{\pm u\}\) of critical points for \(J\) using a variant of the symmetric Mountain Pass Theorem, see [2]. Therefore, we need more properties of \(\mathbf{K}_{p}\) and the functional \(J\). **Lemma 6.1**.: _For every \(m\in\mathbb{N}\), there exist an \(m-\)dimendsional subspace \(\mathcal{W}\subset\mathcal{C}_{c}^{\infty}(\mathbb{R}^{n})\) with the following properties: (i) \(\int_{\mathbb{R}^{n}}v\mathbf{K}_{p}vdx>0\) for all \(v\in\mathcal{W}\setminus\{0\}\). (ii) There exists \(R=R(\mathcal{W})>0\) such that \(J(v)\leq 0\) for every \(v\in\mathcal{W}\) with \(||v||_{L^{p^{\prime}}(\mathbb{R}^{n})}\geq R\)._ Proof.: Since \(Q\not\equiv 0\), there exists a point \(x_{0}\) of density one for the set \(\{Q>0\}\). Without loss of generality, we may assume that \(x_{0}=0\). Then for \(\delta>0\) sufficiently small we have \[|Q^{-1}(0)\cap B_{\delta}(0)|\leq(\frac{1}{4m^{2}})^{n}|B_{\delta}(0)|. \tag{6.1}\] Let \[\Psi^{s,*}_{\lambda}(\tau):=\inf_{B_{\tau}(0)\setminus\{0\}}\Psi^{s}_{\lambda} \quad\text{and}\Psi^{s}_{\lambda,*}(\tau):=||\Psi^{s}_{\lambda}||_{L^{\infty}( \mathbb{R}^{n}\setminus B_{\tau}(0))}\ \ \text{for}\ \ \tau>0. \tag{6.2}\] Since \(\Psi^{s}_{\lambda}\) is bounded outside of every neighborhood of zero and \(\Psi^{s}_{\lambda}(x)|x|^{n-2s}\) tends to a positive constant as \(|x|\longrightarrow 0\) by the representation of fractional resolvent in [25], we may fix \(\delta>0\) such that (6.1) holds and that \[\Psi^{s,*}_{\lambda}(\tau)>(m-1)\Psi^{s}_{\lambda,*}(m\tau)\ \ \ \text{ for }\tau\in(0,\delta]. \tag{6.3}\] Moreover, it is easy to see that there exists \(m\) disjoint open balls \(B^{1},...,B^{m}\subset B_{\delta}(0)\) of diameter \(\tau:=\frac{\delta}{m^{2}}\) such that \[\text{dist}(B^{i},B^{j}):=\inf\{|x-y|:x\in B^{i},y\in B^{i}\}\geq\frac{\delta} {m}. \tag{6.4}\] Since \(|B^{i}|=(\frac{1}{2m^{2}})^{n}|B_{\delta}(0)|\) for \(i=1,...,m\), then by (6.1), we also have \[|B^{i}\cap\{Q>0\}|>0\ \ \text{ for }i=1,...,m. \tag{6.5}\] Now, fix functions \(z_{i}\in\mathcal{C}^{\infty}_{c}(\mathbb{R}^{n})\), \(i=1,...,m\) such that \(z_{i}>0\) in \(B^{i}\) and \(z_{i}\equiv 0\) in \(\mathbb{R}^{n}\setminus B^{i}\). Moreover, we let \(\mathcal{W}\) denote the span of \(z_{1},...,z_{m}\). 
Then any \(v\in\mathcal{W}\setminus\{0\}\) can be written as \(v=\sum\limits_{i=1}^{m}a_{i}z_{i}\) with \(a=(a_{1},...,a_{m})\in\mathbb{R}^{m}\setminus\{0\}\), and thus we have \[\begin{split}\int_{\mathbb{R}^{n}}v\mathbf{K}_{p}vdx& =\sum\limits_{i,j=1}^{m}a_{i}a_{j}\int_{B^{i}}\int_{B^{j}}\Psi^{s} _{\lambda}Q(x)^{\frac{1}{p}}Q(y)^{\frac{1}{p}}z_{i}(x)z_{j}(y)dxdy\\ &\geq\Psi^{s,*}_{\lambda}(\tau)\sum\limits_{i=1}^{m}a_{i}^{2} \Big{(}\int_{B^{i}}Q(x)^{\frac{1}{p}}z_{i}(x)dx\Big{)}^{2}\\ &\quad-\Psi^{s}_{\lambda,*}(m\tau)\sum\limits_{i,j=1,i\neq j}^{m}| a_{i}||a_{j}|\Big{(}\int_{B^{i}}Q(x)^{\frac{1}{p}}z_{i}(x)dx\Big{)}\Big{(}\int_{B^{ j}}Q(x)^{\frac{1}{p}}z_{j}(x)dx\Big{)}\\ &\geq\Psi^{s,*}_{\lambda}(\tau)\sum\limits_{i=1}^{m}a_{i}^{2} \Big{(}\int_{B^{i}}Q(x)^{\frac{1}{p}}z_{i}(x)dx\Big{)}^{2}\\ &\quad-\frac{\Psi^{s}_{\lambda,*}(m\tau)}{2}\sum\limits_{i,j=1,i \neq j}\Big{[}a_{i}^{2}\Big{(}\int_{B^{i}}Q(x)^{\frac{1}{p}}z_{i}(x)\Big{)}^{ 2}+a_{j}^{2}\Big{(}\int_{B^{j}}Q(x)^{\frac{1}{p}}z_{j}(x)\Big{)}^{2}\Big{]}\\ =&\sum\limits_{i=1}^{m}\Big{(}\Psi^{s,*}_{\lambda}( \tau)-(m-1)\Psi^{s}_{\lambda,*}(m\tau)\Big{)}a_{i}^{2}\Big{(}\int_{B^{i}}Q(x)^ {\frac{1}{p}}z_{i}(x)dx\Big{)}^{2}>0,\end{split} \tag{6.6}\] as a consequence of (6.3) and (6.5). This proves (i). By the continuity of \(\mathbf{K}_{p}\), we have \[m_{\mathcal{W}}:=\inf_{v\in\mathcal{W},||v||_{L^{p}(\mathbb{R}^{n})}=1}\int_{ \mathbb{R}^{n}}v\mathbf{K}_{p}vdx>0. \tag{6.7}\] Therefore, we obtain that \[J(v)=\frac{||v||^{p^{\prime}}_{L^{p^{\prime}}(\mathbb{R}^{n})}}{p^{\prime}}- \frac{1}{2}\int_{\mathbb{R}^{n}}v\mathbf{K}_{p}vdx\leq||v||^{p^{\prime}}_{L^{p^ {\prime}}(\mathbb{R}^{n})}(\frac{1}{p^{\prime}}-\frac{1}{2}||v||^{2-p^{\prime }}_{L^{p^{\prime}}(\mathbb{R}^{n})}m_{\mathcal{W}})\ \ \text{ for }v\in\mathcal{W}. \tag{6.8}\] Taking \(R:=\Big{(}\frac{2}{m_{\mathcal{W}}p^{\prime}}\Big{)}^{\frac{1}{2-p^{\prime}}}\), we have \(J(v)\leq 0\) for every \(v\in\mathcal{W}\). **Lemma 6.2**.: _Let \(n\geq 3\), \(\frac{n}{n+1}<s<\frac{n}{2}\), \(\frac{2(n+1)}{n-1}<p<\frac{2n}{n-2}\), and let \(Q\in L^{\infty}(\mathbb{R}^{n})\), \(Q\geq 0\), \(Q\not\equiv 0\) satisfy \(\lim\limits_{|x|\longrightarrow\infty}Q(x)=0\). Then problem \(u=\mathrm{Re}(\mathcal{R}^{s}_{\lambda}(Q(x)|u|^{p-2}u))\) admits a sequence of pairs \(\pm u_{n}\) of solutions such that \(||u_{n}||_{L^{p}(\mathbb{R}^{n})}\longrightarrow\infty\) as \(n\longrightarrow\infty\)._ Proof.: Let \((v_{n})_{n}\subset L^{p^{\prime}}(\mathbb{R}^{n})\) be a Palais-Smale sequence, then by Lemma 5.3 (iii), we know that \((v_{n})_{n}\) is bounded in \(L^{p^{\prime}}(\mathbb{R}^{n})\), Hence, up to a subsequence, we may assume \(v_{n}\rightharpoonup v\in L^{p^{\prime}}(\mathbb{R}^{n})\), this also implies that \(||v||_{L^{p^{\prime}}(\mathbb{R}^{n})}\leq\liminf\limits_{n\longrightarrow \infty}||v_{n}||_{L^{p^{\prime}}(\mathbb{R}^{n})}\). On the other hand, we easily obtain that \[\begin{split}\frac{1}{p^{\prime}}||v||_{L^{p^{\prime}}(\mathbb{R}^ {n})}^{p^{\prime}}&-\frac{1}{p^{\prime}}||v_{n}||_{L^{p^{\prime}} (\mathbb{R}^{n})}^{p^{\prime}}\geq\int_{\mathbb{R}^{n}}|v_{n}|^{p^{\prime}-2}v _{n}(v-v_{n})dx\\ &=J^{\prime}(v_{n})(v-v_{n})+\int_{\mathbb{R}^{n}}v_{n}\mathbf{K} _{p}(v-v_{n})dx\longrightarrow 0,\ \ n\longrightarrow\infty,\end{split} \tag{6.9}\] where we use the convexity of the function \(t\longrightarrow|t|^{p^{\prime}}\). 
As a consequence, we have \(||v||_{L^{p^{\prime}}(\mathbb{R}^{n})}\leq\lim\limits_{n\longrightarrow \infty}||v_{n}||_{L^{p^{\prime}}(\mathbb{R}^{n})}\), this implies that \(v_{n}\longrightarrow v\) strongly in \(L^{p^{\prime}}(\mathbb{R}^{n})\), which means that \(J(v)\) satisfies the Palais-Smale condition. Combining the previous lemma with the symmetric Mountain Pass Theorem, we then obtain the existence of nontrivial pairs \(\{\pm v_{n}\}\) of critical point of \(J\) with \(J(v_{n})\longrightarrow\infty\) and thus \(||v_{n}||_{L^{p^{\prime}}(\mathbb{R}^{n})}\longrightarrow\infty\) as \(n\longrightarrow\infty\). Remembering the setting \(u_{n}:=\mathbf{R}^{s}_{\lambda}(Q^{\frac{1}{p}}v_{n})\), we have \(v_{n}=Q^{\frac{1}{p^{\prime}}}|u_{n}|^{p-2}u_{n}\), thus \(||u_{n}||_{L^{p}(\mathbb{R}^{n})}\longrightarrow\infty\) as \(n\longrightarrow\infty\). ### Existence of solutions in the periodic case As we have showed in Lemma 5.3, the functional \(J(v)\) satisfies the mountain pass geometry. Hence, we may define a mountain-pass level for \(J(v)\) by \[c:=\inf\limits_{\gamma\in\Gamma,t\in[0,1]}J(\gamma(t)) \tag{6.10}\] where \(\Gamma=\{\gamma\in C([0,1],L^{p^{\prime}}(\mathbb{R}^{n})):\gamma(0)=0\) and \(J(\gamma(1))<0\}\). Apparently, \(\Gamma\neq\emptyset\) and \(c>0\). To show that \(c\) is a critical level of \(J(v)\), we consider some Palais-Smale sequences \((v_{n})_{n}\subset L^{p^{\prime}}(\mathbb{R}^{n})\) for \(J(v)\) at level \(c\), which can be easily found via a deformation Lemma. Moreover, these Palais-Smale sequences have been proved to be bounded in Lemma 5.3 (iii). A long as we prove these sequences having a strong convergence subsequences in \(L^{p^{\prime}}(\mathbb{R}^{n})\), we then obtain the main conclusion. **Lemma 6.3**.: _Let \(n\geq 3\), \(\frac{n}{n+1}<s<\frac{n}{2}\), \(\frac{2(n+1)}{n-1}<p<\frac{2n}{n-2}\), Consider a nonnegative function \(Q\in L^{\infty}(\mathbb{R}^{n})\), \(Q\not\equiv 0\) which is \(\mathbb{Z}^{n}-\)periodic on \(\mathbb{R}^{n}\). Then problem \(u=\operatorname{Re}(\mathcal{R}^{s}_{\lambda}(Q(x)|u|^{p-2}u))\) has a nontrivial solution \(u\in L^{p}(\mathbb{R}^{n})\)._ Proof.: Let \((v_{n})_{n}\) be a bounded Palais-Smale sequence for \(J(v)\) at level \(c\), that is \(J(v_{n})\longrightarrow c\) and \(J^{\prime}(v_{n})v_{n}\longrightarrow 0\) as \(n\longrightarrow\infty\). Then we easily deduce that \[\lim\limits_{n\longrightarrow\infty}\int_{\mathbb{R}^{n}}Q^{\frac{1}{p}}v_{n }\mathbf{R}^{s}_{\lambda}(Q^{\frac{1}{p}}v_{n})dx=\frac{2p^{\prime}}{2-p^{ \prime}}\lim\limits_{n\longrightarrow\infty}[J(v_{n})-\frac{1}{p^{\prime}}J^{ \prime}(v_{n})v_{n}]=\frac{2p^{\prime}}{2-p^{\prime}}c>0. \tag{6.11}\] Moreover, since \(Q\in L^{\infty}(\mathbb{R}^{n})\), the sequence \((Q^{\frac{1}{p}}v_{n})_{n}\) is also bounded. Therefore, by the vanishing lemma 4.1, there exists \(R,\zeta>0\) and a sequence \((x_{n})_{n}\subset\mathbb{R}^{n}\) such that \[\int_{B_{R}(x_{0})}|v_{n}|^{p^{\prime}}dx\geq\zeta\ \ \ \ \text{for all $n$}. \tag{6.12}\] By the periodicity of \(Q\), this problem is invariance under translation. Hence we may set \(w_{n}(x)=v_{n}(x+x_{n})\) for \(x\in\mathbb{R}^{n}\), where \((w_{n})_{n}\subset L^{p^{\prime}}(\mathbb{R}^{n})\) is a bounded sequence such that \(J(w_{n})=J(v_{n})\) and \(||J^{\prime}(w_{n})||=||J^{\prime}(v_{n})||\). Moreover, up to a subsequence, we have \(w_{n}\rightharpoonup w\) in \(L^{p^{\prime}}(\mathbb{R}^{n})\). 
Next, we claim that \[1_{B_{R^{\prime}}}|w_{n}|^{p^{\prime}-2}w_{n}\longrightarrow 1_{B_{R^{\prime}}}|w |^{p^{\prime}-2}w\ \ \text{strongly in $L^{p}(\mathbb{R}^{\prime})$ for every $R^{\prime}>0$}. \tag{6.13}\] Indeed, fix \(\varphi\in C_{c}^{\infty}(B_{R^{\prime}})\subset\mathcal{C}_{c}^{\infty}( \mathbb{R}^{n})\), then for \(n,m\in\mathbb{N}\) we have \[\begin{split}&\Big{|}\int_{\mathbb{R}^{n}}\Big{(}|w_{n}|^{p^{ \prime}-2}w_{b}-|w_{m}|^{p^{\prime}-2}w_{m}\Big{)}\varphi dx\Big{|}\\ &=\Big{|}J^{\prime}(w_{n})\varphi-J^{\prime}(w_{m})\varphi+\int_{B_{ R^{\prime}}}\varphi\mathbf{K}_{p}(w_{n}-w_{m})dx\Big{|}\\ &\leq||J^{\prime}(w_{n})-J^{\prime}(w_{m})||_{\mathcal{L}^{p}(L^{p} (\mathbb{R}^{n}),\mathbb{R})}||\varphi||_{L^{p^{\prime}}(\mathbb{R}^{n})}+||1_{B_ {R^{\prime}}}\mathbf{K}_{p}(w_{n}-w_{m})||_{\mathcal{L}^{p}(L^{p}(\mathbb{R}^{n}), \mathbb{R})}||\varphi||_{L^{p^{\prime}}(\mathbb{R}^{n})}.\end{split} \tag{6.14}\] Since \(\mathcal{C}_{c}^{\infty}(B_{R^{\prime}})\subset L^{p^{\prime}}(B_{R^{\prime}})\) is dense, \(||J^{\prime}(w_{n})||\longrightarrow 0\) as \(n\longrightarrow\infty\) and since \(1_{B_{R^{\prime}}}\mathbf{K}_{p}\) is a compact operator, we deduce that \(|w_{n}|^{p^{\prime}-2}w_{n}\) is a Cauchy sequence in \(L^{p}(B_{R^{\prime}})\), so that \(|w_{n}|^{p^{\prime}-2}w_{n}\longrightarrow\widetilde{w}\) strongly in \(L^{p}(B_{R^{\prime}})\) for some \(\widetilde{w}\in L^{p}(B_{R^{\prime}})\). Up to a subsequence, \(|w_{n}|^{p^{\prime}-2}w_{n}\longrightarrow\widetilde{w}\) and, equivalently, \(w_{n}\longrightarrow|\widetilde{w}|^{p-2}\widetilde{w}\) a.e. on \(B_{R^{\prime}}\). By the uniqueness of the weak limit, it follows that \(w=|\widetilde{w}|^{p-2}\widetilde{w}\), i.e. \(\widetilde{w}=|w|^{p^{\prime}-2}w\) on \(B_{R^{\prime}}\). This proves our claim. As a consequence, \[0<\zeta\leq\int_{B_{R}(x_{n})}=\int_{B_{R}}|w_{n}|^{p^{\prime}}dx\longrightarrow \int_{B_{R}}|w|^{p^{\prime}}dx\ \ \text{as}\ n\longrightarrow\infty, \tag{6.15}\] which implies \(w\neq 0\). We are going to show that \(w\) is a critical point of \(J(w)\). For every \(\varphi\in\mathcal{C}_{c}^{\infty}(\mathbb{R}^{n})\), we have \[\int_{\mathbb{R}^{n}}|w_{n}|^{p^{\prime}-2}w_{n}\varphi dx\longrightarrow\int_ {\mathbb{R}^{n}}|w|^{p^{\prime}-2}w\varphi dx\ \ \ \text{as}\ n\longrightarrow\infty. \tag{6.16}\] On the other hand since \(\mathbf{K}_{p}\) is a bounded operator, we have \[\int_{\mathbb{R}^{n}}\varphi\mathbf{K}_{p}(w_{n})dx\longrightarrow\int_{ \mathbb{R}^{n}}\varphi\mathbf{K}_{p}(w)dx\ \ \text{as}\ n\longrightarrow\infty. \tag{6.17}\] Consequently, \[\begin{split} J^{\prime}(w)\varphi&=\int_{\mathbb{R} ^{n}}|w|^{p^{\prime}-2}w\varphi dx-\int_{\mathbb{R}^{n}}\varphi\mathbf{K}_{p}( w)dx\\ &=\lim_{n\longrightarrow\infty}\Big{(}\int_{\mathbb{R}^{n}}|w_{n }|^{p^{\prime}-2}w_{n}\varphi-\int_{\mathbb{R}^{n}}\varphi\mathbf{K}_{p}(w_{n} )dx\Big{)}=\lim_{n\longrightarrow\infty}J^{\prime}(w_{n})\varphi=0.\end{split} \tag{6.18}\] Therefore, \(w\in L^{p^{\prime}}(\mathbb{R}^{n})\) is a nontrivial critical point of \(J(w)\), so is \(v\). Remembering the setting \(u:=\mathbf{R}_{\lambda}^{s}(Q^{\frac{1}{p}}v)\), we have \(v=Q^{\frac{1}{p^{\prime}}}|u|^{p-2}u\), this implies that \(u\in L^{p}(\mathbb{R}^{n})\) is a nontrivial solution for \(u=\operatorname{Re}(\mathcal{R}_{\lambda}^{s}(Q(x)|u|^{p-2}u))\). 
### Strong solutions for the Helmholtz equation As we have showed that that problem \(u=\operatorname{Re}(\mathcal{R}_{\lambda}^{s}(Q(x)|u|^{p-2}u))\) has a nontrivial weak solution \(u\in L^{p}(\mathbb{R}^{n})\). In the following, we study the regularity of \(u\). **Lemma 6.4**.: _Let \(n\geq 3\), \(\frac{n}{n+1}<s<1\), \(\frac{2(n+1)}{n-1}<p<\frac{2n}{n-2s}\). Let \(Q\in L^{\infty}(\mathbb{R}^{n})\), and consider a solution \(u\in L^{p}(\mathbb{R}^{n})\) of \(u=\operatorname{Re}(\mathcal{R}_{\lambda}^{s}(Q(x)|u|^{p-2}u))\). Then \(u\in W^{2s,q}(\mathbb{R}^{n})\cap\mathcal{C}^{1,\alpha}(\mathbb{R}^{n})\) for all \(p\leq q<\infty\), \(0<\alpha<1\), and it is a strong solution of (1.1)._ Proof.: Firstly, we shall use the Moser iteration technique to show that the solutions are bounded in \(L^{\infty}(\mathbb{R}^{n})\). Indeed, since \(Q\in L^{\infty}(\mathbb{R}^{n})\) and \(\frac{2(n+1)}{n-1}<p<\frac{2n}{n-2s}\), then by Lemma 5.1, it follows that \(u\in W^{2s,p^{\prime}}_{\operatorname{loc}}(\mathbb{R}^{n})\) and for every \(x_{0}\in\mathbb{R}^{n}\), \[||u||_{W^{2s,p^{\prime}}(B_{2r}(x_{0}))}\leq\widetilde{C}\Big{(}||u||_{L^{p}( \mathbb{R}^{n})}+||Q||_{L^{\infty}(\mathbb{R}^{n})}||u||_{L^{p}(\mathbb{R}^{ n})}^{p-1}\Big{)} \tag{6.19}\] with some constant \(\widetilde{C}>0\), independent of \(x_{0}\). Moreover, \(u\) is a strong solution of (1.1). Using Sobolev's embedding theorem with the property \(p^{\prime}\geq\frac{2n}{n+2s}\), we obtain that \(u\in W^{s,2}_{\operatorname{loc}}(\mathbb{R}^{n})\) with \[||u||_{W^{s,2}(B_{2r}(x_{0}))}\leq\kappa\widetilde{C}\Big{(}||u||_{L^{p}( \mathbb{R}^{n})}+||Q||_{L^{\infty}(\mathbb{R}^{n})}||u||_{L^{p-1}(\mathbb{R}^{ n})}\Big{)}\ \ \text{for all}\ x_{0}\in\mathbb{R}^{n}, \tag{6.20}\] where the constant \(\kappa\) is independent of \(x_{0}\). Consider now \(L>0\), \(\beta>1\) and a cut off function \(\eta\in\mathcal{C}_{c}^{\infty}(\mathbb{R}^{N})\) with supp \(\eta\subset B_{r}(x_{0})\), define \(u_{L}=\min\{u,L\}\) and \(\gamma(u)=\eta uu_{L}^{2(\beta-1)}\). Apparently, \(\gamma\) is a increasing function and we have \[(a-b)(\gamma(a)-\gamma(b))\geq 0\ \ \ \text{for}\ \ \text{any}\ \ a,b\in\mathbb{R}. \tag{6.21}\] Define the functions \[\Lambda(t)=\frac{|t|^{2}}{2}\ \ \text{and}\ \ \ \Gamma(t)=\int_{0}^{t}( \gamma^{\prime}(\tau))^{\frac{1}{2}}d\tau. \tag{6.22}\] Fix \(a,b\in\mathbb{R}\) such that \(a>b\). Then, from the above definitions and applying Jensen inequality we get \[\begin{split}\Lambda^{\prime}(a-b)(\gamma(a)-\gamma(b))& =(a-b)(\gamma(a)-\gamma(b))=(a-b)\int_{a}^{b}\gamma^{\prime}(t)dt\\ &=(a-b)\int_{a}^{b}(\Gamma^{\prime}(t))^{2}dt\geq\Big{(}\int_{a} ^{b}\Gamma^{\prime}(t)dt\Big{)}^{2}=\Gamma(a)-\Gamma(b).\end{split} \tag{6.23}\] In similar fashion, we can prove that the above inequality is true for any \(a\leq b\). Thus we can infer that \[\Lambda^{\prime}(a-b)(\gamma(a)-\gamma(b))\geq|\Gamma(a)-\Gamma(b)|^{2}, \tag{6.24}\] In particular, it follows that \[|\Gamma(\eta u(x))-\Gamma(\eta u(y))|^{2}\leq|\eta u(x)-\eta u(y)|((\eta uu_{ L}^{2(\beta-1)})(x)-(\eta uu_{L}^{2(\beta-1)})(y)). 
\tag{6.25}\] Therefore, taking \(\gamma(u)=\eta uu_{L}^{2(\beta-1)}\) as test-function in (1.1), in view of (6.25) we have \[\begin{split}&||\Gamma(\eta u)||_{W^{s,2}(B_{r}(x_{0}))}^{2}- \int_{B_{r}(x_{0})}\eta u^{2}u_{L}^{2(\beta-1)}dx\\ &\leq\int\int_{B_{r}(x_{0})\times B_{r}(x_{0})}\frac{u(x)-u(y)}{ |x-y|^{n+2s}}((\eta uu_{L}^{2(\beta-1)})(x)-(\eta uu_{L}^{2(\beta-1)})(y))dxdy -\int_{B_{r}(x_{0})}\eta u^{2}u_{L}^{2(\beta-1)}dx\\ &=\int_{B_{r}(x_{0})}Q(x)\eta|u|^{p}|u_{L}|^{2(\beta-1)}dx.\end{split} \tag{6.26}\] Since \(\Gamma(\eta u)\geq\frac{1}{\beta}\eta uu_{L}^{\beta-1}\), from the Sobolev inequality we can deduce that \[||\Gamma(\eta u)||_{W^{s,2}(B_{r}(x_{0}))}^{2}\geq S_{*}|\Gamma(\eta u)|_{L^{2 s}(B_{r}(x_{0}))}^{2}\geq(\frac{1}{\beta})^{2}S_{*}|\eta uu_{L}^{\beta-1}|_{L^{2 s}(B_{r}(x_{0}))}^{2}, \tag{6.27}\] where \(S_{*}\) is the best constant of the fractional Sobolev embedding inequality. By the fact \(Q(x)\in L^{\infty}(\mathbb{R}^{n})\) and \(p>2\), we have \[\begin{split}|\eta uu_{L}^{\beta-1}|_{L^{2s}(B_{r}(x_{0}))}^{2}& \leq C(||Q||_{L^{\infty}(\mathbb{R}^{n})})S_{*}^{-1}\beta^{2}\int_{B_{r}(x_{0 })}\eta|u|^{p}|u_{L}|^{2(\beta-1)}dx\\ &\leq C_{1}\beta^{2}\Big{(}\int_{B_{r}(x_{0})}|u|^{2_{*}^{*}} \Big{)}^{\frac{p-2}{s_{*}^{2}}}\Big{(}\int_{B_{r}(x_{0})}|\eta uu_{L}|^{\frac{ 2s}{s_{*}^{2}-(p-2)}}\Big{)}^{\frac{2s}{s_{*}^{2}-(p-2)}}.\end{split} \tag{6.28}\] Setting \(w_{L}=\eta uu_{L}^{\beta-1}\) and \(\alpha^{*}=\frac{22^{*}_{*}}{2^{*}-(p-2)}\), we then have \[||w_{L}||_{L^{2s}(B_{r}(x_{0}))}^{2}\leq C_{2}\beta^{2}||w_{L}||_{L^{\alpha^{ *}}(B_{r}(x_{0}))}^{2}. \tag{6.29}\] Now, we observe that if \(u^{\beta}\in L^{\alpha^{*}}(B_{r}(x_{0}))\), from the definition of \(w_{L}\), \(u_{L}\leq u\), we obtain \[||w_{L}||_{L^{2s}(B_{r}(x_{0}))}^{2}\leq C_{3}\beta^{2}\big{(}\int_{B_{r}(x_{0} )}u^{\beta\alpha^{*}}dx\big{)}^{\frac{2}{\alpha^{*}}}<\infty. \tag{6.30}\] Since \(\eta\in\mathcal{C}_{c}^{\infty}(\mathbb{R}^{n})\) was chosen arbitrarily with supp \(\eta\subset B_{r}(x_{0})\), then by passing to the limit as \(L\longrightarrow\infty\), the Fatou's Lemma yields \[||u||_{L^{\beta 2s}(B_{r}(x_{0}))}\leq C_{3}^{\frac{1}{2\beta}}\beta^{\frac{1}{ \beta}}||u||_{L^{\beta\alpha^{*}}(B_{r}(x_{0}))}. \tag{6.31}\] Now, we set \(\beta=\frac{2^{*}_{*}}{\alpha^{*}}>1\) and we observe that, being \(u\in L^{2s}(B_{r}(x_{0}))\), the above inequality holds for this choice of \(\beta\). Then, by using the fact that \(\beta^{2}\alpha^{*}=\beta 2^{*}_{*}\), it follows that (6.31) holds with \(\beta\) replaced by \(\beta^{2}\). Therefore, we can see that \[\begin{split}||u||_{L^{\beta 2s}(B_{r}(x_{0}))}&\leq C_{3}^{ \frac{1}{2\beta^{2}}}\beta^{\frac{2}{\beta^{2}}}||u||_{L^{\beta 2\alpha^{*}}(B_{r}(x_{0}))}\\ &\leq C_{3}^{\frac{1}{2}(\frac{1}{\beta}+\frac{1}{\beta^{2}})} \beta^{\frac{1}{\beta}+\frac{1}{\beta^{2}}}||u||_{L^{\beta\alpha^{*}}(B_{r}(x_{0 }))}.\end{split} \tag{6.32}\] Iterating this process, and recalling that \(\beta\alpha^{*}:=2_{s}^{*}\), we can infer that for every \(m\in\mathbb{N}\) \[||u||_{L^{\beta=2_{s}^{*}}(B_{r}(x_{0}))}\leq C_{3}^{\sum_{j-1}^{m}\frac{1}{2 \beta^{j}}}\beta^{\sum_{j-1}^{m}j\beta^{j-1}}||u||_{L^{2_{s}^{*}}(B_{r}(x_{0}))}. \tag{6.33}\] Taking the limit as \(m\longrightarrow\infty\) we get \[||u||_{L^{\infty}(B_{r}(x_{0}))}\leq C_{4}^{\gamma_{1}}\beta^{\gamma_{2}}<\infty, \tag{6.34}\] where \(\gamma_{1}=\frac{1}{2}\sum_{j=1}^{\infty}\frac{1}{\beta^{j}}<\infty\) and \(\gamma_{2}=\sum_{j=1}^{\infty}\frac{j}{S}<\infty\). 
This implies that \(u\in L^{\infty}(B_{r}(x_{0}))\) for all \(x_{0}\in\mathbb{R}^{n}\) with \(\sup\limits_{x_{0}\in\mathbb{R}^{n}}||u||_{L^{\infty}(B_{\frac{1}{2}}(x_{0}))}<\infty\), i.e. \(u\in L^{\infty}(\mathbb{R}^{n})\), as claimed. Applying Lemma 5.1 (ii), we then find that \(u\in W^{2s,q}(\mathbb{R}^{n})\) for every \(p\leq q<\infty\). **Proof of Theorem 1.10.** By Lemma 6.2 and Lemma 6.4, we obtain the existence of strong solutions of (1.1). **Proof of Theorem 1.11.** The proof of Theorem 1.11 follows directly from Lemma 6.3 and Lemma 6.4. ## Acknowledgements The research of Zifei Shen was partially supported by NSFC (12071438). ## Declarations **Conflict of interest** The authors declare that they have no conflict of interest.
2309.13755
Efficient Recursive Data-enabled Predictive Control (Extended Version)
In the field of model predictive control, Data-enabled Predictive Control (DeePC) offers direct predictive control, bypassing traditional modeling. However, challenges emerge with increased computational demand due to recursive data updates. This paper introduces a novel recursive updating algorithm for DeePC. It emphasizes the use of Singular Value Decomposition (SVD) for efficient low-dimensional transformations of DeePC in its general form, as well as a fast SVD update scheme. Importantly, our proposed algorithm is highly flexible due to its reliance on the general form of DeePC, which is demonstrated to encompass various data-driven methods that utilize Pseudoinverse and Hankel matrices. This is exemplified through a comparison to Subspace Predictive Control, where the algorithm achieves asymptotically consistent prediction for stochastic linear time-invariant systems. Our proposed methodologies' efficacy is validated through simulation studies.
Jicheng Shi, Yingzhao Lian, Colin N. Jones
2023-09-24T21:13:20Z
http://arxiv.org/abs/2309.13755v3
# Efficient Recursive Data-enabled Predictive Control ###### Abstract In the field of model predictive control, Data-enabled Predictive Control (DeePC) offers direct predictive control, bypassing traditional modeling. However, challenges emerge with increased computational demand due to recursive data updates. This paper introduces a novel recursive updating algorithm for DeePC. It emphasizes the use of Singular Value Decomposition (SVD) for efficient low-dimensional transformations of DeePC in its general form, as well as a fast SVD update scheme. Importantly, our proposed algorithm is highly flexible due to its reliance on the general form of DeePC, which is demonstrated to encompass various data-driven methods that utilize Pseudoinverse and Hankel matrices. This is exemplified through a comparison to Subspace Predictive Control, where the algorithm achieves asymptotically consistent prediction for stochastic linear time-invariant systems. Our proposed methodologies' efficacy is validated through simulation studies. ## I Introduction In Model Predictive Control (MPC), data-driven techniques have emerged as promising tools to expedite and enhance controller design, offering end-to-end solutions from input-output (I/O) data to fully functional controllers. Among these, Data-enabled Predictive Controller (DeePC) has gained significant attention, leveraging Willems' Fundamental Lemma [1] to bypass traditional modeling steps and establish a direct predictive controller. This method has demonstrated effectiveness across diverse domains, including batteries [2, 3], buildings [4, 5], grids [6], and vehicles [7]. In deterministic Linear Time-invariant (LTI) systems, numerous data-driven approaches have demonstrated the capability to consistently estimate system dynamics using a finite yet sufficiently excited I/O dataset. A paradigmatic method in this regard, Subspace Identification (SID) [8], employs an indirect approach to generate a consistent state-space model, facilitating the subsequent design of an MPC controller. In addition, other direct methods such as DeePC and Subspace Predictive Control (SPC) [9] can leverage this limited data set to directly yield exact trajectory prediction. The landscape changes slightly for LTI systems affected by stochastic noise. Several data-driven studies have shown that employing infinite open-loop I/O data leads to the estimation of asymptotically consistent models [10]. Building on this foundation, more recent efforts have sought to extend these algorithms to closed-loop data [11]. An innovative proposal in this context, as mentioned in [12], seeks to design the SPC using initial I/O data for system control, with the SPC undergoing recursive updates to enhance performance. DeePC has also witnessed similar extensions, such as the integration of instrumental variables [13, 14, 15], which have also been explored to achieve consistency in predictions utilizing both open [13] and closed-loop data [14]. A major hurdle arises with DeePC's increasing computational complexity as more I/O data are integrated. Recent studies have aimed to mitigate this by reducing DeePC's computational overhead [3, 16, 17]. For instance, [3, 16] employ the Singular Value Decomposition (SVD) of the Hankel matrix to reduce the dimensions of DeePC's decision variables. However, these methods, while promising, often present their own challenges, especially as more data is recursively incorporated into the model. 
Existing recursive updating methodologies in SID [18] and SPC [12, 19], reliant on the least squares structure, remain unsuited to DeePC and its variations. This gap underscores the pressing need for a generalized and efficient strategy to recursively update DeePC. Addressing this void, our research introduces an effective recursive updating paradigm within the DeePC framework. Our main contribution describes the equivalency between an SVD-based low-dimensional DeePC and its counterpart in a more general form compared to [3, 16], while also detailing an efficient SVD updating mechanism for recursively updated I/O data. A key advantage of the proposed algorithm is its high degree of flexibility rooted in the general-form DeePC. Our study demonstrates that this form of DeePC can include data-driven methods based on pseudo-inverse matrices and Hankel matrices. We give an example of this through a comparison to SPC, where the algorithm achieves asymptotically consistent prediction. In addition, our proposed algorithm has the potential for broader applications, especially among other adaptive DeePC methods [20, 21]. The paper's structure is as follows: Section II revisits Willems' fundamental lemma and DeePC. Subsequently, Section III delves into the equivalent low-dimensional transformation of DeePC, introducing our efficient recursive updating method. Section IV details the data-driven predictor ensuring consistent predictions. The validity of these methods is empirically established through simulations presented in Section V. **Notation:** Let \(\mathbf{0}\) represent a zero matrix, and \(I\) represent an identity matrix. The notation \(x:=\{x_{i}\}_{i=1}^{T}\) indicates a sequence of size \(T\). The term \(x_{t}\) represents the measurement of \(x\) at the instance \(t\). Additionally, \(x_{1:L}:=[x_{1}^{\top},x_{2}^{\top},\ldots,x_{L}^{\top}]^{\top}\) signifies a concatenated sequence of \(x\) from \(x_{1}\) to \(x_{L}\). \(M^{\dagger}\) indicates Moore-Penrose inverse of a matrix \(M\). ## II Preliminaries Consider a linear time-invariant (LTI) system described by the equations \(x_{t+1}=Ax_{t}+Bu_{t}\) and \(y_{t}=Cx_{t}+Du_{t}\), which we refer to as \(\mathfrak{B}(A,B,C,D)\). The system's order is given by \(n_{x}\) with \(n_{u},\,n_{y}\) denoting its input and output dimensions. An \(L\)-step trajectory for this system is represented as \(\begin{bmatrix}u_{1:L}^{\top}&y_{1:L}^{\top}\end{bmatrix}^{\top}\). The set of all potential \(L\)-step trajectories produced by \(\mathfrak{B}(A,B,C,D)\) is denoted by \(\mathfrak{B}_{L}(A,B,C,D)\). We define the Hankel matrix \(H_{s}\) of depth \(L\) associated with a vector-valued signal sequence \(s=\{s_{i}\}_{i=1}^{T}\) as: \[H_{s}:=\begin{bmatrix}s_{1}&s_{2}&\dots&s_{T-L+1}\\ s_{2}&s_{3}&\dots&s_{T-L+2}\\ \vdots&\vdots&&\vdots\\ s_{L}&s_{L+1}&\dots&s_{T}\end{bmatrix}\,.\] The row and column counts of a Hankel matrix \(H\) are given by _row\({}_{H}\)_ and _coli\({}_{H}\)_, respectively. Throughout this paper, the term \(L\) is exclusively used to indicate the size of the Hankel matrix. An input measurement sequence defined as \(u=\{u_{i}\}_{i=1}^{T}\) is termed _persistently exciting_ of order \(L\) if the Hankel matrix \(H_{u}\) has full row rank. 
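As a minimal illustration of these objects (this sketch is not part of the original paper; the helper names are ours), the depth-\(L\) Hankel matrix of a vector-valued signal and the persistent-excitation test via the row rank of \(H_{u}\) can be written in a few lines of numpy:

```python
import numpy as np

def hankel_matrix(s: np.ndarray, L: int) -> np.ndarray:
    """Depth-L Hankel matrix of a signal s with shape (T, n_s).

    Column j stacks s_j, ..., s_{j+L-1}, so the result has shape
    (L * n_s, T - L + 1), matching the definition of H_s above.
    """
    T, n_s = s.shape
    cols = T - L + 1
    H = np.empty((L * n_s, cols))
    for j in range(cols):
        H[:, j] = s[j:j + L].reshape(-1)
    return H

def is_persistently_exciting(u: np.ndarray, order: int) -> bool:
    """u is persistently exciting of a given order iff H_u has full row rank."""
    H_u = hankel_matrix(u, order)
    return np.linalg.matrix_rank(H_u) == H_u.shape[0]

# Toy usage: a white-noise input of length T is (generically) persistently
# exciting of order L + n_x, as required by the fundamental lemma introduced below.
rng = np.random.default_rng(0)
T, n_u, L, n_x = 200, 1, 20, 5
u = rng.standard_normal((T, n_u))
print(is_persistently_exciting(u, L + n_x))   # expected: True
```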
Utilizing the Hankel matrices \(H_{u}\) and \(H_{y}\), we introduce the well-established **Willems' Fundamental Lemma**: **Lemma 1**: _[_1_, Theorem 1]_ _Given a controllable linear system where \(\{u_{i}\}_{i=1}^{T}\) is persistently exciting of order \(L+n_{x}\), the condition \(\operatorname{colspan}\bigl{(}\begin{bmatrix}H_{u}^{\top}&H_{y}^{\top}\end{bmatrix} ^{\top}\bigr{)}=\mathfrak{B}_{L}(A,B,C,D)\) is satisfied._ Recent advancements in the data-driven control domain have given rise to schemes like DeePC [22], along with numerous variants, for instance, [23, 20]. Lemma 1 plays a pivotal role in these schemes by facilitating trajectory prediction. In the scope of this paper, our primary aim is to unveil a universally efficient updating algorithm tailored for various controllers under the DeePC paradigm. To exemplify, consider the L2 regularized DeePC (L2-DeePC) detailed in [23]: \[\min_{g,\sigma}\,J(u_{pred},y_{pred})+\lambda_{\sigma}\|\sigma\|^ {2}+\lambda_{g}\|g\|^{2} \tag{1a}\] \[\text{s.t.}\,\,Hg=\begin{bmatrix}y_{init}+\sigma\\ y_{pred}\\ u_{init}\\ u_{pred}\end{bmatrix},\] (1b) \[y_{pred}\in\mathbb{Y},u_{pred}\in\mathbb{U} \tag{1c}\] where \(H:=\begin{bmatrix}H_{y}\\ H_{u}\end{bmatrix}\) for simplification. The parameters \(\lambda_{\sigma}\) and \(\lambda_{g}\) represent user-determined regularization cost weights. The elements \(J(u_{pred},y_{pred})\), \(\mathbb{Y}\), and \(\mathbb{U}\) are defined according to the task at hand. Sequences \(u_{init}\) and \(y_{init}\) provide \(n_{init}\)-step historical data for measured inputs and outputs leading up to the present moment, which aids in current state estimation of the dynamic system [22]. Correspondingly, \(u_{pred}\) and \(y_{pred}\) denote the predicted sequences of \(n_{pred}\) steps from the current timestamp. Consistently, the row dimension of the Hankel matrix is set to \(L=n_{init}+n_{pred}\). The L2-DeePC as presented in (1) forecasts the \(n_{pred}\)-step output trajectory \(y_{pred}\) based on a provided predictive input sequence \(u_{pred}\). The objective, specified in (1a), is minimized subject to the constraints delineated in (1b) and (1c). The inclusion of the slack variable \(\sigma\) ensures feasibility for L2-DeePC. Meanwhile, regularization terms are introduced to enhance predictions, especially beneficial when the system is prone to noise or embodies nonlinear elements. For an in-depth discussion and detailed insights, readers are directed to [23]. This paper introduces a data-driven MPC technique under the DeePC framework that is recursively updated with the most recent operational data. We term this approach recursive DeePC and detail it in **Algorithm 1**. ``` 0: Retrieve some persistently exciting past I/O data and build the initial DeePC controller, such as (1). 1. Retrieve the recent \(L\)-step measurements and update the Hankel matrix as: \[H\leftarrow\begin{bmatrix}H&\begin{bmatrix}y_{t-L:t-1}\\ u_{t-L:t-1}\end{bmatrix}\end{bmatrix}\] (2) 3. Retrieve the recent \(t_{init}\)-step measurements. Solve the DeePC and apply the optimal input as \(u_{t}=\mathbf{u}_{pred}^{*}(1)\). 4. Pause until the subsequent sampling time and revert to step 1. ``` **Algorithm 1** Recursive DeePC In many applications, empirical evidence suggests that DeePC benefits from larger quantities of I/O data beyond the minimal requirement [4, 6]. Furthermore, research in [13] establishes that infinite open-loop data can ensure consistent prediction in DeePC methods with instrumental variables for stochastic LTI systems. 
This insight can be broadened to closed-loop data, leveraging techniques from SID [8] and SPC [12]. Notably, by modifying (2) to incorporate a forgetting factor and discard outdated data, **Algorithm 1** can be adapted for adaptive DeePC approaches such as those detailed in [20, 21]. ## III Efficient recursive updates in the DeePC framework In this section, we introduce a more computationally efficient version of **Algorithm 1**. This improved algorithm hinges on two primary components: (1) an equivalent low-dimensional transformation of the DeePC in its general form, leveraging SVD, and (2) a fast SVD updating technique. The complete methodology is concluded in **Algorithm 3**. To detail its operation, we reference the L2-DeePC (1) as a demonstrative example. ### _An equivalent low-dimensional transformation_ For the first component of **Algorithm 3**, we describe an equivalent low-dimensional transformation of a general DeePC problem. This transformation is facilitated by the SVD of the aggregated Hankel matrix, \(H\): \[H=\begin{bmatrix}U_{1}&U_{2}\end{bmatrix}\begin{bmatrix}\Sigma&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{bmatrix}\begin{bmatrix}V_{1}&V_{2}\end{bmatrix}^{ \top}=U_{1}\Sigma V_{1}^{\top}\] where \(\Sigma\in\mathbb{R}_{r_{H},r_{H}}\) and \(r_{H}\) is the rank of \(H\). A general DeePC problem is defined as: **Problem 1:** \[\min_{\begin{subarray}{c}g,\sigma\\ u_{pred},y_{pred}\end{subarray}}f_{1}(u_{pred},y_{pred},\sigma,V_{1}^{\top}g)+f_{2 }(V_{2}^{\top}g) \tag{3}\] \[\text{s.t. }Hg=\begin{bmatrix}y_{init}\\ y_{pred}\\ u_{init}\\ u_{pred}\end{bmatrix}+\sigma,\] \[f_{3}(y_{pred},u_{pred},\sigma)\leq 0\] Here, functions \(f_{1}(\cdot)\), \(f_{2}(\cdot)\), and \(f_{3}(\cdot)\) are user-specified and vary across different DeePC methodologies tailored for diverse applications. The aforementioned transformation in a lower dimension is defined as: **Problem 2:** \[\min_{\begin{subarray}{c}g,\sigma\\ u_{pred},y_{pred}\end{subarray}}f_{1}(u_{pred},y_{pred},\sigma,\bar{g}) \tag{4}\] \[\text{s.t. }\bar{H}\bar{g}=\begin{bmatrix}y_{init}\\ y_{pred}\\ u_{init}\\ u_{pred}\end{bmatrix}+\sigma,\] \[f_{3}(y_{pred},u_{pred},\sigma)\leq 0\] where \(\bar{H}:=U_{1}\Sigma\), signifying the transformed version of the Hankel matrix. **Lemma 2**: **Problem 1** _and **Problem 2** are equivalent._ **Problem 1** _can change the decision variable \(g\) by:_ \[\bar{g}=\begin{bmatrix}\tilde{g}_{1}\\ \tilde{g}_{2}\end{bmatrix}=\begin{bmatrix}V_{1}^{\top}g\\ V_{2}^{\top}g\end{bmatrix}=\begin{bmatrix}V_{1}&V_{2}\end{bmatrix}^{\top}g\] _because \(\begin{bmatrix}V_{1}&V_{2}\end{bmatrix}\) is an orthogonal matrix [24]. Then because \(Hg=U_{1}\Sigma V_{1}^{\top}g=\bar{H}\bar{g}_{1}\), the objects and constraints in the new equivalent problem are separable with respect to \(\tilde{g}_{1}\) and \(\tilde{g}_{2}\):_ \[\min_{\begin{subarray}{c}\tilde{g},\sigma\\ u_{pred},y_{pred}\end{subarray}}f_{1}(u_{pred},y_{pred},\sigma,\tilde{g}_{1})+f_{ 2}(\tilde{g}_{2})\] \[\text{s.t. }\bar{H}\tilde{g}_{1}=\begin{bmatrix}y_{init}\\ y_{pred}\\ u_{init}\\ u_{pred}\end{bmatrix}+\sigma,\] \[f_{3}(y_{pred},u_{pred},\sigma)\leq 0\] _Therefore, we can solve them separately by:_ \[\min_{\begin{subarray}{c}\tilde{g}_{1},\sigma\\ u_{pred},y_{pred}\end{subarray}}f_{1}(u_{pred},y_{pred},\sigma,\tilde{g}_{1})\] \[\text{s.t. 
}\bar{H}\tilde{g}_{1}=\begin{bmatrix}y_{init}\\ y_{pred}\\ u_{init}\\ u_{pred}\end{bmatrix}+\sigma,\] \[f_{3}(y_{pred},u_{pred},\sigma)\leq 0\] \[\min_{\tilde{g}_{2}}f_{2}(\tilde{g}_{2})\] _By replacing \(\tilde{g}_{1}\) by \(\bar{g}\) in the first sub-problem above, we get **Problem 2**._ Leveraging Lemma 2, we can deduce the low-dimensional version of the L2-DeePC (1). This inference is drawn from the relationship: \(\|g\|^{2}=g^{\top}\begin{bmatrix}V_{1}&V_{2}\end{bmatrix}\begin{bmatrix}V_{1}&V_{2 }\end{bmatrix}^{\top}g=\|V_{1}g\|^{2}+\|V_{2}g\|^{2}\): \[\min_{\begin{subarray}{c}\tilde{g},\sigma\\ \bar{g},\sigma\end{subarray}}J(u_{pred},y_{pred})+\lambda_{\sigma}\|\sigma\|^{ 2}+\lambda_{g}\|\bar{g}\|^{2} \tag{5}\] \[\text{s.t. }H\bar{g}=\begin{bmatrix}y_{init}+\sigma\\ y_{pred}\\ u_{init}\\ u_{pred}\end{bmatrix},\] \[y_{pred}\in\mathbb{Y},u_{pred}\in\mathbb{U}\] **Remark 1**: _In **Problem 2**, the decision variable \(\bar{g}\), which belongs to \(\mathbb{R}_{r_{H}}\), is independent of the columns of the Hankel matrix. The authors in [3, 16] introduce the same SVD-based transformation and establish the equivalence using KKT conditions [16]. However, their study is limited to L2-DeePC, which is a special case of the more general form of DeePC (3) described in our study.._ ### _Efficient SVD updates_ In the preceding section, we established that the general-form DeePC (3) can be converted into a more compact, low-dimensional format (4) via SVD. Notably, the dimensionality of the decision variable in (4) is governed solely by the rank of the Hankel matrix \(H\). Expanding upon this, the current section introduces a rapid SVD updating technique [25, 26]. This method obviates the need for a complete SVD recalculation with each recursive update (2). When the previous SVD components, specifically \(U_{1}\) and \(\Sigma\), are available and \(H\) undergoes an update as per (2), **Algorithm 2** can be leveraged to update the new \(U_{1}\) and \(\Sigma\). **Given:** Current SVD components: \(U_{1},\Sigma\) 1. Retrieve the column \(a\) to be added to \(H\), i.e. \(\begin{bmatrix}y_{t-Lt-1}\\ u_{t-Lt-1}\end{bmatrix}\) at time \(t\). Compute \(r=\text{rank}(\Sigma)\). 2. If \(r<\text{row}_{H}\), update \(U_{1},\ \Sigma\) by [25]: Compute \(m=U^{\top}a\), \(p=a-Um\), \(R_{a}=\|p\|\), \(P=R_{a}^{-1}p\) Compute \(K=\begin{bmatrix}\Sigma&m\\ 0&R_{a}\end{bmatrix}\) and its SVD \(K=C\bar{\Sigma}D^{\top}\) If \(\text{rank}(\bar{\Sigma})==r\): \(U_{1}=\begin{bmatrix}U_{1}&P\end{bmatrix}C(;1:r)\), \(\Sigma=\bar{\Sigma}(1:r,1:r)\) If \(\text{rank}(\bar{S})==r+1\): \(U_{1}=\begin{bmatrix}U_{1}&P\end{bmatrix}C,\ \Sigma=\bar{\Sigma}\) 3. If \(r==row_{H}\), update \(U_{1},\ \Sigma\) by [26]: Compute \(z=U_{1}^{\top}a\) Compute eigendecomposition of \(\Sigma^{2}+zz^{\top}\): \(C\bar{\Sigma}C^{\top}\) Update \(U_{1}=U_{1}C\), \(\Sigma=\sqrt{\bar{\Sigma}}\) **Lemma 3**: **Algorithm 2** _calculates \(U_{1}\) and \(\Sigma\) identical to the results obtained through the direct SVD of \(H\) following each recursive update (2). It boasts a computational complexity of \(\mathcal{O}(\text{row}_{H}r_{H}^{2})\) and a space requirement of \(\mathcal{O}(\text{row}_{H}r_{H})\)._ **Proof 1**: _Proofs of equivalence for the conditions in steps \(2)\) and \(3)\) of **Algorithm 2** are provided in [25, 26], which is omitted due to the limited space. All the matrix multiplications requires a load of \(\mathcal{O}(\text{row}_{H}r_{H}^{2})\). 
In addition, the SVD's load in step \(2)\) is \(\mathcal{O}(r_{H}^{3})\) (or \(\mathcal{O}(r_{H}^{2})\) due to the special structure [25]), and the eigendecomposition's load is \(\mathcal{O}(\textit{row}_{H}^{3})\). Because \(r_{H}<\textit{row}_{H}\) for step \(2)\) and \(r_{H}=\textit{row}_{H}\) for step \(3)\), the overall complexity is \(\mathcal{O}(\textit{row}_{H}r_{H}^{2})\). Finally, the matrices' size directly determines the space requirement. ### _Conclusion of the algorithm_ A computationally recursive DeePC is summarized in **Algorithm 3**. The general form of DeePC represented in (3) undergoes a transformation into a low-dimensional equivalent as outlined in (4), using SVD. Moreover, with each successive update as indicated in (2), the new SVD components are rapidly updated. ``` 0: Retrieve some persistently excited past I/O data. Construct \(H\) and compute its SVD. Build the initial low-dimensional DeePC controller based on the **Problem 2**, such as (1). 1. Retrieve the recent \(L\)-step measurements and update the SVD components based on **Algorithm 2**. 3. Retrieve the recent \(t_{init}\)-step measurements. Solve the DeePC and apply the optimal input as \(u_{t}=\textbf{u}_{pred}^{*}(1)\). 4. Pause until the subsequent sampling time and revert to step 1. **Lemma 4**: _Algorithm 1 and Algorithm 3 are equivalent._ At the initial step, **Problem 1** and **Problem 2** are respectively constructed in two Algorithms, which have been proved to be equivalent in Lemma 2. After the first recursive update (2), the new SVD components are exactly updated by **Algorithm 2** proved in Lemma 3. Therefore, **Problem 1** and **Problem 2** are still equivalent by Lemma 2. The proof is then completed by induction. As a result, the total computational complexity depends mainly on the polynomials with respect to \(\textit{row}_{H}\) and \(r_{H}\). Because \(\textit{row}_{H}=(n_{init}+n_{pred})(n_{u}+n_{y})\), and the decision variables \(\bar{g}\in\mathbb{R}_{r_{H}}\), \(u_{pred}\in\mathbb{R}_{n_{n}(n_{init}+n_{pred})}\), \(y_{pred}\in\mathbb{R}_{n_{y}(n_{init}+n_{pred})}\), \(\sigma\in\mathbb{R}_{\textit{row}_{H}}\) in **Problem 1**. Besides, the complexity of the fast SVD updating method is proved in Lemma 3. Therefore, the complexity is fixed after the DeePC's parameter is settled as \(r_{H}\leq\textit{row}_{H}\). It's notable that the size of the original recursive DeePC in **Algorithm 1** relates to \(\textit{col}_{H}=T-\textit{row}_{H}+1\), which increases with the addition of more data. The computational burden of **Algorithm 3** is comparable to the recursive SPC method [12]. The latter has a decision variable in its sparse representation of size \(\textit{row}_{H}\), which can recursive update using _Recursive Least Square_ at a computational complexity of \(\mathcal{O}(\textit{row}_{H}^{2})\)[19]. In the next Section, we will prove that SPC is equivalent to a specialized DeePC belonging to the general-from DeePC (3), adaptable as well by **Algorithm 3**. **Remark 2**: _Algorithm 3 offers extensions to other adaptive DeePC strategies, typified by references like [20, 21]. These strategies are applicable to slowly time-varying linear systems or approximate dynamics of unknown nonlinear systems across varied operating points. 
Other fast SVD modifications, such as the integration of forgetting factors (see **Appendix A**) and downdating [27], can be incorporated for extensions to these adaptive methods._ ## IV Comparison to data-driven methods based-on Pseudoinverse A pivotal strength of **Algorithm 3** is its versatile nature, rooted in its generic DeePC formulation. Beyond encompassing various DeePC variants [23, 6, 20], it holds potential for extension to various data-driven methodologies that utilize the Hankel matrix, such as simulation [28], physics-based filters [29] and data-driven observers [30, 31]. Among them, A group of researchers utilizes Pseudoinverse to achieve prediction or estimation [6, 28, 30, 31]. For the purpose of this section, we will illustrate that these Pseudoinverse-based methods can be generalized in the form of **Problem 1** using a specific data-driven prediction formulation [6]. Next, we will demonstrate that **Problem 1** can generalize SPC, which also employs Pseudoinverse and Hankel matrices. Additionally, we present how to achieve asymptotically consistent prediction for stochastic LTI systems through recursive data updates using **Algorithm 3**. ### _Comparison to Data-driven prediction_ Given measured \(u_{init},y_{init}\) and required \(u_{pred}\), a data-driven prediction method based on Pseudoinverse [6] is formulated as: \[\begin{split} y_{pred}&=H_{y,pred}g\\ g&=\begin{bmatrix}H_{y,init}\\ H_{u}\end{bmatrix}^{\dagger}\begin{bmatrix}y_{init}\\ u_{init}\\ u_{pred}\end{bmatrix}\end{split} \tag{6}\] where the sub-Hankel matrices are derived from the original Hankel matrix by \(H_{y}=\begin{bmatrix}H_{y,init}\\ H_{y,pred}\end{bmatrix}\). The matrix \(H_{y,init}\) is of depth \(n_{init}\) and the depth of \(H_{y,pred}\) is the prediction horizon \(n_{pred}\) such that \(n_{init}+n_{pred}=L\). (6) is the solution of an optimization problem [6] formulated as: \[\begin{split} y_{pred}&=H_{y,pred}g\\ g&=\operatorname*{arg\,min}_{g_{l}}\lVert g_{l}\rVert^{2}\\ \text{s.t.}&\begin{bmatrix}H_{y,init}\\ H_{u}\end{bmatrix}g_{l}=\begin{bmatrix}y_{init}\\ u_{init}\\ u_{pred}\end{bmatrix}\end{split} \tag{7}\] By the fact \(\lVert g\rVert^{2}=g^{\top}\begin{bmatrix}V_{1}&V_{2}\end{bmatrix}\begin{bmatrix}V_{ 1}&V_{2}\end{bmatrix}^{\top}g=\lVert V_{1}g\rVert^{2}+\lVert V_{2}g\rVert^{2}\) and adding an equality constraint so that \(u_{pred}\) is equal to the required value, we can write (7) in the form of **Problem 1**. For other Pseudoinverse-based data-driven methods [28, 30, 31], similar results can be derived after little modification of (6). ### _Comparison to SPC_ This section describes how to generalize SPC in the form of **Problem 1**. In addition, **Algorithm 3** helps to achieve asymptotically consistent prediction by continuously involving open-loop and closed-loop data online. The SPC controller [9] is formulated as, \[\min_{u_{pred},y_{pred}} J(u_{pred},y_{pred})\] s.t. \[y_{pred}\in\mathbb{Y},u_{pred}\in\mathbb{U}\] \[y_{pred}=K\begin{bmatrix}y_{init}\\ u_{init}\\ u_{pred}\end{bmatrix} \tag{8a}\] \[K=H_{y,pred}\begin{bmatrix}H_{y,init}\\ H_{u}\end{bmatrix}^{\dagger} \tag{8b}\] Based on the specific-form data-driven prediction (7), a bi-level DeePC is defined as: \[\min_{g,\sigma} J(u_{pred},y_{pred})\] s.t. \[y_{pred}\in\mathbb{Y},u_{pred}\in\mathbb{U} \tag{9}\] **Lemma 5**: _SPC (8) and the DeePC (9) are equivalent._ The only difference between the two controllers is their prediction parts, i.e. (8a) and (8b), (7). 
The fact that both predictions can be written in the same explicit form finishes the proof: \(y_{pred}=H_{y,pred}\begin{bmatrix}H_{y,init}\\ H_{u}\end{bmatrix}^{\dagger}\begin{bmatrix}y_{init}\\ u_{init}\\ u_{pred}\end{bmatrix}\) The analysis of consistent prediction is as follows. Consider a stochastic LTI system defined in innovation form: \[x_{t+1}=Ax_{t}+Bu_{t}+Ke_{t} \tag{10}\] \[y_{t}=Cx_{t}+Du_{t}+e_{t}\] where \(K\) denotes the Kalman gain and \(e_{k}\) is a zero-mean white noise signal. The prediction \(y_{pred}\) at time \(t\) is consistent if its expectation is an unbiased estimation of the real output sequence \(y_{real}\)[14, 8], i.e. \[\mathbb{E}_{e}(y_{pred}-y_{real})=\mathbf{0}\] **Assumption 1**: _The Kalman gain K in (10) ensures that the matrix \(A-KC\) is strictly stable. The initial step \(n_{init}\) is sufficiently large so that \((A-KC)^{t_{init}}\approx 0\)._ **Assumption 2**: _The input is quasi-stationary so that limits of time averages of the input sequence exists._ **Assumption 3**: _The input sequence \(\{u_{i}\}_{i=1}^{T}\) for Hankel matric \(H_{u}\) is persistently exciting of order \(L+n_{x}\)._ **Lemma 6**: _Under Assumptions 1, 2 and 3, (7) constructed by open-loop data provides a consistent prediction when \(\mathit{col}_{H}\rightarrow\infty\)._ **Lemma 7**: _Assumptions 1, 2 and 3 stand. Assume that \(D=0\) in the LTI system or the I/O data is collected by feedback control with at least one sample time delay. Then (7) with \(n_{pred}=1\) constructed by the closed-loop data provides a consistent prediction when \(\mathit{col}_{H}\rightarrow\infty\)_ The proof of Lemmas 6 and 7 is elaborated in **Appendix B**. Assumptions 1, 2 and 3 used therein also frequently emerge in consistency analysis in the field of system identification, as seen in [8, 9]. In addition, because Lemma 7 guarantees asymptotically consistent prediction for (7) with closed-loop data when setting \(n_{pred}=1\), one can successively apply (7) with \(n_{pred}=1\) to achieve consistent multi-step output prediction via: \[\begin{split}&\forall i=1,2,\ldots,n_{pred}:\\ & y_{pred}(i)=H_{y,pred}g_{i}\\ & g_{i}=\operatorname*{arg\,min}_{g_{l}}\lVert g_{l}\rVert^{2}\\ &\text{s.t.}\ \begin{bmatrix}H_{y,init}\\ H_{u}\end{bmatrix}g=\begin{bmatrix}y_{init}(i:n_{pred})\\ y_{pred}(1:i-1)\\ u_{init}(i:n_{pred})\\ u_{pred}(1:i)\end{bmatrix}\end{split} \tag{11}\] where \(y_{pred}(i)\) represents the \(i-\)th output in \(y_{pred}\), and \(y_{pred}(i:j)\) captures the vector from the \(i\)-th to \(j-\)th outputs within \(y_{pred}\) (with similar notations applied elsewhere). Similar setups have been validated in prior DeePC and SPC studies [32, 12]. A new bi-level DeePC can be defined as: \[\min_{g,\sigma} J(u_{pred},y_{pred})\] (12) s.t. \[y_{pred}\in\mathbb{Y},u_{pred}\in\mathbb{U}\] Notably, one can directly apply **Algorithm 3** to recursively update the bi-level DeePC (12). A tractable computation method of the bi-level DeePC can be referred to our previous work [20]. According to Lemma 7, with an infinite length of closed-loop data, it's feasible to obtain an unbiased output prediction for the stochastic LTI system. Nonetheless, it's important to note that there may not be a monotonic improvement in prediction and control performance throughout the update cycle. In addition, one can adapt Algorithm 3 to update the data-driven prediction online in the two bi-level DeePCs (9) and (12) with data from other controllers. 
To achieve this, the modification required for (11) (or (7)) is simply replacing the DeePC in step 3) of Algorithm 3 with other closed-loop controllers (or open-loop control signals). ## V Simulation In this section, we evaluate the effectiveness of the proposed efficient recursive DeePC methodology through simulation studies. We utilize a discrete-time LTI system, as detailed in [9], which models two circular plates coupled with flexible shafts. The system's matrices, conforming to (10), are provided: \[A=\begin{bmatrix}4.4&1&0&0&0\\ -8.09&0&1&0&0\\ 7.83&0&0&1&0\\ -4&0&0&0&1\\ 0.86&0&0&0&0\end{bmatrix},\ B=\begin{bmatrix}0.00098\\ 0.01299\\ 0.01859\\ 0.0033\\ -0.00002\end{bmatrix},K=\begin{bmatrix}2.3\\ -6.64\\ 7.515\\ -4.0146\\ 0.86336\end{bmatrix}\] During the simulation, the noise variance is set to \(\text{var}(e_{t})=0.1\), and the input is restricted to \(|u_{t}|\leq 10\). The optimization problems in the following simulation are solved by the solver quadprog in MATLAB with an Intel Core i7-1165G7 2.80 GHz processor. ### _Validation of Algorithm 3_ To initiate, a 200-step trajectory is generated with the input defined as a zero-mean white noise signal, having \(\text{var}(u_{t})=1\). Employing this initial trajectory, an L2-DeePC (1) is established, targeting the objective \(J(u_{pred},y_{pred})=\|y_{pred}-ref\|^{2}+0.001\|u_{pred}\|^{2}\). Initially, the reference is set at \(10\) for 1000 steps and subsequently adjusted to \(0\) for the ensuing 1000 steps. The parameters are designated as \(\lambda_{\sigma}=10^{6}\), \(\lambda_{g}=10^{4}\), and \(n_{init}=n_{pred}=10\). These parameters aren't meticulously tuned, as our primary interest lies in evaluating the efficiency of **Algorithm 3**. The L2-DeePC controls the system and is recursively updated by **Algorithm 1**. For comparative analysis, an identical procedure is employed utilizing **Algorithm 3**, integrated with the equivalent low-dimensional L2-DeePC, as delineated in (5). Table I provides the statistical results from 10 Monte Carlo simulations across different noise scenarios. Both algorithms yield almost identical input and output signals, with only slight numerical errors, and **Algorithm 3** proves to be faster in execution than **Algorithm 1**. For a closer look, Figure 1 shows the resulting trajectories for a specific noise scenario. The input and output trajectories validate the equivalence between the two algorithms. Additionally, we analyze the computational time required for each recursive update and optimization for both algorithms. We notice that the time for **Algorithm 1** increases superlinearly as more data are added, whereas the time for **Algorithm 3** stays relatively steady, highlighting its efficiency. ### _Comparison to SPC_ This section evaluates the asymptotic consistency of the data-driven prediction in equations (9) and (12) by using **Algorithm 3**. We will refer to them as DDP1 and DDP2 for brevity. The equivalence between (9) and SPC (8) is also tested. Given that all the data-driven prediction methods and the ground truth (elaborated in the Appendix C) can be expressed in the matrix form: \[y_{pred}=K_{y,init}y_{init}+K_{u,init}u_{init}+K_{u,pred}u_{pred} \tag{13}\], consistency is tested by comparing discrepancies among the involved matrices. 
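As an illustrative sketch of this matrix comparison (not the authors' MATLAB code; `hankel_matrix` is the helper sketched in Section II, and the reference matrix `K_ground_truth` is a placeholder), the stacked predictor \([K_{y,init},\ K_{u,init},\ K_{u,pred}]\) of (13) can be extracted from the pseudoinverse-based predictor (7)/(8b) as follows:

```python
import numpy as np

def predictor_matrix(u: np.ndarray, y: np.ndarray, n_init: int, n_pred: int) -> np.ndarray:
    """Stacked predictor [K_{y,init}, K_{u,init}, K_{u,pred}] of (13).

    Built as in (8b)/(7): K = H_{y,pred} @ pinv([H_{y,init}; H_u]),
    using Hankel matrices of depth L = n_init + n_pred.
    """
    L = n_init + n_pred
    n_y = y.shape[1]
    H_u = hankel_matrix(u, L)                      # from the sketch in Section II
    H_y = hankel_matrix(y, L)
    H_y_init, H_y_pred = H_y[:n_init * n_y], H_y[n_init * n_y:]
    return H_y_pred @ np.linalg.pinv(np.vstack([H_y_init, H_u]))

# Consistency check in the spirit of Figs. 2-3: compare the predictor built
# from a given dataset with a reference matrix (e.g. the ground truth of
# Appendix C) via a matrix norm of their difference.
# err = np.linalg.norm(predictor_matrix(u_data, y_data, 50, 50) - K_ground_truth)
```

Under Assumptions 1-3, appending further columns to the Hankel matrices changes this discrepancy in the manner examined in Figs. 2 and 3.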
In this study, we set \(n_{init}=n_{pred}=50\), with a large \(n_{init}\) ensuring compliance with Assumption 1 Firstly, an open-loop trajectory spanning 10000 steps is generated using an input characterized as a zero-mean white noise signal with a variance \(\text{var}(u_{t})=1\). DPP1, DDP2, and SPC are initialized using 150 steps to assemble the Hankel matrix. They are then efficiently updated in a recursive manner, leveraging a variant of **Algorithm 3** as outlined at the end of Section IV-B. Figure 2 depicts average outcomes from 10 Monte Carlo simulations, wherein the deviation from the ground truth is calculated at each iteration. As more open-loop data are incorporated into the three prediction methods, the matrix discrepancies consistently diminish, reinforcing their validity. Furthermore, the equivalence between SPC and DPP1 is validated. The subsequent experiment employs a 25000-step closed-loop trajectory, controlled by a static DeePC constructed from the open-loop trajectory of the previous test. Average results over 10 Monte Carlo simulations are showcased in Figure 3. The matrix discrepancies from the ground truth, as observed in the SPC and DDP1, initially decline but later stabilize. Conversely, DDP2 continually exhibits a reduction in matrix differences as it integrates more closed-loop data. However, it's notable that when the Hankel matrix lacks sufficient data, matrix discrepancies in DDP2 exceed others', and its improvement rate lags behind that observed in Figure 2. Future work will focus on optimizing the closed \begin{table} \begin{tabular}{c c c c} \hline Average \(e_{u}\) & Average \(e_{y}\) & Average \(time_{1}\) & Average \(time_{3}\) \\ \hline \(6.7\times 10^{-12}\) & \(5.2\times 10^{-12}\) & 0.1557 [s] & 0.0026 [s] \\ \hline \end{tabular} \end{table} TABLE I: Statistical results of 10 Monte Carlo runs: the differences of input (\(e_{u}=|u_{1}-u_{3}|\)) and output (\(e_{y}=|y_{1}-y_{3}|\))), the computational time of each recursive step (\(time\)), where \(\cdot_{1}\) and \(\cdot_{3}\) respectively indicate the data from Algorithm 1 and 3. Fig. 1: Comparison of Algorithm 1 and Algorithm 3. Fig. 2: Consistency analysis by open-loop data. SPC: from (8a) and (8b); DDP1: from (9); DDP2: from (12). \(K^{\text{ground}}\) indicates the matrix from the ground truth. loop controller design to expedite improvements in DDP2. ## VI Conclusion In conclusion, this paper presents a novel recursive updating algorithm for DeePC to efficiently handle computational challenges. The algorithm utilizes SVD for low-dimensional transformations and fast updates. It is flexible, accommodating various data-driven methods that use Pseudoinverse and Hankel matrices, as demonstrated through a comparison to Subspace Predictive Control. ### _Integration of forgetting factors_ Consider that a forgetting factor \(\alpha<1\) is utilized after the recursive update (2) in step \(1\)) of **Algorithm 1**, defined as: \[H\leftarrow\alpha H\] Then an additional update should be conducted after performing the SVD update in step \(2\)) of **Algorithm 3**. Specifically, due to the fact that \(\alpha H=\alpha U_{1}\Sigma V_{1}=U_{1}\alpha\Sigma V_{1}\), we need to update: \[\Sigma\leftarrow\alpha\Sigma\] ### _Proofs of Lemmas 6 and 7_ First, we derive a relationship from the stochastic LTI system (10). 
By propagating the dynamics from time \(t\), we can formulate the next \(n_{pred}\)-step output as: \[y_{real}=\Gamma x_{t}+K_{1}u_{pred}+K_{2}e_{pred} \tag{14}\] where \(e_{pred}:=e_{t:t+n_{pred}-1}\) and \[\Gamma=\begin{bmatrix}C^{\top}&(CA)^{\top}&(CA^{2})^{\top}&\cdots&(CA^{n_{pred }-1})^{\top}\end{bmatrix}^{\top},\] \[K_{1}=\begin{bmatrix}D&0&0&\cdots&0\\ CB&D&0&\cdots&0\\ CAB&CB&D&\cdots&0\\ \cdots&\cdots&\ddots&\ddots&\vdots\\ CA^{n_{pred}-2}BA&CA^{n_{pred}-3}B&\cdots&CB&D\end{bmatrix},\] \[K_{1}=\begin{bmatrix}I&0&0&\cdots&0\\ CK&I&0&\cdots&0\\ CAK&CB&I&\cdots&0\\ \cdots&\cdots&\ddots&\ddots&\vdots\\ CA^{n_{pred}-2}K&CA^{n_{pred}-3}K&\cdots&CK&I\end{bmatrix}.\] By replacing \(e_{t}=y_{t}-Cx_{t}-Du_{t}\) in the state propagation in (10), a predictor-form state-space model can be formulated as: \(x_{t+1}=\tilde{A}x_{t}+\tilde{B}u_{t}+Ky_{t},\ y_{t}=Cx_{t}+Du_{t}+e_{t}\), where \(\tilde{A}=A-KC\) and \(\tilde{B}=B-KD\). From this model, we can find a relation between \(x_{t}\) and \(x_{t-n_{init}}\) by: \[x_{t}=\tilde{A}^{P}x_{t-n_{init}}+K_{3}u_{init}+K_{4}y_{init}\] By replacing the above equation in (14), we can find: \[y_{real}=\begin{bmatrix}\Gamma K_{4}&\Gamma K_{3}&K_{1}\end{bmatrix}\begin{bmatrix} y_{init}\\ u_{init}\\ u_{pred}\end{bmatrix} \tag{15}\] \[+\tilde{A}^{P}x_{t-n_{init}}+K_{2}e_{pred}\] The above linear relation can be extended to the Hankel matrices \(H_{u},H_{y}\) by \[H_{y,pred}=\begin{bmatrix}\Gamma K_{4}&\Gamma K_{3}&K_{1}\end{bmatrix} \begin{bmatrix}H_{y,init}\\ H_{u}\end{bmatrix} \tag{16}\] \[+\tilde{A}^{P}X_{-n_{init}}+K_{2}H_{e,pred}\] where \(H_{e,pred}\) represents the prediction part in \(H_{e}\), similar to the definition of \(H_{y,pred}\). Besides, \(X_{-n_{init}}:=\begin{bmatrix}x_{1-n_{init}}&x_{2-n_{init}}&\cdots&x_{T-L+1-n_ {init}}\end{bmatrix}\), where the time corresponds to that of \(\{u_{i}\}_{i=1}^{T}\) and \(\{y_{i}\}_{i=1}^{T}\) for constructing \(H_{u}\) and \(H_{y}\). [For Lemma 6] The data-driven prediction (7) can be written in the explicit form: \(y_{pred}=H_{y,pred}Z^{\top}\begin{bmatrix}y_{init}\\ u_{init}\\ u_{pred}\end{bmatrix}\) by defining \(Z:=\begin{bmatrix}H_{y,init}\\ H_{u}\end{bmatrix}\) for simplicity. It can be rewritten as \[y_{pred}=\frac{1}{T}H_{y,pred}Z^{\top}(\frac{1}{T}ZZ^{\top})^{-1}\begin{bmatrix} y_{init}\\ u_{init}\\ u_{pred}\end{bmatrix} \tag{17}\] under Assumption 3, which ensures that the inverse exists for stochastic LTI systems. Besides, Assumption 2 ensures the existence of matrix correlation in (17). By replacing \(H_{y,pred}\) by (16) to (17), we can find the following result: \[\begin{split}&\lim_{T\rightarrow\infty}\frac{1}{T}H_{y,pred}Z^{ \top}(\frac{1}{T}ZZ^{\top})^{-1}\\ =&\begin{bmatrix}\Gamma K_{3}&K_{1}&\Gamma K_{4}\end{bmatrix}+\\ &\lim_{T\rightarrow\infty}\frac{1}{T}(\tilde{A}^{P}X_{-n_{init}}+K_{2}H_{e, pred})Z^{\top}(\frac{1}{T}ZZ^{\top})^{-1}\\ =&\begin{bmatrix}\Gamma K_{3}&K_{1}&\Gamma K_{4}\end{bmatrix}\end{split} \tag{18}\] where the latter term in the second equation vanishes due to Assumption 1 and the lack of correlation between the \(u_{t}\) and \(e_{t}\) in open-loop data. Referring to (15), and (18) Fig. 3: Consistency analysis by closed-loop data and Assumption 1, we can demonstrate consistency in the prediction made by (17) as \(T\rightarrow\infty\) by: \[\mathbb{E}_{e}(y_{pred}-y_{real})=\mathbb{E}_{e}(\tilde{A}^{P}x_{t-n_{init}}+K_{2 }e_{pred})=\mathbf{0}\] [For Lemma 7] The proof is very similar to the one for For Lemma 6. 
The only difference is that \(\lim_{T\rightarrow\infty}\frac{1}{T}K_{2}H_{e,pred})Z^{\top}\neq 0\) in general for closed-loop data. However, under the specific setup, i.e. \(D=0\) in the LTI system or the I/O data is collected by feedback control with at least one sample time delay, we again have \(\lim_{T\rightarrow\infty}\frac{1}{T}K_{2}H_{e,pred})Z^{\top}=0\). ### _Data-driven prediction: a specific form_ This appendix explains how to transform the data-driven prediction in (7), (8), and (11) into the specific form (13). In addition, the ground truth is derived in the form of (13). For (7) and (8), the result directly comes from its explicit solution, given in (17). For (11), each output prediction \(y_{pred}(i)\) is firstly replaced by the explicit solution of (17) with \(n_{pred}=1\). After that, each \(y_{pred}(i)\) can be reformulated in the form of (13) by dynamic programming from \(i=1\). Next, we explain the derivation of the ground truth. The ground truth in the form of (13) is designed by choosing: \(K_{y,init}=\Gamma K_{4},K_{u,init}=\Gamma K_{3},K_{u,pred}=K_{1}\) from (15). By (15) and Assumption 1, it is trivial to prove that it generates consistent prediction.
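A compact numpy sketch of this ground-truth construction (our own illustration, not code from the paper; it follows the matrices \(\Gamma\), \(K_{1}\), \(K_{3}\), \(K_{4}\) defined above and neglects the \(\tilde{A}^{n_{init}}x_{t-n_{init}}\) term in line with Assumption 1) reads:

```python
import numpy as np
from numpy.linalg import matrix_power

def ground_truth_matrices(A, B, C, D, K, n_init, n_pred):
    """Ground-truth K_{y,init}, K_{u,init}, K_{u,pred} of (13) from (A, B, C, D, K).

    Gamma is the extended observability matrix, K1 the lower block-triangular
    Toeplitz matrix of Markov parameters, and K3, K4 the predictor-form maps
    with A_tilde = A - K C, B_tilde = B - K D, as in Appendix B.
    """
    n_y, n_u = C.shape[0], B.shape[1]
    A_t, B_t = A - K @ C, B - K @ D

    Gamma = np.vstack([C @ matrix_power(A, i) for i in range(n_pred)])

    K1 = np.zeros((n_pred * n_y, n_pred * n_u))
    for i in range(n_pred):
        for j in range(i + 1):
            blk = D if i == j else C @ matrix_power(A, i - j - 1) @ B
            K1[i * n_y:(i + 1) * n_y, j * n_u:(j + 1) * n_u] = blk

    # x_t ≈ K3 u_init + K4 y_init (oldest sample first), under Assumption 1.
    K3 = np.hstack([matrix_power(A_t, n_init - 1 - j) @ B_t for j in range(n_init)])
    K4 = np.hstack([matrix_power(A_t, n_init - 1 - j) @ K for j in range(n_init)])

    return Gamma @ K4, Gamma @ K3, K1   # K_{y,init}, K_{u,init}, K_{u,pred}
```

The matrices returned here play the role of the reference \(K^{\text{ground}}\) against which the recursively updated predictors are compared in Section V.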
2309.04229
Quantum dots for photonic quantum information technology
The generation, manipulation, storage, and detection of single photons play a central role in emerging photonic quantum information technology. Individual photons serve as flying qubits and transmit the quantum information at high speed and with low losses, for example between individual nodes of quantum networks. Due to the laws of quantum mechanics, quantum communication is fundamentally tap-proof, which explains the enormous interest in this modern information technology. On the other hand, stationary qubits or photonic states in quantum computers can potentially lead to enormous increases in performance through parallel data processing, to outperform classical computers in specific tasks when quantum advantage is achieved. Here, we discuss in depth the great potential of quantum dots (QDs) in photonic quantum information technology. In this context, QDs form a key resource for the implementation of quantum communication networks and photonic quantum computers because they can generate single photons on-demand. Moreover, QDs are compatible with the mature semiconductor technology, so that they can be integrated comparatively easily into nanophotonic structures, which form the basis for quantum light sources and integrated photonic quantum circuits. After a thematic introduction, we present modern numerical methods and theoretical approaches to device design and the physical description of quantum dot devices. We then present modern methods and technical solutions for the epitaxial growth and for the deterministic nanoprocessing of quantum devices based on QDs. Furthermore, we present the most promising concepts for quantum light sources and photonic quantum circuits that include single QDs as active elements and discuss applications of these novel devices in photonic quantum information technology. We close with an overview of open issues and an outlook on future developments.
Tobias Heindel, Je-Hyung Kim, Niels Gregersen, Armando Rastelli, Stephan Reitzenstein
2023-09-08T09:34:49Z
http://arxiv.org/abs/2309.04229v1
# Quantum dots for photonic quantum information technology ###### Abstract The generation, manipulation, storage, and detection of single photons play a central role in emerging photonic quantum information technology. Individual photons serve as flying qubits and transmit the relevant quantum information at high speed and with low losses, for example between individual nodes of quantum networks. Due to the laws of quantum mechanics, the associated quantum communication is fundamentally tap-proof, which explains the enormous interest in this modern information technology. On the other hand, stationary qubits or photonic states in quantum computers can potentially lead to enormous increases in performance through parallel data processing, to outperform classical computers in specific tasks when quantum advantage is achieved. In this review, we discuss in depth the great potential of semiconductor quantum dots in photonic quantum information technology. In this context, quantum dots form a key resource for the implementation of quantum communication networks and photonic quantum computers because they can generate single photons on-demand. Moreover, these solid-state quantum emitters are compatible with the mature semiconductor technology, so that they can be integrated comparatively easily into nanophotonic structures such as resonators and waveguide systems, which form the basis for quantum light sources and integrated photonic quantum circuits. After a thematic introduction, we present modern numerical methods and theoretical approaches to device design and the physical description of quantum dot devices. We then present modern methods and technical solutions for the epitaxial growth and for the deterministic nanoprocessing of quantum devices based on semiconductor quantum dots. Furthermore, we present the most promising concepts for quantum light sources and photonic quantum circuits that include single quantum dots as active elements and discuss applications of these novel devices in photonic quantum information technology. We close with an overview of open issues and an outlook on future developments. 1 Institut fur Festorperphysik, Technische Universitat Berlin, Hardenbergstrasse 36, 10623 Berlin, Germany 2Department of Physics, Ulsan National Institute of Science and Technology (UNIST), Ulsan, 44919 Republic of Korea 3DTU Electro, Department of Electrical and Photonics Engineering, Technical University of Denmark, Orsteds Plads, Building 343, DK-2800 Kongens Lyngby, Denmark 4Institute of Semiconductor and Solid State Physics, Johannes Kepler University Linz, Altenbergerstr. 
69, 4040 Linz, Austria [email protected] ###### Contents * 1 Introduction * 2 Application scenarios and requirements * 2.1 Quantum communication * 2.2 Photonic quantum computing * 2.3 Building blocks of the quantum internet * 2.4 Requirements and key parameters of quantum dot quantum devices Theory and modeling of quantum dot devices * 3.1 Theory of quantum dot states * 3.2 Modeling of the light-matter interaction * 3.3 Optical simulations of quantum dots in nanophotonic devices * 3.4 Modeling of decoherence effects * 3.4.1 Markovian decoherence: Time jitter and pure dephasing * 3.4.2 Non-Markovian decoherence: Phonons * 3.5 Performance of the micropillar single-photon source * 4 Methods to fabricate semiconductor quantum dots * 4.1 General concepts of epitaxial growth of semiconductors * 4.2 Epitaxial quantum dots * 4.2.1 Quantum dots in quantum wells * 4.2.2 Quantum dot fabrication via the Stranski-Krastanow method * 4.2.3 Quantum dot fabrication via nanohole filling * 4.2.4 Site-controlled quantum dots via guided self-assembly * 4.2.5 Quantum dots obtained by droplet epitaxy * 4.2.6 Quantum dot molecules * 4.2.7 Quantum dots in nanowires * 5 Nanofabrication of single-quantum-dot devices * 5.1 Deterministic fabrication technologies * 5.1.1 Pick-and-place technique * 5.1.2 Marker-based lithography techniques * 5.1.3 In situ lithography techniques * 5.2 On-chip fiber coupling of quantum light sources * 6 Performance of quantum dots as stationary qubits and as sources of flying qubits * 6.1 Quantum dot single- and entangled photon sources emitting around 780 nm * 6.2 Quantum dot quantum light sources emitting at around 900 nm * 6.3 Quantum dot quantum light sources emitting in the telecom O- and C-band * 6.4 Quantum dot spin-photon interfaces * 6.5 Quantum dots for entangled photon pair generation * 6.6 Quantum dots for photonic cluster state generation * 7 Integrated quantum photonics with QDs * 7.1 Homogeneous integrated quantum photonic systems * 7.2 Heterogeneous integrated quantum photonic systems * 8 Applications of single quantum dot devices in photonic quantum technology * 8.1 Quantum key distribution * 8.1.1 Single-photon quantum key distribution * 8.1.2 Entangled-photon quantum key distribution * 8.1.3 Towards Device-independent quantum key distribution * 8.2 Quantum teleportation and entanglement swapping with QD photons * 8.3 Boson sampling * 8.4 Photonic quantum computing * 9 Open challenges and outlook * 9.1 Theory and numerical device modelling * 9.2 Epitaxial growth * 9.3 Device nanofabrication * 9.4 Practical applications in quantum information ## 1 Introduction Ever since Richard Feynman's famous proposal 40 years ago to use quantum physics to build computers with ultimate performance, scientists worldwide have been fascinated by this prospect [1, 2]. For a long time, the development of corresponding concepts was mainly in the area of theory and basic research [3, 4], but in recent years there have been breathtaking advances in application-oriented quantum technology. Quantum computers are no longer just the dream of many scientists, but are now being further developed by global players on an almost industrial scale [5, 6, 7]. The latest generations have even achieved the quantum advantage [8] for special problems such as (Gaussian) boson sampling with 100 photonic inputs [9] and using 53 qubits to sample the output of a pseudo-random quantum circuit [5]. Applications in the field of quantum communication are currently evolving with a similar dynamic. 
Interestingly, the development of quantum cryptography was mainly triggered by the prospect of implementing Shor's quantum algorithm for efficient prime factorization of large numbers in a quantum computer [3], which makes classical encryption methods vulnerable. Simple point-to-point quantum communication systems are already well-established and commercially available [10], and more complex quantum networks are emerging worldwide [11, 12, 13, 14]. Even satellite-based quantum links have already been established that allow quantum cryptography over distances of more than 1200 km [15]. Single photons are a key resource of all photonic quantum technology systems. As flying qubits, they serve as carriers of quantum information. For example, in the famous BB84 quantum key distribution (QKD) protocol [16], the polarization degree of freedom is used to encode the information about the secret key exchanged between the sender (Alice) and the receiver (Bob). The same applies to more complex QKD concepts in the field of long-distance quantum communication. For example, measurement-device-independent QKD (MDI-QKD) relies on indistinguishable photons for quantum data exchange [17], and the quantum repeater concept uses entanglement distribution across nodes of a network to extend the communication distance and the data rate compared to simple point-to-point QKD protocols [18, 19]. Similarly, single-photon states that are purposefully manipulated, stored, and detected are the basis of photonic quantum computers [20]. Photonic cluster states could be of particular importance in the future, especially 2D cluster states that are largely immune to decoherence and can pave the way to powerful fault-tolerant photonic quantum computers [21]. Against this background, it is clear that sources of single photons and entangled photon pairs are central building blocks of applications in photonic quantum technology. Ideally, they deliver the photons on-demand at the desired wavelength. In current applications, however, probabilistic photon sources are mostly used that do not emit photons in a deterministic manner. An example is represented by heavily attenuated lasers for BB84-like QKD using decoy-state protocols [22], where the mean number of photons per pulse is below one. Due to the underlying classical photon statistics, however, each pulse contains a number of photons that is distributed according to the Poisson distribution and, in addition to individual photons, often also contains the vacuum state (no photon) or several photons. Photon sources based on non-linear emission processes such as parametric down-conversion behave similarly [23] and are also frequently applied in quantum communication and quantum computing settings [24, 25]. These are very attractive sources of entangled photon pairs, but the number of pairs per pulse is also affected by statistical fluctuations. Quasi-deterministic operation can only be achieved through comparatively complex heralding [26]. In both cases, it is of great practical advantage that the sources (laser or parametric down-conversion) can be operated at room temperature. In contrast, there is the class of non-classical light sources that can emit individual photons and entangled photon pairs deterministically, i.e. at the "push of a button". 
Such quantum emitters typically have an extension in all space dimensions in the range of or smaller than the de Broglie wavelength of the enclosed charge carriers, leading to discrete electronic energy levels so that one and only one photon is emitted under suitable experimental conditions in the usually radiative recombination process [27, 28]. Biexcitons consisting of two bound electrons in the conduction band and two bound holes in the valence band are of particular interest in the context of this article because they can generate entangled photon pairs [29, 30, 31] (see Section 6.5 for details). Compared to other quantum emitters such as nitrogen vacancy centers in diamond [32], defect centers in SiC [33] and in 2D transition metal dichalcogenides [34], quantum dots (QDs) have the enormous advantage that their material basis makes them compatible with common processes and technical solutions in modern III/V optoelectronics. For example, sophisticated epitaxial processes are used to growth high-quality semiconductor heterostructures with QDs as the active medium [35]. Furthermore, the emission wavelength of the QDs can cover in a wide spectral range from about 300 nm to beyond 1.55 \(\mu\)m by a suitable choice of the materials and by strain engineering [36]. In particular, this wavelength range includes the telecom O-band at 1.3 \(\mu\)m and the C-band at 1.55 \(\mu\)m, which are of crucial importance for fiber-based quantum communication [37]. Due to its discrete electronic energy levels, a QD already represents an almost ideal 2-level system or, in the case of the biexciton (XX) cascade, a 3-level or 4-level system, so that it can act as a source of single photons or entangled photon pairs. The foundation for discovering and studying these exciting properties was laid in the 1990s and early 2000s when single-QD spectroscopy was developed. Early work on single QD properties includes the first studies on isolated GaAs QDs [38] and Stranski-Krastanov (S-K) QDs [39] and the observation of some Coulomb effects of particles [40]. In addition, important electronic features such as the splitting of the excitonic fine structure [41] and hidden symmetries in the QD energy levels [42] had been identified and studied in detail. Going beyond such fundamental investigations, a number of challenges need to be addressed in order to be able to use QDs as quantum light sources (QLSs)1 in photonic quantum information technology, as discussed for instance also in a recent review article by X. Zhou et al. [43]. On the one hand, photonic structures are required that direct the emitted photons in the intended direction with high extraction efficiency [44], so that they can be coupled directly into a glass fiber for applications in quantum communication, for example. Similarly, in the field of integrated quantum photonics, the photons have to be guided into integrated waveguide systems with high efficiency [45, 46]. In order to take these requirements into account, various concepts for the efficient light extraction and transmission of photons from QDs have been developed. These include, for example, micropillar cavities [47, 48, 49, 50, 51], circular Bragg gratings (CBGs) [52, 53, 54], photonic wires [55] and microlenses [56] for photon extraction normal to the sample surface and ridge waveguides [57] and photonic crystal waveguides [58] for lateral photon guiding in integrated quantum photonic circuits (IQPCs). 
These quantum devices must be modeled numerically with high accuracy in order to achieve optimal performance, for example in terms of photon extraction efficiency. For this purpose, optimization calculations are carried out in multidimensional parameter spaces using modern algorithms and numerical methods such as finite difference time domain method [59] and the finite element method [60], where Bayesian optimization shows superior performance [61]. Footnote 1: We refer to quantum light sources in the general context of non-classical light sources that emit single photons, entangled photon pairs, or photonic cluster states. In contrast, we define the subset of sources that emit single photons as single-photon sources. The technological implementation of these concepts is usually very demanding and can only be achieved with highly optimized nanoprocessing concepts. Especially in the field of lithography, new approaches are needed to integrate individual QDs with nm accuracy and spectral matching in resonator structures and waveguide systems. For this purpose, high-precision deterministic lithography processes have been developed in recent years [62], which are now used very successfully for the realization of QD based quantum devices. Other important and current aspects regarding the application in quantum technology are the spectral control of the QD emission via external variables such as strain tuning [63] and the direct fiber coupling of the sources for user-friendly integration in quantum networks [64]. Against this background, this review article gives a comprehensive overview of the development and various application perspectives of semiconductor QDs in the field of photonic quantum information technology, as illustrated in Fig. 1. The article is aimed at students and scientists who want to get a well-founded insight into the basics of QDs, highly optimized device concepts, modern nanoprocessing technologies and the optical and quantum-optical properties of corresponding QLSs and IQPCs. Furthermore, open questions are discussed, and future development directions are presented, which should pave the way for the application of QD quantum devices in photonic quantum information technology. The article is structured as follows. In Section 2 we first introduce application scenarios of QDs in quantum information technology and the associated requirements and key parameters. We then introduce the theoretical concepts needed to understand light emission and the numerical modeling methods used to predict the performance of QLSs based on QDs in Section 3. Numeric optimization is often the basis for sample growth and device nanofabrication which are discussed in Section 4 and Section 5 along with modern fiber-coupling solutions. Section 6 presents the optical and quantum optical properties of state-of-the-art QD-based QLSs in the most relevant spectral ranges and introduces advanced QD device concepts for acting as spin-photon interfaces and photonic cluster state generators. In the following Section 7 we introduce and discuss recent Figure 1: Schematic overview of the development and application of QD-based quantum devices in quantum information technology. Theory and numerical modelling is used to predict and optimize the device performance. QDs are then epitaxially grown, integrated into nanophotonic devices to enhance their optical properties and fiber-coupled for user-friendly operation. 
## 2 Application scenarios and requirements

This section introduces envisaged application scenarios of QDs in photonic quantum information technologies. In the broader context of the quantum internet, this includes the field of quantum communication on the one hand and the area of photonic quantum computers on the other hand as key building blocks of a global quantum network. Furthermore, interfaces between stationary and flying qubits are presented, which are required to connect different nodes in large-scale quantum networks. We also discuss the associated requirements as the basis for the following sections, in which we present the design, fabrication, optical properties and first applications of QD quantum devices in photonic quantum information technologies.

### Quantum communication

Quantum communication is currently considered to be the quantum information technology with the highest short-term application potential and a large impact on the secure exchange of sensitive data. As one of the most studied cryptographic primitives, quantum key distribution (QKD) enables the generation of a secret and random bit-string shared between two authenticated parties. Once distributed, this key can be used to encrypt data, with its security being protected by the laws of quantum mechanics rather than computational complexity as in classical schemes. Using the so-called one-time-pad scheme for data encryption, even information-theoretical security is possible [24, 65]. The first QKD protocol, known as BB84, was proposed by Charles H. Bennett and Gilles Brassard in 1984 [66] and uses the quantum mechanical properties of single photons to establish security2. Here, the polarization of single photons is used to encode the bits in different, randomly chosen bases in a so-called prepare-and-measure type configuration (see Fig. 2(a)). The sending party (Alice) first prepares qubits randomly in one of four different states and sends them to the receiving party (Bob) via a quantum channel, where the states are detected, again in randomly chosen basis settings. Eavesdropping attempts of an adversary would lead to an increase in errors in the bit sequence (e.g. 25% for a simple intercept-resend strategy) and can thus be detected by comparing a subset of the results. The BB84 protocol can be subdivided into five basic steps, common to most QKD protocols: qubit exchange, sifting, parameter estimation, and the classical post-processing steps of error correction and privacy amplification. These steps result in a final secure or secret key rate, which is the most important figure of merit for the benchmarking of QKD systems. Errors occurring in the key after the sifting step, i.e.
the sifted key, are quantified by the quantum bit error ratio (QBER)3 - the probability that the bit values of Alice and Bob differ, even though they used the same measurement basis. It should be noted that the steps mentioned above require an authenticated classical channel between both parties, which implies that a small amount of secret key is required already before the first quantum-key exchange [69]. For this reason, QKD is also referred to as a "secret growing scheme". Footnote 2: The concept of quantum cryptography was born already earlier, within the idea of conjugate coding by S. Wiesner in the late 1960s, work which was not published until 1983 [67] (see Bennett et al. [68] for a historical review). Footnote 3: A more widely used term in the literature is the quantum bit error rate in units of \(s^{-1}\). As the QBER entering the key rate equations must be a probability, it is sometimes beneficial to use the quantum bit error ratio for consistency reasons. Prepare-and-measure type QKD as introduced with the BB84 protocol is however not the only possible choice. As proposed in the E91 protocol by Artur Ekert in 1991 [70], entangled photon pair sources can also be used for implementations of QKD (see Fig. 2(b)). Here, Alice and Bob independently perform a measurement on one photon of an entangled two-photon state using randomly selected bases. Keeping only results in which both used the same basis, both parties obtain a perfectly correlated bit-string. By quantifying the remaining degree of entanglement after the photon transmission, e.g. by verifying the violation of the Bell-type Clauser, Horne, Shimony, and Holt (CHSH) inequality [71], eavesdropping attempts can be uncovered. Alternatively, one can also use the distributed entangled photons directly for measurements in the BB84 bases, compare some of the results and deduce the security from the identified error rates just as in the BB84 protocol - a protocol known as BBM92 [72]. Both prepare-and-measure and entanglement-based QKD can be enhanced in their performance, compared to implementations using attenuated lasers, if deterministic QD-based QLSs are employed. Recent progress in this direction using QD-based QLSs is reviewed in Sections 8.1.1 and 8.1.2. While the quantum cryptographic protocols discussed above can be proven secure in an information theoretical sense, device imperfections in physical realizations can compromise the protocol's security by introducing loopholes or side-channel attacks (see Ref. [73] for an in-depth review on quantum hacking strategies). For this reason, device-independent (DI) QKD protocols have been invented, which are constructed such that imperfections of the technical realization do not compromise the protocols' security, representing a major advantage for practical applications [74]. Full-fledged implementations of DI-QKD are extremely challenging to realize [75] and require loophole-free Bell-state measurements (BSMs) across remote locations with high entanglement fidelity [76, 77, 78]. On the other hand, partially device-independent protocols that eliminate attacks on specific devices are already very useful. Examples are measurement-device-independent (MDI) QKD protocols [79, 17], for which the protocol security can be guaranteed independent of the measurement device, i.e. the detection setup. Here, Alice and Bob each send single indistinguishable photons to a central receiver station (Charly), where both photons are projected into an entangled two-photon state via a BSM (cf. Fig. 2(c)).
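The sifting and parameter-estimation steps outlined above are easily illustrated numerically. The following sketch is a toy model under simplifying assumptions (ideal single photons, a lossless and noise-free channel, and a naive intercept-resend attack); it is not an implementation from any of the cited works, but it reproduces the 25% error rate quoted above for a simple eavesdropping strategy:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000  # number of exchanged qubits (illustrative)

def bb84_qber(intercept_resend=False):
    alice_bits = rng.integers(0, 2, n)
    alice_bases = rng.integers(0, 2, n)   # 0: rectilinear, 1: diagonal
    bob_bases = rng.integers(0, 2, n)

    bits_in_channel = alice_bits.copy()
    if intercept_resend:
        eve_bases = rng.integers(0, 2, n)
        # Eve measures in a random basis: a wrong basis gives a random outcome,
        # and she re-prepares the state she measured in her own basis.
        wrong = eve_bases != alice_bases
        bits_in_channel = np.where(wrong, rng.integers(0, 2, n), alice_bits)
        effective_bases = eve_bases          # Bob receives Eve's re-prepared states
    else:
        effective_bases = alice_bases

    # Bob's measurement: the correct basis reproduces the encoded bit,
    # a wrong basis gives a random result.
    random_bits = rng.integers(0, 2, n)
    bob_bits = np.where(bob_bases == effective_bases, bits_in_channel, random_bits)

    # Sifting: keep only events where Alice and Bob used the same basis.
    sift = alice_bases == bob_bases
    qber = np.mean(alice_bits[sift] != bob_bits[sift])
    return sift.sum(), qber

for attack in (False, True):
    kept, qber = bb84_qber(intercept_resend=attack)
    print(f"attack={attack!s:5}  sifted key length={kept}  QBER={qber:.3f}")
# Expected: QBER ~ 0 without an attack and ~ 0.25 for intercept-resend,
# consistent with the 25% error rate quoted in the text.
```

In realistic implementations the QBER additionally contains contributions from channel loss, detector noise and multi-photon events, but the basic sifting logic remains the same.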
To date, MDI-QKD has mostly been implemented using weak coherent pulses [80, 81, 82, 83, 84, 85], for which the underlying Poissonian photon statistics fundamentally limit the achievable two-photon interference (TPI) visibility to 50%. Exceeding this classical limit increases the efficiency of the BSMs [86]. Thus, for implementations using deterministic QLSs based on QDs, substantial advances can be expected. In addition, as MDI-QKD protocols are intrinsically suited for star-like network topologies, MDI-QKD is particularly useful for the realization of scalable multi-user QKD networks in metropolitan areas [87]. An experimental demonstration of this type of quantum network with sub-Poissonian QLSs would be a major step forward. Recent progress in this direction will be reviewed in Section 8.1.3.

Figure 2: Quantum communication concepts. Depending on the type of quantum resource, different types of QKD scenarios are possible: (a) Prepare-and-measure based QKD protocols using single-photon states, (b) entanglement based QKD protocols using polarization entangled photon pairs, (c) device-independent QKD protocols requiring indistinguishable photons from remote sources, and (d) the quantum repeater concept for long-distance QKD.

To cover arbitrary distances in quantum-secured communication, QKD links as discussed above can in principle be chained using intermediate trusted nodes [88], which, however, reduce the overall security of the end-to-end connection. An elegant solution for transferring quantum information over arbitrary distances without compromising security is provided by quantum repeaters. Here, the quantum channel is divided into shorter segments using entangled photon pair sources and entanglement swapping as key resources (cf. Fig. 2(d)). The first quantum repeater scheme, known as the BDCZ protocol, was proposed by Briegel, Dür, Cirac, and Zoller in 1998 to overcome the exponential scaling of errors in the quantum channel due to depolarization and transmission losses [89, 18]. The BDCZ protocol enables the distribution of a maximally entangled photon pair, e.g. the well-known Einstein-Podolsky-Rosen (EPR) state [90], over arbitrary distances. The entangled photon pair can then be used directly to realize QKD protocols (e.g. the E91 protocol), or to teleport a quantum state from one end to the other. To distribute the entanglement, Briegel et al. proposed to use multiple EPR sources along the quantum channel, each sending entangled photons in opposite directions, thus dividing the complete quantum channel into shorter segments. At intermediate nodes, photons from two neighboring EPR sources are stored in a quantum memory and then used for swapping the entanglement to the two outer photons via a joint BSM of the photons at the intermediate station. By repeating the swapping in a nested fashion, arbitrary distances can in principle be covered. To implement this scheme at reasonable levels of photon loss tolerance, however, quantum memories and coherent spin-photon interfaces are required at the intermediate nodes, which makes the practical implementation very complex. To implement the BDCZ quantum repeater protocol, QD-based entangled photon pair sources can be used in combination with suitable spin-photon interfaces (see Section 6.4 and Ref. [19] for a comparison of different quantum emitter platforms). Alternatively, all-photonic measurement-based quantum repeater schemes also promise quantum communication at arbitrary scales without requiring quantum memories [91, 92, 93].
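To make the need for repeaters concrete, a back-of-the-envelope estimate of direct fiber transmission is instructive. The numbers below (0.2 dB/km attenuation, typical for telecom fiber at 1.55 \(\mu\)m, and a 100 MHz source clock rate) are illustrative assumptions rather than values taken from the cited references:

```python
import numpy as np

alpha_db_per_km = 0.2      # assumed attenuation of standard telecom fiber at 1.55 um
clock_rate_hz = 1e8        # assumed repetition rate of the photon source

for distance_km in (50, 100, 200, 500, 1000):
    transmission = 10 ** (-alpha_db_per_km * distance_km / 10)
    detected_rate = clock_rate_hz * transmission
    print(f"{distance_km:5d} km: transmission = {transmission:.1e}, "
          f"detected photon rate = {detected_rate:.2e} Hz")

# The detected rate drops exponentially with distance: already at 500 km it falls
# far below 1 Hz, and at 1000 km it is of order 1e-12 Hz, i.e. one photon in tens
# of thousands of years.  Dividing the channel into repeater segments avoids this
# exponential scaling, which is the motivation for the BDCZ-type schemes above.
```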
Such all-photonic repeater protocols require photonic cluster states as key resources, which have already been generated using QD-QLSs [94, 95], as discussed in Section 6.6.

### Photonic quantum computing

The realization of photonic quantum processors and eventually photonic quantum computers is another appealing application scenario of QD-based quantum devices, and Fig. 3 illustrates important building blocks for such quantum information systems. In this context, photons have distinct advantages over other qubit platforms. First, they offer a variety of degrees of freedom for encoding quantum states, such as polarization, path, time-bin, and frequency. It is also possible to utilize their high-dimensional or continuous variables, such as orbital angular momenta and spatial modes. Second, photons do not suffer from decoherence and barely interact with the environment. Third, there exist well-developed technologies for generating, manipulating, and measuring photons in free space, fiber optics, and integrated chips. Therefore, photons are excellent quantum information carriers. Owing to these advantages, the field of photonic quantum computing is growing rapidly. Although fault-tolerant quantum computing still requires significantly more quantum resources with higher accuracy, photonic quantum computing has demonstrated its potential by solving specific problems beyond the reach of classical computers [9, 20], which could be useful to simulate complex molecular interactions [96] and find eigenvalues [97]. Therefore, the applications of photonic quantum computing range from simulating new materials and drugs to solving optimization and factoring problems.

Figure 3: Realization of photonic quantum computers: (a) Single-photon sources and detectors providing the essential quantum hardware, (b) photonic quantum gates based on linear optics, and (c) quantum algorithms to solve target problems.

While using photons leads to fewer concerns about decoherence issues, a major challenge in photonic quantum computing and simulation is implementing quantum gates, since direct interactions between photons are quite difficult to establish. Furthermore, photons are subject to loss and other errors during the operation, and therefore it is necessary to develop methods for improving efficiency and correcting these errors in order to ensure the reliability of photonic quantum computers. In 2001, a scheme was proposed for linear optics quantum computing that does not require direct photon-photon interactions but introduces effective nonlinearities through quantum interference and measurement [98]. Since then, significant advances have been made in the key building blocks of photonic quantum computing, including high-quality QLSs, efficient photon detection, and fast photonic gates. In particular, deterministically operating semiconductor QDs are starting to outperform existing heralded QLSs based on the spontaneous parametric down-conversion process in terms of brightness, single-photon purity, and indistinguishability, as well as the fidelity of entangled photon pairs [99]. As single photons from QDs can be collected efficiently (see Section 6) and commercial high-efficiency superconducting nanowire single-photon detectors have become available, the single-photon detection rate from a source to a detector now reaches over 10 MHz [100]. Achieving high efficiency allows the use of quantum gates with a high success rate and an improved signal-to-noise ratio.
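Linear-optics gates of the type proposed in Ref. [98] ultimately rest on two-photon interference at beamsplitters. The following minimal Fock-space sketch (the mode truncation and beamsplitter convention are assumptions made for illustration) sends one photon into each input of a balanced beamsplitter and verifies that the coincidence probability vanishes for perfectly indistinguishable photons, i.e. the Hong-Ou-Mandel effect underlying the TPI visibility discussed in Section 2.1:

```python
import numpy as np
from scipy.linalg import expm

dim = 3  # Fock space per mode, truncated at two photons (exact here, photon number is conserved)

# Single-mode annihilation operator and identity on the truncated space.
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
I = np.eye(dim)

# Two-mode operators (mode 1 x mode 2).
a1 = np.kron(a, I)
a2 = np.kron(I, a)

# Balanced (50:50) beamsplitter: U = exp(-i * pi/4 * (a1^dag a2 + a2^dag a1)).
H_bs = a1.conj().T @ a2 + a2.conj().T @ a1
U = expm(-1j * (np.pi / 4) * H_bs)

# Input state |1,1>: one photon in each input port.
vac = np.zeros(dim * dim, dtype=complex)
vac[0] = 1.0
psi_in = a1.conj().T @ a2.conj().T @ vac

psi_out = U @ psi_in

# Coincidence probability = overlap of the output state with |1,1>.
p_coincidence = abs(np.vdot(psi_in, psi_out)) ** 2
print(f"coincidence probability for indistinguishable photons: {p_coincidence:.3e}")
# ~0: both photons bunch into the same output port; fully distinguishable photons
# would instead give a coincidence probability of 0.5.
```

The visibility of this interference is exactly the indistinguishability figure of merit that QD sources must deliver for such gates.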
QD-based QLSs are therefore highly suitable for measurement-based quantum computing with minimized loss and errors. However, in addition to efficiency, other important requirements for photonic quantum computing include stability, scalability, and compatibility with other components in the quantum system. As the size of the quantum system increases, multiple single photons need to interact and be entangled. Therefore, it is necessary to eliminate frequency jitter over time and between different emitters. A lifetime-limited linewidth is required to ensure a long coherence time. Quantum memory is also essential for several tasks in photonic quantum computing, including quantum error correction and quantum algorithms, where it stores quantum information that is being protected from errors and holds intermediate results during the computation [101, 102]. This stored information needs to be retrieved efficiently at a later time, requiring efficient spin-photon interfaces. The ground state spin of QDs has shown a spin coherence time of up to a few microseconds [103], and it could be prolonged with low-strain GaAs QDs [104, 105] (see Section 4). In addition, incorporating QD spin qubits can provide nonlinearity based on spin-photon entanglement (see Section 6.4) and brings new functionalities to photonic quantum computation, such as sequential entanglers [95], single-photon transistors [106] and deterministic quantum gates [107]. Of particular interest are 2D photonic cluster states, which allow for efficient one-way photonic quantum computing [21] (see Section 6.6). To implement practical photonic quantum systems, the integration of QLSs and memories with classical photonic integrated circuits is a crucial aspect. This integration combines the strengths of quantum and classical photonic technologies to create systems with improved functionality and performance. The quantum resources bring their inherent quantum properties, such as coherence and entanglement, while the classical photonic chips provide stable, compact, and programmable platforms required for practical applications (see Section 7). By meeting all these requirements, it is possible to develop photonic quantum computing systems that are robust, scalable and capable of performing complex quantum algorithms.

### Building blocks of the quantum internet

The overarching goal of quantum information technology is the development of a global quantum internet [108]. Such a network consists of quantum nodes, which can represent quantum computers, interconnected by quantum channels, in which information is distributed via single photons acting as flying qubits. In this way, distributed quantum computing can be performed in the future. In close connection with quantum communication and photonic quantum computing introduced in the two previous sections, the implementation of large-scale quantum networks, and finally the quantum internet, requires coherent interfaces between stationary and flying qubits to connect different quantum nodes. In the same context, sources of photonic cluster states should also be mentioned, which represent a powerful resource for measurement-based quantum computing and loss-tolerant quantum communication, and which form further important application perspectives for QDs. The concepts of these building blocks, illustrated in Fig. 4, are discussed in the following.
Quantum memories and the related spin-photon interfaces have the task of storing a quantum state for as long as possible in order to read it out at a later point in time. They are central elements of quantum repeater networks and quantum computers, which explains the enormous research activities in this field. On the one hand, such coherent interfaces must interact with their environment for writing and reading, but on the other hand they must also be decoupled from it in order to avoid decoherence of the stored quantum state. So far, the best coherence times have been achieved in atomic quantum memories, which, however, are hardly compatible with scalable component technologies. QDs could be an interesting alternative in this context. An additional electron or hole can be localized in a QD by targeted doping. A quantum state can be encoded in its spin degree of freedom, which can be initialized and retrieved via optical excitation. In addition, electrically addressable quantum dot molecules promise increased functionality and better storage properties. Stationary qubits reside in local devices, such as the memory or processor of a quantum computer. Flying qubits are typically photons that carry the quantum information through the air, the vacuum of space, or through fiber-optic networks. Thus, interfaces between stationary and flying qubits are key building blocks of quantum networks. There are different proposals to implement them, including spin-photon interfaces, which can be realized by atoms or QDs (see Section 6.4).

Figure 4: Building blocks for a future quantum internet: (a) Spin-photon interfaces converting quantum information from flying to stationary qubits, (b) quantum memories storing and releasing quantum information on-demand, and (c) cluster states as key resources for fault-tolerant quantum computing.

Photonic cluster states are another important resource in photonic quantum computing and quantum networks. Such states are highly entangled states of multiple qubits which allow for one-way, measurement-based quantum computing [21, 109] and loss-tolerant quantum communication [92]. However, such applications require cluster states of two dimensions or higher. For instance, for the realization of topologically fault-tolerant cluster state quantum computation, at least three dimensions are necessary [109]. Interestingly, there exist proposals for the deterministic generation of 1D [110], 2D [111] and multidimensional [112] photonic cluster states using semiconductor QDs.

### Requirements and key parameters of quantum dot quantum devices

In order to address the envisaged applications in photonic quantum information technology, the QD quantum devices must meet a number of stringent requirements, the most important of which are briefly introduced below and will be discussed in detail in this review article:

* **Emission wavelength**: While standard In(Ga)As QDs emit at around 900 - 950 nm, quantum applications require specific target wavelengths. Especially for use in fiber-based quantum networks, emission wavelengths in the telecom O-band at 1.3 \(\mu\)m and in the C-band at 1.55 \(\mu\)m are aimed for, which are characterized by minimal dispersion and minimal attenuation, respectively [113]. For free-space quantum communication, shorter wavelengths are generally preferred [114] and the first quantum satellites operate around 800 nm. In addition, for coupling to available atomic-based or rare-earth-ion-based quantum memories, specific wavelengths are needed, see e.g. Ref. [115]. An example is represented by the D1 transition in Rb vapors [116].
* **Single-photon purity**: A central parameter of all SPSs is multi-photon emission suppression which is quantified via the autocorrelation function at zero time delay \(g^{\left(2\right)}\left(0\right)\), which should be as close to zero as possible. In the context of QLSs, this property is often referred to as "single-photon purity". * **Emission linewidth and indistinguishability**: Quantum emitters are characterized by discrete emission lines, which ideally should only be homogeneously broadened due to the finite lifetime for spontaneous emission \(\tau_{\mathrm{r}}\). In this case, the linewidth \(\Gamma\) results from a Fourier transformation of the spontaneous decay and is given by \(\Gamma=\hbar/\tau_{\mathrm{r}}\). In practice, pure dephasing [117], inelastic interaction with phonons [118], and the Coulomb interaction of the confined carriers with charged states in the vicinity of the QD lead to an additional broadening [119]. All these mechanisms have an adverse effect on the indistinguishability of the photons, as discussed further in Section 3.4. While phonon-related broadening can be limited by operating at sufficiently low temperature, charge noise is often the dominant inhomogeneous-broadening mechanism and is therefore particularly problematic for quantum functionalities such as the entanglement distribution in quantum repeater networks, which are based on "Hong-Ou-Mandel" (HOM)-like TPI [37]. * **Entanglement fidelity**: QDs can generate entangled photons on-demand at high photon flux. This can happen via the biexciton-exciton (XX-X) cascade, which leads to polarization-entangled photon pairs [29, 30, 120]. Furthermore, time-bin entanglement and hyper-entanglement are possible to achieve using QDs [121, 122]. In all cases, the entanglement fidelity with respect to a maximally entangled state is an important parameter that should be as close to one as possible for quantum applications. * **Spin coherence**: The spin coherence is an important parameter of QDs with regard to the realization of spin-photon interfaces and in the generation of photonic cluster states. Via the spin degree of freedom, quantum information can be stored on a timescale of the spin coherence time. The aim is to achieve the highest possible spin coherence time, for which, for example, the spin of a confined electron must be decoupled from the solid-state environment as efficiently as possible to generate long-lived stationary qubits. At the same time, efficient photonic coupling to the environment is usually required to realize spin-photon interfaces that interact with flying qubits. * **Preparation and quantum efficiency**: In the ideal case, the state preparation efficiency \(\eta_{\text{prep}}\), i.e. the probability of initializing a QD in the desired state upon excitation, e.g. a charged exciton or a biexciton state (depending on the application), should be unity. In reality, \(\eta_{\text{prep}}<1\) because of possible random fluctuations in the charges captured by (or generated in) the QD [123] and/or other effects such as electron-phonon interaction limiting the population-inversion efficiency [124]. The former limit is usually referred to as "blinking" and can be strongly reduced in charge-tunable devices [105], while the latter can be reduced by sophisticated excitation schemes [125]. Also, the emission (or quantum) efficiency \(\eta_{\text{em}}\), i.e. 
the probability that the recombination results in a photon (or photon pair) in the desired optical mode is limited due to possible non-radiative decay channels as well as radiative side-channels, such as phonon side-bands and radiative Auger [126] channels. Compared to other quantum emitters, QDs generally have a very high quantum efficiency. * **Photon extraction and coupling efficiency**: For the on-demand character of the QD QLSs, it is necessary to generate a usable photon or entangled photon pair with each trigger pulse. In order to come close to this goal, deterministic excitation concepts are used on the one hand, and on the other hand device geometries are developed which couple the photons generated by the QD into certain modes or emit them in the desired direction, while suppressing the emission into loss channels as much as possible. Furthermore, effects of cavity quantum electrodynamics (cQED) come into play in resonator-based device concepts, which can accelerate the spontaneous emission of the QDs in the regime of weak coupling in order to improve the photon extraction efficiency as well as the indistinguishability [47, 50, 127, 128]. * **Device fabrication**: In addition to the physical aspects mentioned, the device fabrication itself is a very important aspect in the development of QD-based QLSs for applications in photonic quantum technology. For example, QDs often have to be integrated into nanophotonic devices with nm accuracy and spectral matching, which in view of the self-organized growth of QDs with indeterminate (lateral) position and spectral location necessarily requires deterministic production methods. This is particularly essential for upscaling to complex quantum networks and highly integrated IQPCs based on QLSs with identical properties. Compatibility with quantum memories is also required for certain applications, which can be achieved using hybrid concepts. Finally, a high functional integration density is aimed at for IQPCs, which, in addition to the sources, also includes on-chip detectors [129].

## 3 Theory and modeling of quantum dot devices

While numerous design strategies exist for controlling the light emission from QD-based QLSs, they all require accurate modeling and careful optical engineering to achieve high performance. In this section, we review the theory of light emission from QDs. Furthermore, we discuss the numerical simulation techniques used to model the collection efficiency and the photon indistinguishability. As an example, we analyze the performance of the micropillar SPS.

### Theory of quantum dot states

The direct bandgap of In(Ga)As QDs (and GaAs QDs) enables efficient spontaneous emission using the radiative transition from the conduction band to the valence band at the \(\Gamma\) point of the Brillouin zone. The zinc-blende crystal structure results in three valence bands, the heavy hole, the light hole and the split-off band [130]. However, spin-orbit coupling and the aspect ratio of pyramidal-shaped QDs shift the energies of the latter two, such that light emission predominantly takes place through transitions to the heavy hole band. For standard-sized QDs, the extension of the electron and hole wavefunctions is dominated by the strong confinement of the 3D potential landscape, while Coulomb interaction instead plays a perturbative role for the energy levels [131]. While QDs generally feature a rich energy level structure, the lowest-energy s-shell is typically used for light emission.
The most basic optical excitation is the exciton state configuration consisting of a single electron in the conduction band and a single hole in the valence band, shown in Fig. 5(a). In terms of the vertical orientation \(S\) along the quantization (growth) axis of the electron spin (\(S_{\mathrm{e}}=|\!\uparrow\rangle\) or \(|\!\downarrow\rangle\)) and the hole spin (\(S_{\mathrm{h}}=|\!\Uparrow\rangle\) or \(|\!\Downarrow\rangle\)), the relevant optically bright states of the exciton are \(|\!\uparrow\!\Downarrow\rangle\) and \(|\!\downarrow\!\Uparrow\rangle\), whereas emission from the dark states \(|\!\uparrow\!\Uparrow\rangle\) and \(|\!\downarrow\!\Downarrow\rangle\) is forbidden by angular momentum conservation. In the presence of electron-hole exchange interaction, the bright energy eigenstates are \(|\mathrm{X_{H}}\rangle=\frac{1}{\sqrt{2}}\left(|\!\uparrow\!\Downarrow\rangle+|\!\downarrow\!\Uparrow\rangle\right)\) and \(|\mathrm{X_{V}}\rangle=\frac{1}{\sqrt{2}}\left(|\!\uparrow\!\Downarrow\rangle-|\!\downarrow\!\Uparrow\rangle\right)\), which produce photons linearly polarized in the horizontal (H) and vertical (V) direction during the radiative transition to the ground state \(|\mathrm{g}\rangle\). The pyramidal shape of the QD lifts the energy degeneracy between the two states, which are separated by the fine structure splitting \(E_{\mathrm{FSS}}\) of typically 10 - 100 \(\mu\)eV for InGaAs QDs [132], while very low \(E_{\mathrm{FSS}}\) values in the range of the homogeneous linewidth are usually observed for highly symmetric GaAs QDs [133]. By adding a single charge to the exciton state, the charged exciton or trion states depicted in Fig. 5(b) are obtained. The negatively (positively) charged trion state \(|\mathrm{X^{-}}\rangle\) (\(|\mathrm{X^{+}}\rangle\)) consists of two electrons (holes) in the conduction (valence) band and a single hole (electron) in the valence (conduction) band, and the corresponding spin configurations are \(|\mathrm{X^{-}}\rangle=\frac{1}{\sqrt{2}}(|\!\uparrow\!\downarrow\rangle-|\!\downarrow\!\uparrow\rangle)S_{\mathrm{h}}\) and \(|\mathrm{X^{+}}\rangle=\frac{1}{\sqrt{2}}(|\!\Uparrow\!\Downarrow\rangle-|\!\Downarrow\!\Uparrow\rangle)S_{\mathrm{e}}\). The trion is a fermion, and the superpositions arise due to the requirement of an anti-symmetric state for identical particles. For the trion state, recombination of an electron-hole pair leads to emission of a circularly polarized photon leaving the system in the charged ground state \(|\mathrm{g^{\pm}}\rangle\). The additional charge of the trion state represents an important asset for spin physics [134] enabling, e.g., the generation of photonic cluster states [94, 110, 111, 112], see Section 6.6. Finally, by adding another electron-hole pair to the excitonic state, the biexciton configuration \(|\mathrm{XX}\rangle=|\!\uparrow\!\downarrow\!\Uparrow\!\Downarrow\rangle\) including two electrons and two holes is obtained. The biexciton can decay to the ground state via either of the two channels illustrated in Fig. 5(c) producing a pair of linearly polarized photons. In the absence of fine structure splitting (\(E_{\mathrm{FSS}}=0\)), the two decay channels are indistinguishable resulting in the entangled photon pair state \(\frac{1}{\sqrt{2}}\left(|\mathrm{HH}\rangle+|\mathrm{VV}\rangle\right)\), and control of the fine structure splitting [120] is thus an essential tool for entangled photon pair generation.
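The impact of the fine structure splitting on the XX-X photon pair can be estimated with a simple model (an illustrative assumption that neglects dephasing, spin scattering and spectral filtering): during the exciton lifetime \(\tau\) the intermediate state precesses at the rate \(E_{\mathrm{FSS}}/\hbar\), and time-integrated detection then yields a fidelity to \(\frac{1}{\sqrt{2}}\left(|\mathrm{HH}\rangle+|\mathrm{VV}\rangle\right)\) of \(F=\frac{1}{2}\left[1+1/\left(1+(E_{\mathrm{FSS}}\tau/\hbar)^{2}\right)\right]\). The short sketch below evaluates this expression:

```python
import numpy as np

HBAR_UEV_NS = 0.6582  # hbar in units of ueV * ns

def cascade_fidelity(fss_ueV, lifetime_ns=1.0):
    """Fidelity of the time-integrated XX-X photon pair to (|HH>+|VV>)/sqrt(2)
    in a simple model with only FSS-induced precession (no dephasing or filtering)."""
    x = fss_ueV * lifetime_ns / HBAR_UEV_NS   # dimensionless E_FSS * tau / hbar
    return 0.5 * (1.0 + 1.0 / (1.0 + x**2))

for fss in (0.0, 0.5, 1.0, 5.0, 20.0):   # ueV, spanning GaAs-like to InGaAs-like values
    print(f"E_FSS = {fss:5.1f} ueV -> fidelity = {cascade_fidelity(fss):.3f}")
# An E_FSS well below hbar/tau (~0.7 ueV for tau = 1 ns) is needed to keep the
# time-integrated fidelity close to unity, which is why control or erasure of the
# fine structure splitting is essential for entangled photon pair generation.
```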
Coulomb interaction typically lowers the energy of the initial photon emitted from the biexciton by a few meV, and for the entangled photon pair source a broadband photonic design approach is thus needed to ensure collection of both the biexciton and exciton photons. ### Modeling of the light-matter interaction The Lorentz force governs the interaction between the electromagnetic field and the charge states of the QD, typically modeled as a two-level system with an excited (ground) state wavefunction \(\Psi_{\rm e}\) (\(\Psi_{\rm g}\)). Since typical QDs are small compared to the wavelength, the electric field \({\bf E}({\bf r},t)\) can be considered constant \({\bf E}(t)\) over the QD. In this dipole approximation, the interaction Hamiltonian \(\hat{H}_{\rm int}=-\hat{\bf d}\cdot{\bf E}(t)\)[135] is the product of the electric field and the dipole moment operator \(\hat{\bf d}=-e\hat{\bf r}\), where \(e\) is the electronic charge and \(\hat{\bf r}\) is the position operator. The ability of the QD to emit light is then quantified by the dipole moment \({\bf d}=\left\langle\Psi_{\rm e}\right|\hat{\bf d}\left|\Psi_{\rm g}\right\rangle\). The simplest model for describing the electronic wavefunction is the single-band effective mass approximation [136], where the wavefunction \(\Psi=F({\bf r})u({\bf r})\) using the Bloch theorem is given as the product of a slowly varying envelope function \(F({\bf r})\) and the periodic electronic Bloch function \(u({\bf r})\). In this approximation, the envelope function is a solution to the time independent Schrodinger equation \[-\frac{\hbar^{2}}{2m_{0}}\nabla\cdot\left(\frac{1}{m_{\rm eff}({\bf r})}\nabla F ({\bf r})\right)+V({\bf r})F({\bf r})=EF({\bf r}), \tag{1}\] where \(m_{0}\) (\(m_{\rm eff}\)) is the electron mass (effective mass), \(V({\bf r})\) is the energy potential, and \(E\) is the eigenstate energy. The dipole moment for an interband transition can then be written as \({\bf d}=\left\langle F_{\rm e}|F_{\rm g}\right\rangle\left\langle u_{\rm e} \right|\hat{\bf d}\left|u_{\rm g}\right\rangle\), where the Bloch matrix element \(\left\langle u_{\rm e}\right|\hat{\bf d}\left|u_{\rm g}\right\rangle\) depends only on the properties of the bulk material [136]. QD-based QLSs generally exploit the spontaneous emission process of the weak coupling regime of cQED for deterministic light emission. Here, the spontaneous emission rate \(\Gamma\) for an emitter at the position \({\bf r}_{0}\) with transition energy \(\hbar\omega_{0}\) is derived using Fermi's golden rule [137] as the product of the dipole moment and the local photonic density of states \(\rho_{\rm L}\)[135], \[\Gamma=\frac{\pi\omega_{0}}{\hbar\varepsilon_{0}}|{\bf d}|^{2}\rho_{\rm L}({ \bf n_{\rm d}},{\bf r}_{0},\omega_{0}), \tag{2}\] where \({\bf n_{\rm d}}={\bf d}/|{\bf d}|\) is the dipole moment orientation. The local density of states \(\rho_{\rm L}\) fully describes the photonic environment at the position of the emitter and includes e.g. cavity effects accelerating the spontaneous emission rate through Purcell enhancement or photonic bandgap or dielectric screening effects suppressing the rate. Even though the spontaneous emission rate is fully described using Eq. (2), the light emission process is typically simulated numerically using a classical optical calculation by exploiting an equivalence principle [135]. 
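Before turning to the Green's function formulation used to establish this equivalence, the envelope-function problem of Eq. (1) can be made concrete with a minimal numerical sketch. The snippet solves a one-dimensional analogue of Eq. (1) by finite differences for a square confinement potential with a constant effective mass; well width, band offset and mass are illustrative assumptions rather than parameters of a specific QD:

```python
import numpy as np

# Illustrative parameters (not specific to a particular QD material system).
m_eff = 0.06            # effective mass in units of the free-electron mass m0
well_width_nm = 10.0    # confinement length along the growth axis
well_depth_meV = 200.0  # band offset acting as the confinement potential
hbar2_over_2m0 = 38.1   # hbar^2 / (2 m0) in meV * nm^2

# Real-space grid and potential: a finite square well centered in the domain.
x = np.linspace(-20.0, 20.0, 801)   # nm
dx = x[1] - x[0]
V = np.where(np.abs(x) < well_width_nm / 2, 0.0, well_depth_meV)

# Finite-difference Hamiltonian H = -(hbar^2 / (2 m0 m_eff)) d^2/dx^2 + V(x).
kin = hbar2_over_2m0 / m_eff
H = (np.diag(V + 2.0 * kin / dx**2)
     - np.diag(np.full(len(x) - 1, kin / dx**2), 1)
     - np.diag(np.full(len(x) - 1, kin / dx**2), -1))

energies, envelopes = np.linalg.eigh(H)
confined = energies[energies < well_depth_meV]
print("confined levels (meV):", np.round(confined[:4], 1))
# A handful of discrete levels below the barrier illustrates the atom-like shell
# structure discussed in Section 3.1; a realistic QD calculation is three-dimensional
# and includes strain, band mixing and the Coulomb terms treated perturbatively above.
```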
To see this, we consider Figure 5: The electron (hole) configurations represented by red full (empty) circles for the \(s\) shell in the conduction (valence) band and associated photon emission transitions. (a) The exciton state generates a linearly polarized photon. (b) The trion state produces a circularly polarized photon. (c) The biexciton configuration generates polarization entangled linearly polarized photon pair through the biexciton-exciton emission cascade. the classical optical dyadic Green's function \(\overleftrightarrow{\mathbf{G}}(\mathbf{r},\mathbf{r}_{0})\) in the frequency domain defined as the solution to the wave equation \[\nabla\times\nabla\times\overleftrightarrow{\mathbf{G}}(\mathbf{r},\mathbf{r}_{ 0})-\varepsilon(\mathbf{r})\frac{\omega_{0}^{2}}{c^{2}}\overleftrightarrow{ \mathbf{G}}(\mathbf{r},\mathbf{r}_{0})=\overline{\overline{I}}\delta\left( \mathbf{r}-\mathbf{r}_{0}\right)\,. \tag{3}\] Physically, the dyadic Green's function represents the classical electric field at the position \(\mathbf{r}\) generated by a dipole \(\mathbf{d}\) at the position \(\mathbf{r}_{0}\) such that \[\mathbf{E}(\mathbf{r})=\omega_{0}^{2}\mu_{0}\overleftrightarrow{\mathbf{G}} (\mathbf{r},\mathbf{r}_{0})\mathbf{d}. \tag{4}\] In terms of the Green's function, the power \(P\) emitted by a classical dipole at \(\mathbf{r}_{0}\) oscillating at the frequency \(\omega_{0}\) then becomes \[P=\frac{\omega_{0}}{2}\mathrm{Im}\left[\mathbf{d}^{*}\cdot \mathbf{E}(\mathbf{r}_{0})\right]=\frac{\omega_{0}^{3}\mu_{0}\left|\mathbf{d} \right|^{2}}{2}\mathrm{Im}\left(\mathbf{n}_{\mathrm{d}}^{*}\cdot \overleftrightarrow{\mathbf{G}}(\mathbf{r}_{0},\mathbf{r}_{0})\cdot\mathbf{n }_{\mathrm{d}}\right). \tag{5}\] On the other hand, the local photonic density of states \(\rho_{\mathrm{L}}\) describing the spontaneous emission rate can be written [135] in terms of the dyadic Green's function as \[\rho_{\mathrm{L}}(\mathbf{n}_{\mathrm{d}},\mathbf{r}_{0},\omega_{0})=\frac{2 \omega_{0}}{\pi c^{2}}\mathrm{Im}\left(\mathbf{n}_{\mathrm{d}}^{*}\cdot \overleftrightarrow{\mathbf{G}}(\mathbf{r}_{0},\mathbf{r}_{0})\cdot\mathbf{n }_{\mathrm{d}}\right). \tag{6}\] Normalizing the spontaneous emission rate (classical power) to its value \(\Gamma_{\mathrm{Bulk}}\) (\(P_{\mathrm{Bulk}}\)) in a bulk medium, we then obtain from Eqs. (5,6) the equivalence principle \[\frac{\Gamma}{\Gamma_{\mathrm{Bulk}}}=\frac{P}{P_{\mathrm{Bulk}}}, \tag{7}\] stating that the normalized spontaneous emission rate of a two-level system equals the normalized power emitted by a classical dipole oscillating at the same frequency. This equivalence principle allows for the spontaneous emission rate to be computed using a standard solver of Maxwell's equations by considering a classical dipole at the position of the emitter, and has been used extensively [138, 139, 140, 141, 142, 143, 144, 145, 146, 147] to compute Purcell enhancement. ### Optical simulations of quantum dots in nanophotonic devices Let us now consider the general scenario illustrated in Fig. 6(a) consisting of a QD modeled using a classical dipole placed at the position \(\mathbf{r}_{0}\) inside a generic photonic structure. The extraction efficiency \(\eta_{\mathrm{ext}}\) is given by \[\eta_{\mathrm{ext}}=\frac{P_{\mathrm{Lens}}}{P}, \tag{8}\] where \(P_{\mathrm{Lens}}\) is the total power detected by the collection lens having a finite numerical aperture (NA). 
Calculation of the efficiency \(\eta_{\mathrm{ext}}\) requires an optical simulation predicting the classical electromagnetic field in the vicinity of the emitter. The electrical field may be determined using Eq. (4) from the optical dyadic Green's function, which describes the response of the photonic environment. Analytical expressions for the optical Green's function only exist in very few cases, and generally a numerical solution of Maxwell's equations is needed. Popular numerical simulation techniques used to model light emission in QD-based QLSs include the finite difference time domain technique [148, 149] and the finite element method approach [149, 150] in the frequency domain. Here, the computational domain is expanded on a spatial grid as illustrated in Fig. 7(a), and Maxwell's equations are either solved directly in the finite difference time domain technique or reformulated as a vectorial wave equation in the finite element method. Advantages of these spatial grid expansion methods include their availability in commercial user-friendly software packages and their ability to handle arbitrary geometries without simplifying symmetries. A disadvantage is the necessity to implement absorbing boundary conditions around the limited computational domain to correctly model an open geometry and light emission into the far field. As an alternative to using a spatial grid, the optical field may be expanded on the optical modes of the geometry: The modal method [138, 139, 140, 149] is a frequency domain technique, where the geometry is divided into layers uniform along a propagation \(z\) axis as depicted in Fig. 7(b). The electric field \(\mathbf{E}(\mathbf{r}_{\perp},z)\) inside a layer is then expanded on the eigenmodes \(\mathbf{e}_{j}(\mathbf{r}_{\perp})\) of the layer, determined assuming uniformity along the propagation axis. The field in a specific layer is written as \[\mathbf{E}(\mathbf{r}_{\perp},z)=\sum_{j}a_{j}\mathbf{e}_{j}( \mathbf{r}_{\perp})e^{i\beta_{j}z}+\sum_{j}b_{j}\mathbf{e}_{j}(\mathbf{r}_{ \perp})e^{-i\beta_{j}z}, \tag{9}\] where \(a_{j}\) (\(b_{j}\)) is the amplitude coefficient of the forward (backward) propagating eigenmode \(j\) and the summation includes discrete guided modes as well as the continuum of radiation modes. The fields at each side of a layer interface are then connected using a scattering matrix formalism [149, 151]. Advantages of the modal method include the option for directly implementing a true open boundary condition [138, 139, 140] as well as direct access to the optical modes of the photonic structure, facilitating the understanding of the governing physics. However, the modal method suffers from convergence issues [152] when considering large 3D geometries without rotational symmetry. As discussed in section 3.2, the simulation is then performed by computing the optical field Figure 6: Light emission from a QD inside a generic nanophotonic structure. (a) A classical point dipole (red arrow) inside a volume \(V\) emits light through the surface \(S\). The light emitted within a cone of polar angle \(\theta_{\text{NA}}\) defined by the NA of the lens is detected by the collection optics. (b) The spontaneous emission \(\beta\) factor describes the light emitted from the QD (red triangle) into an optical mode M of interest (blue Gaussian). The transmission coefficient \(\gamma\) describes the fraction of light transmitted from the optical mode M to the lens. 
Figure 7: Spatial discretization methods: (a) Spatial grid employed by the finite difference method. (b) Uniform layer decomposition along the \(z\) axis used in the modal method. generated by the classical dipole given by Eq. (4). A numerical difficulty in the evaluation of the power \(P\) emitted by the dipole at the position \(\mathbf{r}_{0}\) is the divergence of \(\mathrm{Re}(\mathbf{E}(\mathbf{r}_{0}))\). For this reason, instead of directly using Eq. (5), the power is often computed by considering a small sphere with surface \(S\) centered on the dipole as illustrated in Fig. 6(a). The power is then computed by integrating the Poynting vector over the surface \(S\) with normal unit vector \(\mathbf{n}_{S}\) as \[P=\frac{1}{2}\int\,\mathrm{Re}\left(\mathbf{E}\times\mathbf{H}^{*}\right)\cdot\mathbf{n}_{S}d\mathrm{S}. \tag{10}\] To determine the photon collection efficiency, the power \(P_{\mathrm{Lens}}\) collected by the first lens is typically evaluated by computing the far field using a near-field to far-field transformation [153]. To model the collection of a finite NA lens, the Poynting vector is then integrated over the solid unit angle \(\Omega\) of the cone shown in Fig. 6(a) with unit normal vector \(\mathbf{n}_{\Omega}\) as \[P_{\mathrm{Lens}}=\frac{1}{2}\int_{\theta<\theta_{\mathrm{NA}}}\mathrm{Re}\left(\mathbf{E}\times\mathbf{H}^{*}\right)\cdot\mathbf{n}_{\Omega}d\Omega, \tag{11}\] where the integration is limited to the polar angle \(\theta_{\mathrm{NA}}\) defined by the numerical aperture of the lens. Even though the collection efficiency \(\eta_{\mathrm{ext}}\) is correctly modeled using Eqs. (8,10,11), this approach does not always provide direct insight into the physics governing the light extraction. In many QLS designs, the light is transmitted to the lens predominantly via a single optical mode M of interest as illustrated in Fig. 6(b), and the photonic structure is then engineered to direct light from this mode towards the collection optics. In this case, a single mode description of the efficiency \(\eta_{\mathrm{ext,S}}\) given by \(\eta_{\mathrm{ext,S}}=\beta\gamma\) may provide an excellent description of the light emission. Here, the spontaneous emission factor \(\beta=\Gamma_{\mathrm{M}}/\Gamma=P_{\mathrm{M}}/P\) describes the fraction of the spontaneous emission (or total power) emitted into the optical mode M, whereas the transmission \(\gamma=P_{\mathrm{Lens,M}}/P_{\mathrm{M}}\) describes the fraction of the power in the mode M that is detected by the lens, with \(P_{\mathrm{Lens,M}}\) being the power detected by the lens from the mode M alone. In terms of the Purcell factor \(F_{\mathrm{P}}\), the spontaneous emission \(\beta\) factor can be written as [154] \[\beta=\frac{\Gamma_{\mathrm{M}}}{\Gamma_{\mathrm{M}}+\Gamma_{\mathrm{B}}}=\frac{F_{\mathrm{P}}\Gamma_{\mathrm{Bulk}}}{F_{\mathrm{P}}\Gamma_{\mathrm{Bulk}}+\Gamma_{\mathrm{B}}}, \tag{12}\] where the total spontaneous emission rate has been written as a sum of the rate \(\Gamma_{\mathrm{M}}\) into the mode M and the background spontaneous emission \(\Gamma_{\mathrm{B}}\) into all other modes. We observe that the \(\beta\) factor can be increased either by introducing cavity effects and Purcell enhancement of the spontaneous emission into the cavity mode or by controlling the background emission \(P_{\mathrm{B}}\) using e.g. dielectric screening [155] or photonic bandgap effects [141, 58, 142]. Similarly, the transmission \(\gamma\) can be analyzed and optimized, e.g. using tapering strategies [156, 143].
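The single-mode figures of merit can be combined in a short numerical sketch. The following snippet evaluates the \(\beta\) factor of Eq. (12) for a range of Purcell factors and multiplies it with an assumed single-mode transmission \(\gamma\); all numbers are illustrative placeholders and do not correspond to a specific device from the literature:

```python
import numpy as np

def beta_factor(purcell, gamma_bulk=1.0, gamma_background=1.0):
    """Spontaneous emission beta factor, Eq. (12)."""
    return purcell * gamma_bulk / (purcell * gamma_bulk + gamma_background)

gamma_transmission = 0.9   # assumed fraction of the mode M reaching the collection lens

print(" F_P    beta    eta_ext = beta * gamma")
for purcell in (1, 2, 5, 10, 20):
    beta = beta_factor(purcell)
    print(f"{purcell:4d}   {beta:.3f}   {beta * gamma_transmission:.3f}")

# Increasing the Purcell factor funnels a larger fraction of the emission into the
# mode of interest; suppressing the background rate Gamma_B (photonic bandgap or
# dielectric screening, as discussed above) has the same effect on beta.
print("F_P = 5 with suppressed background:",
      f"beta = {beta_factor(5, gamma_background=0.1):.3f}")
```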
As an example, the performance of the micropillar SPS is analyzed in terms of \(\beta\) and \(\gamma\) in Section 3.5. An overview of the most successful QLS design approaches is presented in Fig. 8. The micropillar and the open cavity are narrow-band designs relying on Purcell enhancement to increase the \(\beta\) factor. Whereas the micropillar design [49, 50, 51] in Fig. 8(a) allows for monolithic integration and represents a mature technology, the cavity resonance frequency is not tunable. This limitation is overcome in the fully tunable open cavity geometry [157] shown in Fig. 8(b), even though this design is more sensitive to mechanical vibrations. Broadband approaches include the "bullseye" CBG, the photonic nanowire and the microlens designs. Even though the CBG design [52, 53, 54, 158] depicted in Fig. 8(c) does feature significant Purcell enhancement, the light extraction mechanism does not rely on the resonant effect and remains high over a wavelength range much broader than the resonance [158]. Similarly, the photonic nanowire [159, 55] illustrated in Fig. 8(d) exploits suppression of the background emission rate \(\Gamma_{\mathrm{B}}\) to increase the \(\beta\) factor, whereas the broad-band microlens [160, 56] in Fig. 8(e) benefits from a classical hemispherical lens effect to direct the light towards the collection optics. Finally, the photonic crystal waveguide geometry [58] shown in Fig. 8(f) exploits the slow light effect near the waveguide band edge to efficiently couple light into the planar waveguide, and light is subsequently extracted typically using grating out-couplers. ### Modeling of decoherence effects The photon indistinguishability quantified by the visibility of the two-photon interference \(V_{\text{TPI}}\) takes a value between 0 for distinguishable photons and 1 for perfectly indistinguishable photons. In addition to efficient emission of single photons, most quantum information protocols require also high indistinguishability of the emitted photons. However, for QDs embedded in a solid-state environment, the indistinguishability is compromised by several physical mechanisms leading to decoherence in the emission process. Numerically, it can be determined from the second-order correlation of the electromagnetic field operator as [161, 162] \[V_{\text{TPI}}=\frac{\int_{0}^{\infty}\int_{0}^{\infty}\left|\left\langle \hat{a}^{\dagger}(t+\tau)\hat{a}(t)\right\rangle\right|^{2}dtd\tau}{\int_{0}^{ \infty}\int_{0}^{\infty}\left\langle\hat{a}^{\dagger}(t)\hat{a}(t)\right\rangle \left\langle\hat{a}^{\dagger}(t+\tau)\hat{a}(t+\tau)\right\rangle dtd\tau}, \tag{13}\] where \(\hat{a}^{\dagger}\) (\(\hat{a}\)) is the creation (annihilation) operator for the output electric field. Modeling of the indistinguishability then requires an evaluation of the two-time correlation function \(\left\langle\hat{a}^{\dagger}(t+\tau)\hat{a}(t)\right\rangle\). We now consider a QD placed spectrally on resonance inside an optical cavity. For the general case of a QD pumped using non-resonant excitation, we can model the QD as a three-level system with ground state \(\left|\text{g}\right\rangle\), excited state \(\left|\text{e}\right\rangle\) and a pump state \(\left|\text{p}\right\rangle\) as shown in Fig. 9(a). The pump state relaxes to the excited state with a rate \(\alpha\) and can subsequently relax to the ground state through spontaneous emission into non-cavity modes with a rate \(\Gamma_{\text{B}}\). 
Additionally, we consider a coupling to an optical cavity described by a light-matter coupling constant \(g\) and a cavity leakage rate \(\kappa\). Finally, the QD is subject to a decoherence mechanism with a rate \(\gamma\) as discussed below, leading to uncertainty of the excited state energy level. The coherent interaction between the QD and the cavity is described in a rotating frame by a Jaynes-Cummings [163] Hamiltonian as \(\hat{H}=\hbar g(\hat{a}^{\dagger}\hat{\sigma}_{\rm ge}+\hat{a}\hat{\sigma}_{\rm eg})\), where \(\hat{\sigma}_{\rm ij}=|{\rm i}\rangle\!\langle{\rm j}|\) is the dipole operator. The interaction with the environment is then modeled using a master equation formalism [164] for the reduced density operator \(\hat{\rho}\) describing the emitter alone. The master equation is given by \[\frac{d}{dt}\hat{\rho}=\alpha\mathcal{L}_{\hat{\sigma}_{\rm ep}}(\hat{\rho})+\Gamma_{\rm Bulk}\mathcal{L}_{\hat{\sigma}_{\rm ge}}(\hat{\rho})+2\gamma\mathcal{L}_{\hat{\sigma}_{\rm eg}\hat{\sigma}_{\rm ge}}(\hat{\rho})+\kappa\mathcal{L}_{\hat{a}}(\hat{\rho})-\frac{i}{\hbar}\left[\hat{H},\hat{\rho}\right], \tag{14}\] where loss is included using the Lindblad operator defined as \(\mathcal{L}_{\hat{x}}(\hat{\rho})=\hat{x}\hat{\rho}\hat{x}^{\dagger}-\left(\hat{x}^{\dagger}\hat{x}\hat{\rho}+\hat{\rho}\hat{x}^{\dagger}\hat{x}\right)/2\). On the right-hand side of Eq. (14), the first two terms describe respectively the p\(\rightarrow\)e relaxation and the e\(\rightarrow\)g transition through spontaneous emission, whereas the third and fourth terms represent the pure dephasing mechanism and the cavity leakage of photons, respectively.

Figure 8: Artistic illustrations of popular QLS designs, with \(90^{\circ}\) cuts to illustrate the inner region and the position of the QD (red sphere). (a) The micropillar cavity [49, 50, 51], (b) the open cavity geometry [157], (c) the “bullseye” circular Bragg grating [52, 53, 54, 158], (d) the photonic nanowire [55, 159], (e) the microlens [56, 160] and (f) the photonic crystal waveguide [58].

#### 3.4.1 Markovian decoherence: Time jitter and pure dephasing

In the weak coupling regime of cQED, the QD-cavity coupling strength \(g\) is small compared to the total decoherence rate \(\gamma_{\rm T}=\gamma+\left(\Gamma+\kappa\right)/2\). This allows for the cavity mode to be adiabatically [161] eliminated from Eq. (14) leading to an equation for the emitter alone of the form \[\frac{d}{dt}\hat{\rho}=\alpha\mathcal{L}_{\hat{\sigma}_{\rm ep}}(\hat{\rho})+\Gamma_{\rm Bulk}\mathcal{L}_{\hat{\sigma}_{\rm ge}}(\hat{\rho})+2\gamma\mathcal{L}_{\hat{\sigma}_{\rm eg}\hat{\sigma}_{\rm ge}}(\hat{\rho})+F_{\rm P}\Gamma_{\rm Bulk}\mathcal{L}_{\hat{\sigma}_{\rm ge}}(\hat{\rho}), \tag{15}\] where the last term on the right-hand side now describes spontaneous emission into the cavity enhanced by the Purcell factor \(F_{\rm P}=4g^{2}/(\kappa\Gamma_{\rm Bulk})\). Since the field operator for the cavity mode was eliminated, the light emission is typically described in terms of the emitter dipole operator \(\hat{\sigma}_{\rm ge}\), where it is assumed that all light emitted from the QD reaches the beamsplitter in a HOM-TPI configuration, and the indistinguishability is then evaluated using the correlation function \(\langle\hat{\sigma}_{\rm eg}(t+\tau)\hat{\sigma}_{\rm ge}(t)\rangle\). The solution of Eq.
(14) provides the one-time expectation value \(\langle\hat{\sigma}_{\rm ge}(t)\rangle\) of the dipole operator, and the two-time expectation value \(\langle\hat{\sigma}_{\rm eg}(t+\tau)\hat{\sigma}_{\rm ge}(t)\rangle\) is subsequently obtained using the quantum regression theorem [164] valid for a Markovian environment without memory effects. Writing the total spontaneous emission rate as \(\Gamma_{\rm T}=\left(F_{\rm P}+1\right)\Gamma_{\rm Bulk}\), the indistinguishability then takes the form \[V_{\rm TPI}=\frac{\Gamma_{\rm T}}{\Gamma_{\rm T}+2\gamma}\frac{\alpha}{\alpha+\Gamma_{\rm T}}, \tag{16}\] describing both the influence of pure dephasing and the p\(\rightarrow\)e relaxation process in the Markovian regime.

Figure 9: (a) The QD three-level system coupled to an optical cavity with light-matter interaction strength \(g\). \(\alpha\) and \(\Gamma_{\rm B}\) are the decay rates of the respective transitions, whereas \(\kappa\) is the cavity escape rate and \(\gamma\) is the dephasing rate. (b) The indistinguishability \(V_{\rm TPI}\) as function of light-matter interaction strength \(g\) for constant pure dephasing rates \(\gamma\) of 0.5 \(\mu\)eV (full) and 2 \(\mu\)eV (dashed) and \(\alpha=\infty\). (c) Indistinguishability vs. \(g\), where \(\gamma=0.5\)\(\mu\)eV + \(\gamma_{\rm Ph}\) is the sum of a constant dephasing rate and the phonon contribution given by Eq. (18) for T = 0 K (full) and T = 20 K. Parameters: \(\alpha=\infty\), \(\eta=0.032\) ps\({}^{2}\), \(\omega_{\rm c}=0.95\) meV, \(\kappa=100\)\(\mu\)eV, \(\Gamma_{\rm B}=1\)\(\mu\)eV and \(B=1\).

Physically, a dominating mechanism for pure dephasing is the variation of the excited state energy due to a fluctuating charge environment [165]. The indistinguishability predicted by Eq. (16) is presented in Fig. 9(b) for \(\alpha=\infty\). We observe that increasing the spontaneous emission rate using Purcell enhancement serves to improve not only the efficiency but also the indistinguishability in the presence of pure dephasing. However, for non-resonant excitation with a finite pump relaxation rate \(\alpha<\infty\), a trade-off occurs due to the second fraction in Eq. (16). Even though the relaxation from the pump to the excited state does not in itself introduce decoherence in the emission process, a finite relaxation rate \(\alpha\) results in uncertainty in the photon emission time and reduced temporal overlap for photons impinging on the beamsplitter. Increasing the spontaneous emission rate through Purcell enhancement shortens the pulse duration in time and thus amplifies this detrimental effect, which is referred to as time jitter. The reduction of the indistinguishability due to time jitter can be avoided either by using resonant excitation [49, 50, 166] or by accelerating the p\(\rightarrow\)e relaxation, e.g. using a stimulation pulse [167, 168]. Thus, Purcell enhancement is generally beneficial to the performance within the Markovian regime. However, this picture relies on decoupling of the Purcell enhancement and the dephasing rate, which is not always a good approximation, as discussed below.

#### 3.4.2 Non-Markovian decoherence: Phonons

For a QD in a solid-state material, the interaction between the QD and quantized lattice vibrations (phonons) in the bulk environment [161, 169, 170, 171, 172] represents an additional fundamental decoherence mechanism.
The dominating coupling is with longitudinal acoustic phonons [173, 169, 174] and leads to an additional contribution to the Hamiltonian given by \[\hat{H}=|\mathrm{e}\rangle\langle\mathrm{e}|\sum_{\mathbf{k}}\hbar g_{\mathbf{k}}\left(\hat{b}_{\mathbf{k}}^{\dagger}+\hat{b}_{\mathbf{k}}\right)+\sum_{\mathbf{k}}\hbar\nu_{\mathbf{k}}\hat{b}_{\mathbf{k}}^{\dagger}\hat{b}_{\mathbf{k}}, \tag{17}\] where \(\hat{b}_{\mathbf{k}}^{\dagger}\) (\(\hat{b}_{\mathbf{k}}\)) is the creation (annihilation) operator for the phonon mode with frequency \(\nu_{\mathbf{k}}\) and wave vector \(\mathbf{k}\), and \(g_{\mathbf{k}}\) is the QD-phonon coupling strength. Whereas the QD-phonon interaction is completely described by Eqs. (14,17), the calculation of the two-time expectation function is challenging: The interaction with the phonons again leads to excited state energy fluctuations, this time due to a deformation of the energy potential, producing non-Markovian memory effects in the environment in the short-time limit of \(\sim\) 5 ps, such that the quantum regression theorem can no longer be used to determine the two-time correlation functions. Accurate modeling of the phonon interaction has been performed using an exact diagonalization technique [161, 170], which, however, quickly becomes numerically demanding with increasing size of the Hilbert space. More recently, a numerically exact method based on a time-evolving matrix product operator [175] was proposed; however, this method is also computationally demanding. Significant simplification was achieved with the introduction of the polaron transformation [171], allowing for a formulation of a Born-Markov master equation in the polaron frame and subsequently for approximate analytic expressions for the efficiency and indistinguishability. In the weak coupling regime and in the limit of weak QD-phonon interaction, the dephasing rate \(\gamma_{\mathrm{Ph}}\) of the zero-phonon line becomes \[\gamma_{\mathrm{Ph}}=2\pi\left(\frac{gB}{\kappa}\right)^{2}J_{\mathrm{Ph}}(2gB)\coth\left(\frac{\hbar gB}{k_{\mathrm{B}}T}\right), \tag{18}\] where \(B\) is the Franck-Condon factor describing the overlap of the lattice configurations of the excited and ground states and \(J_{\mathrm{Ph}}(\omega)\) is the phonon spectral density. In the case of a spherical QD placed in a bulk material, the phonon spectral density is given by \(J_{\mathrm{Ph}}(\omega)=\eta\omega^{3}\exp(-\omega^{2}/\omega_{c}^{2})\) [176], where \(\eta\) is the interaction strength and \(\omega_{\rm c}\) is the cutoff frequency inversely proportional to the QD length. Since the phonon interaction strength depends on the QD dressed state energy separation [171], which depends in turn on the QD-cavity coupling strength \(g\), the phonon-induced decoherence cannot be accurately modeled using a fixed pure dephasing rate independent of the remaining parameters. The indistinguishability computed from Eqs. (16,18) is presented in Fig. 9(c) as a function of the light-matter interaction strength \(g\). We observe that the increase in the total emission rate through Purcell enhancement is initially beneficial for the indistinguishability. However, as \(g\) increases, the phonon-induced decoherence rate \(\gamma_{\rm Ph}\) increases, and we observe a reduction in the indistinguishability with increasing \(g\) even at \(T=0\) K. As \(g\) increases even further, the strong coupling regime of cQED is reached.
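The interplay of Eqs. (16) and (18) can be reproduced with a few lines of code. The sketch below uses the parameter values quoted in the caption of Fig. 9 together with the weak-coupling relation \(F_{\rm P}=4g^{2}/(\kappa\Gamma_{\rm Bulk})\) and the assumptions \(\Gamma_{\rm Bulk}=\Gamma_{\rm B}=1\,\mu\)eV and \(\alpha=\infty\); energies and rates are treated interchangeably in \(\mu\)eV. It illustrates the qualitative trend of Fig. 9(c) rather than reproducing the published curves exactly:

```python
import numpy as np

# Parameters from the caption of Fig. 9 (B = 1, alpha -> infinity); the bulk rate
# Gamma_Bulk = Gamma_B = 1 ueV is an assumption.  All rates/energies are in ueV.
HBAR = 658.2              # ueV * ps
KB = 86.17                # ueV / K
eta = 0.032               # ps^2, phonon coupling strength
omega_c = 0.95e3 / HBAR   # cutoff frequency in 1/ps (corresponding to 0.95 meV)
kappa = 100.0             # cavity loss rate
gamma_0 = 0.5             # constant pure dephasing rate
Gamma_bulk = 1.0          # bulk spontaneous emission rate
B = 1.0                   # Franck-Condon factor

def gamma_phonon(g, T):
    """Phonon-induced dephasing of the zero-phonon line, Eq. (18), returned in ueV."""
    omega = 2.0 * g * B / HBAR                              # 2gB as a frequency in 1/ps
    J = eta * omega**3 * np.exp(-(omega / omega_c) ** 2)    # spectral density in 1/ps
    thermal = 1.0 if T == 0 else 1.0 / np.tanh(g * B / (KB * T))
    return HBAR * 2.0 * np.pi * (g * B / kappa) ** 2 * J * thermal

def v_tpi(g, T):
    """Indistinguishability from Eq. (16) with alpha -> infinity."""
    purcell = 4.0 * g**2 / (kappa * Gamma_bulk)
    Gamma_T = (purcell + 1.0) * Gamma_bulk
    gamma = gamma_0 + gamma_phonon(g, T)
    return Gamma_T / (Gamma_T + 2.0 * gamma)

for T in (0.0, 20.0):
    row = "  ".join(f"g={g:3d}: V={v_tpi(g, T):.3f}" for g in (5, 10, 20, 50, 100, 150))
    print(f"T = {T:4.1f} K   " + row)
# At finite temperature V_TPI first rises with g (Purcell enhancement outpaces the
# constant dephasing) and then drops again as the phonon rate of Eq. (18) grows,
# reproducing the trade-off of Fig. 9(c).  The weak-coupling expressions lose their
# validity once g becomes comparable to kappa, where the strong coupling regime sets in.
```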
In the strong coupling regime, the emission spectrum is split into two hybrid polariton states and phonon-induced transitions between the polariton states occur [171, 172], which is detrimental to the indistinguishability. We conclude that whereas the implementation of Purcell enhancement appears beneficial both to increase the spontaneous emission \(\beta\) factor in Eq. (12) and to overcome pure dephasing in Eq. (16), phonon-induced decoherence results in an inherent trade-off between the achievable efficiency and indistinguishability in the cQED design approach, as exemplified below for the micropillar geometry.

### Performance of the micropillar single-photon source

We now consider the specific example of the micropillar cavity-based SPS. The geometry features a QD placed in a vertical \(\lambda\)-cavity, generally featuring an asymmetric distributed Bragg reflector (DBR) configuration illustrated in Fig. 10(a), enabling light emission through the top. The micropillar has been the subject of intense numerical investigation, e.g. to predict and explain fundamental physics such as the diameter-dependent variations in the Q-factor [177, 178] and to increase its Q/V-ratio for strong-coupling experiments [179, 180] using Bloch-wave engineering [181]. The performance of the micropillar for the SPS application was analyzed in Refs. [144, 145], and the main figures of merit are presented in Fig. 10(b-g).

Figure 10: (a) Sketch of the micropillar geometry consisting of a QD sandwiched between vertical DBRs. (b) The Q-factor, (c) the Purcell-factor \(F_{\rm P}\), (d) the spontaneous emission \(\beta\)-factor, (e) the transmission \(\gamma\), (f) the collection efficiency \(\eta_{\rm ext}\) and (g) the photon indistinguishability \(V_{\rm TPI}\) as a function of pillar diameter \(d\) for varying number of top DBR layer pairs \(n_{\rm top}\) with \(n_{\rm bot}=40\). See Ref. [144] for additional geometrical parameters.

The micropillar geometry relies on cavity QED effects to ensure high efficiency, and the Q-factor is presented in Fig. 10(b) as a function of pillar diameter for increasing number \(n_{\rm top}\) of top DBR layer pairs. In addition to an increase in the overall Q-factor, oscillatory variations are observed in the low-diameter high-Q limit resulting from interaction with higher-order Bloch modes [178, 182]. These oscillations are directly observed in the Purcell-factor shown in Fig. 10(c), where an additional variation of \(F_{\rm P}\) arises from the change in the mode volume with \(d\). As predicted from Eq. (12), the large Purcell-factor \(F_{\rm P}\) results in a large \(\beta\)-factor shown in Fig. 10(d). Again, significant oscillations are observed; however, their origin is not the variations in the Q-factor. Instead, they result from a periodic variation [145] of the background spontaneous emission rate \(\Gamma_{\rm B}\). In Ref. [183], a careful choice of pillar diameter was made to ensure that the \(\beta\)-factor is at a peak position, resulting in a final 69% collection efficiency. However, the performance of the micropillar suffers from fundamental trade-offs. Since a large pillar diameter results in a narrow far-field emission pattern, the transmission \(\gamma\) in Fig. 10(e) generally increases with \(d\). Here a trade-off between high \(\beta\) and \(\gamma\) is observed, resulting in the collection efficiency \(\eta_{\rm ext}\) shown in Fig. 10(f) taking its maximum value in the \(d\in[1.5,2]\,\mu\)m regime for \(n_{\rm top}=17\).
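For readers who want to connect the quantities shown in Fig. 10, the short sketch below strings together the textbook Purcell formula, a commonly used form of the \(\beta\)-factor, and the \(\beta\)–\(\gamma\) product that determines the collection efficiency. The numerical values of Q, mode volume and transmission are purely illustrative assumptions (they are not read off Fig. 10), and the \(\beta\) expression is the standard weak-coupling form rather than a verbatim copy of the article's Eq. (12).

```python
import numpy as np

def purcell_factor(Q, V_norm):
    """Textbook Purcell factor F_P = 3/(4*pi^2) * Q/V for an emitter on resonance at the
    field maximum, with the mode volume V_norm given in units of (lambda/n)^3."""
    return 3.0 / (4.0 * np.pi**2) * Q / V_norm

def beta_factor(F_p, gamma_B_over_bulk=1.0):
    """Fraction of emission funneled into the cavity mode, assuming the common form
    beta = F_P / (F_P + Gamma_B / Gamma_Bulk); the article's Eq. (12) may differ in detail."""
    return F_p / (F_p + gamma_B_over_bulk)

# Hypothetical micropillar numbers, for illustration only.
for Q, V_norm, gamma_transmission in [(2000, 5.0, 0.7), (10000, 8.0, 0.9)]:
    F_p = purcell_factor(Q, V_norm)
    beta = beta_factor(F_p)
    eta_ext = beta * gamma_transmission   # collection efficiency as the beta-gamma product
    print(f"Q={Q:6d}, V={V_norm:4.1f} (lam/n)^3 -> F_P={F_p:5.1f}, beta={beta:.3f}, eta_ext={eta_ext:.3f}")
```

Within this simplified picture, raising Q increases \(F_{\rm P}\) and \(\beta\), while the transmission \(\gamma\) is governed by the far-field overlap, so the product \(\beta\gamma\) has to be optimized jointly, as discussed for Fig. 10(f).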
An initial procedure to improve the collection efficiency could be to increase the number of layer pairs in the DBRs, which would increase \(\beta\) also for large diameters. However, here the onset of the strong coupling regime limits the indistinguishability in the presence of phonon-induced decoherence: We observe in Fig. 10(g) that increasing \(n_{\rm top}\) beyond 21 layer pairs leads to a significant reduction in the indistinguishability. This leads to an inherent trade-off between the achievable efficiency and indistinguishability for the micropillar SPS, the product of the two taking a maximum value of \(\eta_{\rm ext}V_{\rm TPI}\sim 0.95\)[144]. Realizing QLSs with performance beyond this value requires new design concepts such as the "hourglass" geometry [146, 147], which exploits suppression of the background spontaneous emission rate to increase \(\beta\) further towards unity while avoiding the strong coupling regime. ## 4 Methods to fabricate semiconductor quantum dots After having discussed the theoretical background, we now turn to the fabrication of high-quality semiconductor QDs with a focus on epitaxial growth. In general, there are different approaches to create QDs capable of confining the motion of free charge carriers (electrons in conduction band states and holes in valence band states) in three dimensions. Here, we restrict our attention to optically-active QDs, in which both carrier types are confined in the same region of a direct-bandgap semiconductor. Arguably, the simplest method to obtain QDs relies on chemical synthesis of colloidal nanocrystals [184]. Such QDs have been employed for pioneering demonstration of non-classical light emission [185] and are widely used in classical optoelectronic devices such as displays and also as fluorescent markers. Their interest in quantum technologies is however rather limited, mostly because the confined carriers are typically located close to the free surface of the nanostructures, leading to significant _interaction with surface states_ and consequent deterioration of their quantum optical properties. In addition, the preparation from solution may lead to higher impurity concentrations compared to QDs obtained via epitaxial growth methods. In the following, we provide a brief introduction to the basics of epitaxial growth of semiconductors in Section 4.1 and then move to illustrate the current methods employed to make high-quality QDs with different properties for quantum information science and technology. ### General concepts of epitaxial growth of semiconductors Epitaxial growth consists in the growth of crystalline layers by deposition of atoms or molecules on a clean surface of a crystalline semiconductor substrate and at sufficiently high substrate temperature to allow them to be incorporated and arranged in a periodic fashion, compatible with the crystalline structure of the substrate [35]. For epitaxial growth to occur, the deposited material and substrate must be capable of adopting compatible crystal structures. The substrates of interest here are typically (001)- and (111)-oriented GaAs and InP (with zincblende crystal structure) and, in some case, Si. (001)-oriented substrates are the standard in electronic and optoelectronic industry, while many different orientations have been used in the past to study QD formation [188]. 
The layer growing on the substrate is called _epilayer_, and we distinguish between _homoepitaxy_ and _heteroepitaxy_, depending on whether the epilayer consists of the same material as the substrate (beside possible doping) or of a different material. In most circumstances, the different materials in a _heterostructure_ have the same crystal structure and differ in their _lattice constants_, giving rise to _strained growth_. In this case, the _lattice-mismatch_ \(\epsilon\) is defined as the relative difference in the in-plane lattice constants of epilayer \(a_{e}\) and substrate \(a_{s}\): \[\epsilon=(a_{s}-a_{e})/a_{e}. \tag{19}\] In the absence of strain relaxation, the epilayer adapts its in-plane lattice constant to the underlying substrate, which exerts stress on the epilayer, leading to an in-plane strain \(\epsilon<0\) (\(\epsilon>0\)) for compressive (tensile) strain (a short numerical example is given further below). To limit impurities, deposition takes place from the vapor phase rather than from the liquid phase. The most common physical- and chemical-vapor deposition techniques used for fabricating QDs are molecular beam epitaxy (MBE) and metal-organic vapor-phase epitaxy (MOVPE), respectively [35]. In the former, atomic or molecular beams are obtained by thermal or electron-beam heating of highest-purity source materials under ultrahigh vacuum conditions, and atoms or molecules impinge on the substrate after travelling ballistically in the deposition chamber. In MOVPE, molecules containing the elements to be deposited are transported by a carrier gas into a reactor containing the heated substrate, where they decompose into the desired species and volatile molecules. The first step after introducing a substrate into an epitaxy system consists of oxide removal and growth of a _buffer layer_ (usually homoepitaxial) with a thickness of some 100 nm to place the active structures away from the original substrate surface, which usually contains defects and impurities.

Figure 11: Overview of phenomena related to epitaxial growth of semiconductors. (a) Basic processes for atoms and molecules deposited on surfaces. (b) Illustration of surface reconstruction. (c) Cross-sectional scanning tunneling microscopy (STM) image of a layer of InGaAs in a GaAs matrix showing disorder due to mixing and surface segregation. (d) Classification of growth modes. (a, b) Reprinted from Surface Science Reports, Vol 43, B. Voigtlander, Fundamental processes in Si/Si and Ge/Si epitaxy studied by scanning tunneling microscopy during growth, Pages 127-254, Copyright (2001), with permission from Elsevier [186], (c) Reprinted from _Keizer et al. 2011_[187], with the permission of AIP Publishing.

Upon _deposition_ of the desired atoms on the surface, the adsorbed atoms (or _adatoms_) _diffuse_ on _terraces_ and can _attach_ to surface features like steps as well as to other adatoms and adatom clusters, named _islands_. The reverse processes of adsorption and attachment are desorption and detachment, respectively. In thermal equilibrium, these processes would counterbalance each other. Crystal growth is therefore an inherently non-equilibrium process, as the net adsorption rate must be higher than the desorption rate for the crystal to grow on top of the substrate. These processes are illustrated in Fig. 11(a).
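As a quick numerical illustration of Eq. (19), the snippet below evaluates the mismatch for two material pairs that appear repeatedly in this section. The room-temperature lattice constants are standard literature values; note that the sign follows the convention of Eq. (19) (\(\epsilon<0\) for a compressively strained epilayer), while the text often quotes the magnitude only.

```python
# Room-temperature lattice constants in angstrom (standard literature values).
LATTICE_A = {"GaAs": 5.6533, "InAs": 6.0583, "InP": 5.8687}

def mismatch(epilayer, substrate):
    """Lattice mismatch of Eq. (19): eps = (a_s - a_e) / a_e."""
    a_e, a_s = LATTICE_A[epilayer], LATTICE_A[substrate]
    return (a_s - a_e) / a_e

for epi, sub in [("InAs", "GaAs"), ("InAs", "InP")]:
    print(f"{epi} on {sub}: eps = {mismatch(epi, sub):+.1%}")
```

The magnitudes (about 7% for InAs/GaAs and about 3% for InAs/InP) are the values quoted later in the discussion of Stranski-Krastanow growth and telecom-wavelength QDs.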
Although the overall system consisting of substrate, epilayer, and source vapor is not in a global thermodynamic equilibrium, surface processes can often be treated in a quasi-equilibrium framework, allowing us to see them as driven by (local) free-energy minimization. It is important to note that under common growth conditions, all relevant processes occur at the surface layers and - in some cases - in the first one or two subsurface monolayers. The reason is that common semiconductors are characterized by strong covalent bonds, and the energy necessary to allow an atom to move inside the "bulk" lattice is much larger than the energy required to break surface bonds. This means that, once atoms are buried below a few monolayers of material, they can be considered as immobile. This fact considerably reduces the complexity of the theoretical description of growth processes, which can be done either using continuum or atomistic models. Surface diffusion is in general anisotropic. Preferential directions for diffusion can be caused by (i) _gradients in surface chemical potential_, which, in turn, can originate from local surface curvature, local strain, local composition fluctuations, and atomic steps, as well as by (ii) _surface reconstructions_, i.e. the rearrangement of surface atoms in periodic structures with unit cells larger than the bulk unit cell to reduce the surface energy due to dangling bonds (see Fig. 11(b)). As an example of (ii), the (4\(\times\)2) reconstruction of the As-terminated GaAs(001) surface is characterized by dimers, making diffusion along the [110] direction faster than along the perpendicular [\(\bar{1}\)10] direction (see Fig. 11(c)). If we neglect the effect of surface reconstruction, we can write the chemical potential \(\mu\) of an adatom as [189, 190]: \[\mu(\vec{r})=\mu_{0}+\Omega E_{s}(\vec{r})+\gamma\Omega\kappa(\vec{r})-\frac{\zeta\Omega\theta(\vec{r})}{a}, \tag{20}\] where \(\mu_{0}\) is the chemical potential of adatoms on an unstressed surface, \(E_{s}\) the elastic energy density due to local strain, and \(\Omega\) the atomic volume. The third term is the surface energy contribution, where \(\gamma\) is the surface energy and \(\kappa\) the surface curvature. Finally, the last term accounts for _surface segregation_, with \(\zeta\) the energy benefit (per unit area) of having a surface composed of the deposited species compared to the underlying species, \(a\) the lattice constant, and \(\theta\) varying between 0 and 1 depending on whether the adatom moves on a layer with atoms of the same species or of another. Segregation leads to the swapping of deposited and surface atoms when the latter allow the surface to have lower energy. An example is represented by the overgrowth of an InAs surface with GaAs. Because of the lower surface energy of InAs compared to GaAs, In atoms tend to float on top of the Ga atoms, naturally leading to a smearing of interfaces between layers of different materials. The cross-sectional STM image of Fig. 11(c), with In atoms appearing brighter than Ga atoms, clearly illustrates this phenomenon. Surface segregation also leads to inhomogeneous vertical composition profiles when species with different surface energies, such as InAs and GaAs, are co-deposited to form alloys [191]. From Eq. 20, we see that the tendency of the growing layer to minimize the chemical potential allows us to describe _capillarity_ effects, i.e. the spontaneous planarization of rough surfaces or dimples, which are characterized by negative curvature.
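The curvature term of Eq. (20) already captures this capillarity-driven smoothing. The toy script below evaluates the surface-energy contribution \(\gamma\Omega\kappa\) along a sinusoidal height profile in the small-slope limit (\(\kappa\approx-\partial^{2}h/\partial x^{2}\), with the sign convention that bumps have positive and dimples negative curvature); the numerical values of \(\gamma\), \(\Omega\) and the profile are arbitrary illustrative choices, and the strain and segregation terms are ignored.

```python
import numpy as np

GAMMA = 1.0      # surface energy (J/m^2), illustrative value
OMEGA = 2.3e-29  # atomic volume (m^3), of the order of a III-V atomic volume

# Sinusoidal height profile h(x) with 20 nm period and 1 nm amplitude (illustrative).
x = np.linspace(0.0, 40e-9, 400)
h = 1e-9 * np.sin(2 * np.pi * x / 20e-9)

# Small-slope curvature: crests (bumps) -> kappa > 0, troughs (dimples) -> kappa < 0.
kappa = -np.gradient(np.gradient(h, x), x)
mu_curvature = GAMMA * OMEGA * kappa   # surface-energy contribution to the adatom chemical potential (J)

crest, trough = np.argmax(h), np.argmin(h)
print(f"mu contribution at crest : {mu_curvature[crest]:+.2e} J")
print(f"mu contribution at trough: {mu_curvature[trough]:+.2e} J")
```

Adatoms therefore have a higher chemical potential on crests than in troughs, so the resulting diffusion flux transports material from bumps into dimples, which is the planarization tendency referred to as capillarity in the text.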
In general, there is a delicate interplay between different contributions to the chemical potential, which may lead to surface roughening instead of smoothing. As an important example, elastic energy (or strain energy) can favor the occurrence of three-dimensional (3D) _islands_. According to nucleation theory, when the number of adatoms contained in an island exceeds a certain critical size (critical nucleus), the island becomes stable and growth leads to a drop in chemical potential. Islands can also form without any nucleation barrier [192]. In spite of the rich physics of surface phenomena, the result of depositing an epilayer on top of a crystalline substrate can be schematically summarized according to three growth modes, see Fig. 11(d): the Frank-van der Merwe (F-M) or layer-by-layer growth, the Volmer-Weber (V-W) or island growth, and the Stranski-Krastanow (S-K) or layer-plus-island growth mode. Whether one or the other growth mode occurs depends on the relative properties of the epilayer and substrate. In particular, a useful classification concentrates on the relation between the surface energy of the substrate material \(\gamma_{s}\), of the epilayer \(\gamma_{e}\), and the interface energy \(\gamma_{\mathrm{is}}\). The F-M growth mode takes place whenever \(\gamma_{s}\geq\gamma_{e}+\gamma_{\mathrm{is}}\), i.e. in the case in which substrate wetting is energetically favored, the V-W mode in the opposite case, and the S-K mode whenever \(\gamma_{\mathrm{is}}\) increases with the epilayer thickness, so that - above a critical thickness - the inequality changes sign. All these growth modes are relevant for the growth of QDs, as we will see below. Before concluding this brief overview on epitaxial growth of semiconductors, we mention that _crystal defects_ should be avoided. In general, such defects locally disrupt the crystal periodicity and introduce localized electronic states. These can act as non-radiative recombination centers, reducing the quantum efficiency of the QDs because of enhanced defect-induced non-radiative recombination, or as traps for charges, leading to charge noise. Defects can be distinguished into _point defects_, such as interstitials, vacancies, and unintentional impurity atoms either at lattice sites or as interstitials, and _extended defects_, such as dislocations, stacking faults, antiphase boundaries, and surfaces with associated dangling bonds, impurities and oxides.

### Epitaxial quantum dots

Epitaxial QDs are mostly obtained out of heterostructures containing a region of a semiconductor with lower energy bandgap (QD material) embedded in a matrix with larger bandgap (barrier material).

Figure 12: Overview of the main types of epitaxial QDs defined in semiconductor heterostructures containing a region of a semiconductor with lower energy bandgap (QD material, in orange) embedded in a matrix with larger bandgap (barrier material, blue): (a1) and (a2) shallow-etched quantum wells, leading to in-plane confinement in addition to vertical confinement; (a3) QDs by lateral bandgap-modulation in quantum wells; (b) Natural QDs in quantum wells; (c1-c2) Self-assembled S-K QDs before and after capping; (d1-d2) QDs in nanoholes; (e1-e3) QDs by droplet epitaxy (a metal droplet, in red, is recrystallized and capped); (f) Site-controlled S-K QDs; (g) QDs in nanowires; (h) Vertically stacked QDs. In all sketches the substrate is at the bottom and the growth proceeds towards the top.

In our treatment, we focus on heterostructures with _type-I band alignment_, for which the
conduction band edge and valence band edge of the QD material lie inside the energy bandgap of the barrier material. We classify the main types of epitaxial QDs with reference to Fig. 12: QDs in quantum wells, obtained either by post-growth definition of in-plane confinement regions (a1-a3) or by spontaneous exciton localization (b); S-K QDs (c1-c2); QDs in nanoholes, obtained by filling self-assembled or lithographically-defined surface dimples via capillarity-driven diffusion (d1-d2); QDs obtained by the droplet epitaxy method (e1-e3); Site-controlled S-K QDs obtained by S-K growth on lithographically-patterned nanoholes (f); QDs in nanowires, obtained by vapor-liquid-solid growth (g); (h) Vertically stacked QDs, or _QD molecules_. We further distinguish among _site-controlled_ and _self-assembled_ QDs. The position of the former on the substrate is determined a priori, facilitating deterministic device fabrication, while the latter are randomly placed on the substrate, requiring registration methods for device fabrication (see Section 5.1). Since site-controlled QDs require substrate manipulation _before growth_ (and associated contamination/defects), it remains challenging to obtain the same high optical quality as for self-assembled QDs. This is the reason why self-assembled growth keeps being pursued by most of the research groups in spite of additional steps _after growth_ and foreseeable limitations for scalability, see Section 4.2.4. For single-QD devices, typical inter-QD distances are of the order of the wavelength of the emitted light, allowing QDs to be individually addressed by far-field optics. Another important criterion for the choice of the QD "hardware" is the spectral range of their optical transitions. This is determined by the energy bandgaps of the used materials, the extent of the confinement region, and the strain present in the structures. In general, the emission wavelength increases for decreasing energy bandgap of QD and barrier material, for increasing QD size, and for decreasing compressive strain. Each material combination allows a certain spectral range to be accessed. As for any heterostructure, the lattice-mismatch between epilayer and substrate must be limited to avoid the occurrence of dislocations. This limitation is relaxed in the case of QD in nanowires. In order of increasing wavelength, the following material combinations are commonly used: GaN/AlGaN, InGaN/GaN, InP/In(Al,Ga)P, GaAs/AlGaAs, In(Ga)As/GaAs, InAs/InP or InAs on In(Ga,Al)As. Since epitaxial QDs usually have a flat morphology (height/width ratio of the order of \(\sim\)0.05-0.3), the size along the growth direction is the one that mostly determines the transition energy. #### 4.2.1 Quantum dots in quantum wells Historically, the first methods to create 3D confinement in semiconductors were based on introducing quantum wells, in which the carrier motion is free only in the quantum well plane (for a review, see Ref. [193]). Lateral confinement could be introduced by deep or shallow etching [194] (see sketches in Fig. 12(a1, a2)), local strain modulation [195], local interdiffusion promoted by laser irradiation [196] or, in special cases, via hydrogen irradiation [197] (see sketches in Fig. 12(a3)). 
In all cases to laterally confine the carrier motion in the quantum well plane, the quantum wells needed to be located at a few tens of nanometers from the sample surface, leading to significant interaction of the confined excitons with surface states and consequent deterioration of the optical properties of the resulting QDs. Because of the limited optical quality of the resulting QDs, these methods have been useful for pioneering studies, but are now practically abandoned. The above-mentioned methods have the appealing feature of allowing the position of the QDs to be controlled. Lateral confinement in quantum wells is also achieved without intentional lateral modulation, e.g. due to local random fluctuations in the thickness of the quantum well [198]. At sufficiently low temperatures, excitons get confined in areas of the quantum well with locally larger thickness. The resulting QDs are often referred to as _natural QDs_ and appear also as a consequence of local alloy fluctuations, which may result in regions of effective lower energy bandgap (see, e.g. Fig. 11(c)). Natural QDs are characterized by poorly defined structural properties, but - in case of lateral extensions of the order of several tens of nm - they feature very high oscillator strengths [199], which are pivotal to experiments where strong light-matter interaction is needed [200]. #### 4.2.2 Quantum dot fabrication via the Stranski-Krastanow method The most common method to fabricate high quality QDs is via self-assembly of crystalline 3D islands in the Stranski-Krastanow growth mode. Although the initial S-K concept [204] did not involve strain, in the context of QD growth, the common driving force leading to a change from a layer-to-layer to a layer-plus-island growth is usually strain due to the lattice mismatch between deposited material and substrate (see Eq. 19). With increasing amount of deposited material \(t_{e}\), the elastic energy in the _wetting layer_ increases \(\propto\epsilon t_{e}^{2}\) up to a critical thickness, above which 3D island formation is favored due to their capability of partially relaxing elastic energy through the free surfaces and substrate. (In our description of the growth modes illustrated in Fig. 11(d) we can imagine that the interface energy \(\gamma_{\mathrm{is}}\) increases with increasing elastic energy). From an atomistic perspective, 3D island growth is favored by the fact that deposited atoms have lower chemical potential (see second term in Eq. 20) on top of an island compared to the surrounding planar surfaces. In turn, this is because the local lattice constant at the island top is closer to \(a_{e}\) compared to the surrounding regions, where it is close to the substrate lattice constant \(a_{s}\) and strain is this larger. Early work on the growth and development of S-K QDs include the first report of intense luminescence from In-rich coherent clusters in GaAs [205, 206] and the first studies of such nanostructures by atomic force microscopy (AFM) [207]. To illustrate the main properties of S-K QDs, we focus on the prototypical example of QDs obtained by depositing InAs on GaAs(001) substrates, with \(\epsilon\simeq 7\%\). In this case, the wetting layer thickness is about 1.2-1.7 monolayers depending on the growth temperature [208], corresponding Figure 13: Example of Stranski-Krastanow QDs: InGaAs QDs on GaAs(001) and InP(001) substrates. 
(a) STM image of InGaAs QDs obtained by depositing 1.8 monolayers of InAs on GaAs at a substrate temperature of 500\({}^{\circ}\)C via MBE. (b) Two families of faceted nanocrystals are observed, “pyramids” and “domes”. (c) A 3D view of a pyramid with a lateral size of \(\sim\)10 nm and a height of \(\sim\)3 nm. (d) Cross-sectional STM image of a stack of InGaAs QDs overgrown with GaAs without any growth interruption (top) or with the “In-flush method” after partial capping with the indicated amount of GaAs prior to complete overgrowth. (e) InAs QDs on InP(001) and (f) InAs QDs on InAlGaAs lattice-matched to InP(001) grown by MBE. (a-c) Reprinted from Journal of Crystal Growth, Vol 278, G. Costantini et al., Pyramids and domes in the InAs/GaAs(0 0 1) and Ge/Si(0 0 1) systems, Pages 38-45, Copyright (2005), with permission from Elsevier [201], (d) Reprinted from _Keizer et al. 2011_[187], with the permission of AIP Publishing, (e) Reprinted figure with permission from _Skiba-Szymanska et al. 2017_[202] Copyright (2017) by the American Physical Society, (f) Reprinted from _Yacob et al. 2014_[203], with the permission of AIP Publishing. to \(\sim 0.5\) nm. Figure 13(a) shows an STM image of 3D islands obtained by deposition of nominally 1.8 monolayers of InAs on GaAs by MBE at a substrate temperature of 500\({}^{\circ}\)C followed by immediate cooling to room temperature. Under such conditions, a _bimodal_ distribution of faceted islands is observed. The small (large) islands, with a height up to about 4 nm (15 nm) are referred to "pyramids" ("domes") and are bound by relatively shallow \(\{137\}\) (steep \(\{110\}\) and \(\{101\}\)) facets (see sketches in Fig. 13). Past investigations have shown that a morphological transition occurs between pyramids and domes at a critical volume [209], similar to other material systems [210] and that part of the material in the wetting layer is consumed by the islands after their formation. From Fig. 13(a) we also see that pyramids tend to nucleate close to steps or terraces. In an elegant experiment, Bart et al. [211] have recently demonstrated that a larger roughness of the GaAs surface leads to higher density of InAs QDs due to local thickening of the wetting layer. A bimodal size distribution leads to a large _inhomogeneous broadening_ of the emission wavelength of the resulting QD ensembles and is often undesired. For this reason, a growth interruption is usually introduced after island formation to promote the growth of domes at the expense of pyramids due to _ripening_. To obtain ensembles with sufficiently large interdot distance for single-QD devices, growth is usually performed at relatively high substrate temperatures (above \(\sim 490^{\circ}\)C) and low InAs deposition rate (less than \(\sim 0.05\) monolayers/s) - to reduce the island nucleation rate - and by carefully tuning the amount of deposited InAs and/or using gradients in local thickness, roughness, or temperature [211, 212]. Since the maximum temperature is limited by In desorption and the minimum rate by the time required to obtain QDs, it remains challenging to obtain consistently low surface densities across full wafers. It is also important to note that S-K islands resulting from InAs deposition on GaAs are unavoidably alloyed [213] because of Ga-In _intermixing_ occurring during growth, so that the resulting QDs are often referred as In(Ga)As or InGaAs QDs. Intermixing is favored by entropy and allows for a reduction of elastic energy in the layer and substrate. 
For stable operation and to avoid interaction with surface states, the QDs need to be overgrown with a semiconductor layer. This step, usually consisting of GaAs overgrowth in the case of InGaAs QDs, generally brings in strong modifications in the structural properties of the S-K islands [214, 215]. Specifically, the In-rich island top is driven away by the GaAs, leading to a reduction of QD height. This effect, which is due to the fact that Ga atoms tend to "avoid" the strained island top (regions of high chemical potential for Ga) and In atoms tend to wet the surrounding Ga-rich surface because of its lower surface energy, can be enhanced by interrupting the GaAs overgrowth after deposition of a layer with thickness \(t_{\rm cap}\), followed by an increase of substrate temperature to desorb In, resulting in QDs with height close to \(t_{\rm cap}\), as illustrated in Fig. 13(d). Through this method, referred to as "In-flush" or "partial-capping-and annealing" [216, 187, 217], the low-temperature ground-state emission wavelength of the InGaAs QDs can be controllably blue-shifted from \(\sim\)1200 nm to \(\sim\)890 nm. For proof-of-principle experiments, wavelengths below \(\sim 950\) nm are favorable because of the availability of Si-based detectors and cameras and Ti:Sa lasers. An alternative method to blue-shift the emission wavelength of InGaAs QDs consists in post-growth _rapid thermal processing_, resulting in In-Ga interdiffusion [212, 216]. Since bulk-intermixing is involved in this process (with higher activation energies compared to surface processes), processing temperatures exceeding 800\({}^{\circ}\)C are usually required and the resulting QDs have very different properties compared to InGaAs QDs obtained with the In-flush method: the confinement potential is much larger and the In fraction in the alloy is lower, resulting, e.g. in increased oscillator strengths [218]. On the other hand, to obtain light emission in the telecom O-band (about 1300 nm) at low temperatures, QD flattening during capping should be avoided. This can be achieved by capping the InGaAs QDs with an InGaAs _strain-reducing layer_ instead of pure GaAs [219, 220]. The presence of In in this layer reduces the average lattice mismatch between deposited material and island top, facilitating overgrowth, and enriches the surface with In, limiting the out-diffusion of QD material to the surrounding surface. It should be noted that the maximum wavelength of InAs QDs is set by the energy bandgap of bulk InAs (415 meV at cryogenic temperatures, corresponding to a wavelength of almost 3 \(\mu\)m), so that emission in the C-band is in principle easy to reach. In addition to the above-mentioned material intermixing, which leads to alloyed InGaAs QDs with usually more than 40% Ga fraction, the large lattice mismatch between InAs and GaAs limits the maximum QD size before _plastic relaxation_ (i.e. defect formation) [221] and - at the same time - the large compressive stress exerted by GaAs on InAs leads to a substantial bandgap increase of the QD material. Since strain is one of the main driving forces for alloying and for bandgap increase, the obvious solution is growing InAs QDs on substrates with \(a_{s}\) closer to the lattice constant of InAs. The most common solutions are either InP substrates (with \(\epsilon\simeq-3\%\)), or InGaAs _virtual substrates_ (or _metamorphic buffers_) grown on GaAs. 
High-quality emission in the C-band has been demonstrated for InAs QDs embedded in InAlGaAs barriers lattice-matched to InP(001) grown by MBE [222, 203] (see Fig. 13(f)) as well as in InP barriers [202] (see Fig. 13(e)). Metamorphic buffers are layers capable of accommodating the lattice mismatch between an In\({}_{x}\)Ga\({}_{1-x}\)As final layer acting as substrate and the GaAs substrate through _misfit dislocations_. Excellent results have been recently achieved by MOVPE [223] and led to InAs/InAsGaAs QDs emitting in the C-band and grown on cost-effective GaAs(001). #### 4.2.3 Quantum dot fabrication via nanohole filling The main limitations of the S-K method are: (i) the difficulty of achieving suitable QD surface densities suitable for single-QD devices over large areas, making the selection of "sweet spots" on wafers necessary; (ii) the structural disorder and anisotropies due to inhomogeneous alloying, Figure 14: QDs in nanoholes. (a) Scanning electron microscope (SEM) image of an array of pyramidal recesses in a GaAs(111)B substrate, filled with an (Al,In,Ga)As heterostructure; (b) Cross-sectional AFM image showing a heterostructure grown in such pyramids. (c) STM image of a nanohole on an Al\({}_{0.45}\)Ga\({}_{0.55}\)As surface obtained by MBE and _in situ_ etching of a template of S-K /InGaAs/GaAs QDs followed by overgrowth with a 7-nm thick Al\({}_{0.45}\)Ga\({}_{0.55}\)As layer; linescans of the elongated nanohole are shown in the bottom panel. (d) Similar to (c) but with nanoholes obtained on a GaAs surface by Ga droplet etching followed by overgrowth with 7-nm Al\({}_{0.45}\)Ga\({}_{0.55}\)As. (e) Symmetric nanoholes obtained on a 100 nm thick Al\({}_{0.4}\)Ga\({}_{0.6}\)As layer via optimized Al-droplet etching. (a) Reproduced from [224] with permission of the publisher, (b) reproduced from _Juska et al._ _2011_[225] under Creative Commons CC BY license, (c) Reprinted figure with permission from Ref. 212 Copyright (2004) by the American Physical Society. (d, e) Reprinted from _Huo et al._ _2013_[133], with the permission of AIP Publishing. which limits the QD usability as hosts of coherent spins and as sources of entangled photons; (iii) the unsuitability for the creation of QDs out of almost lattice-matched material combinations. A method overcoming some or all of these limitations relies on the creation of dimples (or "nanoholes", Fig. 12(d1)) on the bottom barrier material, followed by the growth of a "planar" heterostructure with carefully chosen growth parameters to allow either quasi-conformal overgrowth of the dimple (for barrier material) or accumulation of QD material at its bottom (region of largest curvature and lowest chemical potential, see Eq. 20), leading to local thickening and QD formation (Fig. 12(d2)). In turn, nanoholes can be created either by lithography and etching before growth or by _in situ_ methods. An example of the former (see Fig. 14(a)) is represented by tetragonal recesses obtained by anisotropic etching of lithographically-defined apertures on a GaAs(111)B surface [226] and overgrown via MOVPE to obtain site-controlled arrays of QDs with very high ensemble homogeneity [227]. A cross-sectional energy-dispersive-X-ray elemental map of an InGaAs QD at the center of an AlAs/GaAs/AlAs heterostructure is shown in Fig. 14(b). 
A notable material system combination for which the S-K mode cannot be used to fabricate QDs is represented by Al\({}_{x}\)Ga\({}_{1-\mathrm{x}}\)As alloys, that have a lattice constant differing by at most 0.1% compared to GaAs. The first attempts to create "hierarchically self-assembled" QDs via nanohole filling relied on the selective in situ etching of S-K InGaAs QDs during MBE, leading to dimples on a GaAs surface. Such nanoholes were then overgrown with AlGaAs at relatively low surface temperature [212] to allow for quasi-conformal overgrowth (see Fig. 14(c)). Nanohole filling was then obtained by deposition of a thin GaAs layer and a growth interruption, allowing the QD material to diffuse and accumulate in the nanoholes. Since nanohole fabrication relied on a template of S-K InGaAs QDs, there was no improvement in the QD density control. In addition, the maximum thickness of the lower AlGaAs barrier was limited due to undesired nanohole filling with AlGaAs, making the resulting heterostructure poorly suited for photonic integration. These QDs had however good optical quality and a well-defined elongated shape, enabling detailed studies on the relation between structural and optical properties of QDs [228, 229, 212] as well as pioneering experiments on slow-light in Rb vapors with photons emitted by QDs [230]. Attempts to replace the self-assembled nanoholes with ex situ etched nanoholes resulted in good site-control but poorer optical quality [231] due to interaction of the QD states with interface defects. Later on, _local droplet etching_ was discovered [232], resulting in structures with improved control of density [233]. By focusing on III-V semiconductors, the process consists in the deposition of group-III elements (Ga, In, Al) in absence of group-V flux, leading to the formation of metal droplets on the surface (Fig. 12(e1)). The group-V gradient at the interface between droplet and III-V semiconductor drives the diffusion of group-V elements (As, in the case of GaAs) into the droplet and consequent local liquefaction of the substrate. Under a reduced As flux, the etching process continues until a nanohole remains on the surface. For a recent review, see [234]. An example of GaAs nanoholes obtained by Ga-droplet etching followed by overgrowth with a thin AlGaAs layer is shown in Fig. 14(d)). QDs with consistently low density across large areas are easily obtained via local droplet epitaxy. As in the case of GaAs nanoholes obtained by in situ etching of InGaAs QDs, the main limitation of Ga-assisted local-droplet-etching on GaAs is the limited maximum thickness of the AlGaAs barrier. This problem was finally solved by implementing local droplet etching directly on AlGaAs surfaces and using Al droplets [235]. By optimizing the amount of Al used for droplet formation and other growth parameters, highly symmetric nanoholes were demonstrated [133] (Fig. 14(e))), resulting in GaAs QDs with high in-plane symmetry and excellent optical properties, ideally suited as sources of polarization-entangled photons fully compatible with photonic integration [53]. For a recent review on these QDs, see [236] and for recent device achievements, see Section 6.1. #### 4.2.4 Site-controlled quantum dots via guided self-assembly Ex situ patterned nanoholes have been successfully used to guide the formation of S-K QDs only at the desired substrate positions [237, 238] and almost perfect long-range ordering of InGaAs QDs has been reported [239]. 
The formation mechanism relies on the filling of the nanoholes with a diluted InGaAs alloy on top of which InGaAs preferentially form because of the locally reduced lattice mismatch compared to the surrounding GaAs surfaces, as sketched in Fig. 12(f). Since nanoholes are energetically unfavorable, the growth of a thick buffer layer to bring the QDs away from the defective processed interface tends to flatten the surface, reducing the efficiency of site-control. Compromises have been found, leading to QDs with good optical properties [240]. To increase the interface-QD distance, diffusion anisotropies have been used successfully used [241], leading to QDs with improved optical quality. Achieving the same quality as fully self-assembled QDs is still an open challenge and new in situ patterning methods, such as laser interference in MBE or growth through stencil masks are currently under investigation and have already shown promising results both for guiding the formation of InGaAs S-K QDs [242] and Ga droplets [243] for GaAs QDs via droplet epitaxy [244]. An example of such QDs is shown in Fig. 15(c). #### 4.2.5 Quantum dots obtained by droplet epitaxy Historically, metal droplets were first used to directly obtain QDs rather than nanoholes in the so-called _droplet epitaxy_ method, see review article [234]. To this aim, droplets of group-III elements (typically In or Ga) are exposed to a flux of group-V elements (typically As) to obtain their recrystallization, followed by barrier overgrowth (see Fig. 12(e1-e3)). To prevent substrate etching, exposure must be performed at relatively low substrate temperatures, leading to the occurrence of point defects and difficulty to obtain material with quality as high as in QDs obtained with the S-K or local-droplet-etching methods, at least for what concerns GaAs/AlGaAs(001) QDs. High quality QDs have been recently achieved by performing the growth on GaAs(111)A substrates [245], see Fig. 15(a). Excellent results have been obtained also for InAs/InP QDs emitting in the C-band [202] (Fig. 15(b)). As in the case of local droplet etching, the main advantage of droplet epitaxy over S-K growth is the improved QD-density control and also the higher in-plane symmetry of the obtained nanocrystals. This is exemplified by comparing pyramidal InAs QDs formed on InP by S-K growth (Fig. 13(e)) with the QDs obtained by droplet epitaxy in Fig. 15(b). Figure 15: Quantum dots obtained by droplet epitaxy. (a) InAs/InP(001) QDs with emission in the telecom C-band; (b) GaAs/AlGaAs QDs obtained on GaAs(111)A via “high-temperature droplet epitaxy”; (c) site-controlled GaAs/AlGaAs QDs on GaAs(001) via laser interference. (a) Reprinted figure with permission from _Skipas-Szymanska et al. 2017_[202] Copyright (2017) by the American Physical Society, (b) reprinted from _Bietti et al. 2020_[245] under Creative Commons CC BY license. (c) reprinted from _Han et al. 2021_[244] under Creative Commons CC BY license. #### 4.2.6 Quantum dot molecules In addition to single QDs, there are experiments and applications that rely on interacting QDs. Among the different possible interactions, we mention here tunnel-coupled QDs, which can be obtained by vertical stacking of two or more QDs to form so called QD molecules [246]. For S-K QDs, the strain produced by a buried QD guides the formation of the next QD right on top of the buried one [247], providing a convenient way to achieve vertical stacking, as seen in Fig. 13(d). 
For QDs in nanoholes, vertically stacked QDs can be obtained by beginning with sufficiently deep nanoholes and alternating QD material and barrier materials [248]. Nanoholes or other surface features can also be used to guide the formation of closely spaced QDs in the growth plane [247].

#### 4.2.7 Quantum dots in nanowires

While the methods discussed so far primarily lead to the formation of QDs in planar structures (an exception is represented by the QDs in inverted pyramids), QDs have been successfully fabricated also in nanowire structures [249] following two possible routes: vapor-liquid-solid (VLS) growth, in which a metallic droplet (usually gold) acts as a catalyst for the vertical growth of nanowires on a substrate, and the catalyst-free method, in which selective growth is achieved by opening nanometric apertures in an oxide layer (e.g. silicon oxide) on a substrate (e.g. silicon). Different from planar growth, nanowires allow strain due to lattice mismatch to be efficiently relaxed, thus enabling the growth of a richer set of material combinations, which would be incompatible with planar heterostructures. In addition, the crystal structure is not necessarily imposed by the substrate, and materials usually crystallizing in the zincblende structure are found to crystallize in the wurtzite structure in nanowires. As in planar growth, heterostructures can be created along the wires. In particular, segments of QD material can be embedded in higher bandgap barriers. Lateral growth and etching are also possible by proper tuning of the growth parameters, leading to a rich playground for nanostructure formation. For instance, _crystal phase_ QDs have also been created by alternating segments of the same material but with different lattice structure [250]. Site-controlled growth is easily achieved by patterning the precursor droplets or apertures. In addition, the wire geometry is favorable for improved light extraction both in free-space and integrated photonics via pick-and-place (see Section 5.1). Among the different material systems explored so far, InAsP QDs in InP barriers have demonstrated very good performance [159, 251].

## 5 Nanofabrication of single-quantum-dot devices

The nanofabrication of QLS-devices for applications in photonic quantum information technology is technologically very demanding. For instance, it requires a precise integration of individual QDs into nanophotonic structures and their spectral matching to the device's optical modes. In turn, the photonic structures with sub-\(\mu\)m feature sizes must precisely meet the design specifications of the numerical modeling. In addition, recent results show that targeted post-processing, e.g. via surface passivation, is often required to achieve optimal quantum-optical emission properties. Moreover, for practical applications, innovative concepts for applying electrical gates to, or directly fiber-pigtailing, the respective quantum devices are beneficial. Not least, to combine the best of multiple worlds, hybrid device concepts are pursued, e.g. allowing for strain-induced spectral control of the QD emission in semiconductor-piezo integrated devices. The epitaxial growth of QD heterostructures and numerical optimizations of device geometries are typically followed by several delicate processing steps during the device nanostructuring in a clean room environment.
These mainly include the deposition of thin layers, optical lithography and electron beam lithography, as well as wet and dry chemical etching. In the context of this article, the lithography methods are of particular relevance. During lithography, the envisioned device geometry is transferred to a later etching mask. At the same time, it determines the position of the quantum emitter in the target structure. The latter point is particularly important in the case of single-QD devices. Due to the self-organized nature of the QD growth, the position of the emitter in the respective structure is completely undefined if standard lithography methods are used, which have been optimized, e.g. to produce semiconductor lasers and classical photonic circuits. Moreover, the spectral matching between emitter and nanophotonic structure is in general not guaranteed, being a major issue for resonator-based approaches. Conventional lithography usually results in a process yield for individual QD devices of below 1%, rendering the scaling to e.g. complex IQPCs virtually impossible. Hence, to overcome these hurdles, deterministic process technologies are a crucial tool for the controlled and scalable fabrication of single-QD devices. In the following, innovative nanotechnology methods are presented that have been developed and optimized in recent years for the deterministic fabrication of QD devices. Subsequently, concepts are presented that allow for the direct on-chip fiber coupling of QD devices, enabling quantum network integration. In addition, open challenges and future optimization approaches are discussed. ### Deterministic fabrication technologies A major challenge in the single-QD device fabrication is to integrate individual emitters with the desired optical and quantum-optical properties with high alignment accuracy into photonic nanostructures. This asks for deterministic nanofabrication technologies, which we introduce in the next subsections. #### 5.1.1 Pick-and-place technique The first deterministic process technology to be presented here is called pick-and-place technique. This approach is often used for the fabrication of heterogeneous IQPCs that contain emitter structures and waveguide structures made of different materials. For example, it can act as a powerful nanotechnology platform for the deterministic integration of III/V QDs into silicon-based IQPCs. The aim of pick-and-place approaches is usually to process heterogeneous quantum devices in a scalable manner with high process yield. For this purpose, a multi-stage manufacturing process is used, where first a large number of optically active QD structures and the host structures (e.g. waveguide circuits) are independently processed on different chips. In a next step, QD devices suitable for the transfer to e.g. photonic waveguides are selected by spectroscopic means. Finally, using a micromanipulator or a rubber stamp technique [252], the corresponding QD structures are detached from the host substrate and transferred to the target structure where they are integrated via van der Waals forces. While this transfer can be achieved with an accuracy of 10-100 nm, it can hardly be automatized, thus limiting the practicality of this approach. Still, the pick-and-place technique is very suitable for efficient prototyping, as it allows for the independent optimization of the active QD structure and the passive counterpart, respectively, and a flexible integration of both. 
Figure 16 shows a concrete example in which the pick-and-place technique was used for the deterministic fabrication of QD quantum circuits [253]. The aim was to integrate an active QD structure into a silicon photonic chip. The prototype IQPC consists of an InAs/InP QD in a tapered nanobeam resonator (Fig. 16(a)) whose emission is adiabatically coupled into the underlying silicon waveguide. This waveguide is in the simplest case linear, or branches off in an on-chip 50/50 beam splitter with two output waveguides, each ending in a grating coupler for vertical outcoupling of light (Fig. 16(b, c)). Due to the random position (and inhomogeneous broadening of the ensemble emission) of the self-assembled QDs in the growth plane, the spatial (and spectral) overlap of the QDs with the field maximum in the nanobeams cannot be guaranteed in this conventional manufacturing process, resulting typically in a yield of suitable components of approximately 1%. Hence, suitable QD-nanobeams, which exhibit the desired spectral and spatial properties, are selected before the transfer to waveguides. The corresponding QD-nanobeam membrane structures are then removed from the host chip by focused ion beam milling and transferred to the desired position on the waveguide using a micromanipulator (see Fig. 16(d-f)). Van der Waals forces ensure the necessary adhesion of the QD-nanobeam structures to the micromanipulator and the waveguide structures. The transfer is reproducible and takes place with an accuracy better than 100 nm. The placement routine of the nanobeam, however, is relatively time-consuming with a duration of about 1 hour. Pick-and-place techniques as described above have been applied for the fabrication of single QD quantum devices in various ways. For example, epitaxially grown InAsP nanowires with integrated QDs were transferred to silicon nitride waveguides via pick-and-place, where they can in turn act as single-photon emitters [254]. This work shows an attractive aspect of the nanowire technology: the host structure can be designed to include complex photonics structured in parallel to the emitter structure. Such structures can be fabricated using methods of integrated photonics in order to obtain a functional IQPC after heterogeneous integration. For example, as shown in Ref. [254], electrically controlled filter elements and single-photon on-chip multiplexers can be integrated. Notably, pick-and-place techniques are attractive alternatives to the complex epitaxial growth of hybrid heterostructures of III-V compound semiconductors on e.g. Si wafers or the fabrication of heterogeneous quantum devices based on wafer-bonded QD heterostructures.

Figure 16: SEM images illustrating deterministic nanofabrication via the pick-and-place technique. (a) EBL fabricated InGaAs/GaAs QD-nanobeam structure. (b, c) Straight waveguide and y-shaped 50/50 waveguide including grating outcouplers made of silicon. (d, e) Pick-and-place step transferring the QD-nanobeam to a y-shaped 50/50 waveguide using a microprobe tip combined with a focused ion beam and SEM. (f) False color SEM image of four fully processed hybrid QD-waveguide devices. Reprinted with permission from Ref. [253]. Copyright 2017 American Chemical Society.

#### 5.1.2 Marker-based lithography techniques

Another strategy for the fabrication of single-QD devices is marker-based lithography. In this approach, the location of suitable quantum emitters is first identified relative to alignment markers, usually via optical imaging.
In a second step, the intended nanophotonic structures are defined in the appropriate resist at the location of pre-selected QDs using electron beam lithography. Marker-based lithography is a flexible method that can be used for different quantum emitters, emission wavelengths and device concepts. It is currently very popular to fabricated CBG resonators with deterministically integrated QDs. The origins of marker-based lithography lie in technological developments aimed at producing QD nanoresonators for the study of cQED effects. Here, the spectral and spatial resonance between the emitter and the resonator mode is a crucial requirement that cannot be reproducibly achieved with conventional manufacturing technologies. To counteract this problem, in Ref. [255] vertically strain-correlated stacked QDs were used as presented in Fig.17(a-c). In a stack of 6 QDs, the bottom QD, which was intended for the cQED experiments, was blue-detuned and the position of the top QD could be determined on the sample surface using SEM images relative to alignment markers. Subsequently, PC-cavities were produced using EBL and etching techniques, the position of which was specifically aligned to the detected QDs. Overall, this advanced nanotechnology concept allowed the regime of weak [255] and strong coupling [257] in QD-nanocavity systems to be implemented in a controlled manner for the first time. The aforementioned marker-based manufacturing process first showed the potential of deterministic device fabrication technologies. However, it is limited in two ways. On the one hand, it Figure 17: Marker-based deterministic device processing. (a-c) Deterministic fabrication of a QD photonic crystal (PC) cavity device. Using the growth of vertically stacked QDs (a) the position of the target seed QD which is blue-shifted to be in resonance with the PC cavity can be retrieved by SEM imaging at the sample surface. Using marker-based EBL, the nanocavity can be aligned with the selected QD (b) to ensure optimum mode overlap (c). (d-f) Schematic illustration of cryogenic laser lithography to pre-select a QD and pattern alignment markers relative to its position. Using (a) a QD sample patterned with a tungsten mask (e) and SU-8 photoresist (f) \(\mu\)PL spectroscopy is performed at cryogenic temperatures to determine the position of a target QD (g) before markers spatially aligned to the QD are written into SU-8 by optical lithography in the cryogenic \(\mu\)PL. Finally, the markers are transferred by into Tungsten by reactive ion etching (i). In a following EBL process, the fabricated cross-markers could be used to align a photonic nanostructure to the selected QD using marker-based EBL. (a-c) From Ref. [255]. Reprinted with permission from AAAS. (d-i) Reprinted from Ref. [256], with the permission of AIP Publishing. is based on a stack of strain-coupled QDs, which limits the device compatibility to near-surface structures, and on the other hand, only the position of the QDs but not their spectral features (especially the emission wavelength) could be determined during device fabrication. In other words, only the spatial, but not the spectral resonance between the emitter and the resonator mode could be controlled. To overcome this problem, cryogenic laser photolithography was developed, which provides knowledge of the position and spectral properties of selected QDs [256]. In the first step of this nanotechnology concept, which is illustrated in Fig. 
17(d-i), the sample surface is scanned using \(\mu\)PL spectroscopy in order to determine the location and the spectral position of suitable QDs. Immediately afterward, the laser is used to write alignment markers into a photosensitive resist relative to the position of the pre-selected QDs. It is worth noting that both processes take place at cryogenic temperatures (4 K) to ensure a sufficiently high luminescence yield of the QDs. In the final step, the marker structures are transferred to the semiconductor material by dry etching. With this method, the QD positions can be determined with an accuracy of \(\pm\)50 nm and retrieved with an accuracy of \(\pm\)150 nm via marker detection, and the spectral accuracy is about 1 nm. The method described was not used further for the production of single-QD quantum devices, but it can be regarded as an important basis and trigger for the development of the deterministic fabrication methods described in the following. The full potential of marker-based lithography was first demonstrated by Davanco et al. in Ref. [52]. In this work, QD SPSs based on CBG resonators, also called bullseye resonators, were fabricated deterministically with spatial and spectral control. The precise fabrication process made it possible to produce sources with a photon extraction efficiency of (48 \(\pm\) 5)%, which agrees very well with the theoretically predicted value of 50%.

Figure 18: Marker-based deterministic device processing using cryogenic optical imaging. (a) Experimental setup including two light emitting diodes (LEDs) (at 630 nm and 940 nm) and a laser (at 780 nm), several beam-splitters and optical filters, a closed-cycle cryostat, an electron-multiplying charge-coupled device (EMCCD) for optical imaging and a spectrometer for \(\mu\)PL investigations. (b) Fluorescence image under LED excitation at 630 nm and a 900 nm low-pass filter to suppress reflected light. Emission of two QDs and their positions (in the given coordinate system) can be identified. (c) Optical image of the sample under LED illumination at 630 nm. Metallic alignment markers are clearly seen. Overlaying the two images allows one to determine the positions of pre-selected QDs with respect to the alignment markers with better than 30 nm accuracy. Reproduced from Ref. [52] under Creative Commons CC BY license.

In the underlying multi-stage nanoscale optical positioning process illustrated in Fig. 18, first, metallic alignment markers were structured on the sample surface using EBL. In the second step, performed at 6 K, the sample surface was illuminated over a large area (200 \(\mu\)m diameter) in a modified \(\mu\)PL setup with a 630 nm LED. In the detection path, either the reflected light or the fluorescence of the QD could be recorded with an electron-multiplying charge-coupled device (EMCCD) through a corresponding spectral filter. Alternatively, a laser was used to measure the \(\mu\)PL spectra of individual QDs. An additional 940 nm LED could also be used to record the reflected light of the markers and the luminescence of the QDs at the same time to improve the alignment accuracy. In this experimental configuration, by comparing the images and the \(\mu\)PL data, the positions and emission wavelengths of suitable QDs relative to the alignment marks could be determined with an average position uncertainty of <30 nm (a minimal illustration of the underlying sub-pixel localization is sketched below). In the final processing step, bullseye resonators, which are spatially and spectrally matched to selected QDs, were structured using EBL and dry chemical etching.
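The key image-analysis step in this positioning approach is extracting the emitter coordinates from the wide-field fluorescence image with sub-pixel precision. The sketch below illustrates one common way to do this, fitting a 2D Gaussian to a simulated QD spot; it is only an illustration of the principle, not the actual analysis pipeline of Ref. [52], and all numbers (pixel size, spot width, noise level) are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

PIXEL_NM = 100.0   # assumed effective pixel size on the sample (nm)

def gauss2d(coords, amplitude, x0, y0, sigma, offset):
    """Isotropic 2D Gaussian used as a model for a single-QD fluorescence spot."""
    x, y = coords
    return offset + amplitude * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))

# Simulate a noisy QD spot on a 21x21 pixel region of interest (all values assumed).
rng = np.random.default_rng(seed=1)
x, y = np.meshgrid(np.arange(21), np.arange(21))
true_x0, true_y0 = 10.3, 9.7          # true sub-pixel emitter position (pixels)
image = gauss2d((x, y), 400.0, true_x0, true_y0, 2.5, 50.0)
image += rng.normal(0.0, np.sqrt(image))   # shot-noise-like fluctuations

# Least-squares fit returns the spot centre with sub-pixel precision.
p0 = (image.max() - image.min(), 10.0, 10.0, 3.0, image.min())
popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), image.ravel(), p0=p0)
fit_x0, fit_y0 = popt[1], popt[2]
err_nm = np.hypot(fit_x0 - true_x0, fit_y0 - true_y0) * PIXEL_NM
print(f"fitted centre: ({fit_x0:.2f}, {fit_y0:.2f}) px, localization error ~ {err_nm:.0f} nm")
```

The marker positions are obtained from the reflected-light image in the same way, and the QD coordinates are then expressed in the marker coordinate system for the subsequent EBL step.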
The deterministically produced quantum devices are characterized by a high extraction efficiency ((48 \(\pm\) 5)%) and a very good single-photon purity of \(g^{(2)}(0)=0.009\pm 0.005\). Furthermore, a Purcell effect of 4 (significantly below the expected value of 11) could be determined due to the increased light-matter interaction in the resonator structure. The nanoscale optical positioning method presented has evolved since its first demonstration and has already been used many times for the deterministic production of various QLSs. These are usually based on QDs in CBG resonators [53, 258], but micropillars [258, 259] and PC-cavity-based devices [260] have also been manufactured using this process. Current work shows that it can also be used for quantum devices with emission in the telecom O-band [261]. It is interesting to note that recently a marker-based technology has been developed that uses cathodoluminescence(CL) mapping instead of optical imaging. This method has the advantage that the QD selection and the EBL are carried out in the same system and therefore in principle also in the same coordinate system, which promises higher alignment accuracies in the future compared to optical imaging in combination with EBL. It was used to demonstrate the scalable integration of multiple QDs into an optical waveguide system [262]. Although the marker-based positioning methods are very powerful, they are also relatively complex in practical implementation due to the multistep process flow. Furthermore, the alignment markers can have a disruptive effect on the fabrication of larger structures, such as large-scale integrated quantum circuits. These limitations can be circumvented with in situ lithography concepts, which are presented and discussed in the following section. #### 5.1.3 In situ lithography techniques Additional interesting deterministic nanostructuring technologies are in situ lithography techniques. In contrast to the approaches presented in the previous sections, these techniques do not require marker structures and are therefore comparatively simple in the process flow. The basic idea is to first determine the positions of suitable quantum emitters using optical spectroscopy or cathodoluminescence and then, in the same setup, to define the desired nanophotonic structure using optical lithography or EBL in the appropriate resist. Pioneering work in in situ lithography was performed by Dousse et al. in Ref. [263]. In this work, in situ optical lithography was developed and used for the first time to produce a spectral and spatially resonant QD-micropillar system for the controlled study of cQED effects. The experimental setup consists of a low-temperature \(\mu\)PL unit, which has been expanded to include a second laser on the excitation side (see Fig. 19a). While the first laser with a wavelength of 750 nm is used for optical excitation of the QD sample, the second laser with a wavelength of 532 nm is used for lithography. In the process sequence, the QD sample, which in Dousse et al. corresponded to a planar microresonator structure with a lower and an upper DBR with an intermediate cavity with an integrated QD layer (see Fig. 19b), is first coated with a positive photoresist and then mounted onto the cold finger of a He-flow cryostat. 
After that, part of the sample surface (a few \(\mu\)m in x- and y-direction) is scanned for the QD selection by \(\mu\)PL mapping at typically 10 K, for which the 750 nm laser, which does not affect the photoresist, is used (while the 532 nm laser is blocked). Here, QDs are selected specifically with regard to their emission wavelength (and \(\mu\)PL intensity) in order to later be brought into spectral resonance with the fundamental emission mode of a micropillar cavity. An exemplary \(\mu\)PL spectrum of the planar microcavity is shown in Fig. 19(c). Once a suitable QD has been found, the sample position is optimized for maximum \(\mu\)PL intensity before the long-wavelength laser is blocked and the photoresist is exposed with the unblocked short-wavelength laser at the location of the QD. Here, the diameter of the exposure spot, which specifies the diameter of the later micropillar, can be adjusted within certain limits via the exposure time. Finally, reactive ion etching is applied to produce the micropillar. Fig. 19(d) compares \(\mu\)PL spectra of the planar cavity (black trace) with spectra of the processed QD-micropillar with a radius of 0.85 \(\mu\)m taken at two temperatures. The QD-micropillar is well-structured, and spectral resonance between the single-QD exciton and the resonator mode can be achieved at about 20 K. A corresponding temperature-tuning \(\mu\)PL map is presented in Fig. 19(e), and an evaluation of the normalized PL intensity yields a Purcell factor of \(9\pm 3\). These results reflect the high potential of in situ optical lithography, which has been used very successfully since the first report, to demonstrate, for instance, bright sources of indistinguishable photons [49] and entangled photon pairs [264].

Figure 19: In situ optical lithography of a spatially and spectrally resonant QD-micropillar system. a) Sketch of the experimental \(\mu\)PL setup, extended by a second green laser for optical lithography. b) Layer design of the used QD-heterostructure with the photoresist layer on top and an illustration of the laser excitation scheme. c) \(\mu\)PL spectrum of the planar cavity during QD selection at 10 K. Emission of a single QD exciton \(E_{X}\) is marked. d) \(\mu\)PL spectra of a selected QD during in situ lithography (black trace) and after etching of the micropillar with a radius of 0.85 \(\mu\)m at 10 K and 32 K (red and green traces). Emission of the fundamental cavity mode M is marked. e) Temperature tuning of the QD exciton X through resonance with the cavity mode M. The QD-micropillar system is in the weak coupling regime of cQED and shows enhanced emission at spectral resonance. f) Corresponding normalized PL intensity as a function of detuning between X and M. The fit (red trace) yields a Purcell-factor of \(9\pm 3\). Reprinted from _Dousse et al._ _2008_[263]. Copyright (2008) by the American Physical Society.

Despite the great success of in situ optical lithography, this technique also has disadvantages. Above all, it is based on the resist exposure using laser light, which limits the resolution and structure accuracy to a few 100 nm. Furthermore, no complex structures such as optical waveguides can be defined. In order to circumvent these limitations, the method of in situ electron beam lithography was developed [265]. This technique uses CL spectroscopy in combination with EBL to select QDs (mainly at cryogenic temperatures), and then to define the desired nanophotonic structure with high alignment accuracy relative to the selected QD.
In this way, the in situ EBL combines the advantages of user-friendly CL mapping with the high flexibility and resolution of the EBL in a unique deterministic nanoprocessing technology. The process sequence of the in situ EBL method is shown in Fig. 20. After the electron-beam-sensitive positive-tone resist has been spin-coated onto the sample surface, it is mounted in an SEM with a CL extension and a He-flow cryostat. For the QD selection, sample areas of usually around 50 \(\mu\)m x 50 \(\mu\)m are scanned with a low dose below the onset dose for resist inversion (typically 20 mC cm\({}^{-2}\), see Fig. 20(e)) in order to create a CL map as indicated in Fig. 20(a). Based on the data obtained, the positions of suitable QDs are determined, using the luminescence intensity and the spectral properties of the QD as criteria. At these QD positions, the photonic nanostructures are then exposed into the resist using EBL as illustrated in Fig. 20(b), with a dose above the onset dose for inversion being selected in order to locally invert the resist and reduce its solubility in the subsequent development step (Fig. 20(c)). The CL mapping and the actual EBL are carried out at cryogenic temperature and allow an alignment accuracy between QD and nanophotonic structure of about 30-40 nm [267]. This alignment accuracy is essentially limited by the mechanical drift of the cold finger (in the semi-professional SEM used). In the future, professional in situ EBL systems with an interferometer stage will probably be able to achieve significantly better alignment accuracies.

Figure 20: In situ electron beam lithography of single-QD quantum devices. In the in situ EBL process flow, a suitable resist (PMMA or CSAR) is first spin-coated on the sample surface before CL mapping is performed with low dose at cryogenic temperatures to select suitable QDs based on the luminescence yield and spectral properties (a). Then, still at cryogenic temperature, the desired nanophotonic structure is written in the resist with higher dose at the position of a selected QD, where gray-scale EBL can be applied to shape, for instance, 3D microlenses by locally inverting the resist (b). Afterward, the resist is developed in the clean room (c), leaving the defined structure as etch mask in the subsequent reactive ion etching step (d). (e) Contrast curve of PMMA resist at 5 K. Inversion of the positive-tone resist starts at a dose value of 20 mC cm\({}^{-2}\). Inset: SEM image of 3 deterministically fabricated QD-microlenses. (f) On-chip quantum circuit fabricated by in situ EBL. The circuit includes a deterministically integrated QD in the input waveguide of a multi-mode interference beam-splitter with two exit ports. (a-d) Reproduced from Ref. [56] under Creative Commons CC BY license. (e, f) Adapted with permission from Ref. [266]. Copyright 2018 American Chemical Society.

With a suitable selection of the exposure dose above the onset value, i.e. in the range of 20-40 mC cm\({}^{-2}\) (see contrast curve in Fig. 20(e)), three-dimensional nanostructures can be defined in the resist via gray-scale EBL, as used, for example, for the deterministic fabrication of QD-microlenses (cf. Fig. 20(b)). In the subsequent development process (Fig. 20(c)), the non-inverted resist is removed so that the desired structure remains as an etching mask on the sample surface. In the final step, reactive ion etching is applied to transfer the patterned structure into the semiconductor material.
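To make the gray-scale step above concrete, the local exposure dose has to be chosen such that, after development, the remaining (inverted) resist height follows the target lens profile. A minimal sketch of such a dose mapping is given below; it assumes a simple linearized contrast curve between the onset dose and an assumed saturation dose, as well as illustrative lens dimensions, so it is not the calibration used in Refs. [56, 266].

```python
# Minimal sketch: map a target microlens height profile to a local exposure dose for
# gray-scale EBL, assuming a *linearized* contrast curve between the onset dose D_ON
# and a saturation dose D_SAT (all numerical values are illustrative assumptions).
import numpy as np

D_ON, D_SAT = 20.0, 40.0                    # mC/cm^2, onset and full-inversion dose (assumed)
RESIST_THICKNESS = 400.0                    # nm, spin-coated resist thickness (assumed)
LENS_HEIGHT, LENS_RADIUS = 350.0, 1200.0    # nm, target spherical-cap lens (assumed)

def target_height(r):
    """Spherical-cap height of the lens as a function of radial coordinate r (nm)."""
    R = (LENS_RADIUS**2 + LENS_HEIGHT**2) / (2 * LENS_HEIGHT)  # radius of the sphere
    h = np.sqrt(np.clip(R**2 - r**2, 0.0, None)) - (R - LENS_HEIGHT)
    return np.clip(h, 0.0, LENS_HEIGHT)

def dose_for_height(h):
    """Invert the (assumed linear) contrast curve: remaining resist height -> dose."""
    frac = np.clip(h / RESIST_THICKNESS, 0.0, 1.0)
    # outside the lens the resist is left unexposed so it is removed on development
    return np.where(h <= 0.0, 0.0, D_ON + frac * (D_SAT - D_ON))

radii = np.linspace(0, LENS_RADIUS, 7)
for r, d in zip(radii, dose_for_height(target_height(radii))):
    print(f"r = {r:7.1f} nm  ->  dose {d:5.1f} mC/cm^2")
```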
As a result, microlenses with sub-\(\mu\)m feature sizes and deterministically integrated QDs are fabricated (see Fig. 20(e), inset). Such QD-microlenses act as bright sources of indistinguishable photons, as demonstrated in Ref. [56]. The great potential of in situ EBL technology for the fabrication of complex QD nanostructures with nanometer feature sizes was demonstrated in Ref. [266], where the technique was applied for patterning a photonic quantum circuit with a deterministically integrated QD. The structure shown in Fig. 20(f) includes the deterministically integrated QD in a linear waveguide, via which the photons emitted by the QD are transmitted into the input port of a multi-mode interference beam splitter with a 50/50 splitting ratio. The latter was used in Ref. [266] as an on-chip-integrated beam splitter and enabled the authors to perform a quantum optical Hanbury Brown Twiss experiment on chip. Furthermore, hybrid waveguide systems [268] and structures for the controlled study of chiral light-matter interactions [269] were fabricated using in situ EBL. An important milestone in the scalable fabrication of quantum circuits was recently achieved by deterministically integrating two QDs into the input waveguides of a multi-mode interference beam splitter [262]. In the future, this approach, in combination with spectral fine-tuning using the quantum confined Stark effect [270], can be used to develop highly functional IQPCs, for example for a fully integrated boson sampling chip. \begin{table} \begin{tabular}{c c c c c c c c c} \hline Method & Complex. & MB & SS & Litho. & PA & AA & LR & Ref. \\ \hline Pick-and-place & high & no & no & EBL, RT & – & \(\approx\) 200 nm & \(<\) 10 nm & [253] \\ Optical imaging & medium & yes & no & EBL, RT & \(\approx\) 10 nm & \(<\) 30 nm & \(<\) 10 nm & [52] \\ CL imaging & medium & yes & yes & EBL, RT & 10 nm & 40 nm & \(<\) 10 nm & [262] \\ In situ opt. litho. & low & no & yes & optical, low-T & \(\pm\) 50 nm & \(\pm\) 50 nm & \(>\) 100 nm & [263] \\ In situ EBL & low & no & yes & EBL, low-T & 25 nm & 30-40 nm & \(<\) 10 nm & [267] \\ \hline \end{tabular} \end{table} Table 1: Comparison of most relevant deterministic QD-device processing technologies. The table indicates the relative complexity (complex.), whether the technology is marker-based (MB), if spectral selection (SS) (using a spectrometer) of QDs is possible, and which type of lithography is performed (optical lithography or EBL, at cryogenic temperatures (low-T) or at room temperature (RT)). It also provides information about position accuracy (PA), alignment accuracy (AA), lithography resolution (LR), and the related references. Here, PA refers to the accuracy with which the position of a QD can be determined. AA is the accuracy with which the QD is positioned in the nanophotonic structure. In summary, today there are a number of very powerful deterministic processing technologies available to integrate individual emitters with high accuracy into photonic nanostructures. The most relevant methods are given in Table 1 together with the most important technology parameters. They differ in their complexity and in their alignment and structural accuracy. The relatively complex pick-and-place technique is based on a multi-stage process which, in addition to structure fabrication, includes structure selection and structure transfer.
In addition to the actual structure fabrication, the marker-based technologies require the marker processing and a precise alignment of the mapping data with the marker coordinate system. In comparison, the two in situ lithography processes are technologically relatively simple because they do not require any marker structures. The achievable alignment and structure accuracies are similar for the techniques based on EBL. In this regard, in situ optical lithography has to accept compromises, especially with regard to structural accuracy.

### On-chip fiber coupling of quantum light sources

In the last two decades, enormous progress has been made in the development and fabrication of QLSs based on semiconductor QDs. As described in Sections 4 and 5, innovative growth and fabrication technologies were developed and used, and almost ideal emission properties could be achieved (see Section 6). So far, however, these quantum devices have been operated and studied almost exclusively on a laboratory scale in proof-of-principle experiments. With regard to real applications, for example in photonic quantum technology, further development stages are necessary. In fact, fiber coupling of QLSs can enable the transmission of quantum information over long distances and the generation of remote entanglement between separated quantum systems to create quantum networks and to enable distributed quantum information processing. Several different approaches have been proposed to connect QLSs to optical fibers in a robust manner and with high coupling efficiency. Here, the coupling to single-mode optical fibers compatible with standard telecom components is particularly interesting and important to allow direct application, for example in QKD. On the source side, single-photon emitters with emission in the telecom O-band and C-band at 1.3 \(\mu\)m and 1.55 \(\mu\)m wavelengths are of particular relevance in order to transmit quantum information over medium and long distances. Additionally, sources with emission wavelengths below 1 \(\mu\)m are interesting for local interconnects, for example to generate photonic input states for quantum computers and simulators in a convenient manner via optical fibers. One of the most important parameters of a fiber-coupled SPS is the overall coupling efficiency \(\eta_{\mathrm{tot}}\), i.e. the probability that a photon will be coupled into the core of the fiber after a trigger event. This parameter results from the product of the QD occupation probability \(\eta_{\mathrm{exc}}\), the internal quantum efficiency of the QD \(\eta_{\mathrm{int}}\), the photon extraction efficiency \(\eta_{\mathrm{ext}}\) and the coupling efficiency between source and fiber \(\eta_{\mathrm{sf}}\): \(\eta_{\mathrm{tot}}=\eta_{\mathrm{exc}}\times\eta_{\mathrm{int}}\times\eta_{\mathrm{ext}}\times\eta_{\mathrm{sf}}\). \(\eta_{\mathrm{exc}}\) and \(\eta_{\mathrm{int}}\) are close to one when using suitable resonant excitation and high-quality QDs. \(\eta_{\mathrm{ext}}\) depends on the device design, which is usually optimized using numerical methods, and in the case of CBG structures reaches values beyond 80% [53], albeit for the high NA of the collection optics in the range of 0.6-0.8 that is usually present in quantum-optical experiments. Concerning fiber-coupling solutions, it is therefore a major challenge to achieve high \(\eta_{\mathrm{sf}}\).
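As a simple numerical illustration of this efficiency budget, the sketch below multiplies the four factors for a few assumed values of \(\eta_{\mathrm{sf}}\). The numbers are illustrative only (\(\eta_{\mathrm{ext}}\approx 0.8\) as quoted for CBGs, near-unity preparation and internal efficiency), and the 80 MHz excitation rate is an assumed, typical pulsed-laser repetition rate rather than a value from the cited works.

```python
# Minimal sketch: efficiency budget eta_tot = eta_exc * eta_int * eta_ext * eta_sf
# of a fiber-coupled SPS and the resulting in-fiber photon rate.
# All numbers are illustrative, not measured values from the cited works.
def eta_tot(eta_exc=0.95, eta_int=0.95, eta_ext=0.80, eta_sf=0.50):
    return eta_exc * eta_int * eta_ext * eta_sf

REP_RATE_MHZ = 80.0  # assumed pulsed excitation repetition rate

for eta_sf in (0.1, 0.5, 0.9):
    e = eta_tot(eta_sf=eta_sf)
    print(f"eta_sf = {eta_sf:.1f} -> eta_tot = {e:.2f} "
          f"-> ~{e * REP_RATE_MHZ:.1f} MHz photons in the fiber")
```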
On the one hand, conventional single-mode fibers with a small refractive index contrast between core and cladding have a large mode size, causing a small NA of around 0.1. Furthermore, simple settings suffer from poor mode matching between source and fiber. Thus, the optical fiber collects only a small fraction of the photons emitted from the QDs. In order to counter these problems, one interesting approach is evanescent field coupling using tapered fibers. A fiber-pulling technique is able to form tapered micro-fibers, which couple the emission from QD devices with an efficiency of up to 23% [272]. It is also possible to use a single-side tapered fiber and integrate it with tapered QD devices [273]. Proper designs and alignments can lead to adiabatic mode transfer between tapered fiber and QD devices with near-unity coupling efficiency [274]. Although evanescent couplings via tapered fibers are very effective in improving coupling efficiency, the approach could be impractical due to the lack of mechanical stability and the need for continuous alignment to maintain high coupling efficiency. A further approach aims at far-field coupling between the sources and the fiber. Maximizing \(\eta_{\mathrm{tot}}\) in this setting with a large parameter space requires high computational effort. A recent comprehensive numerical study maximized \(\eta_{\mathrm{tot}}\) for micromesas, microlenses, micropillars and CBGs for emission wavelengths of 930 nm, 1.3 \(\mu\)m and 1.55 \(\mu\)m coupled to a single-mode fiber [271]. Here, an intermediate achromatic microlens was considered to maximize the mode matching between source and fiber, as presented in Fig. 21(a). Fig. 21(b) shows the calculated field intensity when considering a QD-micropillar, and panel (c) compares the mode profile of the light field at the fiber facet and the light field confined in the core of the fiber. Excellent mode overlap of up to 95% and total efficiencies of up to 83% were achieved by numeric optimization of the SPS-lens-fiber system.

Figure 21: Numeric simulation and optimization of fiber-coupled QD single-photon sources (SPSs). (a) Numerical setting, including the SPS (QD-micropillar planarized with benzocyclobutene in this case), mode-matching aspheric microlens and a single-mode optical fiber. (b) Corresponding light field intensity distribution calculated via the finite element method. The light is collected by the aspheric microlens and focused on the core of the fiber. (c) Calculated mode profile of the optical fiber (dashed black trace) and of the incident emission of a QD integrated into a micromesa, microlens, CBG resonator, and a micropillar for two orientations of the emitter dipole. The calculations yield excellent mode overlap of 89%, 92%, 90% and 95% for the four considered geometries. Reprinted from Ref. [271].

The on-chip fiber coupling of QLSs based on semiconductor QDs is technologically challenging for several reasons. For example, QDs are typically embedded in intricate nanophotonic structures or cavities to maximize the photon extraction efficiency. These have lateral dimensions in the micrometer range, so that a sub-\(\mu\)m alignment accuracy between the source and the single-mode fiber is required to achieve high photon coupling efficiency between the two. Furthermore, QDs must be operated at cryogenic temperatures in order to ensure a sufficiently high luminescence yield. This complicates the fiber-source adjustment, which usually cannot be done in the cryostat at the operating temperature of the QD. It should be mentioned that in the case of evanescent microfiber-coupled QD structures, a low-temperature alignment using x-y-z stages is possible in principle, but permanent bonding of the fiber is not. Therefore, adjustment and coupling techniques must be developed and used that can be performed at room temperature without active optical alignment to the QD signal and that ensure a robust and permanent source-fiber connection. It is important to note that these interconnect solutions must be capable of operating at a few 10 K and survive many cool-down cycles while maintaining micron-precise source-fiber coupling. In order to ensure this, it is important, for example, that the coefficients of thermal expansion of the materials used, and in particular of the adhesive used to fix the fiber holder, differ only slightly. In practice, this could only be guaranteed to a limited extent, so that mechanical stresses usually arise, which lead to a temperature-dependent (strain-induced) spectral shift of the QD emission of up to a few nm [279]. A straightforward fiber coupling solution is based on optical adjustment of the fiber and subsequent gluing of it at the position of the QD-SPS. The challenge here is the optical adjustment at room temperature without a direct signal from the QD itself. In Ref. [275], the wetting layer signal of a QD-micromesa, which is strong enough even at room temperature, was used to align the fiber.

Figure 22: Fiber-coupling techniques based on optical source-fiber alignment at room temperature. (a) Optical alignment and gluing of a single-QD micromesa to a multimode fiber. 1. Laser light is coupled into the fiber, which is scanned across the sample surface. 2. At the position of the QD-micromesa, wetting layer emission is generated and collected by the fiber as optical feedback. 3. After lifting the fiber, epoxy glue is applied to the surface at the position of the QD-micromesa. 4. The fiber is brought into contact with the sample and the glue is cured. (b) Optical image of the fiber-coupled QD-micromesa. (c) Schematic view of a fiber-coupled, electrically driven QD-micropillar. (d) Corresponding reflected laser signal obtained by scanning a single-mode fiber across the sample surface. The electrical contact leading to high reflected intensity is clearly identified. Before gluing the fiber similar to (a), fine adjustment is performed by maximizing the electroluminescence of the micropillar collected by the single-mode fiber. Then the fiber is glued similar to (a). (e) Alternative fiber alignment technique using the interference signal between a single-mode fiber and the sample surface as optical feedback while scanning the sample surface. Optical images of micromesas with 2 \(\mu\)m (f) and 0.5 \(\mu\)m (h) diameter in the center of an etched sample area. (g, i) Corresponding surface maps of the collected interference signal. The positions of the mesas can be determined with 50 nm (lateral) accuracy. (j) Optical image of a QD-micromesa with emission in the telecom O-band. The interference method was used for mesa-fiber alignment. (a, b) Reproduced from Ref. [275] under Creative Commons CC BY license. (c-d) Reprinted from Ref. [276] with the permission of AIP Publishing. (e-j) Reprinted from Ref. [277].

As shown in Fig.
22(a), the fiber was scanned over the relevant sample area under optical excitation from the fiber by laser light. The spatially resolved luminescence was collected via the same fiber and analyzed by a spectrometer with regard to wetting layer emission. In this way, the position of the QD-micromesa could be reliably determined via the wetting layer signal, and the fiber was then glued in alignment with the QD-micromesa. A correspondingly fabricated fiber-coupled QD-micromesa is shown in Fig. 22(b). It emits single photons with a wavelength of 930 nm directly into a multimode fiber. A similar approach was taken in Ref. [276] and applied to couple an electrically driven QD-micropillar to a fiber (see Fig. 22(c)). In this case, the reflected signal from the electrical contact was used for the rough adjustment (see Fig. 22(d)) before the fine adjustment was made using electroluminescence from the QD-micropillar. Using this approach, the single-mode fiber coupling of the QD-micropillar was achieved. Another variant of this approach uses interference phenomena between the fiber facet and the sample surface to determine the position of the QD structure [277]. As shown in Fig. 22(e), the sample surface was scanned with light from a supercontinuum source and the reflected light was analyzed using a spectrometer. In this way, spatially resolved interference images can be recorded (see Fig. 22(g, i)) in order to determine the position of QD-microstructures with a sub-\(\mu\)m extension with a lateral resolution of 50 nm, before gluing to the fiber is performed. With this method, it was possible to couple a QD-micromesa with emission in the telecom O-band to a single-mode fiber (SMF) (see Fig. 22(j)), which was then integrated into a stand-alone SPS [278]. An interesting alternative to the coupling techniques described is based on 3D two-photon lithography. This modern microstructuring process can be used to produce microlenses on the one hand and fiber holders on the other, very flexibly and with great precision. The application of this technique for the fabrication of fiber-coupled SPSs is illustrated in Fig. 23(a-d).

Figure 23: SPS fiber coupling via a 3D printed holder (a-d) and a polydimethylsiloxane stamp (e, f), respectively. (a) Schematic of the coupling scheme. First, a total internal reflection microlens is printed onto a QD-micromesa via 3D two-photon lithography to efficiently extract the photons from the SPS. Then the fiber holder, aligned with sub-\(\mu\)m accuracy to the source, is printed with 3D two-photon lithography. Finally, the fiber, with a numerical aperture (NA) matched focusing lens printed onto its end facet, is inserted into the holder to collect the photons from the total internal reflection lens. (b-d) Optical microscopy images of the lensed fiber, the holder with inserted fiber and the glued fiber. The inset in (d) shows the device mounted to a copper holder. (e) Schematic view of pick-and-place transferring a hole-CBG SPS to the core of a standard telecom fiber using a polydimethylsiloxane stamp. The optical image shows the facet of the fiber with the aligned hole-CBG (green color) in the center. (f) Optical microscopy image of the finished device and close-up view of the laser-illuminated hole-CBG. (a-d) Reproduced from Ref. [279] under Creative Commons CC BY license. (e, f) Reproduced from Ref. [280] with permission from John Wiley and Sons.
As seen in panel (a), a high-NA total internal reflection microlens can be printed directly over the QD-SPS via 3D two-photon lithography to effectively collect the emitted photons and focus them onto the fiber core. The fiber itself is guided by a holder, also manufactured using 3D two-photon lithography, and aligned with the source. For a durable connection, the holder is glued to the semiconductor chip together with the lensed fiber in the last process step. In this way, an SMF-coupled QD-SPS with excellent emission properties at 930 nm was realized [279]. Still another approach uses the pick-and-place technique for SPS-fiber alignment. Fig. 23(e, f) shows a corresponding example in which a QD-SPS based on a hole-based circular Bragg grating (hole-CBG), i.e. a CBG resonator in which the trenches are replaced by hole arrays, was coupled to an SMF. For this purpose, the hole-CBG structure was picked up after fabrication with a polydimethylsiloxane stamp, with the help of which it was transferred to the facet of the SMF with high precision. The excellent alignment accuracy was impressively demonstrated by the backside illumination of the hole-CBG structure (see Fig. 23(f)). In this way, a fiber-coupled telecom O-band SPS with a total efficiency of 4.6% was fabricated [280]. Similar results, with a total efficiency of 5.8%, were achieved before for a nanowire-coupled InGaAs QD emitting at 970 nm [281]. The techniques and results presented show the enormous development in the field of fiber-coupled SPSs in recent years, and Table 2 summarizes the state of the art in the field. Despite the enormous progress, great efforts are still necessary, in particular to further increase the source-fiber coupling efficiency so that the theoretically predicted values can be achieved. The points discussed reflect only a small part of the activities. For more details on fiber-coupled SPSs, we refer to a recently published review article [271]. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline Structure & Det. fab. & Alignment & NF/FF & Fiber & Perma. & \(\lambda\) & \(\eta_{\text{ext}}\) & \(\eta_{\text{tot}}\) & Ref. \\ \hline QD-PC-cavity & no & Optical & NF & MF & no & 914 nm & 41\% & 23\% & [272] \\ QD-micromesa & yes & PL wetting layer & NF & MMF & yes & 930 nm & 0.28\% & 0.28\% & [275] \\ QD-microlens & yes & 3D printing & FF & SMF & yes & 930 nm & – & 0.56\% & [279] \\ QD-micropillar & no & Reflection, EL & NF & SMF & yes & 930 nm & – & – & [282] \\ QD-micromesa & yes & Interference & NF & SMF & yes & 1.3 \(\mu\)m & – & 1\% & [278] \\ QD-nanowire & no & Pick-and-place & NF & SMF & yes & 970 nm & – & 5.8\% & [281] \\ QD-hole-CBG & no & Pick-and-place & NF & SMF & yes & 1.3 \(\mu\)m & – & 4.6\% & [280] \\ \hline \end{tabular} \end{table} Table 2: Comparison of fiber-coupled QD-SPSs. The table contains information about the SPS design, the fabrication method (deterministic nanofabrication yes/no), the alignment technique, near-field (NF) or far-field (FF) coupling, the type of fiber (multi-mode fiber (MMF), single-mode fiber (SMF), micro-fiber (MF)), permanent coupling (yes/no), wavelength, extraction and total efficiency, and the corresponding reference. The table does not include solutions where the fiber coupling is performed outside the cryostat (using intermediate free-space optics).
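The \(\eta_{\mathrm{sf}}\) values behind the total efficiencies in Table 2 are ultimately governed by the overlap between the focused source field and the guided fiber mode, as in the 89-95% overlaps computed in Ref. [271] (Fig. 21(c)). For two aligned Gaussian modes this overlap has a simple closed form; the sketch below evaluates it for illustrative spot sizes (the 5.2 \(\mu\)m mode-field radius is a typical datasheet value for a standard telecom SMF around 1.55 \(\mu\)m, not a value from Ref. [271]).

```python
# Minimal sketch: power overlap of two aligned Gaussian modes with 1/e^2 field
# radii w1 and w2 (flat phase fronts, no lateral or angular misalignment assumed).
# Mode-field radii are illustrative, not the simulated profiles of Ref. [271].
def gaussian_overlap(w1_um: float, w2_um: float) -> float:
    return (2.0 * w1_um * w2_um / (w1_um**2 + w2_um**2)) ** 2

# Focused SPS spot vs. the ~5.2 um mode-field radius of a standard single-mode
# fiber around 1.55 um (typical datasheet value)
for w_spot in (3.0, 4.5, 5.2, 7.0):
    print(f"w_spot = {w_spot:.1f} um -> overlap = {gaussian_overlap(w_spot, 5.2):.2f}")
```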
## 6 Performance of quantum dots as stationary qubits and as sources of flying qubits This section is dedicated to the optical and quantum optical properties of QD-based QLSs, spin-photon interfaces and photonic cluster state sources. In this context, it is interesting to note that quantum nanophotonics was inspired by and benefited greatly from previous studies on atomic cQED and quantum optics. For example, S. Haroche and A. Aspect were the first to demonstrate vacuum Rabi oscillations in a strongly coupled atom-microcavity system [283] and to use the radiation cascade of calcium for realizing the Einstein-Podolsky-Rosen-Bohm Gedankenexperiment [284], respectively. These and other concepts from quantum optics were adopted later to demonstrate cQED effects also in semiconductor systems [285, 286, 47, 287] and to apply the radiative biexciton-exciton cascade of QDs for the generation of time-correlated [288] and polarization entangled photon pairs [30, 31]. Significant progress has been made since then regarding the application of QDs in photonic quantum technology, and in the following QLSs are first presented, with all relevant emission wavelengths in the range from 780 nm to 1.55 \(\mu\)m being discussed. They are compared with regard to important emission parameters such as photon extraction efficiency and quantum properties such as single-photon purity and indistinguishability. In the second part of this section, we turn to concepts for efficient spin-photon interfaces and entangled photon pairs as well as photonic cluster state generation. Figure 24: Performance of GaAs QDs in AlGaAs nanoholes obtained by local Al-droplet etching. (a) Typical example of PL spectrum of a GaAs QD under non-resonant excitation, showing the isolated neutral exciton emission (X) and additional emission lines attributed to ground state and hot trions. (b) Second-order autocorrelation histogram for the X line under two-photon excitation of the XX state. (c) Example of PL spectrum under two-photon excitation (TPE). (d) XX-X polarization entanglement fidelity as a function of the excitonic fine-structure-splitting \(E_{\rm FSS}\) – see inset of (c) – tuned via strain, induced by a microprocessed piezoelectric actuator. (a – c) Reprinted from _Cove de Silva et al.__2021_[236] under Creative Commons CC BY license. (d) Reprinted from _Huber et al.__2018_[120]. Copyright (2018) by the American Physical Society. ### Quantum dot single- and entangled photon sources emitting around 780 nm QLSs with emission in the near-infrared are interesting for free-space quantum communication and integrated quantum photonics. Among different QD types, we focus here on GaAs QDs in nanoholes obtained via the local droplet etching (see Section 4.2.3) and with emission wavelength around 780 nm [236]. Besides the facility of obtaining QDs with low density (\(<10^{8}\) cm\({}^{-2}\)) for single QD devices, the negligible strain, limited alloy disorder, high ensemble homogeneity, high in-plane shape symmetry, and relatively large QD size compared to the free-exciton Bohr radius in GaAs [289] lead to rather unique properties. In particular, the large QD size yields enhanced oscillator strengths, manifesting in spontaneous radiative decay times of the order of 200 ps for confined excitons and trions and 100 ps for biexcitons [290, 291, 292]. This allows high-rate excitation and alleviates the effect of dephasing mechanisms. In spite of the dense excitonic levels resulting from the "weak confinement" regime (see PL spectrum in Fig. 
24(a)), driving QDs resonantly with the two-photon-excitation method results in PL spectra dominated by the XX and X emission (Fig. 24(b)) and outstanding \(g^{(2)}(0)\) values below \(10^{-4}\)[293], see Fig. 24(c). The short lifetime, and hence relatively large Fourier-transform-limited (natural) linewidths of about 2-3 \(\mu\)eV, combined with high in-plane symmetry (see Fig. 14(e) and inset of Fig. 24(a)), which result in ensemble-averaged \(E_{\rm FSS}\) of \(<3\)\(\mu\)eV [236], make these QDs ideally suited as sources of polarization-entangled photon pairs [53, 291, 292]. To ensure full cancellation of the fine-structure-splitting and simultaneous tuning of the emission energy, multiaxial strain actuators based on laser-microprocessed piezoelectric substrates have been introduced [294, 295]. By employing them to tune the \(E_{\rm FSS}\) of a GaAs QD, entanglement fidelities of up to about 98% have been observed [120] (see Fig. 24(d)). Recent investigations have pointed out that the residual deviation from perfect entanglement is due to a combination of several phenomena, which we discuss in more detail in Sec. 6.5. Also for entangled photon generation the short intrinsic QD lifetimes contribute to alleviate the effect of inhomogeneous dephasing. At the same time, the large QD size and consequently small spacing between ground-state and excited states make state-of-the-art GaAs QDs vulnerable to thermally activated decoherence due to interactions with acoustic phonons [296]. The high ensemble homogeneity of GaAs QDs (wavelength spread of a few nanometers in an ensemble [235, 292, 297, 236]) facilitates experiments and applications relying on TPI among photons emitted by independent QDs [298, 299, 300] and have led to the highest TPI visibility to date [300], as discussed in Section 8.1.3 (see Fig. 39(c)). High ensemble homogeneity combined with post-growth fine-tuning provided by strain and electric fields may lead to scalable QD hardware. ### Quantum dot quantum light sources emitting at around 900 nm As the pioneering work on S-K QDs was carried out on the InGaAs/GaAs material system, typically resulting in QD emission wavelengths between 890 nm to 970 nm (see Section 4.2.2), QD-based QLSs emitting in this wavelength range have the longest history. Since the first-time demonstration of single-photon emission from epitaxial QDs by Michler et al. in 2000 [28], these QLSs developed to a mature quantum technology enabling high performance quantum light generation. To achieve high photon extraction efficiencies, QDs can be integrated into different types of photonic structures [154], including microlenses, micropillars, CBG resonators, PC cavities and photonic wires as discussed in previous sections. A frequently used type of photonic structure are micropillar cavities [47, 301], which enable large Purcell enhancements and high photon extraction efficiencies in a narrow spectral range along with directional emission normal to the sample surface [302]. Using resonant excitation, single photons can be generated on-demand with near-unity generation probability while keeping dephasing low. In 2016, Ding et al. [50] reported an SPS simultaneously achieving a high photon extraction efficiency of 66%, a single-photon purity of 99.1%, and photon indistinguishability of 98.5% (see Fig. 25(a)) using QD-micropillars with Q-factor around 6000 [304]. 
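As a rough orientation for the cavity parameters quoted here, the ideal Purcell factor expected for an emitter perfectly matched to a cavity mode is \(F_{P}=\frac{3}{4\pi^{2}}\left(\frac{\lambda}{n}\right)^{3}\frac{Q}{V_{\mathrm{mode}}}\). The sketch below evaluates this upper bound for a Q-factor of about 6000 and a few assumed mode volumes (the actual mode volume of the device in Ref. [50] is not given here); measured Purcell factors are typically much smaller because of spatial, spectral and polarization mismatch between emitter and mode.

```python
# Minimal sketch: ideal Purcell factor F_P = 3/(4*pi^2) * (lambda/n)^3 * Q / V_mode
# for an emitter at the cavity-field maximum and in perfect spectral resonance.
# Mode volumes below are assumed, typical micropillar values, not taken from Ref. [50];
# experimentally observed Purcell factors usually lie far below this upper bound.
import math

def purcell_factor(q: float, v_mode_in_cubic_wavelengths: float) -> float:
    """Ideal Purcell factor with V_mode given in units of (lambda/n)^3."""
    return 3.0 / (4.0 * math.pi**2) * q / v_mode_in_cubic_wavelengths

Q = 6000.0
for v in (5.0, 10.0, 20.0):
    print(f"V_mode = {v:4.1f} (lambda/n)^3  ->  F_P (ideal) = {purcell_factor(Q, v):5.1f}")
```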
While this high performance level was reached with non-deterministic device approaches, deterministic fabrication technologies are useful to increase the device yield by spatio-spectrally matching the emitter-mode coupling. Using in situ photolithography [263], Gazzano et al. fabricated deterministic micropillar cavities containing single pre-selected QDs [305] with a high photon extraction efficiency and a high degree of photon indistinguishability of 0.79\(\pm\)0.08 and (82\(\pm\)10)%, respectively. Implementing electrical gates in p-i-n doped micropillars, this technology was further developed to enable a spectral fine-tuning of the quantum emitters [306]. This enabled the fabrication of a near-optimal SPS with a photon indistinguishability of up to (\(99.56\pm 0.45\))% in 2016 [49]. In a similar work by Unsleber et al., extraction efficiencies of up to (\(74\pm 4\))% were achieved using a deterministically fabricated QD-micropillar device [307]. Experimental realizations of coherent resonant excitation schemes often rely on polarization filtering for suppressing the excitation laser at the wavelength of the quantum emitter, which in turn reduces the photon extraction efficiency by 50%. In 2019, this limitation was overcome by Wang et al. using polarization-selective Purcell microcavities [258]. Here, narrow-band elliptical micropillars, as previously used to achieve linearly polarized emission with Purcell-enhanced photon extraction efficiency under non-resonant excitation [48] (see Fig. 25(c)), and broad-band elliptical Bragg gratings (cf. discussion on CBGs in Section 6.1) were employed to realize a polarization-orthogonal excitation-collection scheme minimizing the polarization filtering loss under resonant excitation. The authors demonstrated a polarized single-photon efficiency of 0.60\(\pm\)0.02, a single-photon purity of 0.975\(\pm\)0.005 and an indistinguishability of 0.975\(\pm\)0.006 for their micropillar device.

Figure 25: (a) Schematic (left) and spectral detuning vs. temperature map (right) of a QD micropillar cavity used for the generation of highly indistinguishable photons in Ref. [50]. (b) Illustration of a deterministically-fabricated electrically-gated QD micropillar cavity (left) and measurement data of HOM two-photon interference (right) from Ref. [49]. (c) Schematic of an elliptical QD micropillar cavity (left) enabling the highest photon extraction efficiencies and highly indistinguishable photons (right). (d) Schematic and cross-sectional SEM image of a single-photon light-emitting diode based on electrically contacted p-i-n doped micropillar cavities with self-organized InAs/GaAs-QDs [303]. (e) \(g^{(2)}(\tau)\) measurements on a single-photon LED from (d) as a function of the excitation repetition rate up to the GHz range [51]. (a) Reprinted by permission from _Ding et al.__2016_[50]. Copyright 2016 by the American Physical Society. (b) Adapted from _Somaschi et al.__2016_[49] with permission of Springer Nature: Copyright 2016 Springer Nature. (c) Reprinted from _Wang et al.__2019_[258] with permission of Springer Nature: Copyright 2019 Springer Nature. (d) Reprinted from _Heindel et al.__2010_[303] with the permission of AIP Publishing. (e) Reprinted from _Schlehahn et al.__2016_[51] under Creative Commons CC BY license.
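Indistinguishability values such as those quoted above are extracted from HOM two-photon interference histograms (cf. Fig. 25(b)), most simply by comparing the zero-delay coincidence-peak area recorded with co-polarized (interfering) photons against that recorded with cross-polarized (distinguishable) photons. The sketch below illustrates this raw evaluation with made-up histogram data; corrections for residual multi-photon emission or setup imperfections, as applied in the cited works, are omitted.

```python
# Minimal sketch: raw HOM visibility from the zero-delay coincidence-peak areas
# measured with co-polarized (indistinguishable) and cross-polarized
# (distinguishable) photons. The histogram counts below are made-up numbers.
import numpy as np

def peak_area(counts, delays_ns, center_ns=0.0, window_ns=2.0):
    """Sum coincidence counts within +/- window around the chosen peak."""
    counts, delays_ns = np.asarray(counts), np.asarray(delays_ns)
    mask = np.abs(delays_ns - center_ns) <= window_ns
    return counts[mask].sum()

def hom_visibility(area_parallel, area_orthogonal):
    return 1.0 - area_parallel / area_orthogonal

# Illustrative zero-delay regions of the two histograms (1 ns bins)
delays = np.arange(-6, 7, 1.0)
counts_par = np.array([0, 0, 0, 0, 2, 10, 25, 11, 3, 0, 0, 0, 0])    # co-polarized
counts_ort = np.array([0, 0, 0, 0, 20, 95, 240, 100, 22, 0, 0, 0, 0])  # cross-polarized
v = hom_visibility(peak_area(counts_par, delays), peak_area(counts_ort, delays))
print(f"raw V_TPI = {v:.2f}")
```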
Another route to combine coherent pumping with high photon extraction efficiencies are advanced excitation schemes using driving laser fields which are spectrally detuned with respect to the transitions of the quantum emitter. Examples are two-photon resonant excitation of the XX-X cascade without [38, 39, 310] and with [168] stimulation pulse, dichromatic excitation of a single transition [311, 312], or recent theory proposals for swing-up schemes using spectrally far-detuned frequency-modulated laser pulses [313]. The latter is also referred to as SUPER (for Swing UP of quantum emittER population) scheme and has recently been demonstrated for the first time experimentally by Karli et al. [314]. For some applications, including those relying on the XX-X cascade (e.g. for the generation of polarization entangled photon pairs), it is beneficial to have a high photon extraction efficiency in a wider spectral range. Examples of such photonic structures offering broad-band capability are photonic (nano)wires [55], lens structures, and the aforementioned CBGs [315]. Following the pioneering top-down QD-photonic-nanowire approach of Claudon et al. [55] with a photon extraction efficiency of 72% and \(g^{(2)}(0)\approx 0.01\), a bright SPS based on bottom-up grown tapered InP nanowires with integrated positioned InAsP QDs was demonstrated by Reimer et al. in 2012 [159], with a reported photon extraction efficiency of 42% and a measured antibunching value of \(g^{(2)}(0)<0.5\) under continuous wave excitation. While the spatial distribution of the nanowires was statistically random in this work, pre-patterned substrates can be used to achieve site-controlled growth of photonic nanowires with integrated single quantum emitters [316]. Employing deterministically fabricated microlenses with embedded pre-selected QDs, Gschrey et al. demonstrated in 2015 an SPS [56] with a broadband photon extraction efficiency of \((23\pm 3)\%\) into an NA of 0.4, low multi-photon emission probabilities of \(g^{(2)}(0)<0.01\), and a high photon indistinguishability of (80\(\pm\)7)%, even beyond saturation of the quantum emitter. In follow-up work, these QD-microlenses were used to explore dephasing mechanisms limiting the photon indistinguishability [160], revealing photon indistinguishability of up to \((96\pm 4)\%\) under quasi-resonant excitation of the quantum emitter at short temporal separations (2 ns) and low temperatures (10 K). As revealed in this study, the semiconductor environment in QD samples results in non-Markovian noise correlations, which can lead to reduced photon indistinguishability at larger temporal separation (see Fig. 25(d)). To further push the achievable single-photon flux at a given extraction efficiency, Schlehahn et al. demonstrated an innovative approach using a mode-locked vertical-external-cavity surface-emitting laser at 500 MHz repetition rate [317]. A major advantage of semiconductor-based QLSs, which is not exploited in experiments using optical excitation, is the possibility to realize complex engineered devices including diode structures for electrical charge carrier injection. This is highly beneficial for applications, not only because higher degrees of device integration become possible, as bulky laser systems become obsolete, but also because the clock rate of quantum cryptographic implementations can easily be adjusted and pushed to its limits (see also Section 8.1.1).
The first electrically injected QD-based SPS was reported in pioneering work by Yuan et al. [318]. Later, the photon extraction efficiency of electrically triggered SPSs could be significantly increased to 34% by embedding QDs in p-i-n doped micropillar cavities with ring-shaped top-contacts [303] (see Fig. 25(d)). In follow-up work by the authors, the overall efficiency could be pushed further to values exceeding 60% (including electrical losses), while excitation repetition, or clock, rates of up to the GHz range were achieved for this type of device [51] (cf. Fig. 25(e)). As discussed in Section 8.1.1, these efficient single-photon emitting diodes have in turn also been employed for the first QKD experiments using electrically injected QD-devices [319, 320]. In another approach, QDs were embedded in diode structures to electrically generate polarization-entangled photon pairs via the XX-X radiative cascade [321, 322]. These so-called entangled light-emitting diodes were later employed for the first entanglement-based QKD experiments using QD-devices [323] (see Section 8.1.2 for details).

### Quantum dot quantum light sources emitting in the telecom O- and C-band

With regard to fiber-based quantum networks, QLSs with emission in the telecom O-band and C-band at 1.3 \(\mu\)m and 1.55 \(\mu\)m wavelength form important building blocks. In these transmission bands, glass fibers have a local minimum attenuation of 0.31 dB/km at 1.3 \(\mu\)m and an absolute minimum attenuation of 0.15 dB/km at 1.55 \(\mu\)m wavelengths, which makes them ideal for optical data transmission over long distances. Notably, the O-band is relevant due to its near-zero material dispersion, above all for high-bit-rate quantum communication over medium distances of up to around 50 km. Compared to many other QLSs, which are based, for example, on nitrogen vacancy centers in diamond with fixed spectral properties, semiconductor QDs have the great advantage that their emission wavelength can be flexibly nanoengineered through the choice of material and the growth conditions. InGaAs QDs on a GaAs substrate or on an InP substrate are particularly relevant for emission in the telecom O- and C-band, as discussed in Section 4. Due to the described extensive optimization in the epitaxial growth of telecom QDs, enormous progress has been made in the field of O-band and C-band QD-SPSs in recent years. Regarding the development of telecom-wavelength QD-SPSs, epitaxially grown quantum emitters were integrated into various nanophotonic structures to increase the photon extraction efficiency. Moreover, cavity-based concepts, in analogy to the NIR SPSs discussed in Section 6.2, use cQED effects for performance optimization, for instance in terms of high photon indistinguishability. On the one hand, the implemented concepts include simple QD micromesas, QD microlenses and QD solid immersion lenses, some of which were manufactured deterministically [324, 325, 326]. Using this broadband approach, O-band (C-band) SPSs with extraction efficiencies \(\eta_{\text{ext}}\) of up to 17% [327] (13% [328]) could be demonstrated. Temperature-dependent emission spectra and \(g^{(2)}(\tau)\) functions of an O-band micromesa SPS are presented in Fig. 26(d) [329]. The high temperature stability of this device makes it compatible with cooling via a stand-alone Stirling cryocooler with a base temperature of about 30 K [330]. Single-photon emission was achieved with a high multi-photon suppression with \(g^{(2)}(0)\) as low as \(0.027\pm 0.005\) [326].
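Multi-photon suppression values like the \(g^{(2)}(0)\) just quoted are commonly obtained from pulsed Hanbury Brown and Twiss histograms as the ratio of the integrated zero-delay coincidence peak to the mean area of the uncorrelated side peaks. A minimal sketch of this evaluation with made-up peak areas is shown below; background subtraction and blinking-related corrections are omitted.

```python
# Minimal sketch: pulsed g2(0) as the ratio of the zero-delay coincidence-peak
# area to the mean area of the uncorrelated side peaks in an HBT histogram.
# The peak areas are made-up numbers, not data from the cited works.
import numpy as np

def pulsed_g2_zero(peak_areas, center_index):
    areas = np.asarray(peak_areas, dtype=float)
    side = np.delete(areas, center_index)  # uncorrelated side peaks
    return areas[center_index] / side.mean()

# Integrated coincidence counts per laser-pulse peak (7 peaks, zero delay at index 3)
peak_areas = [1010, 985, 1002, 28, 991, 1021, 978]
print(f"g2(0) = {pulsed_g2_zero(peak_areas, center_index=3):.3f}")
```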
Further broadband telecom QD-SPSs include photonic wires [331, 332] and photonic horn devices [333] (see Fig. 26(a)) with \(\eta_{\text{ext}}=11\%\) (C-band) and \(g^{(2)}(0)\) in the few-percent range. Noteworthy, photonic wires based on epitaxially grown QDs can cover both the O-band and the C-band depending on the height and diameter of the dot-in-a-rod structure [331]. Resonator-based telecom O-band QD-SPSs include vertically emitting micropillars as presented in Fig. 26(b) with \(\eta_{\text{ext}}=3.3\%\) [334] and PC-based devices with \(\eta_{\text{ext}}=36\%\) [335], with \(g^{(2)}(0)\) values of 0.14 and 0.085, respectively. Moreover, laterally emitting tapered nanobeam devices were developed which feature \(\eta_{\text{ext}}=27\%\) and \(g^{(2)}(0)<0.1\) in the O-band [336] (see Fig. 26(c)). Recently there have also been interesting developments regarding CBG SPSs, which, thanks to their wavelength and material flexibility, also promise very good emission properties such as \(\eta_{\text{ext}}\) exceeding 90% in the telecom wavelength range [271, 282]. In experiment, CBG-SPSs have been demonstrated both in the O-band, with \(\eta_{\text{ext}}\) of 23% and \(g^{(2)}(0)=0.01\) [261], and in the C-band [337, 338], with \(\eta_{\text{ext}}\) of 17% combined with a very good single-photon purity of \(g^{(2)}(0)=0.0052\) [337] (see Fig. 26(e,f)). Overall, these results demonstrate the tremendous advances in telecom QD-SPSs that have been made in recent years. It is noticeable that the achieved \(\eta_{\text{ext}}\) values of about 10-40% are well below the theoretically predicted values of over 90% [271, 282] for these wavelengths and also lag behind the experimental \(\eta_{\text{ext}}\) values that were achieved for comparable NIR QD-SPSs (see Sections 6.1 and 6.2). On the one hand, this issue can be related to a non-ideal position of the QD in the nanophotonic structure. On the other hand, the systematic and clear deviation of \(\eta_{\text{ext}}\) strongly indicates that the optical quality of the telecom QDs, in terms of internal quantum efficiency, can be a limiting factor that directly affects the brightness, and also the reported \(\eta_{\text{ext}}\) if the latter includes the internal quantum efficiency as a factor, as is often the case in experimental evaluations. Determining the internal quantum efficiency for quantum emitters is a nontrivial task. One way to extract this important parameter is to perform time-resolved PL studies under variation of the optical density of states at the position of the QDs. Experimentally, this is done by systematically reducing the capping layer thickness of the QD sample [339]. A first study of this kind for O-band InGaAs QDs revealed an internal quantum efficiency of (85\(\pm\)10)% [340], which is a promising, but also non-ideal, value. In the future, it will be interesting to determine the internal quantum efficiency also for C-band QDs, and to include this parameter in the evaluation of QD-SPSs in order to explain a possible mismatch between theoretically predicted and experimentally obtained \(\eta_{\text{ext}}\). Another important aspect is the photon indistinguishability of the telecom QD-SPSs. While for NIR SPSs values close to one are obtained almost routinely (see Section 6.2), in the case of telecom sources it is still a major challenge to achieve significant photon indistinguishability. In fact, so far, results from HOM experiments show (non post-selected) two-photon interference visibilities \(V_{\text{TPI}}\) of a maximum of about 20% in the O-band [335, 336, 329] (see Fig.
27(a,b)) and 15% in the C-band [337], even under resonant excitation [341] (Fig. 27(c)). It is noteworthy that a significantly higher post-selected TPI visibility is often mentioned, but this value mainly reflects the finite temporal resolution of the HOM setup compared to the coherence time of the photons and is not relevant for typical applications in photonic quantum technology. For instance, in Ref. [335] a post-selected TPI visibility of \(0.97\pm 0.04\) (compared to the non-post-selected value of \(V=0.18\pm 0.01\)) was determined, considering a HOM resolution of 200 ps and a coherence time of (\(150\pm 29\)) ps. In two-photon interference measurements, spectral diffusion of the emitter significantly reduces the achievable visibility. Thus, the moderate photon indistinguishability of the telecom QD-SPSs indicates electronic and magnetic fluctuations in proximity to the QDs, which lead to increased decoherence and spectral fluctuations and thereby limit the TPI visibility [342, 119]. Defect states in the strain-reducing layer are possible causes of electronic fluctuations in O-band InGaAs QDs [329]. Future growth optimizations should therefore aim at optimizing the strain-reducing layer to ensure stable electrostatic conditions around the QDs. Furthermore, externally applied electric fields, as successfully practiced in the case of NIR QD-SPSs [49], could be used to suppress charge noise in order to maximize \(V_{\text{TPI}}\). Before we move on to the discussion of spin-photon interfaces and photonic-cluster-state sources, we summarize in Table 3 the state of the art of QD-based QLSs in the presented wavelength ranges from 780 nm to 1550 nm. As can be seen, a variety of device structures have been used to achieve high-performance QLSs with close-to-ideal values in terms of single-photon purity, indistinguishability and entanglement fidelity, as well as photon extraction efficiencies exceeding 80% at emission wavelengths below 1 \(\mu\)m. In contrast, although enormous progress has been achieved for QLSs emitting in the telecom O- and C-band, there is still a lot of room for improvement, especially regarding the indistinguishability, which is still below about 20% in the best case.

Figure 26: Telecom wavelength QD-SPSs. (a) Photonic horn based QD-SPS emitting in the C-band [333]. (b) Telecom O-band micropillar SPS based on InAs/GaAs QDs in a \(\text{Al}_{0.9}\text{Ga}_{0.1}\text{As}/\text{GaAs}\) DBR cavity [334]. (c) Tapered nanobeam based SPS with emission in the O-band. Courtesy of Ref. [336]. (d) Emission spectrum and temperature-dependent \(g^{(2)}(\tau)\) functions of a deterministically fabricated O-band micromesa SPS [329]. The device shows high temperature stability and strong multi-photon suppression up to 40 K with \(g^{(2)}(0)=0.076\) at 4 K. (e) Optical characteristics and SEM image of a C-band CBG-SPS and (f) temperature-dependent \(g^{(2)}(\tau)\) functions demonstrating also high stability up to 40 K, and \(g^{(2)}(0)=0.0052\) at 4 K [337]. (a) Reproduced from Ref. [333] under Creative Commons CC BY license. (b) Reproduced from Ref. [334] under Creative Commons CC BY license. (c) Reprinted with permission from Ref. [336]. Copyright 2020 American Chemical Society. (d) Reprinted from Ref. [329], with the permission of AIP Publishing. (e, f) Reprinted with permission from Ref. [337]. Copyright 2022 American Chemical Society.

Figure 27: Photon indistinguishability of telecom wavelength QD-SPSs determined via two-photon interference measurements in HOM configuration.
(a) HOM-correlation histogram of an O-band PC-based QD-SPS for parallel polarization and zoom-in (lower panel) of the center peak for parallel (solid dots) and orthogonal polarizations (open dots). Comparing the data under parallel and orthogonal polarization yields \(V_{\text{TPI}}=18\%\). (b) HOM-correlation histogram of a deterministically fabricated O-band QD-micromesa for parallel (left) and orthogonal (right) polarization. Evaluation of the data results in \(V_{\text{TPI}}=12\%\). (c) Photon autocorrelation histogram (left) and HOM-correlation histogram (right) of a C-band CBG-based QD-SPS under strict resonant excitation. Fitting the data yields \(g^{(2)}(0)=0.0236\pm 0.019\) and \(V_{\text{TPI}}=0.1446\pm 0.015\%\). (a) Reprinted from Ref. [335]. (b) Reprinted from Ref. [329], with the permission of AIP Publishing. (c) Reprinted from Ref. [341], with the permission of AIP Publishing. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Structure} & \multirow{2}{*}{Det. fab.} & \multirow{2}{*}{Exc. scheme} & \(\lambda\) & \multirow{2}{*}{NB/BB} & \multirow{2}{*}{\(F_{P}\)} & \multirow{2}{*}{\(\eta_{\text{ext}}\)} & \(g^{(2)}(0)\) & \multirow{2}{*}{\(V_{\text{TPI}}\)} & \multirow{2}{*}{\(F\)} & \multirow{2}{*}{Ref.} \\ & & & [nm] & & & & & & (pulsed) & \\ \hline CBG & yes & TPR & 780 & BB & \(\approx\) 3 & (85\(\pm\)3)\% & \(<\)1\% & (90.3\(\pm\)0.3)\% & (88\(\pm\)2)\% & [53] \\ Planar DBR & no & RF & 780 & NB & \(\approx\) 10 & 2.3\% & 1\% & (98.2\(\pm\)1.3)\% & (85.0\(\pm\)1.0)\% & [300] \\ Micropillar & yes & non-res. & 930 & NB & \(\approx\) 3-4 & 34\% & 17-40\% & – & 67\% & [264] \\ Photonic wire & no & non-res. & 930 & BB & – & 72\% & \(<\)1\% & – & – & [55] \\ Micropillar & yes & RF & 930 & NB & \(\approx\) 8 & 65\% & (0.28\(\pm\)0.12)\% & (99.56\(\pm\)0.45)\% & – & [49] \\ Micropillar & yes & RF & 930 & NB & \(\approx\) 6 & 66\% & \(<\)1\% & 98.5\% & – & [50] \\ Micropillar & no & el. & 930 & NB & \(\approx\) 3 & 61\% & (0.076\(\pm\)0.014)\% & (41.1\(\pm\)9.5)\% & – & [51] \\ Open cavity & no & RF & 930 & NB & \(\approx\) 11 & 82\% & 2.1 & 96.7\% & – & [157] \\ PC cavity & no & non-res. & 1300 & NB & \(\approx\) 4 & 36\% & (8.5\(\pm\)2.2)\% & 18\% & – & [335] \\ Micromesa & yes & p-shell & 1300 & BB & – & 5-10\% & 2-4\% & 12\% & – & [329] \\ CBG & no & p-shell & 1550 & BB & 3 & 17\% & (0.52\(\pm\)0.10)\% & 8\% & – & [337] \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of state-of-the-art QLSs based on semiconductor QDs. (abbreviations: det. fab.: deterministic fabrication, exc. scheme: excitation scheme, NB/BB: narrowband/broadband enhancement of emission, \(F_{P}\): Purcell-factor, \(F\): entanglement fidelity, TPR: two-photon-resonant excitation, RF: resonance fluorescence, non-res.: non-resonant excitation. ### Quantum dot spin-photon interfaces Spin-photon interfaces are important links between stationary qubits and flying qubits. Due to this functionality, they have diverse uses in quantum networks. In quantum repeater networks, for example, in connection with quantum memories, they can temporarily store the information to be transmitted locally. They are also needed to exchange quantum information between quantum processor nodes of a future quantum internet via quantum channels [108, 343], which requires the coherent coupling of distant (stationary) qubits. 
The basic idea behind spin-photon interfaces based on QDs is to entangle the spin degree of freedom of a confined electron or hole with the energy or polarization of an emitted photon. A milestone in this context was reached by W. B. Gao et al. with a sophisticated quantum optical experiment [344] using ultra-fast optical quantum control of a single QD spin, which had been demonstrated before by D. Press et al. [345]. The concept by W. B. Gao et al. is based on previous experiments on spin-state-dependent resonance fluorescence from a single-electron charged QD [346] and aims at realizing the entangled spin-photon state \(\left|\Psi\right\rangle=\frac{1}{\sqrt{2}}(\left|\uparrow\right\rangle\left| \omega_{\text{red}};H\right\rangle+i\left|\downarrow\right\rangle\left|\omega _{\text{blue}};V\right\rangle)\). It uses the optical control of a QD trion \(T_{r}\) (negatively charged QD) as depicted in Fig. 28(a). The depicted level scheme is based on a magnetic-field-induced splitting of the electron's spin states (magnetic field in Voigt configuration), with \(\left|\uparrow\right\rangle\) and \(\left|\downarrow\right\rangle\) respectively denoting spins parallel and antiparallel to the magnetic field direction. In the applied pulse sequence (see Fig. 28(b)), first a 5 ns resonant laser pulse \(\Omega_{\text{res}}\) drives the transition \(\left|\downarrow\right\rangle\leftrightarrow\left|T_{r}\right\rangle\) and thereby pumps the QD with a high probability of 87% into the \(\left|\uparrow\right\rangle\) state. Then a 4 ps \(\pi\)-pulse is applied to transfer the QD into the \(\left|\downarrow\right\rangle\) state. Subsequently, the entangled spin-photon pair is generated with a 1.2 ns resonant "entangler pulse", and a \(\pi/2\) or \(3\pi/2\) "measurement/preparation" pulse rotates the electron spin and projects it into the \((\left|\downarrow\right\rangle-i\left|\uparrow\right\rangle)/\sqrt{2}\) and \((\left|\downarrow\right\rangle+i\left|\uparrow\right\rangle)/\sqrt{2}\) state, respectively, after a photon detection event. Figure 28: Spin–photon entanglement using a charged QD. (a) Energy-level diagram of a single-electron-charged InGaAs QD under application of a magnetic field \(B_{x}\) in Voigt geometry. (b) Pulse sequence used to generate and verify spin-photon entanglement in the charged QD system. (c) Corresponding time correlogram between single-photon detection following the entanglement pulse and the detection of a photon during the first measurement/preparation pulse after a \(\pi/2\)-pulse (red squares). Black squares correspond to reference measurements between spin and photon detection events, taking different excitation/preparation cycles. (d) Normalized coincidences obtained by normalization by counts from correlated spin–photon pairs. Reprinted from _Gao et al. 2012_ [344] with permission of Springer Nature: Copyright 2012 Springer Nature. The whole sequence is repeated every 13 ns according to the pulsed laser repetition frequency. The spin-photon entanglement process is verified by photon correlation measurements between the single-photon detection events induced by the entanglement pulse and the detection of a photon during the measurement/preparation pulse. In the corresponding coincidence diagrams depicted in Fig. 28(c,d) (for a \(\pi/2\) measurement/preparation pulse) starting at the onset of the entanglement pulse (t = 0) presented in Fig. 
28b), the oscillations appearing for the correlated trace (red data points) clearly reflect the superposition of the photonic state in its two frequency components. In fact, the appearance of these oscillations constitutes a remarkable manifestation of the quantum coherence of the entangled spin-photon state. Comparing the result with correlations measured in different excitation/preparation cycles (black data points) yields an entanglement fidelity of \(F=0.46\pm 0.04\). For more details on the experimental scheme, we refer to Ref. [344]. Since the first demonstration of spin-photon entanglement in the QD system, there have been a number of interesting follow-up works in this area. In the same year, J. R. Schaibley et al. also demonstrated spin-photon entanglement in a QD with electric charge carrier control [347]. Here, resonant optical excitation with a similar pulse scheme as described above was used to generate spin-photon entanglement in a four-level QD system created by a magnetic field in Faraday configuration. In their case, an entanglement fidelity of \(0.59\pm 0.04\) was achieved. Also in 2012, again using an effective four-level QD system, K. De Greve et al. succeeded in demonstrating spin-photon entanglement with a fidelity of \(0.80\pm 0.085\), whereby the emitted photon was transmitted to the telecom C-band via frequency conversion [348]. This is an important step to enable long-distance quantum networks in which photons are transmitted in optical fibers with low loss. In another groundbreaking work, Y. He et al. demonstrated the quantum state transfer from a single photon to a distant QD electron spin using spin-photon entanglement [349]. This type of quantum information transmission is another important resource in the development of quantum networks that can be used for distributed quantum processing in the future. The concept developed by Y. He et al. is shown schematically in Fig. 29(a). Again, a four-level system with a \(\Lambda\) scheme between the QD trion and the two spin states of the ground state is used to generate spin-photon entanglement at Alice via a pulse scheme as described above. The frequency-encoded photon qubit generated is then transmitted to Bob at a distance of 5 m in order to carry out state encoding of the qubit's polarization there. Finally, polarization, frequency, and path degrees of freedom of the photon are measured jointly on the four GHZ-state basis, and via a feedback signal the photon polarization detected at Bob is deterministically transferred to the QD spin at Alice. It is interesting to note that ultrafast optical spin echo was applied for prolonging the QD's spin coherence to enable the remote state transfer experiment. Fig. 29(b) illustrates the quantum state transfer from Bob to Alice, where a state vector on the photon's Bloch sphere is transferred to the state vector on the spin's Bloch sphere via a flying qubit. The experimental results obtained in this way are shown in Fig. 29(c-e). Normalized coincidence counts are plotted, which show the probability that Bob's target photon state was successfully transferred to Alice's spin state. The coincidences of the desired state are shown in blue and those of the undesired state in gray, and analyzing these results yields quantum state transfer fidelities of \(F_{\left|H\right\rangle}=0.851\pm 0.017\), \(F_{\left|D^{+}\right\rangle}=0.756\pm 0.027\), and \(F_{\left|\sigma^{+}\right\rangle}=0.747\pm 0.027\). 
Based on the results obtained, it will be interesting to extend the distance between Alice and Bob in the future and also to perform quantum state transmission over longer distances via optical fibers. Further work on QD-based spin-photon interfaces includes waveguide and resonator systems with and without electrical control of the QD states. In such quantum devices, effects of light-matter interaction are used, for example to increase the spin initialization efficiency and qubit gate fidelity and thus to enable the generation of deterministic spin-photon entanglement in the future. In this context, Z. Luo et al. demonstrated that an electrically contacted QD photonic crystal nanocavity exhibits spin-dependent cavity reflectivity in the strong coupling regime [350]. Here, the reflectivity can also be controlled electrically, to deterministically load and stabilize an electron spin inside the QD. Figure 29: Quantum state transfer via spin-photon entanglement. (a) Level scheme of the used QD and illustration of the quantum state transfer concept between Alice and Bob separated by 5 m. After generating spin-photon entanglement, Alice sends the frequency-encoded photon qubit to Bob, where the to-be-teleported state is prepared in the photon’s polarization. Measuring the polarization, frequency, and path degrees of freedom of the photon jointly on the four Greenberger–Horne–Zeilinger (GHZ)-state basis and using the obtained results for feedback, the photon polarization is deterministically transferred to the QD spin at Alice. (b) Schematic illustration of the photon-to-spin remote state mapping process from the photon’s to the spin’s Bloch sphere. (c) Coincidence diagrams of the quantum state transfer comparing intended outcomes (blue bars) with undesired outcomes (gray bars) for the target photon states (\(|H\rangle\), \(|D^{+}\rangle\), \(|\sigma^{+}\rangle\)) and correlated spin states (\(|\downarrow\rangle\), \(|\rightarrow\rangle\), \(|\circlearrowright\rangle\)). Reprinted from _He et al. 2017_ [17]. Copyright (2017) by the American Physical Society. Waveguide-based spin-photon interfaces include QD-nanobeam structures for the quantum state transfer and the coherent optical control of a QD spin-qubit [351], and crossed suspended waveguides for interfacing an optically addressed spin qubit to a path-encoded photon [352]. Overall, enormous progress has been made in the development of spin-photon interfaces and in the realization of spin-photon entanglement in the last decade. This was made possible on the one hand by the high optical quality of the QDs, but above all by innovative ideas and very sophisticated quantum optical experiments. Further advances in the field can be achieved through increased in- and out-coupling efficiency, in which QDs are integrated into CBG resonators, for example. Furthermore, it will be interesting to increase the spin coherence time, e.g. for quantum state transfer over large distances. As shown in Ref. [103], all-optical Hahn echo decoupling can be used for this purpose, through which the electron spin coherence time could be increased from a few tens of nanoseconds to the microsecond regime.

### Quantum dots for entangled photon pair generation

Quantum entanglement is not only an intriguing physical effect, but also a key resource in photonic quantum technology. An application example is the quantum repeater concept, which is based on entanglement distribution between distant nodes of a quantum network, see Sections 2.1 and 8.1.3. 
In order to implement corresponding applications, quantum light sources are required, which in the best case emit entangled photon pairs on demand. Widespread sources of entangled photon pairs, which, however, generally do not meet the requirement of making photons available to the user at the touch of a button, are based on spontaneous parametric down-conversion processes [353]. Sources of this type with a non-deterministic emission process have already been widely used in photonic quantum technology. However, the classical emission statistics intrinsically limits their brightness in terms of the average photon pair generation probability per pulse to a rate that is typically < 11% [354]. This imposes a great challenge in advancing efficiency-demanding photonic quantum information technologies. In principle, this limitation can be tackled by heralding of photons, but only at the cost of a large experimental overhead [355]. In contrast to photon sources based on parametric down-conversion, QDs basically offer the possibility to develop on-demand sources of entangled photon pairs. For this purpose, the biexciton-exciton cascade of QDs can be used in an excellent way, as suggested in the seminal work by Benson et al. [29]. In fact, temporally correlated photon pairs are created in the emission cascade of QDs [288], with their polarization in one of the maximally entangled Bell states provided the QD has vanishingly small (on the scale of the homogeneous linewidth) fine-structure splitting, as mentioned in Sec. 3.1. This ideal degenerate case is compared in Fig. 30(b) with the typical QD situation with finite fine-structure splitting \(S\) in panel (a). In the latter case, the "which path" information is maintained so that only classical time-correlations can be observed between the biexciton and exciton photons. The generation of polarization-entangled photons via the biexciton-exciton cascade was first achieved in 2006 by Akopian et al. [30] and Young et al. [31]. In the first case, the studied QD exhibited a fine-structure splitting more than a factor of 10 larger than the homogeneous linewidth, so spectral post-selection was necessary (at the expense of photon flux) to detect quantum mechanical entanglement. To prove polarization entanglement, quantum tomography measurements were carried out in both experiments, which are based on a total of 16 different polarization-resolved photon correlation measurements. Corresponding two-photon density matrices are shown in Fig. 30(c, d) for a QD with a fine structure splitting of (27 \(\pm\) 3) \(\mu\)eV for a spectral selection of 200 \(\mu\)eV (c) and 25 \(\mu\)eV (d), respectively. While an evaluation of the matrix shown in panel (c), i.e. effectively without spectral post-selection and with vanishingly small imaginary entries, does not result in any quantum mechanical entanglement, the density matrix of the photon pairs in panel (d) meets the Peres criterion for entanglement by more than 3 standard deviations [30]. Similarly, in Ref. [31] it was shown that \(>\) 70% of the detected photons show polarization entanglement. Based on these milestone results, many other important advances related to QDs as sources of entangled photon pairs have since been achieved. An important direction of development was to minimize the excitonic fine structure splitting of QDs. This was achieved on the one hand through optimized growth methods, and on the other hand through the post-growth manipulation of this quantity [357], above all via strain tuning [358, 359]. 
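To make the role of the fine-structure splitting explicit, the polarization state of the photon pair emitted by the cascade is often written in the following schematic form (a standard textbook-style expression using the notation of this section, with \(\tau\) denoting the delay between the biexciton and exciton photon emission; it is not taken from a specific reference cited here):

\[
|\psi(\tau)\rangle \;=\; \frac{1}{\sqrt{2}}\Big(\,|H_{XX}H_{X}\rangle \;+\; e^{\,iS\tau/\hbar}\,|V_{XX}V_{X}\rangle\Big).
\]

For \(S=0\) the state reduces to the maximally entangled Bell state \(|\Phi^{+}\rangle\) for every emission event, whereas for finite \(S\) the phase varies with the randomly distributed delay \(\tau\), so that averaging over many events washes out the entanglement unless narrow spectral or temporal post-selection is applied.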
In the area of epitaxial growth, using substrates with highly symmetrical crystal orientations such as (111)-oriented GaAs and the growth of strain-free GaAs/AlGaAs QDs are of particular interest in order to achieve low fine structure splittings for efficient generation of polarization-entangled photon pairs [133, 360] (see Section 4 for details on the epitaxial growth of such QDs). On the other hand, strain tuning is an attractive method to control the electronic properties of QDs and in particular the fine structure splitting. In the context of polarization-entangled photon pairs, Ref. [356] impressively shows that strain tuning can also influence the entanglement fidelity and that it can be maximized for \(E_{\text{FSS}}=0\), as presented in Fig. 30(e) in accordance with theoretical expectations. Figure 30: Generation of entangled photon pairs from semiconductor QDs. Energy scheme illustrating the radiative decay of the biexciton state (XX) in (a) a typical QD and (b) a QD with zero fine-structure splitting. In (a) the biexciton-exciton cascade generates a pair of vertically or horizontally co-linearly polarized photons. The ideal case presented in (b) leads to a superposition of cross-circularly polarized photon pairs which are polarization entangled. Measured two-photon density matrix of photon pairs emitted by the biexciton-exciton cascade using spectral filtering of (c) 200 \(\mu\)eV and (d) 25 \(\mu\)eV, respectively. (e) Entanglement fidelity (\(f^{+}\)) as a function of the strain-controlled fine structure splitting \(s\). The dashed line indicates the classical value of 0.5. Quantum mechanical entanglement is achieved for \(|s|\lessapprox 2.5\)\(\mu\)eV. (f) QD emission spectrum under resonant TPE of the biexciton state. The underlying excitation scheme is schematically shown in the inset. The excitation laser signal is strongly suppressed by using notch filters. (a, b) Figure reproduced with permission from Ref. [31] © IOP Publishing and Deutsche Physikalische Gesellschaft. Reproduced by permission of IOP Publishing, CC BY-NC-SA. (c, d) Reprinted from _Akopian et al. 2006_ [120]. Copyright (2006) by the American Physical Society. (e) Reproduced from Ref. [356] under Creative Commons CC BY license. (f) Reproduced from Ref. [292] under Creative Commons CC BY license. Moreover, as presented in Ref. [359], three-directional strain engineering can be used to generate polarization-entangled photons whose energy can be tuned, in this case to the two D\({}_{1}\) lines of Cs, without degrading their degree of entanglement, which is highly interesting for hybrid quantum systems aiming for instance at combining efficient QD quantum emitters with atomic based quantum memories. Further important work in the field of polarization-entangled photon pairs from QDs aims at the coherent preparation of the biexciton state. For this purpose, resonant two-photon excitation (TPE) can be used [38, 308, 309], which is shown schematically in Fig. 30(f). With this method, the energy of the exciting laser is chosen in such a way that the two-photon energy is sufficient to directly prepare the biexciton state. In practice, this means that the laser energy corresponds to \((E_{X}+E_{XX})/2\), with the exciton and biexciton emission energies \(E_{X}\) and \(E_{XX}\). The corresponding laser straylight (after efficient suppression by notch filters) is observable in Fig. 30(f) together with the generated biexciton (XX) and exciton (X) emission lines. The TPE excitation scheme was first used in Ref. 
[310] for the on-demand generation of indistinguishable polarization-entangled photon pairs, in which the biexciton population was deterministically prepared by a \(\pi\)-pulse with high efficiency. It was possible to simultaneously show ultrahigh purity \((g^{(2)}(0)<0.004)\), high entanglement fidelity (\(0.81\pm 0.02\)), and high two-photon interference with non-post-selected visibilities of \(0.86\pm 0.03\) for the biexciton and \(0.71\pm 0.04\) for the exciton photons. Since then, the TPE excitation scheme has been used in many works aiming at the coherent control of the QD biexciton-exciton cascade [361, 362, 363] and at the on-demand generation of polarization-entangled photon pairs [309, 292, 364], see also Fig. 24. Interestingly, advanced applications in photonic quantum information technology such as quantum repeater networks based on entanglement distribution via Bell-state measurements require both high entanglement fidelity and high photon indistinguishability. In this context, it can be shown that due to the temporal jitter induced by the biexciton-exciton cascade the maximum indistinguishability of the photons generated is \(\gamma_{XX}/(\gamma_{XX}+\gamma_{X})\), with the decay rates \(\gamma_{XX}\) and \(\gamma_{X}\) of the biexciton and exciton state [365]. Thus, a typical ratio of the decay rates of \(\gamma_{XX}/\gamma_{X}=2\), i.e. a biexciton lifetime of about half the exciton lifetime, limits the achievable photon indistinguishability to \(2/3\approx 66\%\). This limit can possibly be overcome by engineering the lifetime ratio using the Purcell effect in suitable resonator structures such as CBG cavities [365], by spectral filtering [366, 367], or by more advanced excitation schemes than TPE [368, 313]. For sources of entangled photon pairs, the source brightness in terms of photon extraction efficiency is an important parameter, especially with respect to future applications. As with single-photon sources, this property can also be enhanced for photon pair sources by integrating the QD into an appropriate nanophotonic structure. However, here the situation is more complex due to the fact that the extraction of both the biexciton and the exciton photons must be increased efficiently. Narrow-band photon extraction, as in the case of simple micropillar structures, is not suitable for this purpose, and broadband concepts are generally used. Experimental results on entangled photon pair sources with enhanced brightness include laterally coupled micropillars whose resonance frequencies were engineered to the biexciton and exciton energies of a deterministically integrated QD. This approach yielded an entangled photon pair generation probability of \(12\%\) per excitation pulse [264]. However, the coupled micropillar approach is comparatively complex, and recent work focuses mainly on broadband photon extraction concepts such as microlenses [364], optical antennas [99] and CBG resonators [54, 53] to increase the brightness of photon pair sources, for which entanglement fidelities of about 0.9 and pair extraction efficiencies exceeding 0.6 were reported. Obtaining a QD entangled-photon pair source with ideal performance in terms of entanglement fidelity, photon indistinguishability, and brightness is still an open challenge. In addition to the mentioned time-correlation inherent to the cascaded decay and affecting the photon indistinguishability, also the interaction of the laser excitation with the QD electronic states can deteriorate the ultimate performance of the sources via the AC-Stark effect. 
In fact, the commonly used TPE method relies on linearly polarized laser pulses with finite duration (typically >2-5 ps), which induce a temporary symmetry breaking even in the case of a QD with \(E_{\rm FSS}=0\). Specifically, the AC-Stark effect induces an energy splitting of the excitonic levels and thus a drop in the entanglement fidelity for a fraction of the photon pairs characterized by a biexciton decay occurring while the laser field is still present. This effect becomes particularly pronounced for Purcell-enhanced QDs in photonic structures, in which the lifetime of the biexciton state approaches the duration of the laser pulses, as experimentally demonstrated by Basso Basset et al. [369]. These recent findings make it clear that the QD excitation method must be taken into account during the source optimization process and that the Purcell enhancement must be used with caution for increasing the source brightness and for alleviating other dephasing effects such as spin noise [370] and possibly time-correlation effects in the biexciton-exciton cascade. In addition to polarization entanglement, also time-bin entanglement, hyper-entanglement, and energy-time entanglement have been demonstrated with photon pairs emitted by QDs [121, 122, 371, 372]. Time-bin entanglement is particularly attractive for fiber-based applications but up to now its efficiency remains limited compared to polarization entanglement because of the probabilistic nature of the used excitation schemes. The creation of deterministic time-bin entanglement is therefore highly desired. Also in this case a concerted design of source hardware and excitation method will be required. ### Quantum dots for photonic cluster state generation Entangled photonic states are the basis for advanced quantum communication schemes, photonic quantum computing and eventually the quantum internet. While polarization-entangled photon pairs can be generated e.g. via the QD XX-X radiative cascade [29], as discussed in the previous section, it is a major challenge to generate entangled photonic states in a scalable manner. In this context, photonic cluster states play a large and important role and can enable for instance a one-way quantum computer [21]. In addition to approaches generating photonic cluster states optically [373] or using ions [374] and nitrogen vacancy centers [375], there are also very attractive concepts for their deterministic generation using QDs as we discuss in the following. In general, and related to the results presented in the previous section, corresponding concepts are based on the fact that sequentially emitted photons are entangled via a common stationary qubit. In Ref. [376], C.Y. Hu et al. proposed to use a charged QD inside a microcavity to generate polarization photon entanglement. Here, the authors propose to use a giant circular birefringence induced by the strong coupling between the integrated QD and the resonator mode to make a photon-spin entangling gate. In their approach, independent photons interacting with a single QD electron spin in the superposition state \((\ket{\uparrow}+\ket{\downarrow})/\sqrt{2}\) are entangled as soon as the initial state of a third incident photon is measured (see Fig. 31(a)). In this way, tripartite GHZ states and, in principle, 1D photonic cluster states can also be generated by sequential application of the scheme. However, the entanglement fidelity of these states is severely limited by the spin-coherence time and the optical losses of the resonator. 
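Both the cavity-based scheme just described and the Lindner–Rudolph protocol discussed in the next paragraph rely on the same underlying idea of entangling sequentially emitted photons via a common spin. A minimal, idealized sketch of one entangling cycle (ignoring all imperfections; the notation is generic and not taken from a specific reference) reads:

\[
|{\uparrow}\rangle \;\rightarrow\; |{\uparrow}\rangle|H\rangle,
\qquad
|{\downarrow}\rangle \;\rightarrow\; |{\downarrow}\rangle|V\rangle
\qquad \text{(spin-conditioned excitation and emission, acting like a CNOT)},
\]

so that, starting from the spin superposition \((|{\uparrow}\rangle+|{\downarrow}\rangle)/\sqrt{2}\), \(n\) such cycles alone produce the GHZ-type state \((|{\uparrow}\rangle|H\rangle^{\otimes n}+|{\downarrow}\rangle|V\rangle^{\otimes n})/\sqrt{2}\). Inserting an additional Hadamard rotation on the spin between successive emission cycles converts this into a one-dimensional cluster state, which is the essence of the protocol described below.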
While a related QD-induced phase shift in a pillar microcavity was reported [377], the experimental implementation of the proposed scheme is still pending. An interesting alternative for the on-demand generation of a continuous stream of 1D cluster states was proposed by N. H. Lindner and T. Rudolph [110]. It does not require strong light-matter interaction and is also based on the photon polarization entanglement via a common electron spin, which in principle can be repeated any number of times in order to generate scalable photonic cluster states. As shown schematically in Fig. 31(b), the scheme is based on the Hadamard (H) gate operation on the electron spin followed by a CNOT operation with the nth photon of the photonic cluster state. Repeated execution of this operation generates and entangles all photons of the cluster state across the common electron spin qubit. This concept is largely immune to decoherence, and it can be shown that standard spin errors affect only 1 or 2 of the emitted photons at a time. I. Schwartz et al. succeeded in experimentally implementing the scheme proposed by Lindner and Rudolph for the first time, with the entangler qubit being a QD-confined dark exciton with a long spontaneous lifetime [94]. In their experimental approach, as shown schematically in Fig. 31(c), first a dark exciton is deterministically initialized in its higher energy spin eigenstate (green arrow). Then the dark exciton is repeatedly excited to the biexciton state (blue arrows), which results in the subsequent emission of single photons (magenta arrows) forming the 1D cluster state. The corresponding circuit diagram is presented in the lower part, where the initialization of the dark exciton is performed by the U gate operation, followed by the excitation-emission represented by a CNOT gate operation and a timed dark exciton precession to prepare for the next entanglement step via a single qubit gate operation G (we refer to Ref. [94] for more details). The experimental implementation of these qubit operations is very demanding and requires the synchronized resonant excitation of the QD system via several lasers of different wavelengths, and sophisticated correlation measurements between the emitted photons in order to prove the entanglement and to determine the fidelity. The result of corresponding quantum-optical measurements is presented in Fig. 31(d). It shows the negativity as a measure of the achieved localizable entanglement in the generated photonic cluster state over the photon distance \(d\). The experimental data show the measured negativity of localizable entanglement between the dark exciton and the emitted photon after one application of the cycle (orange data point), and in a two and a three qubit string (orange and purple data points), respectively. An extrapolation of the achieved fidelity to larger distances between qubits indicated the robustness of the multipartite entanglement in the state produced by their device, and promises entanglement to persist up to 5 qubits. Figure 31: QD-based photonic cluster state generation. (a) Photon entangling scheme based on electron spin interaction in a strongly coupled QD-microcavity system. (b) Circuit diagram for the on-demand photonic cluster state generation based on the repeated Hadamard (H) gate and CNOT gate operation. 
(c) Corresponding experimental level scheme (upper part) and associated circuit diagram (lower part) based on the coherent control of the dark exciton and biexciton of a QD integrated into a planar microcavity (inset). (DE: dark exciton) (d) Localizable entanglement in the generated photon cluster state as function of the distance \(d\) between two qubits in the string. (a) Reprinted from _Hu et al. 2008_ [376]. Copyright (2008) by the American Physical Society. (b) Reprinted from _Lindner et al. 2009_ [110]. Copyright (2009) by the American Physical Society. (c, d) From Ref. [94]. Reprinted with permission from AAAS. The discussed results show the possibility to generate photonic cluster states with the help of QDs. However, further technological and experimental improvements are necessary for use in quantum information processing. On the one hand, the performance in terms of the generation rate can be significantly increased by using, for example, a CBG resonator with efficiencies beyond 70% [53] instead of a planar resonator with a photon extraction efficiency of < 20%, or by using deterministically manufactured QD microlenses [378]. Furthermore, through a targeted optical [379] or electrical [51] occupation of the QD, a higher repetition rate of the entanglement cycle, and thus in turn an increased generation rate of the cluster states, could be achieved. With regard to large-scale quantum networks based on entanglement distribution using BSMs, photon indistinguishability is also an important parameter. In fact, by using photonic cluster states or graph states, which provide redundancy against photon loss and the probabilistic nature of photonic BSMs, all-photonic quantum repeaters that do not require complex quantum memories can be realized [92]. Recently, an important step was taken in this direction, in which cluster states with a characteristic entanglement decay length of about ten photons and a photon indistinguishability of about 80% were generated at a GHz rate using the QD heavy-hole spin as an entangler [380]. Based on these results, efficient fusion of cluster states could be performed in the future to obtain more complex graph states for demonstrating all-photonic quantum repeaters. Beyond that, 2D photonic cluster states are of even higher interest for photonic quantum computing, and proposals exist which promise their efficient generation by coupled QDs, i.e. quantum dot molecules [111, 381]. Such 2D photonic cluster state generators can strongly benefit from technological advances in the deterministic fabrication of bright electrically tunable QD molecule devices, as recently reported by J. Schall et al. [382]. Figure 32: Integration of quantum resources in an integrated quantum photonic circuit (IQPC). A variety of exciton complexes in QDs provides single photons, polarization-entangled photon pairs, spin quantum memories, and spin-photon interfaces. IQPCs offer low-loss, functional platforms for manipulating the path, phase, and frequency of photons.

## Integrated quantum photonics with QDs

Integration of quantum emitters into photonic integrated circuits will play a crucial role in future quantum information technologies as these circuits will advance the performance, scalability, and functionality of quantum systems [383]. Much progress has already been made in the field of classical photonic integrated circuits. 
Mature growth and fabrication techniques in photonic integration enable the integration of a few hundred phase shifters and directional couplers that rapidly control the flow of light and optically map unitary operations on a miniaturized chip [384]. To bring the advantages of these low-loss and functional photonic platforms to quantum photonics, it is essential to combine them with QLSs. In a simple approach, quantum light can be employed with photonic platforms by external coupling or internal generation, usually based on nonlinear effects such as spontaneous parametric down-conversion or spontaneous four-wave mixing [385]. However, such sources are inherently probabilistic, so they have an unfavorable trade-off between achievable single-photon purity and generation rate. Besides that, additional detectors are required for heralding. Therefore, these types of QLSs pose fundamental limitations to the scalability and efficiency of integrated quantum photonics. To address these challenges, new approaches to integrating solid-state quantum emitters are arising. In particular, incorporating QDs as active sources and hosts of quantum information encoded in photons and spins into low-loss and programmable photonic platforms offers several potential advantages: deterministic single photons and entangled photon pairs, quantum memories, and quantum light-matter interactions [386]. Thus, as shown in Fig. 32, combining these quantum resources into compact, integrated photonic platforms enables the generation, manipulation, storage, and detection of quantum states in a more efficient and functional way. Here, we introduce recent advances in semiconductor QDs integrated into scalable and functional photonic chips.

### Homogeneous integrated quantum photonic systems

The capability of wafer-scale growth of QDs in a thin film makes Group III-As materials an ideal platform for integrated quantum photonics [387]. In particular, Group III-As materials can host QDs with light emission in a wide spectral range, from visible to telecom wavelengths, and can be used to form the photonic structures of low-loss (<0.5 dB/cm) waveguides and high \(Q\) (>100,000) resonators. Moreover, owing to the high refractive index and large electro-optic effect of these materials, a Mach-Zehnder interferometer with 50 GHz modulation speed has been demonstrated [388]. Generation, manipulation, and detection of single photons in an IQPC rely on the efficient interconnection between quantum resources and photonic elements via low-loss channels. Therefore, coupling QDs to linear waveguides is of paramount importance. Figure 33: A variety of nanophotonic waveguides and their photonic dispersion curves. (a) A ridge waveguide forms optical modes depending on its size and dimensions. (b) A photonic crystal waveguide creates slow light modes within photonic bandgaps. (c) A topological photonic waveguide uses helical topological edge states formed at the boundary of two photonic systems with different band topologies. Figure 33(a-c) displays different types of waveguides. A ridge waveguide provides a basic unit of photonic circuits and simply holds a single TE mode field that couples the in-plane dipoles of QDs with minimal optical loss. More functional waveguides can be made by photonic crystal structures. Although photonic crystal waveguides require more sophisticated nanofabrication processes, they have the ability to engineer the photonic density of states [389]. 
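The following paragraphs characterize these waveguides in terms of the emitter-waveguide coupling efficiency \(\beta\). As a generic reminder of the standard definition and of why near-unity \(\beta\) matters (a textbook-style relation for an ideal two-level emitter without losses or dephasing, not a result reported for any specific device cited here):

\[
\beta \;=\; \frac{\Gamma_{\text{wg}}}{\Gamma_{\text{wg}}+\Gamma_{\text{rad}}+\Gamma_{\text{nr}}},
\qquad
T(\omega_{0}) \;=\; (1-\beta)^{2},
\]

where \(\Gamma_{\text{wg}}\) is the decay rate into the guided mode, \(\Gamma_{\text{rad}}\) and \(\Gamma_{\text{nr}}\) collect all other radiative and non-radiative channels, and \(T(\omega_{0})\) is the on-resonance transmission of a weak laser past the emitter. Near-unity \(\beta\) thus implies an almost complete extinction of resonant transmission by a single QD, the regime exploited for the single-photon nonlinearities discussed further below.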
An important feature of slow-light effects in photonic crystal waveguides is that they significantly enhance the emitter-waveguide coupling efficiency (\(\beta\)) and cooperativity (\(\eta\)). Near-unity \(\beta\) and high \(\eta\) over 60 have been reported [58]. Topological photonic waveguides are another important platform that supports topological edge states formed at the boundary of two photonic crystal waveguides with different topologies. Topological waveguides are robust against structural imperfections and allow for unusually sharp bending angles, reflectionless propagation, and unidirectional transport, and are thus highly attractive in IQPCs. Homogeneously integrated QD-waveguide systems have led to new opportunities for exploiting quantum optics in compact photonic platforms. Major progress has been made recently toward bright, coherent single photons in a variety of waveguide platforms [390, 391, 58]. Even though, nowadays, a single QD itself can emit indistinguishable single photons with (quasi) resonant excitation techniques, the spectral randomness of each QD significantly limits the scalability of quantum systems. Scalable integration of multiple, "identical" quantum emitters has been realized using independent frequency tuning methods based on temperature, bias voltage, or strain [392, 396, 397]. Figure 34: Generation and on-chip coupling of single photons, chiral light-matter interaction, single-photon nonlinearity, and scalable interactions in a variety of QD-coupled waveguide systems. (a) Two separated QDs in a waveguide with independent frequency tuners produce indistinguishable photons. (b) QDs in a symmetry-broken waveguide show directional chiral coupling depending on their spin states. (c) Single-photon nonlinearity based on a single QD in a waveguide demonstrates deterministic few-photon scattering of a weak resonant laser, deforming photon statistics. (d) Two separated QDs in a waveguide are independently tuned into resonance, which leads to cooperative emission. (a) Reproduced from Ref. [392] under Creative Commons CC BY license. (b) Reproduced from Ref. [393] under Creative Commons CC BY license. (c) Reproduced from Ref. [394] under Creative Commons CC BY license. (d) Adapted with permission from Ref. [395]. Copyright 2018 American Chemical Society. Figure 34(a) shows a TPI experiment with resonant single photons from separate QDs. The frequency mismatch between remotely located QDs is eliminated via the quantum confined Stark effect, and TPI is performed off-chip. An important task with integrated quantum emitters in photonic waveguides is creating light-matter interfaces at the level of single photons. Introducing light-matter interactions provides new capabilities, such as deterministic spin-photon interfaces and quantum gates, which are difficult in linear quantum optics. Chiral light-matter interaction is an interesting example that creates spin-photon interfaces and quantum state-controlled directionalities [398]. Chiral light-matter interaction has been investigated in various QD-coupled photonic waveguide platforms [399, 391, 400, 392, 401]. Chirality in photonic waveguides can arise from position-dependent spin-momentum locking effects of QDs [399, 400], symmetry breaking in waveguide structures [401], and helical edge modes of topological waveguides [391, 393]. 
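A convenient figure of merit in this context is the directionality of the emission (a generic definition commonly used in chiral quantum optics; the symbols below are illustrative and not taken from a specific reference cited here):

\[
D \;=\; \frac{\beta_{\rightarrow}-\beta_{\leftarrow}}{\beta_{\rightarrow}+\beta_{\leftarrow}},
\]

where \(\beta_{\rightarrow}\) and \(\beta_{\leftarrow}\) denote the fractions of spontaneous emission coupled into the forward- and backward-propagating waveguide mode for a given circularly polarized (spin-dependent) transition; \(D=\pm 1\) corresponds to ideal chiral coupling, in which the propagation direction of the emitted photon is fully determined by the spin state.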
Figure 34(b) illustrates a chiral quantum optical interface that enables nonreciprocal systems implementing directional conversion of spin information of QD excitons into path information of photons. Also, more recently, the chiral coupling of excitons and biexcitons to a waveguide demonstrated the deterministic generation of spatial path-entangled photon pairs [402]. Another essential feature of light-matter interaction associated with QD-coupled waveguides is their strong single-photon nonlinearity. For example, the optical transparency of QD-coupled waveguides can be controlled by the states of a single QD, known as dipole-induced transparency [403, 404], similar to electromagnetically induced transparency in three-level atoms. When a single dipole couples to optical modes, it induces an abrupt change in the phase shift around the transparency window. This change in transmission from transparent to opaque can be controlled by weakly coupled QDs with a large Purcell effect without strong coupling, making it much easier to achieve in waveguides. Another example is a single-photon nonlinear device that induces deterministic few-photon scattering for a weak resonant laser on a single QD. Figure 34(c) shows the modification of transmitted photon number statistics depending on incoming photon numbers [394]. The strong single-photon nonlinearity of the QD-coupled waveguides is responsible for selectively filtering single photons, which can be used to implement single-photon transistors and deterministic Bell-state measurements [405]. Also, advanced schemes for multiphoton-entangled states based on spin-photon interface, such as GHZ states or photonic cluster states prepared by waveguide-coupled single QDs, have been proposed [406]. One important aspect for the QD-waveguide platforms compared to QDs coupled to high \(Q\), small mode-volume cavities is that the light-matter interaction in a waveguide requires less strict spectral and spatial matching conditions between QDs and modes, and thus developing integrated quantum photonic architectures is more feasible. Moreover, it is also possible to create photon-mediated long-range interaction between coupled multiple QDs. To achieve collective interaction between emitters in free space or in a homogenous medium, the emitters need to be placed at a short distance, comparable to the wavelength [407], making independent spectral control difficult. In a single-mode waveguide, far-separated emitters can couple to the same optical mode, extending the interaction distance. With recent efforts, multiple QDs in a waveguide have been successfully tuned into resonance and showed quantum interactions via cooperative emission (Fig. 34(d)) [395, 397]. Multiple quantum emitters in a waveguide with tunable long-range interactions open new perspectives for exploring complex "multi-atomic" systems. ### Heterogeneous integrated quantum photonic systems A fully functional quantum photonic architecture requires on-chip integration of highly reliable quantum emitters, low-loss waveguides, fast phase-shifters, and highly efficient single-photon detectors. Also, the integration of other quantum and photonic components, such as quantum memories, spectral filters, and frequency converters, would be desirable for storing quantum states and manipulating photons. The primary limitation of monolithic integration approaches discussed above is that no single material can meet all of these functionalities. 
In recent years, hybrid quantum photonic architectures heterogeneously integrating multiple components from different material platforms have emerged as an alternative solution [408, 409]. Heterogeneous integration can be employed at different levels with different assembly techniques. For example, a GaAs thin film can be epitaxially grown on a silicon-on-insulator wafer [410]. To prevent crystal quality degradation due to the different lattice structure and lattice constant of GaAs and Si, active and passive materials for QDs and photonic circuits can be grown individually at wafer scale with high quality and then integrated using wafer bonding techniques [411]. Alternatively, integration can take place at the functional device level, such as single QD devices placed on prefabricated IQPCs [59, 253, 412]. In this approach, devices can be assembled by Van der Waals forces or direct bonding techniques, providing freedom in the choice of materials and device design. Furthermore, the device-to-device integration allows pre-characterization and post-selection of each component. Given the difficulties of spatial and spectral control of QDs during their growth process, selective integration features are crucial to increasing system yield in large-scale chips. As illustrated in Fig. 35(a), a number of groups have successfully demonstrated heterogeneous integration of a variety of QD structures and photonic material platforms using wafer bonding [411], transfer printing [412, 413], and pick-and-place techniques [59, 253]. As losses are a major source of error in photon-based quantum information processing, the high coupling efficiency of single photons from QDs to waveguides is the most important feature for the heterogeneous assembly of dissimilar platforms via post-processing. The evanescent coupling between the QD structure and the photonic waveguide enables efficient transmission of single photons into photonic circuits. To further increase the coupling efficiency and directionality of single photons, QDs in tapered nanostructures and Bragg mirrors can be added [414]. In addition, to guarantee precise spatial alignment between QDs and photonic circuits, the site-controlled growth (see section 4.2.4) or fabrication techniques introduced in section 5.1 can be adopted. Instead of fabricating QD devices for heterogeneous integration, QDs in a nanowire (see section 4.2.7) may also be useful as heterogeneously transferable SPSs. Nanowires can be easily picked and placed into photonic circuits using a micro tip, and they typically have a tapered shape, which is desirable for single-photon coupling with waveguides [59, 415]. Furthermore, excitons in epitaxial QDs generally couple to light with polarization in the growth plane, so laterally integrated QD devices on a waveguide predominantly couple with TE modes of a waveguide. On the other hand, nanowires can be placed in waveguides with orthogonal growth directions. As both TE and TM modes of the waveguide can be exploited in this configuration [59], a polarization-insensitive coupling is possible, which is important for supporting polarization-entangled photons from exciton and biexcitons. In the near future, to increase the processing speed and yield for scalable quantum photonic systems, characterization and integration processes could be fully automated [416]. 
Along with the efficient integration of quantum emitters and waveguides, spectrally synchronizing multiple emitters and photons in a photonic chip is essential for achieving quantum interactions and interferences in all-integrated quantum photonic chips. A limited TPI visibility of transmitted photons increases the error rate and lowers the fidelity in the quantum gate operation or Bell-state analysis. In addition, the interaction between single photons and quantum memories also becomes inefficient when their spectra are detuned. Although it would be possible to pre-characterize and post-select proper QD devices with heterogenous integration, fine frequency tuning of emitters is still required for removing residual spectral mismatch. Similar techniques of frequency tuning used in monolithic integration can be applied to hetero-systems. Figure 35(b) shows the local engineering of the emitter's frequency via heat, electric gates, or strain controls [413, 417, 418]. Applying elastic stress [423] is particularly useful because it not only tunes the emission frequency [358, 424, 425] but also controls the fine structure splitting of excitons, which is essential for generating entangled photon pairs [358, 359, 426, 427]. As fine-tuning techniques allow for spectral matching, TPI has been demonstrated between separated QDs on a hetero-waveguide platform [396]. For long-term measurement, the frequencies could be precisely controlled and monitored in real-time to be resonant during the operation [428]. Combining cavity structures in the QD devices can further engineer the optical properties of QDs in the weak and strong coupling regimes [429]. Frequency control often requires conversion in a wide spectral range up to a few hundred Figure 35: Heterogeneous integration and manipulation for fully integrated quantum photonics. (a) Three different methods for hetero assembly of QDs: Wafer bonding (Reproduced from Ref. [411] under Creative Commons CC BY license.), Pick-and-place (Adapted with permission from Ref. [59]. Copyright 20016 American Chemical Society.), and Transfer printing (Adapted) with permission from Ref. [412]. ©The Optical Society) (b) Engineering emitter’s frequency by local temperature control (Reprinted from Ref. [413], with the permission of AIP Publishing.), strain ( Adapted with permission from Ref. [417]. Copyright 2018 American Chemical Society.), and applying voltage bias (Reprinted from Ref. [418], with the permission of AIP Publishing.). (c) Various techniques can engineer the frequencies of single photons in integrated photonic circuits: (Top) four-wave mixing Bragg scattering in a microring resonator for frequency conversion (Adapted with permission from Ref. [419]. ©The Optical Society), (Bottom left) frequency multiplexing for spectral distribution (Adapted with permission from Ref. [420]. ©The Optical Society), and (Bottom right) frequency filtering using a tunable add-drop filter (Adapted with permission from Ref. [254]. ©The Optical Society.). (d) Hybrid integration of micro pump lasers (Reprinted with permission from Ref. [421]. Copyright 2017 American Chemical Society.) and superconducting nanowire single-photon detectors (Reprinted with permission from Ref. [422]. Copyright 2015 American Chemical Society.) in photonic circuits. nm. 
For example, most solid-state quantum emitters emit photons in the visible to near-infrared range, while important photonic platforms of Si photonic integrated circuits and optical fibers demand longer wavelengths, such as telecom wavelengths. However, the frequency tuning by engineering the emitters is generally limited to at most a few tens of nm [430, 431]. Wider-range frequency conversion can be employed using nonlinearity in photonic waveguides or resonators. Figure 35(c) shows the on-chip frequency conversion of single photons from a single QD with a conversion efficiency of 12% [419]. A much higher conversion efficiency of 74% has been recently reported on a lithium niobate on an insulator platform [432]. Furthermore, frequency conversion allows connecting dissimilar quantum emitters, such as InAs QDs and nitrogen vacancies in diamonds. They are advantageous for QLSs and quantum memories, respectively. Therefore, interfacing different types of quantum emitters could open new pathways for hybrid quantum systems. When using QDs in IQPCs, one remaining issue is the spectral separation of a single QD from the background fluorescence, including pumping laser, multi-exciton processes, and the emissions from other QDs. In free space, spatial and spectral isolation of single photons from such background noises can be easily done with confocal microscopy and spectral filters or monochromators. To implement such isolation of single photons in a photonic chip, on-chip single-photon spectrometers performing spectral demultiplexing can be employed using arrayed waveguide gratings [420]. Tunable add-drop filters can also serve as a spectral filter at a single frequency matched with a target QD [254] (See Fig. 35(c)). In Table 4, we summarize key demonstrations of integrated quantum photonic systems incorporating QDs with different integration approaches and functionalities. Optical quantum information processing starts with preparing photonic quantum states and ends with measuring single photons. Therefore, pumping lasers and single-photon detectors should be efficiently interfaced with the QD-containing IQPCs. One representative method is a fiber-optic interface that in-/out-couples external pump lasers and single-photon detectors with photonic circuits. Alternatively, hybrid integration can also bring these pump lasers and single-photon detectors directly into photonic circuits and enables all-on-chip configurations without free-space alignment. Figure 35(d) shows an integrated tunable microscale laser next to the emitter. The approach successfully demonstrated on-chip resonant optical excitation of a single QD [421]. On-chip integrated single-photon detectors have also been demonstrated by several groups in the Si material system [433, 434] and in the GaAs material system [435, 129, 422]. In particular, superconducting nanowire detectors can be easily integrated onto waveguides and can detect single photons with high efficiency and low timing jitter of about 50% and 200 ps, respectively [129]. Advances in hybrid integration enable us to utilize several functional building blocks from different platforms in a compact photonic circuit. The approach does not just leverage the strengths of multiple platforms of quantum sources, photonic chips, and detectors, but also brings new capabilities beyond linear quantum optics for a range of quantum applications. However, integrating emitters and detectors on a chip also imposes new constraints. 
Along with the increased complexity of manipulating quantum emitters and filtering their emissions, cryogenic temperatures are vital for operating both emitters and detectors. Although mature integrated photonic technology can establish complex linear transformations using multiple directional couplers with programmable phases, such circuits are mostly tested at room temperature. At cryogenic temperatures below 10 K, carriers freeze out and the \(\chi^{(2)}\) and \(\chi^{(3)}\) nonlinear optical susceptibilities of materials are significantly altered. Therefore, additional technical development of photonic circuits may be needed for fully integrated quantum photonic architectures [436]. In addition, even though combining multiple technologies has spurred the implementation of several protocols of quantum optics in a single photonic chip, addressing all issues from single-photon purity, Fourier-transform linewidth, coupling efficiency, and spectral match of solid-state quantum emitters to fabrication yield, reproducibility, and coupling efficiency of photonic devices at the same time is still a formidable challenge. Such large-scale integrated quantum photonics with solid-state quantum emitters was recently demonstrated with defect centers in diamond [437]. The 128-channel photonic chip integrates more than 70 germanium-vacancy and silicon-vacancy centers, generating spectrally identical single photons with nearly Fourier-transform-limited linewidths. Adopting a similar approach for QDs would require more effort to compensate for the larger spectral randomness, but a higher oscillator strength and reduced coupling to high-frequency phonons are expected for QDs. These features enable QDs to generate much brighter single photons at the zero-phonon line, which is crucial for speeding up large-scale quantum systems. 
\begin{table} \begin{tabular}{c c c c c c} \hline \multirow{2}{*}{**Integration**} & **Types of QD** & \multirow{2}{*}{**Photonic platforms**} & \multirow{2}{*}{**Functionality**} & \multirow{2}{*}{**Figure of merit**} & \multirow{2}{*}{**Ref.**} \\ & **(Wavelength)** & & & & \\ \hline \multirow{2}{*}{Monolithic} & InAs QD & GaAs Photonic & Indistinguishable & \(V_{\text{TPI}}\)=96\% & \multirow{2}{*}{[390]} \\ & (Near IR) & crystal waveguide & single photons & (up to 115 photons) & \\ \multirow{2}{*}{Monolithic} & InAs QD & GaAs Photonic & \multirow{2}{*}{TPI} & \multirow{2}{*}{(two independent QDs)} & \multirow{2}{*}{[392]} \\ & (Near IR) & crystal waveguide & & & \\ \multirow{2}{*}{Monolithic} & InAs QD & GaAs Photonic & Single-photon & \(T/T_{0}\)=8\%(\(\sim\)35\%) & \multirow{2}{*}{[394]} \\ & (Near IR) & crystal waveguide & nonlinearity & \(g^{(2)}(0)\) =1.08\((\sim\)2.1) & \\ \multirow{2}{*}{Monolithic} & InAs QD & GaAs & \multirow{2}{*}{Superradiance} & Two coupled QDs & [395] \\ & (Near IR) & waveguide & & Three coupled QDs & [397] \\ \multirow{2}{*}{Monolithic} & InAs QD & GaAs topological & Chiral & \(\eta_{\text{dini}}\)=68\% & [391] \\ & (Near IR) & photonic crystal & spin-photon interface & & \\ \multirow{2}{*}{Heterogeneous (Wafer bonding)} & InAs QD in & \multirow{2}{*}{Si\({}_{3}\)N\({}_{4}\) waveguide} & Adiabatic coupling & \multirow{2}{*}{\(\beta\)=20\%} & [411] \\ & GaAs nanobeam & & in a hybrid system & & \\ \multirow{2}{*}{Heterogeneous (Pick-and-place)} & InAs QD in & \multirow{2}{*}{Si waveguide} & Adiabatic coupling & \multirow{2}{*}{\(\beta\)=32\%} & [253] \\ & InP nanobeam & & in a hybrid system & & \\ \multirow{2}{*}{Heterogeneous (Pick-and-place)} & InAs QD in & \multirow{2}{*}{Si\({}_{3}\)N\({}_{4}\) waveguide} & Strain tuning & \multirow{2}{*}{\(\beta\)=1\%} & [417] \\ & InP nanowire & & in a hybrid system & & \\ \multirow{2}{*}{(Near IR)} & InAsP QD in & \multirow{2}{*}{Tunable} & \multirow{2}{*}{\(\beta\)=24\%} & \multirow{2}{*}{[254]} \\ & InP nanowire & & single-photon routing & Bandwidth = 40 nm & [254] \\ & (Near IR) & & with a ring resonator & Selectivity = 15 dB & \\ \multirow{2}{*}{Heterogeneous (Pattern transfer)} & InAs QD in & \multirow{2}{*}{Si waveguide} & Independently tunable & \multirow{2}{*}{\(\beta\)\(\sim\)80\%} & \multirow{2}{*}{[413]} \\ & GaAs nanobeam & Si waveguide & two QDs devices & & \\ \multirow{2}{*}{(Pattern transfer)} & InAs QD in & \multirow{2}{*}{Strangly coupled} & \multirow{2}{*}{\(Q\)=8,000} & \multirow{2}{*}{[429]} \\ & GaAs nanobeam & Si waveguide & QD-cavity & & \\ \multirow{2}{*}{(Pattern transfer)} & InAs QD in & \multirow{2}{*}{Strangly coupled} & \multirow{2}{*}{\(Q\)=8,000} & \multirow{2}{*}{[429]} \\ & GaAs nanobeam & Si waveguide & QD-cavity & & \\ \multirow{2}{*}{(Pattern transfer)} & (\(\sim\)1.2\(\mu\)m) & & & \\ \end{tabular} \end{table} Table 4: Representative demonstrations of integrated quantum photonic systems with QDs. \(V_{\text{TPI}}\): TPI visibility, \(T/T_{0}\): modulated transmission of a weak resonant laser by a single QD. The value in parentheses is a correction after deconvolution. \(g^{(2)}(0)\): Change in the photon statistics of a transmitted laser by a single QD. The value in parentheses is a correction after deconvolution. 
\(\beta\): QD-waveguide coupling efficiency, \(g_{0}\): light-matter coupling strength of a QD-cavity system.

## Applications of single quantum dot devices in photonic quantum technology

So far, we introduced QDs as promising candidates for quantum information technologies and reviewed various aspects ranging from the fabrication of photonic devices to the evaluation of their quantum optical properties. With QDs representing one of the most promising quantum emitter platforms, the research in the field soon also turned to demonstrations of applications. In this section, we present an overview of applications which have already been implemented to date or are currently being tackled by the community using QD-based devices. We start with QD-based implementations of QKD and proof-of-concept experiments on quantum teleportation and entanglement swapping, as important steps towards larger quantum networks, and move on to boson sampling and photonic computing.

### Quantum key distribution

As discussed in section 2.1, QDs can either be used in prepare-and-measure type settings (cf. BB84 protocol) or entanglement-based settings (cf. E91 protocol) of QKD. While single photons are sufficient for the first type of implementation, entangled photon pairs are required as a key resource for the latter. In addition, advanced protocols exploiting the concept of device-independence have been proposed, which remove security risks associated with various loopholes that may otherwise exist in practical implementations. In the following, we begin by discussing BB84-type implementations using optimized QD-SPSs.

#### 8.1.1 Single-photon quantum key distribution

The first implementation of single-photon QKD was reported by Waks et al. in 2002 [438]. Here, the authors used single photons emitted by an optically triggered InAs QD integrated into a micropillar cavity [442] to implement the BB84 protocol with polarization-encoded single-photon states. The non-resonant optical excitation at a rate of 76 MHz resulted in a mean photon number of \(\mu=0.007\) injected into the quantum channel, as deduced from a measurement using a single-photon detector on Alice's side. Using a short free-space link with a variable attenuator, the photons were sent to Bob for polarization-state discrimination, photon detection, and post-processing. The experiments revealed an asymptotic secure key rate, calculated according to [443], of 25 kbit/s with a QBER of 2.5% in back-to-back configuration, i.e. vanishing losses in the quantum channel. Using the variable attenuator, a maximum tolerable channel loss up to which communication is possible (given by a non-zero rate) of 28 dB was observed. A comparison with attenuated laser pulses, not yet implementing decoy states, revealed that the QD-SPS was able to outperform the attenuated laser at link losses exceeding 16 dB (see Fig. 36(a)). Overall, the SPS could tolerate about 4 dB higher losses than the laser in this experiment. Notably, decoy-state protocols nowadays allow for the in situ estimation of the multi-photon contribution to mitigate photon number splitting attacks and hence permit much higher average photon numbers in the laser pulses [444]. As a result, the asymptotic key rate achieved by Waks et al. would not beat a decoy-state implementation using WCPs. To improve the secure key rates achievable in BB84-QKD for a given QD-QLS, Aichele et al. presented an elegant approach in 2004 [445]. 
In that work, Aichele et al. used the XX-X radiative cascade of a QD to generate two single photons at slightly different energies with each excitation pulse, effectively doubling the achievable key rate. Using this scheme, a rate of secure bits per pulse of \(5\cdot 10^{-4}\) was demonstrated, which results in a communication rate of 38 kbit/s at a laser repetition rate of 76 MHz. While these first implementations used short laboratory-scale free-space optical (FSO) links as quantum channels, the use of optical fibers in ground-based communication scenarios has the practical advantages of being less susceptible to environmental fluctuations and being compatible with existing deployed fiber networks. The first among several QD-based QKD experiments using optical fibers as quantum channels was conducted by Collins et al. using QD-generated single-photon pulses at a wavelength of 900 nm sent through 2 km of optical fiber [446]. To benefit from the minimal transmission losses possible in optical fibers, the community soon also tackled the fabrication of QDs operating at wavelengths in the second and third telecom window (O- and C-band) - work that has been pioneered by Ward et al. [447]. Single-photon QKD using QDs at 1300 nm was first demonstrated by Intallura et al. in 2009 using a QD-micropillar cavity optically excited above the bandgap [439]. As the quantum channel, the authors used 35 km of standard SMF-28 optical fiber. To avoid polarization-state distortions, possible at long transmission distances in optical fibers, phase encoding was implemented using path-length-matched Mach-Zehnder interferometers at Alice and Bob. Using their QKD demonstration testbed operated at a clock rate of 1 MHz, the authors reported a calculated maximum secure key rate of about 160 bit/s at a measured QBER of 5.9% (according to the so-called asymptotic GLLP rate equations, named after the authors D. Gottesman, H.-K. Lo, N. Lütkenhaus, and J. Preskill of Ref. [448]) and achieved a non-zero key rate at a distance of 35 km. This performance surpassed the distance limit of a WCP-source (without decoy states) in their setup (see Fig. 36(b)).

Figure 36: Selected single-photon QKD experiments using QD sources: (a) First QD-SPS QKD experiment by Waks et al. in 2002 [438]. (b) QD-based QKD at 1300 nm wavelength by Intallura et al. in 2009 using phase encoding over a 35 km long optical fiber [439]. (c) Implementations of BB84 QKD by Takemoto et al. using QD single photons at telecom C-band wavelengths (1550 nm) over 50 km (gray line, [440]) and 120 km (red line, [441]) of SMF-28 optical fiber. Extrapolations by the authors (yellow and green lines) still show prospects for substantial future improvements. (d) In 2012 Heindel et al. demonstrated lab-scale single-photon QKD using two different types of electrically triggered QD-SPSs based on InAs QDs (900 nm) and InP QDs (650 nm), respectively [319]. (a) Reprinted from _Waks et al. 2002_[438] with permission of Springer Nature: Copyright 2002 Springer Nature. (b) Figure reproduced with permission from _Intallura et al. 2009_[439]. IOP Publishing. All rights reserved. (c) Figures reproduced from _Takemoto et al. 2015_[441] under Creative Commons CC BY license. (d) reproduced with permission from _Heindel et al. 2012_[319] © IOP Publishing and Deutsche Physikalische Gesellschaft. Reproduced by permission of IOP Publishing. CC BY-NC-SA

The first implementation of single-photon QKD in the telecom C-band (1560 nm), i.e.
at the lowest transmission loss possible in optical fibers, was reported only one year later by Takemoto et al. [440]. In their phase-encoding setup, the SPS comprised a QD integrated into a horn structure (cf. Ref. [333]). The authors achieved a maximum secure communication distance of 50 km based on the asymptotic GLLP rate equations. By further employing low-noise single-photon detectors based on superconducting nanowires [449] and a QD source with better single-photon purity (\(g^{(2)}(0)=0.005\)), the same group presented an improved version of their QKD implementation in Ref. [441]. Here, the improvements resulted in a maximal communication distance of 120 km - the longest transmission distance achieved in fiber-based single-photon QKD to date (cf. Fig. 36(c)). While all aforementioned QKD experiments used optically excited QD devices, relying on pulsed laser systems, a major advantage of semiconductor-based QLSs is the possibility to realize complex engineered devices including diode structures for electrical charge carrier injection. The electrical triggering of QD emission [318] enables both higher degrees of device integration and flexibly adjustable clock rates in protocol implementations. In 2012 Heindel et al. demonstrated lab-scale BB84-QKD experiments using two different types of single-photon emitting diodes operating in the near-infrared and visible spectral range, at 897 nm and 653 nm, respectively (see Fig. 36(d)) [319]. Employing engineered QD devices based on different material systems and growth techniques, their work highlighted the flexibility semiconductor-based QLSs offer for quantum information technologies. The near-infrared SPS was based on an electrically contacted micropillar cavity, exploiting the Purcell effect to enhance the photon extraction efficiency [303]. For the shorter-wavelength SPS, QDs were integrated into a quasi-planar DBR cavity structure [450]. Using the Purcell-enhanced SPS at 897 nm under pulsed current injection at a clock rate of 182.6 MHz, the authors achieved sifted key rates of 27.2 kbit/s at a QBER of 3.9% and a \(g^{(2)}(0)\) value of 0.35 at moderate excitation. The 653 nm SPS was triggered at 200 MHz, resulting in a sifted key rate of 95.0 kbit/s at a QBER of 4.1% and a \(g^{(2)}(0)\) value of 0.49. These first proof-of-principle QKD experiments using electrically operated semiconductor SPSs were considered a major step forward in photonic quantum technologies. Shortly after the lab-scale QKD experiments reported in 2012, the authors integrated the near-infrared-emitting SPS in a rather compact quantum transmitter setup to be employed for QKD field experiments in downtown Munich. These QKD experiments by Rau et al. [320] comprised a 500 m FSO link between two buildings of the Ludwig-Maximilians-Universität Munich, with the transmitter and receiver units synchronized via GPS-disciplined oscillators. Using a single-photon LED modulated at a clock rate of 125 MHz, the authors achieved sifted key rates of 7.4 kbit/s (11.6 kbit/s) at a quantum bit error ratio of 7.2% (6.3%) and a \(g^{(2)}(0)\) value of 0.39 (0.46) at low (moderate) excitation. Table 5 summarizes the QD-based single-photon QKD experiments discussed above.

Figure 37: (a) BB84-QKD testbed using a triggered QD-SPS for the development of tools for the performance optimization of single-photon QKD, reported by Kupko et al. [451].
(b) Exploiting temporal filtering in this testbed, the maximal tolerable loss inside the quantum channel can be enhanced by 24%. (c) QBER and \(g^{\left(2\right)}\left(0\right)\) as a function of time for the QD-SPSs. The latter parameter can be directly used for the key distillation process, enabling security monitoring in real time. (d) QKD testbed using a benchtop plug&play telecom-wavelength QD-SPS providing single-photon pulses via an SMF28 optical fiber for polarization coding. The 19-inch rack module houses a compact Stirling cryocooler including the fiber-pigtailed QD-device, a pulsed diode laser, and a fiber-based bandpass filter. (e) 2D temporal filtering for optimization of the expected secret key rate fraction \(S\) as a function of the temporal width \(\Delta t\) and the center \(t_{\mathrm{c}}\) of the acceptance time window for different losses in the quantum channel. Blue circles mark the optimal parameter sets indicated in the time-resolved measurements in the lower panels. (f) Rate-loss diagrams considering the experimental data from (d) showing the asymptotic (black) and finite (blue/orange) key rate with (solid lines) and without (dashed lines) optimization of the temporal acceptance time window, respectively. In the finite-key scenario, accumulation times of 100 seconds (blue) and 1 million seconds (orange) are considered. (a-c) reprinted from _Kupko et al. 2020_[451] under Creative Commons Attribution 4.0 International License, (d-f) reprinted from _Gao et al. 2022_[452] with the permission of AIP Publishing.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline Photonic Device & QD Material & \(\lambda\) [nm] & Pump & Coding & Clock [MHz] & FSO/FC & Sifted/Secure Key Rate & QBER [\%] & Ref. \\ \hline Micropillar & InAs/GaAs & 880 & optic. & Pol & 76 & FSO (In-Lab) & - / 25 kbps & 2.5 & [438] \\ Planar & InP/GaInP & 635 & optic. & Pol & 0.01 & FSO (In-Lab) & 15 bps / 5 bps & 6.8 & [445] \\ Micropillar & InAs/GaAs & 1300 & optic. & Phase & 1 & FC (35 km) & 10 bps / 1 bps & 5.9 & [439] \\ Planar Microcavity & InAs/GaAs & 895 & optic. & Pol & 40 & FC (2 km) & - / 8-600 bps & 1.2-21.9 & [446] \\ Optical horn & InAs/InP & 1580 & optic. & Phase & 20 & FC (50 km) & 15-386 bps / 3-9 bps & 3.4-6 & [440] \\ Micropillar LED & InGaAs/GaAs & 898 & elect. & Pol & 182.6 & FSO (In-Lab) & 8-35 kbps / - & 3.8-6.7 & [319] \\ Resonant-cavity LED & InP/GaInP & 653 & elect. & Pol & 200 & FSO (In-Lab) & 9-117 kbps / - & 4.1-6.0 & [319] \\ Micropillar LED & InGaAs/GaAs & 910 & elect. & Pol & 125 & FSO (500 m) & 5-17 kbps / - & 6-9 & [320] \\ Optical horn & InAs/InP & 1500 & optic. & Phase & 62.5 & FC (120 km) & 34 bps / 0.307 bps & 2-9 & [441] \\ \hline \end{tabular} \end{table} Table 5: Implementations of single-photon QKD based on the BB84 protocol and QD sources (abbreviations: light emitting diode (LED), polarization (Pol), free-space optical (FSO), fiber-coupled (FC)).

Despite the enormous progress seen in the development of telecom-wavelength SPSs (cf. Section 6.3), the fabrication of devices offering high performance remains challenging. Therefore, recent work also considers quantum frequency conversion to transfer the emission of high-performance NIR QD-SPSs to telecom C-band wavelengths [453]. Respective sources were first employed in proof-of-concept QKD experiments by Morrison et al. in 2022 [454].
Along this route, Zahidy et al. recently demonstrated QKD in an 18-km-long field-installed fiber link, generating a secret key at 2 kbit/s at 9.6 dB channel loss [455]. Other recent developments in the implementation of QD-based single-photon QKD aim at the performance optimization of single-photon QKD systems, as well as the development of compact devices for applications in practical scenarios. In this context, Kupko et al. studied the impact of temporal filtering on the performance of single-photon QKD and showed how the secret key rate and the achievable tolerable loss can be optimized using two-dimensional temporal filtering, as presented in Fig. 37(a-c) [451]. In addition, the authors demonstrated real-time security monitoring by evaluating \(g^{(2)}(0)\) in situ during key generation. Two years later, the same group reported on a benchtop QKD testbed using a stand-alone fiber-coupled QD-SPS emitting at telecom O-band wavelengths [452]. The plug&play device emitted single-photon pulses at 1321 nm and was based on a directly fiber-pigtailed, deterministically fabricated QD-device integrated into a compact Stirling cryocooler housed in a 19-inch rack module (see Fig. 37(d)). Emulating the BB84 protocol in their testbed, the authors achieved \(g^{(2)}(0)=0.10\pm 0.01\), a raw key rate of up to \((4.72\pm 0.13)\) kHz, and predicted tolerable losses of up to 23.19 dB by applying the 2D temporal filtering approach introduced in their previous work, as presented in Fig. 37(e, f). While Stirling-type refrigerators are the most compact solution to date, the achievable base temperatures are presently limited to about 27 K. For applications which rely on the excellent coherence properties of QDs, e.g. for the generation of highly indistinguishable photons, small-footprint Gifford-McMahon cryocoolers in combination with compact compressors are an alternative. It should be noted that, while this review focuses on QDs, several emerging quantum emitter platforms have recently attracted significant attention in the context of quantum information technologies and QKD in particular. In proof-of-concept studies, confined excitons in hexagonal boron nitride (hBN) [456], molecules of polyaromatic hydrocarbons [457], and monolayers of transition metal dichalcogenides [458] were considered and evaluated for their application in QKD, including an implementation of the B92 protocol [459] using a hBN-based SPS [460]. While the QKD implementations discussed in this section were performed in prepare-and-measure configuration, we review QKD experiments using QD-based entangled photon pair sources in the next section.

#### 8.1.2 Entangled-photon quantum key distribution

As discussed in section 2.1, QD-based QLSs can also be employed in entanglement-based QKD protocols. For this purpose, polarization-entangled photon pairs can be generated via the XX-X emission cascade (cf. Fig. 5(a)). Since the photons obey single-photon statistics, higher generation rates of entangled photons are possible compared to spontaneous parametric down-conversion sources [53, 54, 99]. Thus, QDs can be used for entanglement-based implementations of QKD, where the entangled photons can either be distributed via FSO or fiber-optical links. The first proof-of-concept demonstration of QD-based entanglement QKD was reported by Dzurnak et al. in 2015 [323]. Here, the authors implemented the BBM92 protocol using entangled photons generated via an entangled-light emitting diode, an electrically triggered QD-device introduced earlier by Salter et al.
[321]. Using the experimental setup depicted in Fig. 38(a), the entangled photon pairs were distributed via optical fibers connected to the receiver stations of Alice and Bob, analyzing the polarization state of the XX- and X-photon, respectively. The photons emitted by the biexciton and exciton state were spatially separated using a spectral filter (cf. Fig. 38(b)), before coupling them to the individual fiber links connected to Alice and Bob. Polarization entanglement of the photon pairs was verified by violating the CHSH inequality, yielding an \(S\)-parameter \(>\,2\) for vanishing time delays (cf. Fig. 38(c)). In the QKD experiment, a sifted key of 2053 bits was transferred with a bit error rate of 9.8%, below the threshold of 11% required for the security of the BBM92 protocol, resulting in a final secret key of 949 bits shared between Alice and Bob after error correction. A sifted key rate of about 10 bits/min was achieved in the experiment. All QD-based implementations of QKD discussed so far, including those summarized in section 8.1.1, used incoherent optical or electrical excitation. In 2021, two research groups independently reported the first entanglement-based QKD experiments employing coherently driven QD sources [461, 462]. In both experiments, a GaAs QD fabricated by the droplet-etching technique (cf. Fig. 39) was coherently excited via two-photon resonant excitation to generate entangled photon pairs. This enabled significant improvements in the single-photon purity and entanglement fidelity compared to the first-time demonstration discussed above. Schimpf et al. [461] distributed the entanglement of their QD-QLS using a 350 m optical fiber, resulting in an asymptotic secure key rate of 86 bits/s (cf. Fig. 38(b)). Basso-Basset et al. [462] realized a fiber link (250 m) as well as a free-space link (270 m) with comparable transmission lengths, allowing for a direct comparison of both channel types operated with the same QD-QLS (see Fig. 38(c)). The authors observed larger Bell parameters and fewer fluctuations for the entanglement distribution via the fiber-based link, which used active feedback to compensate for polarization-state distortions. Moreover, the average raw key rate of 486 bits/s achieved in the fiber link was higher than that of the FSO link (60 bits/s), due to environmental fluctuations in the FSO channel.

Figure 38: Entanglement-based implementations of QKD using QD-generated entangled photon pairs: (a) Experimental setup of the first demonstration of entanglement-based QKD by Dzurnak et al. [323] using an electrically triggered QD-device. Entanglement was verified by violating the CHSH inequality with an \(S\)-parameter \(>\,2\). (b) Schimpf et al. [461] and (c) Basso Basset et al. [462] realized entanglement-based QKD experiments of the BBM92 QKD protocol and an asymmetric Ekert protocol, respectively. Both groups used QD entangled photon pair sources coherently driven via two-photon excitation. (a,b,c) Reprinted from _Dzurnak et al.__2015_[323], with the permission of AIP Publishing. (d) reproduced from _Schimpf et al.__2021_[461] under Creative Commons Attribution License 4.0, (e) reprinted from _Basso Basset et al.__2021_[462] © The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a CC BY-NC 4.0 license.

Noteworthy, the two groups implemented slightly different protocols in their entanglement-based QKD experiments. In the demonstration by Basso-Basset et al., an asymmetric version of the original E91 protocol was applied.
Here, the violation of the CHSH inequality was evaluated using a subset of the transmitted bits, to quantify the degree of entanglement left after the photons propagated through the quantum channel. This reveals the amount of potential eavesdropping, in turn determining the amount of privacy amplification required for distillation of the secret key. On Alice's side, the authors measured in the basis set known to maximally violate the CHSH inequality, while the conventional BB84 basis was used for Bob's measurement. Compared to the original E91 protocol, this asymmetric approach reduces the number of detectors required. In the work by Schimpf et al. [461], the BBM92 protocol [72] was implemented, which represents the entanglement-based analog of the BB84 protocol. Here, Alice and Bob both measure their respective halves of the entangled state in two conjugate bases and evaluate deviations in a subset of their data. The result determines the amount of privacy amplification required for distilling the secret key. Hence, in contrast to the work by Basso-Basset et al., the degree of entanglement (i.e. the \(S\)-parameter) is not monitored during the key generation. Interestingly, the QD-sources used in the experiments by Schimpf et al. and Basso-Basset et al. exhibited a non-negligible blinking effect, which limited the photon pair extraction efficiency. This has been improved in a follow-up work by Schimpf et al., where the authors demonstrated entanglement QKD using a p-i-n doped QD diode delivering blinking-free entangled photon pairs under pulsed optical excitation [463]. More recently, the group in Rome also achieved daylight operation in their urban 270-m-long FSO QKD-link [464]. Here, Basso-Basset et al. used narrower spectral filtering and better stray-light suppression in combination with improved beam tracking to enable the continuous operation of their entanglement-based QKD-link over 3.5 days under different light and weather conditions. Table 6 summarizes the entanglement-based QKD experiments employing QD-sources as discussed above. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{Photonic Device} & \multirow{2}{*}{QD Material} & \(\lambda\) & \multirow{2}{*}{Pump} & \multirow{2}{*}{Protocol} & \multicolumn{2}{c}{Clock} & \multirow{2}{*}{FSO/FC} & \multicolumn{2}{c}{Sifted/Secure} & \multicolumn{2}{c}{QBER} & \multirow{2}{*}{Ref.} \\ & & [nm] & & & & & [MHz] & Key Rate & [\%] & \\ \hline Planar Microcavity LED & InAs/GaAs & 885 & elect. & BBM92 & 50 & 320 & FC (250 m) & 243 bps & - & 3.4 & [462] \\ ” & ” & ” & ” & ” & ” & ” & FSO (270 m) & 30 bps & 9 bps & 4.0 & ” \\ Planar Microcavity & GaAs/AlGaAs & 785 & optic. & BBM92 & 80 & FC (350 m) & 135 bps & 86 bps & 1.9 & [461] \\ Planar Microcavity Diode & 320 & FSO (270 m) & 106 bps & 11.5 & 7.16 & [464] \\ \hline \hline \end{tabular} a) Time-multiplexed detector effectively reduced clock rate to below 1 MHz; b) Diode structure used for spectral tuning; c) Modified asymmetric E91 protocol; \end{table} Table 6: Implementations of entanglement-based QKD using QD sources (abbreviations: light emitting diode (LED), free space optical (FSO), fiber-coupled (FC)) As discussed in section 2.1, the quantum cryptographic protocols of the type BB84, E91, or BBM92 are provably secure in an information theoretical sense. But if implemented with imperfect devices, security risks may arise from potential side-channel attacks. This motivates the exploration of DI-QKD protocols, as discussed in the following section. 
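In both protocol variants, the key is distilled from polarization correlations, and in the Ekert-type implementation a subset of events is sacrificed to evaluate the CHSH parameter \(S\). As a purely illustrative example, the following Python sketch shows how \(S\) is obtained from coincidence counts for the four analyzer settings; the counts are hypothetical values chosen to mimic a strongly entangled XX-X pair and are not data from the cited experiments.

```python
import numpy as np

def correlation(counts):
    """Correlation E(a, b) from coincidence counts ordered as (++, +-, -+, --)."""
    pp, pm, mp, mm = counts
    return (pp - pm - mp + mm) / (pp + pm + mp + mm)

# Hypothetical coincidence counts for the four CHSH analyzer settings.
settings = {
    ("a", "b"):   (425, 75, 75, 425),
    ("a", "b'"):  (75, 425, 425, 75),
    ("a'", "b"):  (430, 70, 80, 420),
    ("a'", "b'"): (420, 80, 70, 430),
}
E = {key: correlation(np.array(val, dtype=float)) for key, val in settings.items()}
S = E[("a", "b")] - E[("a", "b'")] + E[("a'", "b")] + E[("a'", "b'")]
print(f"S = {S:.2f} ({'violates' if abs(S) > 2 else 'compatible with'} the local bound |S| <= 2)")
```

An \(S\)-parameter above 2 on the monitored subset certifies that sufficient entanglement has survived the quantum channel and bounds the information available to an eavesdropper; in the BBM92 variant, the QBER measured in two conjugate bases plays the analogous role.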
#### 8.1.3 Towards device-independent quantum key distribution

Fully or partially device-independent QKD protocols are constructed such that they close all, or some specific, loopholes resulting from device imperfections in practical implementations. An important example of such protocols is MDI-QKD, which relies on the quantum interference of remote QLSs. By implementing triggered QLSs, i.e. spatially separated QDs, in an MDI-QKD setting, TPI visibilities exceeding the fundamental limit of 50% for WCPs become possible. As a result, the Bell-state measurements become more efficient [86], which is why substantial quantum advantages can be expected for MDI-QKD with QD-QLSs. The crucial prerequisite for implementations of MDI-QKD is the TPI at a beam splitter. While indistinguishable photons from the same QD-QLS have been reported several times with high TPI visibility [49, 50, 160, 442, 465], this is much more challenging to achieve for photons emitted from remote, i.e. spatially separated, QD-QLSs. Here, the spectral properties are of particular importance, due to the self-organized nature and the semiconductor environment of the quantum emitters. This firstly requires a coarse spectral matching of quantum emitters using pre-selection of suitable QDs. Using deterministic fabrication technologies (cf. section 5.1), the yield for identifying suitable candidates can be increased significantly. Additionally, a spectral fine-tuning of one of the QDs is typically required, for which several techniques can be used, such as tuning via temperature [466, 467], strain [299, 425], or electrical gates exploiting the quantum-confined Stark effect [468]. Noteworthy, these schemes are also directly compatible with fully integrated device concepts (see Fig. 35 and corresponding discussions). To reduce spectral drifts of two QD emitters relative to each other in a dynamic fashion, active feedback routines can be employed.

Figure 39: Progress towards the implementation of advanced QKD protocols: (a) Zopf et al. used active stabilization of QDs in remote TPI to improve the visibility [428]. (b) Weber et al. employed quantum frequency conversion to shift and spectrally match the emission of remote QDs to telecom C-band wavelengths [469]. (c) Zhai et al. used low-noise GaAs QDs to demonstrate near-unity photon indistinguishability from remote solid-state quantum emitters [300]. (a) reprinted with permission from _Zopf et al. 2018_[428] Copyright 2018 by the American Physical Society, (b) reprinted with permission from Springer Nature: Nature Nanotechnology _Weber et al. 2019_[469] Copyright 2019, (c) reprinted with permission from Springer Nature: Nature Nanotechnology _Zhai et al. 2022_[300] Copyright 2022.

In the remote TPI experiments by Zopf et al. [428], dynamic strain-tuning of both QD sources was implemented via piezo-electric actuators. Using a rubidium-vapor-cell-based Faraday filter (see Fig. 39(a)), spectral shifts in the emission wavelength were detected and used as input parameters for the feedback loop. Applying this technique, the remote TPI visibility was enhanced to 41% on average, compared to 31% without stabilization. A conceptually different technique was employed by Weber et al. using quantum frequency conversion of the single-photon emission of two spectrally unmatched QDs (see Fig. 39(b)) [469]. To perform their remote TPI experiment at telecom C-band wavelengths, the emission of both QDs was converted from the near-infrared to 1550 nm.
During the upconversion process, the spectral mismatch between both QDs (\(\approx 6\) GHz) was compensated for using two independently tunable lasers, resulting in a remote TPI visibility of 29%. The highest indistinguishability between remote QD sources was reported by Zhai et al. [300] in 2022 (see Fig. 39(c)). Using GaAs QDs fabricated by droplet-etching in combination with electrical gates to reduce spectral diffusion, unprecedentedly low levels of dephasing were obtained [105], resulting in remote TPI visibilities of up to 93%. The fact that this high visibility was realized without employing Purcell enhancement, tight spectral filtering, post-selection, or active stabilization raises promise for further improvements in the future. Another interesting recent experiment was reported by You et al. [470]: by demonstrating TPI of remote QD-based QLSs separated by 300 km of optical fiber, a remarkable distance record for the interference of QLSs was set. Here, again quantum frequency conversion to 1583 nm was used, as introduced earlier by Weber et al. [469]. The work summarized above showed that remote TPI visibilities far exceeding the classical limit of 50% can nowadays be achieved using QD QLSs (see [37], Table 2). This lays the foundation for implementations of MDI-QKD fully exploiting the single-photon advantages possible with state-of-the-art engineered QD sources. To quantify and predict the quantum advantage possible with sub-Poissonian light sources in practical settings, however, work from the theory side is also required. The secure key rates achievable in MDI-QKD with attenuated laser pulses have been studied analytically both in the asymptotic [17] and the finite-size [471] regime. Theoretical studies of MDI-QKD using deterministic SPSs enabling photon indistinguishability up to unity have yet to be conducted. Before recent advances in quantum teleportation and entanglement swapping experiments are reviewed in the following section, it should be noted that QD-generated indistinguishable photons also enable the generation of entanglement between remote solid-state spin qubits [472], as demonstrated experimentally using remote hole- [473] and electron- [474] spin qubits confined in distant QDs. This functionality opens the route towards the transfer and storage of quantum information in complex quantum network architectures.

### Quantum teleportation and entanglement swapping with QD photons

One of the basic elements of a BDCZ quantum repeater is a "quantum relay" [475], which entangles two photons (which we denote as XX\({}_{A}\) and XX\({}_{B}\)) of two EPR pairs (EPR\({}_{A}\) and EPR\({}_{B}\)) by performing a BSM on the other two photons (which we denote as X\({}_{A}\) and X\({}_{B}\)) and a classically-controlled rotation of the state of photon XX\({}_{B}\). This "entanglement swapping" operation [476] is equivalent to the teleportation of the state of one of the photons (e.g. X\({}_{A}\)) of the EPR\({}_{A}\) pair to one of the photons of the other pair ("target photon", XX\({}_{B}\)), resulting in a new pair of entangled photons (XX\({}_{A}\) and XX\({}_{B}\)). We have already seen that QDs can be used as EPR sources using the polarization degree of freedom of the XX and X photons emitted by the biexciton-exciton cascade. Fidelities to the ideal \(|\Phi^{+}\rangle_{\text{XX,X}}\) Bell state of up to about 98% have been reported for GaAs QDs [120].
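The algebra behind this swapping operation is compact enough to verify numerically. The following minimal NumPy sketch (an idealized textbook calculation rather than a model of any of the experiments discussed below) prepares two independent \(|\Phi^{+}\rangle\) pairs, projects the two X photons onto \(|\Psi^{-}\rangle\), i.e. the outcome heralded by a coincidence behind a 50/50 beam splitter, and checks that the two XX photons are left in a maximally entangled state.

```python
import numpy as np

# Polarization basis states and the relevant Bell states.
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
phi_plus = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)
psi_minus = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

# Two independent entangled pairs, qubit order (XX_A, X_A, X_B, XX_B).
state = np.kron(phi_plus, phi_plus).reshape(2, 2, 2, 2)

# Partial BSM: project the two X photons (qubits 1 and 2) onto |Psi->.
proj = psi_minus.reshape(2, 2)
swapped = np.einsum('abcd,bc->ad', state, proj.conj())

p_herald = np.sum(np.abs(swapped) ** 2)            # ideally 1/4 for this Bell outcome
swapped = swapped.reshape(4) / np.sqrt(p_herald)   # heralded state of (XX_A, XX_B)

fidelity = np.abs(np.vdot(psi_minus, swapped)) ** 2
print(f"heralding probability {p_herald:.2f}, fidelity to |Psi-> = {fidelity:.2f}")
```

With ideal input pairs, the heralded XX-XX state is exactly a Bell state; limited photon indistinguishability and a finite fine-structure splitting degrade precisely this fidelity, as quantified further below (cf. Fig. 40(e)).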
For a system consisting of two qubits \(|X\rangle_{A}\) and \(|X\rangle_{B}\), performing a BSM means projecting the state into one of the four Bell states \(|\Psi^{\pm}_{X}\rangle_{\text{A,B}}\) and \(|\Phi^{\pm}_{X}\rangle_{\text{A,B}}\), which represent a complete set of basis states for the corresponding 4-dimensional Hilbert space. For _indistinguishable_ photons, a partial BSM can be performed by HOM interference at a 50/50 beam splitter. For a non-polarizing beam splitter, the simultaneous detection of one photon at each of the two outputs of the beam splitter projects the X\({}_{A}\) and X\({}_{B}\) photons onto the \(\ket{\Psi_{X}^{-}}_{\text{A,B}}\) Bell state. The application of a \(\sigma_{y}\) gate on the target photon XX\({}_{B}\) completes the teleportation of the state of the control photon X\({}_{A}\) onto XX\({}_{B}\). The first successful implementation of quantum teleportation of the polarization state of a photon emitted by a QD onto the polarization of another photon emitted by the same QD was achieved by using an InGaAs QD in a light emitting diode under DC excitation as a source of entangled photon pairs with a fidelity of 0.77 [479]. In that experiment, crossed polarizers were inserted at the output ports of the BSM beam splitter, thus reducing by 1/2 the detection probability of the \(\ket{\Psi_{X}^{-}}_{\text{A,B}}\) state (and thus to 1/8 the total efficiency of the Bell-state measurement). Later on, a similar experiment was performed using a GaAs QD producing a stream of entangled photons with a fidelity of 0.92 under pulsed resonant excitation, using the arrangement shown in Fig. 40(a) [477]. In both works, photons subsequently emitted by the same QD were delayed so as to meet at the Bell-state-measurement beam splitter and used for teleportation. Teleportation fidelities well above the classical limit were observed in Ref. [477] for all chosen polarization states of the control X\({}_{L}\) photon, as shown in Fig. 40(b). More recently, improved BSM efficiency and noise rejection was reported in Ref. [462] using the experimental arrangement shown in Fig. 40(c). By inserting two polarizing beam splitters at the output of the 50/50 beam splitter, the \(\ket{\Psi_{X}^{+}}_{\text{A,B}}\) state also becomes detectable.

Figure 40: Quantum teleportation and entanglement swapping using photons emitted by one GaAs QD. (a) Experimental configuration used in Ref. [477] to teleport the state of the control photon X\({}_{L}\) onto the target photon XX\({}_{E}\) using the ancilla photon X\({}_{E}\) belonging to the "early" (E) entangled photon pair XX\({}_{E}\)-X\({}_{E}\). The photon X\({}_{L}\) is emitted by the same QD in a "later" (here 2 ns) excitation cycle. (b) Teleportation fidelity for three mutually unbiased basis states obtained with the experimental configuration in (a). (c) Experimental configuration used in Ref. [366] to increase the efficiency of the partial Bell-state measurement used for teleportation by a factor of 2 and improving the rejection of accidental coincidences. (d) Example of density matrix for a pair of photons XX\({}_{E}\)-XX\({}_{L}\) belonging to two subsequent XX-X photon cascades in a GaAs QD after entanglement swapping and (e) Contour plot showing the expected entanglement swapping fidelity as a function of TPI visibility and the ratio between fine-structure splitting and radiative lifetime. For more details see Ref. [478]. (a-b) reprinted from _Reindl et al._ _2018_[477] under Creative Commons CC BY license.
(c) reprinted from _Basso Basset et al._ _2021_[366] under Creative Commons CC BY license. (d-e) reprinted from _Basso Basset et al._ _2019_[478] under Creative Commons CC BY license.

A thorough analysis of the sources of imperfection in the above-mentioned experiments leads to the conclusion that the limited photon indistinguishability mostly affects the overall fidelity of the teleportation process. Although the TPI visibility for photons emitted by resonantly driven excitons or trions can exceed 98% (see Section 6.2), the XX-X photon pair from a cascade is not only entangled in polarization but also in time, as reported first by Moreau et al. [288], so that the purity and indistinguishability of the reduced one-photon system are not unity [365, 480, 481]. This correlation, together with noise sources inherent to the solid-state environment (interaction of excitons with charges, spins, and phonons), limits the fidelity of the BSM. In spite of the limited indistinguishability of X photons (TPI visibility of about 0.7), two experiments have recently been performed demonstrating entanglement swapping between two XX-X pairs emitted by a QD during two independent excitation cycles [478, 482]. Figure 40(d) shows the two-photon polarization density matrix for the system composed of the XX\({}_{E}\)-XX\({}_{L}\) photons, which are expected to be in the \(\Psi^{-}_{\rm XX,E,L}\) state after a BSM projecting the X\({}_{E}\)-X\({}_{L}\) pair onto the \(\Psi^{-}_{\rm X,E,L}\) state. Although the density matrix is close to the ideal one, mixing elements are observed, as neither spectral nor temporal filtering was applied to the data, making the swapping vulnerable to the limited fidelity of the BSM. The calculated effect of the non-perfect photon indistinguishability and entanglement fidelity (produced by a finite value of the excitonic fine-structure splitting) is shown in Fig. 40(e). A route to improve the performance of QDs for entanglement swapping experiments, relying on Purcell-enhanced emission and spectral filtering to alleviate the negative effect of time correlations in the cascaded decay, is sketched in Ref. [461]. However, excitation-induced effects inherent to the two-photon excitation method should be considered [464, 483], as discussed in Sec. 6.5. Performing entanglement swapping with photons emitted by remote QDs brings in additional challenges due to uncorrelated charge noise and blinking [484], further reducing the TPI visibility, and the necessity of tuning the QD emission energy while maintaining \(E_{\rm FSS}=0\). The first issue can be solved by embedding the QDs in charge-tunable devices [105, 300, 463] and the second by integrating the QD devices on top of multiaxial strain actuators [120, 294, 359, 485].

### Boson sampling

Recent advances in the coherent control, scale, and integration of photonic quantum information technologies promise practical applications of quantum states not only in communication but also in computation and simulation. While fault-tolerant universal quantum computation remains a long-term challenge, quantum machines with a limited number of quantum resources can already solve specific problems much faster than classical computers. Boson sampling is one such well-known computational task, formulated by Aaronson and Arkhipov [486]; it models the probability distribution of indistinguishable photons scattered in a linear interferometer. Its usefulness has been tested in simulating molecular vibronic spectra [96].
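In this formulation, the probability of registering a given pattern of single photons at the interferometer output is governed by the permanent of a submatrix of the interferometer's unitary transfer matrix, a quantity that is classically hard to compute. Purely as an illustration (with a hypothetical random interferometer, not one of the circuits used in the experiments discussed below), the following Python sketch evaluates such an output probability for the collision-free case using Ryser's formula.

```python
import numpy as np
from itertools import combinations

def permanent(M):
    """Permanent of a square matrix via Ryser's inclusion-exclusion formula."""
    n = M.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** (n - k) * np.prod(M[:, cols].sum(axis=1))
    return total

def output_probability(U, inputs, outputs):
    """Probability of one photon in each mode of `outputs`, given single photons
    injected into the modes `inputs` of the interferometer U (collision-free case)."""
    sub = U[np.ix_(outputs, inputs)]
    return np.abs(permanent(sub)) ** 2

# Hypothetical 6-mode Haar-random interferometer with 3 input photons.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
U, _ = np.linalg.qr(A)                      # unitary transfer matrix
p = output_probability(U, inputs=[0, 1, 2], outputs=[1, 3, 5])
print(f"P(photons exit in modes 1, 3, 5) = {p:.4f}")
```

Because the number of possible output patterns grows combinatorially and each permanent is itself expensive to evaluate, brute-force classical simulation quickly becomes intractable as the photon number increases, which is what makes high-rate, highly indistinguishable single-photon sources so attractive for this task.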
Photons and linear optics are well suited to realizing scalable boson sampling devices. The implementation of boson sampling with photons relies heavily on a high generation rate and indistinguishability of single photons, as well as on a low multi-photon probability. Heralded single photons and squeezed states in nonlinear crystals are widely used for implementing different types of boson sampling, such as scattershot and Gaussian boson sampling [487]. However, they still suffer from several difficulties in efficiency and scaling up. Nowadays, high-performance QD SPSs can generate trains of identical single photons at a high rate over a few tens of MHz and with high single-photon purity (\(g^{(2)}(0)<0.01\)) and indistinguishability (>0.99) [49]. As the \(N\)-photon input probability decreases exponentially with \(N\) for a given single-photon generation efficiency, and multi-photon events are the major source of error, QDs are considered a good alternative to heralded SPSs [488, 489]. From on-chip waveguide-integrated QDs, a single-photon generation rate of 122 MHz has been demonstrated while maintaining the indistinguishability of more than 100 single photons. By combining an active time-to-space demultiplexing technique with such indistinguishable single-photon trains, QD-SPSs simulated boson sampling with fivefold coincidence detection on a 16 × 16-mode ultralow-loss photonic circuit (see Fig. 41(a)) [488]. A different strategy used temporal modes instead of spatial modes via a loop-based interferometer to implement time-bin-encoded boson sampling [349]. This approach significantly reduces experimental overhead, as it only requires a bright SPS and two detectors with fiber delays (see Fig. 41(b)).

### Photonic quantum computing

Schemes for universal quantum computation using photons were proposed in 2001 based on linear optics and measurements, known as linear optical quantum computing [98] and measurement-based quantum computing [21]. The schemes are based on quantum interference and measurement-induced nonlinearities without direct photon-photon interactions. Most demonstrations of quantum gates and quantum algorithms typically employ photons from parametric down-conversion processes [20, 97], but as QD-SPSs are beginning to meet all the important criteria (see section 1) for ideal SPSs and to outperform existing SPSs, QDs are considered an excellent candidate for implementing photonic quantum computation. Furthermore, a variety of exciton complexes in QDs, including single excitons, biexcitons, and charged excitons, produce coherent single photons, entangled photon pairs, and spin-photon entanglement. As stationary spins provide local storage of quantum states [490, 491], creating highly efficient QD-photon interfaces plays a key role in implementing scalable photonic quantum computing architectures [492] and distributed quantum networks [343]. Quantum logic gates and Bell-state analyzers are basic units of quantum computation and an important prerequisite for implementing several quantum protocols such as teleportation or entanglement swapping. In measurement-based quantum optics experiments, such operations are inherently probabilistic, while introducing cavity (waveguide) QED with two-level atoms efficiently mediates nonlinear light-matter interactions and enables deterministic quantum operations and quantum non-demolition measurements.
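Several of the demonstrations discussed next consume photonic cluster states as the resource for measurement-based (one-way) quantum computing. As a purely conceptual aside (a textbook construction in a generic qubit register, not the photonic polarization encoding used in the experiments below), the following sketch builds a four-qubit linear cluster state by applying controlled-Z gates to \(|+\rangle\) states and verifies its defining stabilizers.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def op_on(n, ops):
    """Tensor product of single-qubit operators ops = {qubit: matrix} on n qubits."""
    return reduce(np.kron, [ops.get(q, I2) for q in range(n)])

def cz(n, a, b):
    """Controlled-Z between qubits a and b (diagonal in the computational basis)."""
    diag = np.ones(2 ** n)
    for idx in range(2 ** n):
        if (idx >> (n - 1 - a)) & 1 and (idx >> (n - 1 - b)) & 1:
            diag[idx] = -1.0
    return np.diag(diag)

n = 4
state = reduce(np.kron, [plus] * n)      # |+> on every qubit
for a in range(n - 1):                   # entangle nearest neighbours
    state = cz(n, a, a + 1) @ state      # -> linear cluster state

# A linear cluster state is stabilized by K_i = Z_{i-1} X_i Z_{i+1}.
for i in range(n):
    ops = {i: X}
    if i > 0:
        ops[i - 1] = Z
    if i < n - 1:
        ops[i + 1] = Z
    print(f"<K_{i}> =", round(float(state @ op_on(n, ops) @ state), 6))
```

Appropriately chosen single-qubit measurements on such a chain then drive the computation; this is precisely the resource that the fiber-loop entangler of Fig. 42(b) builds up photon by photon in the polarization degree of freedom.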
Figure 41: Schematics of experimental setup for boson sampling. (a) A single InAs/GaAs QD SPS combined with a time-to-space demultiplexer consisting of Pockels cells and polarizing beam splitters. The seven single photons are then coupled into an optical fiber and fed into a 16 × 16 modes interferometer on an ultra-low-loss photonic circuit. A five-photon boson sampling rate of 4 Hz was demonstrated. (b) Time-bin-encoded boson sampling consists of a single QD-micropillar device, two detectors, and a fiber loop-based interferometer. Three- and four-photon boson sampling rates of 18.8 and 0.2 Hz were reported. (a) Reprinted from _Wang et al. 2018_[488]. Copyright (2018) by the American Physical Society. (b) Reprinted from _He et al. 2017_[349]. Copyright (2018) by the American Physical Society.

The cavity (waveguide)-coupled quantum emitters have been intensely studied to realize scalable quantum architectures based on deterministic quantum gates and Bell-state analysis [493, 494, 495]. Such schemes were experimentally demonstrated by a number of groups [106, 394, 405, 496, 407]; Fig. 42(a) shows the deterministic photon-photon interaction mediated by a QD spin in Ref. [106]. This result can be used to implement single-photon transistors, which cannot be realized with conventional linear quantum optics. The capability of producing bright on-demand single-photon streams, whose polarizations are entangled with the spin state of QDs, is beneficial for creating photonic cluster states, which are essential for one-way quantum computing [21, 376, 406]. A four-photon linear cluster state has also been demonstrated using a single quantum emitter and a single sequential entangler based on a fiber delay loop [95] (see Fig. 42(b)). As another approach, a proof-of-principle demonstration of Shor's algorithm has been accomplished with a QD SPS and active multiplexers, factoring 15. Here, deterministically generated single photons with high extraction efficiency, single-photon purity, and indistinguishability are transformed into a four-photon cluster state and implement the inverse quantum Fourier transform [498] (see Fig. 42(c)). For a full-scale demonstration of quantum processing, a larger number of qubits, gates, and entangled states will be necessary. Furthermore, auxiliary qubits are also required to store the intermediate results, while electron or hole spins in QDs have a rather short coherence time of a few microseconds [491] compared to long coherence times of more than 1 second in other qubit platforms, such as atoms, ions, and color centers in crystals. This constraint could be addressed by constructing hybrid architectures with such dissimilar qubit sources. In particular, there exist a number of possible candidates, including rubidium (cesium) vapor cells [499, 500] and Nd\({}^{3+}\) doped crystals [501], which can optically interface with InGaAs or GaAs QDs. By matching their spectral frequencies, a single photon emitted by a QD can be delayed or stored in such strongly interacting hybrid quantum memories. Such hybrid architectures will provide a route for distributing entanglement and ultimately realizing distributed quantum computing. This enables access to more advanced quantum protocols and allows quantum resources from separate quantum modules to be utilized.

Figure 42: Schematics of deterministic photon-photon interactions, compiled Shor's algorithm, and linear cluster states based on cavity-coupled single QDs. (a) A single charged QD in a photonic crystal cavity creates spin-photon interfaces.
The first gate photon controls the spin state of a QD and then determines the polarization of incoming photons. (b) A single QD in a micropillar produces indistinguishable single-photon trains. An entanglement gate consisting of a fiber delay loop, a polarizing beam splitter, and an electrically driven polarization controller creates linear cluster states encoded in the polarization degree of freedom. A delay loop stores a photon until the next photon comes in. (c) Experimental setup for Shor's algorithm, consisting of an SPS with active demultiplexer, a quantum circuit, and four-fold correlation measurement. (a) Reprinted from [106] with permission from AAAS. (b) Reproduced from Ref. [95] under Creative Commons CC BY license. (c) Adapted with permission from Ref. [498]. © The Optical Society.

## 9 Open challenges and outlook

Based on the results presented, this section discusses open challenges and provides an outlook on future directions and required developments.

### Theory and numerical device modelling

With most QLS design approaches, numerical simulations typically predict higher figures of merit than those measured experimentally. While such deviations are often attributed to uncontrolled fabrication imperfections such as surface roughness or spatial QD-cavity misalignment, a careful validation of the theoretical predictions using accurate measurements and well-controlled deterministic fabrication techniques is a prerequisite to improving the device performance. A further practical challenge is the numerical difficulty in performing accurate simulations [152] of large device geometries as well as structures including geometrical features on several length scales. Finally, even assuming loss-less materials and perfect nanofabrication capabilities, it is still not fully understood how to increase the product \(\eta_{\text{ext}}V_{\text{TPI}}\) of the efficiency and the indistinguishability arbitrarily close to unity. The governing physics describing the light extraction for several QLS designs including the CBG resonator concept [52, 53, 54, 158] is not fully understood, posing a challenge for the device optimization. Furthermore, for the designs relying on Purcell enhancement, a fundamental trade-off persists between the achievable efficiency and indistinguishability in the presence of phonon-induced decoherence [171, 144].

### Epitaxial growth

The basis of any QD device is represented by epitaxially grown heterostructures. Different material combinations and growth methods have been explored in the last three decades, resulting in QDs with steadily improving performance. While strained InGaAs QDs obtained via the S-K growth mode on GaAs and with emission wavelength around 900 nm have been instrumental to many pioneering works [131, 30, 44, 13] and are now even commercially available, improved performance in terms of indistinguishability among photons emitted by separate QDs [300], polarization-entanglement fidelity [120], and electron spin coherence [502] has been achieved on almost unstrained GaAs QDs in nanoholes obtained by local droplet etching in AlGaAs [236, 234]. Significant deviations from the ideal figures of merit are however still observed, especially in the case of telecom-wavelength QDs. In part, these can be attributed to extrinsic effects, such as impurities, point- and extended defects, as well as surface and interface traps. Understanding the impact of these factors on the properties of QDs is a formidable challenge.
A pragmatic solution to achieve the highest possible material quality should focus on the careful selection of the source materials, of the methods for conditioning epitaxial-growth-systems, and of the growth parameters, similar to what is done for the fabrication of ultrahigh mobility electron- and hole-gases. Native and processed surfaces are also a source of noise, and many different passivation methods have been developed over the years for the commonly used compound semiconductors [503, 504, 505] and in part used also for QD structures (see also next section). There are also intrinsic factors, which should be considered and further understood, such as the role of QD structural properties (QD size, shape, strain etc.), alloy and interface disorder, heterostructure design [506], interaction with phonons in the used materials, and the nuclear spins of the contained atoms. In this respect, a close interaction between experiment and theory will be further required to engineer the QD properties to meet the increasingly stringent requirements posed by advanced experiments and applications. As hyperfine interaction limits the performance of commonly used compound semiconductors, it would be useful to "re-discover" nuclear spin-free materials such as II-VI semiconductors for which, however, point defects may be the limiting factor. ### Device nanofabrication The fabrication methods developed in recent years for the deterministic integration of QDs into photonic nanostructures complement established techniques such as reactive ion etching in a targeted manner. Today, they allow one to produce high-performance quantum devices with high process yield and high control of their electro-optical properties. With regard to the practical application of these devices, great progress was made concerning user-friendly on-chip fiber coupling, which made it possible to develop plug'n'play QD-SPSs and use them for QKD experiments. Despite these enormous technological advances, there are still open challenges and a need for optimization for the nanoprocessing of QD quantum devices. In addition to further optimizations in the area of QD growth mentioned above, and above all in relation to QDs with emission in the telecom O-band and C-band, further efforts are needed in the area of nanostructuring and device fabrication, for example to maximize photon indistinguishability, to enable scalability and to enhance the properties and capabilities of fiber-coupled QLSs. **Photon indistinguishability** In this context, charge fluctuations in the vicinity of the QD are problematic, leading to spectral diffusion and thereby limiting photon indistinguishability [507]. Externally applied electric fields can counteract this problem, but this complicates device design and fabrication and is sometimes impractical. Since spectral diffusion is often induced by the charging and discharging of defect states and etched surfaces, it will be important to effectively passivate component surfaces in the future. Promising results were published in Ref. [508], where it could be shown that surfaces coated with 15 nm of Al\({}_{2}\)O\({}_{3}\) using atomic layer deposition (ALD) lead to a significantly reduced blinking and spectral linewidth of QDs. ALD surface passivation using 8 nm layer of Al\({}_{2}\)O\({}_{3}\) was also carried out in a QD device with open cavity design to achieve a close-to-ideal photon indistinguishability of 96.7% [157]. 
**Scalability** Another open point concerns the scalability of single-QD devices to large-scale quantum networks and complex IQPCs. These advanced applications require a large number of QD-QLSs or QDs as single-photon emitters with identical emission wavelengths on the scale of homogeneous linewidth in the \(\mu\)eV range. In this context, self-organized QD growth is fundamentally problematic, since the position and spectral properties of individual QDs cannot be predetermined. In fact, when using conventional nanotechnology methods, the process yield of resonant QD devices would be drastically reduced and would take on negligibly small values as soon as one considers scaling across systems with more than one QD [266](supplementary information). However, the problem of the random position can be efficiently solved by the presented deterministic nanofabrication processes in order to create single-QD devices in a very controlled manner. With simultaneous spectral selection of QDs, also devices of the same emission energy on a scale of 0.1 - 1 meV can be produced. For the necessary resonance in the area of the homogeneous linewidth, however, spectral fine-tuning is inevitable. For this purpose, spectral control via the quantum confined Stark effect and via local strain tuning are very attractive, which has already been demonstrated in experiments on individual QDs [63, 358, 430], and more recently also for two-QD systems fabricated randomly [396, 509] and deterministically [270]. In the future, it will be interesting and important to establish this kind of spectral fine-tuning also in a scalable way in IQPCs based on many resonant QDs and in quantum repeater networks based on BSMs on indistinguishable photons emitted by remote SPSs. For a discussion, see section 8.2. In the case of the quantum networks, it will also be important to have an absolute wavelength reference for the spectral synchronization of the individual QLSs, for which atomic transitions could come into question, for example. **Advanced fiber-coupling** The development of efficient solutions for on-chip fiber coupling is an important basis for real applications of single-QD devices in photonic quantum technology. In the future, it will be essential to increase the coupling efficiency. Furthermore, it will be interesting to develop advanced fiber-coupling schemes that go beyond comparatively simple single-emitter single-mode connections described above. For instance, sources of entangled photon pairs play a central role in quantum repeater networks. In this context, it is important to develop solutions through which the polarization-entangled XX and X photons of a QD can be coupled into an optical fiber while maintaining the entanglement, and which ensures that the entangled photons are transmitted to two different fiber outputs. For this purpose, the specialty fiber would have to contain wavelength-selective elements that direct XX and X photons into different fiber outputs. Another interesting approach could be the coupling of emitter arrays and multicore fibers. In this way, quantum keys could be transmitted in parallel in several fiber cores, so that the achievable QKD transmission rate would be multiplied accordingly. One could also imagine using this concept to transmit quantum channels and classical channels simultaneously with little crosstalk via different cores of the multicore fiber. 
### Practical applications in quantum information

As discussed in this review article, in recent years QD-based QLSs have clearly proven their high potential for applications in quantum information technology. Allowing for the efficient generation of single, indistinguishable, and entangled photons with excellent quantum optical properties and at high rates, QD-sources are today able to outperform probabilistic sources. While numerous proof-of-principle experiments have already been demonstrated employing QD-sources, e.g. in implementations of QKD, quantum teleportation, or entanglement swapping, major challenges remain for their practical applications in quantum networks. Here, it is no longer sufficient to evaluate the QD source as an isolated device; it must rather be assessed as integrated in functional systems. One challenge in this context concerns the efficient coupling of flying qubits not only to the "first lens" in the controlled environment of a quantum-optical laboratory, but also to the quantum channel, such as a deployed optical fiber, typically including additional losses for quantum state preparation, and in realistic environments. The direct and permanent fiber-pigtailing of carefully optimized QD-devices in combination with compact cryocoolers (cf. Sections 5.2 and 8.1.1) offers promising routes to master this challenge. Experimental realizations employing this approach for stand-alone QD-sources, however, did not reach the performance level of lab-scale experiments. Thus, a crucial next step will be to show that practical QD-devices can indeed also exploit the full potential QDs offer in terms of photon extraction efficiency, single-photon purity and photon indistinguishability. Another major challenge concerns the achievement of high photon indistinguishability from remote solid-state quantum emitters - a crucial prerequisite for advanced schemes of quantum communication. As discussed in Section 8.1.3, the high fabrication quality and the degree of control possible with QD QLSs today have led to substantial advances in TPI experiments with remote sources, which resulted in numerous experiments exceeding the 50%-limit set for the TPI visibility of sources exhibiting Poissonian statistics. Increasing the reproducibility of high photon indistinguishability from multiple remote QLSs, also in practical scenarios outside shielded laboratories, will lead to breakthroughs in advanced implementations of quantum communication, ranging from MDI-/DI-QKD implementations with unprecedented performance, entanglement swapping and quantum teleportation of flying qubits from spatially distant sources (if combined with high entanglement fidelity), to long-haul quantum repeater links. Furthermore, on the very applied side, to benchmark different quantum communication protocols and different technology platforms, standards must be developed or agreed upon [510, 511]. Solutions can be to certify the same amount of overall \(\epsilon\)-security (see the discussion of security definitions in [512]) or newly defined figures of merit such as "security-per-dollar-spent", which consider the fact that different quantum communication architectures, which in principle promise different levels of security, also have different levels of implementation difficulty and hence cost.
And, last but not least, while the achievement of unconditional security, ruling out even the most unlikely attacks (that are practically impossible but allowed by the laws of quantum mechanics), remains the ultimate aim, one might be content with a more relaxed, applied form of security in practice - whether as the result of a deliberate trade-off between security gain and implementation cost, or as an intermediate step towards ultimate security. Noteworthy, the approach of assuming realistic restrictions on an adversary is well known and even required in the field of cryptographic primitives beyond QKD in untrusted settings, e.g. quantum oblivious transfer in the so-called noisy storage model [513], representing other crucial building blocks for modern communication networks [514]. Overall, the understanding of QDs as high-quality quantum emitters and the development of related quantum devices has reached a very high level of knowledge in science and technology, which is reflected for instance in the near-ideal emission characteristics of QD-QLSs. In addition to further optimization of the QDs and corresponding nanophotonic structures towards quantum devices with properties that come even closer to the ideal values, practical aspects in particular will play an important role in future developments. These certainly include the development of compact fiber-coupled QD-QLSs and a scalable QD technology for the implementation of complex quantum networks and highly functional IQPCs. In this context, the focus of the work will certainly shift away from basic research to applied research in the field of quantum engineering. This offers very attractive development opportunities in a multidisciplinary research environment that synergistically combines topics from basic physical research, nanophotonics, quantum optics, integrated photonics, network technology and quantum information science to realize highly innovative components for applications in quantum information technology.

## 10 Conclusion

In summary, this article has discussed the great potential of semiconductor QDs for applications in quantum information technology. Due to their discrete energy levels, these high-quality quantum emitters form almost ideal two-, three- and four-level systems via which individual photons, entangled photon pairs and also photonic cluster states can be generated on demand. In addition, charged QDs can act as coherent spin-photon interfaces. From a technological point of view, these quantum emitters are very attractive because they are fundamentally compatible with established manufacturing techniques such as semiconductor epitaxy. However, as we have discussed, related techniques need to be optimized for the particular requirements relevant to the fabrication of QD-based quantum devices. For example, highly symmetrical QDs, which are predestined for the generation of polarization-entangled photon pairs, can be produced epitaxially using the droplet-etching technique. Furthermore, QDs reaching the telecom O- and C-band can be realized via sophisticated material engineering in order to enable fiber-based quantum communication. Corresponding developments require special theoretical concepts and numerical methods that represent a link between device design, growth, nanofabrication and optical characterization and form an important basis for device optimization for the targeted applications in quantum information technology.
Building on this, we presented modern manufacturing concepts that can be used to fabricate single-QD devices with the highest process control in a deterministic and scalable manner. These concepts include, above all, in situ lithography techniques, in which suitable QDs are first selected and then integrated, spatially and spectrally aligned, into nanophotonic structures. Nanophotonic structures and resonators serve, on the one hand, to increase the photon extraction efficiency of the QDs to values beyond 70%. On the other hand, they can exploit the Purcell effect to optimize emission dynamics and quantum optical properties, such as indistinguishability. In this way, almost ideal QD-QLSs with high multi-photon suppression, indistinguishability and entanglement fidelity, in combination with high single-photon emission rates in a wide wavelength range from about 780 nm to 1550 nm, have been developed in recent years. In addition, important progress towards user-friendly QLSs has been made by coupling the QD devices on-chip with optical fibers and integrating them into stand-alone cryostats for direct integration into QKD testbeds and, prospectively, into fiber-based quantum networks. In fact, the first QKD experiments with QD-QLSs are very promising, for example in terms of the achievable data transmission rates, while it is becoming clear that the full potential of QD-QLSs will only be unlocked in advanced quantum communication networks, which are based on entanglement distribution, spin-photon entanglement and quantum state transfer. In addition to quantum communication applications, the field of photonic quantum processors and quantum computers has generated great interest. As we discussed, QDs can also form important building blocks here, for example by being scalably integrated into integrated quantum photonic circuits. These can have a hybrid architecture in order to contain highly functional single-photon detectors and elements such as ring resonators for qubit manipulation in addition to the quantum emitters themselves. In this context, 2D photonic cluster states, which promise efficient computing operations via one-way quantum computing, play a special role. While it has already been possible to generate 1D photonic cluster states, it remains a great challenge to generate higher-dimensional photonic cluster states, which could be done efficiently with quantum dot molecules. In conclusion, this article gave an insight into the fascinating advances of QD research activities in the field of photonic quantum technologies. These include a wide range of theoretical questions, technical solutions, and experimental methods, which, in a very interdisciplinary environment in cooperation with experts from quantum optics and quantum information science, form the basis for the application of the corresponding QD devices in quantum communication and photonic quantum computing. The enormous momentum in the development of QD-based quantum devices and their excellent performance parameters lead to the conclusion that semiconductor QDs will play an important role in the implementation of complex quantum networks and the future quantum internet. **Funding.** Content in the funding section will be generated entirely from details submitted to Prism. **Acknowledgments.** T.H. acknowledges fruitful discussions with Daniel A. Vajner. N.G. acknowledges fruitful discussions with Luca Vannucci and Dara P. S. McCutcheon. **Disclosures.** The authors declare no conflicts of interest. 
**Data availability.** No data were generated or analyzed in the presented research. **Abbreviations.** * Bell-state measurements (BSMs) * biexciton (XX) * cathodoluminescence (CL) * Clauser, Horne, Shimony, and Holt (CHSH) * circular Bragg grating (CBG) * device-independent (DI) QKD * Einstein-Podolski-Rosen (EPR) * electron beam lithography (EBL) * exciton (X) * Frank-van-der Merwe (F-M) * free-space optical (FSO) * Greenberger-Horne-Zeilinger (GHZ) * Hong-Ou-Mandel (HOM) * integrated quantum photonic circuit (IQPC) * light emitting diode (LED) * measurement-device-independent QKD (MDI-QKD) * metal-organic-vapor-phase-epitaxy (MOVPE) * molecular beam epitaxy (MBE) * numerical aperture (NA) * photonic crystal (PC) * quantum bit error ratio (QBER) * quantum dot (QD) * quantum key distribution (QKD) * receiver (Bob) * sender (Alice) * two-photon excitation (TPE) * two-photon interference (TPI) * cavity quantum electrodynamics (cQED) * distributed Bragg reflector (DBR) * scanning tunneling microscopy (STM) * Volmer-Weber (V-W) * scanning electron microscope (SEM) * single-photon source (SPS) * Stranski-Krastanow (S-K) * single-mode fiber (SMF)
2303.00014
Light Shining Through a Thin Wall: Evanescent Hidden Photon Detection
A kinetically-mixed hidden photon is sourced as an evanescent mode by electromagnetic fields that oscillate at a frequency smaller than the hidden photon mass. These evanescent modes fall off exponentially with distance, but nevertheless yield detectable signals in a photon regeneration experiment if the electromagnetic barrier is made sufficiently thin. We consider such an experiment using superconducting cavities at GHz frequencies, proposing various cavity and mode arrangements that enable unique sensitivity to hidden photon masses ranging from $10^{-5}$ eV to $ 10^{-1}$ eV.
Asher Berlin, Roni Harnik, Ryan Janish
2023-02-28T19:00:03Z
http://arxiv.org/abs/2303.00014v1
# Light Shining Through a Thin Wall: Evanescent Hidden Photon Detection ###### Abstract A kinetically-mixed hidden photon is sourced as an evanescent mode by electromagnetic fields that oscillate at a frequency smaller than the hidden photon mass. These evanescent modes fall off exponentially with distance, but nevertheless yield detectable signals in a photon regeneration experiment if the electromagnetic barrier is made sufficiently thin. We consider such an experiment using superconducting cavities at GHz frequencies, proposing various cavity and mode arrangements that enable unique sensitivity to hidden photon masses ranging from \(10^{-5}\) eV to \(10^{-1}\) eV. + Footnote †: preprint: FERMILAB-PUB-23-073-SQMS-T ## I Introduction Massive vector fields with feeble couplings to Standard Model (SM) particles may readily exist and present a compelling target for experimental searches. Such _hidden photons_[1] have been well-studied both in their own right and as a possible component of the dark matter sector (see, e.g., Refs. [2; 3; 4; 5] and references within). A simple and natural possibility is that the hidden photon couples to the SM through a kinetic mixing \(\epsilon\ll 1\), \[\mathscr{L}=-\frac{1}{4}\,F^{2}-\frac{1}{4}\,F^{\prime\,2}+\frac{1}{2}\,m_{A ^{\prime}}^{2}A^{\prime\,2}+\frac{\epsilon}{2}\,FF^{\prime}-jA\, \tag{1}\] where \(A\) is the SM photon field and \(F\) its field strength, \(A^{\prime}\) is the hidden photon field with mass \(m_{A^{\prime}}\) and \(F^{\prime}\) its field strength, and \(j\) is the electromagnetic (EM) current density. We have suppressed Lorentz indices and absorbed the EM coupling into \(j\) for brevity. Many experiments have directly searched for or indirectly constrained the existence of such a kinetically-mixed hidden photon [3; 4; 5; 6]. Most of these efforts have involved searching for the effects of hidden photons that are produced _on-shell_, since a detectable signal often requires propagation across a considerable distance compared to an \(A^{\prime}\) Compton wavelength.1 Experiments of this form include photon regeneration or "light-shining-through-wall" (LSW) experiments, such as ALPS [10] and CROWS [11], as well as Dark SRF, which recently set its first limits employing two ultra-high quality superconducting radio frequency (SRF) cavities [12]. Footnote 1: Exceptions to this are, e.g., limits derived from tests of Coulomb’s law [7; 8] and atomic spectroscopy [9]. In the field basis of Eq. (1), an LSW setup involves driving an EM field \(A\) at a frequency \(\omega\) which in turn produces hidden photons \(A^{\prime}\). These hidden photons have energy \(\omega\) and propagate with momentum \(k=\sqrt{\omega^{2}-m_{A^{\prime}}^{2}}\) across a barrier opaque to SM photons, after which they source a SM field \(A\) in a shielded detection region. For an off-shell hidden photon \(m_{A^{\prime}}>\omega\), the signal is evanescent, i.e., it is suppressed by the propagation distance \(d\) as \(e^{-|k|d}\). In most cases, \(d\) corresponds to the separation between the production and detection regions, which is usually chosen to be \(d\gtrsim\omega^{-1}\). Thus, in such an arrangement, searching for more massive hidden photons requires operating EM sources at higher frequencies, since this increases the maximum mass for which such particles can propagate on-shell. This is the strategy employed in current experiments such as ALPS [10] and recently proposed millimeter wavelength setups [13]. 
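To make the evanescent regime concrete, note that for \(\omega<m_{A^{\prime}}\) the momentum defined above is imaginary, so the attenuation length is set by the hidden photon mass rather than by the drive frequency. As a rough numerical illustration (using \(\hbar c\simeq 197\ {\rm eV\,nm}\); the benchmark mass is chosen for orientation only),
\[k=\sqrt{\omega^{2}-m_{A^{\prime}}^{2}}=i\sqrt{m_{A^{\prime}}^{2}-\omega^{2}}\ \xrightarrow{\ m_{A^{\prime}}\gg\omega\ }\ |k|\simeq m_{A^{\prime}}\,,\qquad m_{A^{\prime}}^{-1}\simeq 10\ \mu{\rm m}\times\left(\frac{0.02\ {\rm eV}}{m_{A^{\prime}}}\right),\]
so an \(e^{-|k|d}\) suppression remains mild for barriers thinner than \(\sim 10\ \mu{\rm m}\), even for masses several orders of magnitude above GHz-scale drive frequencies (\(\omega\sim 10\ \mu\)eV).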
However, using higher frequency sources is not a fundamental requirement. It appears as a consequence of requiring long propagation distances in an LSW experiment, which can be avoided by employing a thin barrier, \(d\ll\omega^{-1}\). We are thus motivated to consider light-shining-through-_thin_-wall (LSthinW) setups, which allow for the detection of evanescent (i.e., virtual or off-shell) hidden photon signals for \(m_{A^{\prime}}\gg\omega\). There is considerable advantage in this approach as opposed to increasing the frequency, since lower frequency sources are able to generate a larger density of source photons and employ very high-quality resonators, such as SRF cavities. It should be noted that thin barriers can still provide effective EM shielding. In the radio frequency (RF) regime, \(\omega\sim 1\) GHz \(\sim 10\)\(\mu\)eV and the length scale \(\omega^{-1}\) is of order 10 cm. This is much larger than the penetration depth of RF fields into (super)conductors, which can be as small as \(\sim 50\) nm, allowing for \(d\ll\omega^{-1}\). Further, it is crucial to note that in an evanescent LSthinW setup, it is not required that the photon detector be located within a Compton wavelength \(m_{A^{\prime}}^{-1}\) of the barrier. Although the virtual hidden photons are restricted to this small region, they act as a localized source of on-shell photons which readily propagate to a distant detector. It follows that, apart from the thickness of the barrier, there is no exponential suppression relative to any other experimental length scale, such as the size or frequency of a detection cavity. Similar insights were noted previously in Ref. [14] within the context of LSW searches for electromagnetically-coupled axions. In this work, we propose a simple LSthinW setup to search for hidden photons with mass \(m_{A^{\prime}}\gg 1\)\(\mu\)eV using high-quality SRF cavities. As shown schematically in Fig. 1, a driven "emitter" cavity sources both SM and hidden fields at frequency \(\omega\sim 1\) GHz. A narrow superconducting barrier separates and shields the emitter from a quiet "receiver" cavity. The SM field is exponentially attenuated through the barrier over the London penetration depth \(\lambda_{\rm L}\sim 50\) nm, while the hidden field is attenuated over \(1/m_{A^{\prime}}\sim 1\) mm \(\times\) (meV/\(m_{A^{\prime}}\)). Thus, for a barrier of thickness \(10~{}\mu\)m \(\lesssim d\lesssim m_{A^{\prime}}^{-1}\), the receiver cavity is shielded from the large driven fields, while the evanescent hidden photon field can penetrate into the receiver cavity and excite its resonant modes. As we show, such an experiment can provide leading sensitivity to hidden photons in the \(10~{}\mu\)eV \(-100\) meV mass range, whether or not they constitute the dark matter. The rest of this work proceeds as follows. In Sec. II, we introduce the formalism needed to calculate LSthinW signals. In Sec. III, we specialize to the evanescent regime and highlight the signal parametrics particular to this case. We apply this formalism to optimize a simple LSthinW setup in Sec. IV and then estimate the sensitivity of several concrete searches. We conclude in Sec. V. ## II General formalism Here we review the classical equations of motion governing SM and hidden photon fields, the production of hidden photons, and their excitation of EM cavities. We use some of the formalism of Ref. 
[15], which studied on-shell hidden photons in the far-field limit, but we reformulate some of the key results to highlight the physics of the off-shell evanescent case. The results of this section are general, however, and can be applied to hidden photons of any mass (provided that \(\epsilon\ll 1\)) and are equivalent to the results of Ref. [15]. Later in Sec. III, we will apply these results to the evanescent case. ### Classical Field Equations We begin by obtaining the classical wave equations for the photon and hidden photon fields in a field basis which diagonalizes the kinetic and mass terms of Eq. (1). Eq. (1) is converted by means of the field redefinitions \[A^{\mu}\to A^{\mu}+\frac{\epsilon}{\sqrt{1-\epsilon^{2}}}\,A^{\prime\,\mu} \tag{2a}\] \[A^{\prime\,\mu}\to\frac{1}{\sqrt{1-\epsilon^{2}}}\,A^{\prime\,\mu}\,, \tag{2b}\] which yields \[\mathscr{L}=-\frac{1}{4}\,F_{\mu\nu}F^{\mu\nu}-\frac{1}{4}\,F^{\prime}_{\mu\nu}F^{\prime\,\mu\nu}+\frac{1}{2}\,\frac{m_{A^{\prime}}^{2}}{1-\epsilon^{2}}\,A^{\prime}_{\mu}A^{\prime\,\mu}-j_{\mu}\left(A^{\mu}+\frac{\epsilon}{\sqrt{1-\epsilon^{2}}}\,A^{\prime\,\mu}\right)\,. \tag{3}\] In this basis, SM charges couple to both the massless and massive fields, which are themselves uncoupled. Hence, the wave equations follow immediately from Eq. (3) as \[\left(\partial_{t}^{2}-\nabla^{2}\right)\vec{A}=\vec{j} \tag{4a}\] \[\left(\partial_{t}^{2}-\nabla^{2}+\frac{m_{A^{\prime}}^{2}}{1-\epsilon^{2}}\right)\vec{A}^{\prime}=\frac{\epsilon}{\sqrt{1-\epsilon^{2}}}\,\vec{j}\,, \tag{4b}\] where \(\vec{A}\) and \(\vec{A}^{\prime}\) are the SM and hidden vector potentials, respectively, and \(\vec{j}\) is the SM current density. Eq. (4a) is Maxwell's wave equation in Lorenz gauge, with the scalar potential \(\phi\) determined by \(\partial_{t}\phi=-\vec{\nabla}\cdot\vec{A}\,\). The hidden potentials obey an identical condition,2 and the electric and magnetic fields are determined from the potentials in the usual manner for both the SM and hidden fields.
Figure 1: **Left**: A sketch of the LSthinW setup. A highly excited emitter cavity is separated from a quiet receiver cavity by a thin wall. The driven emitter and signal receiver modes are shown in blue. **Right**: A closeup of the thin wall. The visible photon field (blue) is suppressed over the short London penetration depth \(\lambda_{\rm L}\) whereas the evanescent invisible mode (pink) extends across the hidden photon Compton wavelength \(\sim 1/m_{A^{\prime}}\), which can be orders of magnitude larger than the wall thickness. The receiver mode is resonantly excited to detectable levels even though it is effectively sourced by the thin region of non-zero invisible field near the receiver side of the wall.
As discussed in Ref. [15], to compute field propagation across a conducting barrier it is useful to define "visible" and "invisible" linear combinations of fields that couple to or are completely sequestered from SM sources, respectively. In such a basis, conducting boundary conditions are simple to enforce as they apply only to the visible field and do so in the standard way. The required transformation can be read off of Eq. (3). We define \[A^{\mu}_{\rm vis}=\sqrt{1-\epsilon^{2}}\,A^{\mu}+\epsilon\,A^{\prime\,\mu} \tag{5a}\] \[A^{\mu}_{\rm inv}=-\epsilon\,A^{\mu}+\sqrt{1-\epsilon^{2}}\,A^{\prime\,\mu}\,, \tag{5b}\] and also take \(e\to\sqrt{1-\epsilon^{2}}\,e\) such that the visible field couples to SM charge via the empirical EM coupling constant. Eq. (4) then becomes
\[\left(\partial_{t}^{2}-\nabla^{2}\right)\vec{A}_{\rm vis}=\vec{j}-\frac{\epsilon\,m_{A^{\prime}}^{2}}{1-\epsilon^{2}}\,\vec{A}^{\prime} \tag{6a}\] \[\left(\partial_{t}^{2}-\nabla^{2}+\frac{m_{A^{\prime}}^{2}}{1-\epsilon^{2}}\right)\vec{A}_{\rm inv}=-\frac{\epsilon\,m_{A^{\prime}}^{2}}{1-\epsilon^{2}}\,\vec{A}. \tag{6b}\] Note that we have kept the right-hand-side of these equations written in terms of the mass-basis fields \(\vec{A}\) and \(\vec{A}^{\prime}\) for simplicity, but since these are linear combinations of \(\vec{A}_{\rm vis}\) and \(\vec{A}_{\rm inv}\), Eq. (6) contains only two undetermined fields. On the right-hand-side of Eq. (6a), \(\vec{A}^{\prime}\) enters analogously to a current that sources \(\vec{A}_{\rm vis}\). Hence, to leading order in \(\epsilon\ll 1\), we define this _effective current_ as \[\vec{j}_{\rm eff}=-\epsilon\,m_{A^{\prime}}^{2}\,\vec{A}^{\prime}. \tag{7}\] This form of the effective current is equivalent to that presented in Ref. [15], \(\partial_{t}\,\vec{j}_{\rm eff}=\epsilon\,(m_{A^{\prime}}^{2}\vec{E}^{\,\prime}-\vec{\nabla}\,\vec{\nabla}\cdot\vec{E}^{\,\prime})\). This follows from the definition of \(\vec{E}^{\,\prime}\) in terms of its potentials, as well as Gauss's law for the hidden field in vacuum, \(\vec{\nabla}\cdot\vec{E}^{\,\prime}=-m_{A^{\prime}}^{2}\phi^{\,\prime}\), as derived from Eq. (3). ### Sourcing the Hidden Field Consider an emitter cavity of volume \(V_{\rm em}\) which contains a driven, monochromatic SM field at frequency \(\omega\). We want to determine the hidden fields that are produced outside of this cavity. We work here with the mass-basis fields of Eq. (4), as this will prove to yield a result which is particularly useful for the evanescent case. At \(\mathcal{O}(\epsilon^{0})\), the emitter cavity is described by the driven massless fields \(\vec{E}_{\rm em}\) and \(\vec{B}_{\rm em}\) oscillating at frequency \(\omega\), which vanish within the conducting walls of the cavity and obey conductor boundary conditions3 on the inner cavity surface \(\partial V_{\rm em}\). These boundary conditions are maintained by charges and currents on \(\partial V_{\rm em}\), e.g., the tangential magnetic field is supported by a surface current \(\vec{K}_{\rm em}\) given by \[\vec{K}_{\rm em}=-\hat{n}\times\vec{B}_{\rm em}\,, \tag{8}\] where \(\hat{n}\) is the unit normal oriented outward to \(\partial V_{\rm em}\) and \(\vec{B}_{\rm em}\) is evaluated on \(\partial V_{\rm em}\). Footnote 3: \(\vec{B}_{\rm em}\) is tangential to the surface and \(\vec{E}_{\rm em}\) is normal to the surface. From Eq. (4b), this current sources an \(\vec{A}^{\prime}\) which to leading order in \(\epsilon\) is \[\vec{A}^{\prime}(\vec{x})\simeq\frac{\epsilon}{4\pi}\,e^{i\omega t}\int_{\partial V_{\rm em}}{\rm d}^{2}x^{\prime}\,\frac{e^{-ik|\vec{x}-\vec{x}^{\prime}|}}{|\vec{x}-\vec{x}^{\prime}|}\,\vec{K}_{\rm em}(\vec{x}^{\prime})\,, \tag{9}\] with \(k=\sqrt{\omega^{2}-m_{A^{\prime}}^{2}}\) the wavenumber defined above (taking \(k=-i|k|\) in the evanescent regime \(m_{A^{\prime}}>\omega\), so that the sourced field decays with distance). 
The hidden field sourced in this way acts, via Eq. (7), as an effective current that can excite the resonant modes of the receiver cavity. Decomposing the visible field in the receiver into these modes, \(\vec{E}_{\rm vis}(\vec{x},t)=\sum_{p}\mathcal{E}_{p}(t)\,\vec{E}_{p}(\vec{x})\), the coefficient \(\mathcal{E}_{p}(t)\) encodes the excitation amplitude and phase at time \(t\), and the mode profile \(\vec{E}_{p}(\vec{x})\) satisfies \[\vec{\nabla}\cdot\vec{E}_{p}=(\nabla^{2}+\omega_{p}^{2})\,\vec{E}_{p}=\vec{E}_{p}\times\hat{n}\Big{|}_{\partial V_{\rm rec}}=0. \tag{13}\] Here, \(\omega_{p}\) is the mode's resonant frequency and \(\hat{n}\) is the unit normal to the surface \(\partial V_{\rm rec}\) of the receiver cavity. Performing this decomposition in Eq. (11) yields an equation for the time-evolution of \(\mathcal{E}_{p}(t)\), \[\left(\partial_{t}^{2}+\frac{\omega_{p}}{Q}\,\partial_{t}+\omega_{p}^{2}\right)\mathcal{E}_{p}\simeq-\frac{\int_{\rm rec}{\rm d}^{3}x\,\vec{E}_{p}^{*}\cdot\partial_{t}(\vec{j}+\vec{j}_{\rm eff})}{\int_{\rm rec}{\rm d}^{3}x\,|\vec{E}_{p}|^{2}}\;, \tag{14}\] where on the left-hand-side we have inserted a damping term, quantified by the quality factor \(Q\) of the receiver resonant mode. The integrals on the right-hand-side are performed over the volume of the receiver. Assuming that there are no SM currents within the inner volume of the receiver cavity, \(\vec{j}\) can be dropped from the right-hand-side of Eq. (14) since the conducting boundary condition implies that current on the inner walls of the cavity is orthogonal to \(\vec{E}_{p}\) at the boundary. On resonance (\(\omega=\omega_{p}\)), the visible field in the receiver is \[\vec{E}_{\rm vis}(\vec{x})\simeq-\,\frac{Q}{\omega_{p}}\,\vec{E}_{p}(\vec{x})\,\frac{\int_{\rm rec}{\rm d}^{3}x^{\prime}\,\vec{E}_{p}^{*}(\vec{x}^{\prime})\cdot\vec{j}_{\rm eff}(\vec{x}^{\prime})}{\int_{\rm rec}{\rm d}^{3}x^{\prime}\,|\vec{E}_{p}(\vec{x}^{\prime})|^{2}}\,, \tag{15}\] which corresponds to a signal power of \[P_{\rm sig}=\frac{\omega_{p}}{Q}\int_{\rm rec}{\rm d}^{3}x\,|\vec{E}_{\rm vis}|^{2}\simeq\frac{Q}{\omega_{p}}\,\frac{\big{|}\int_{\rm rec}{\rm d}^{3}x\,\vec{E}_{p}^{*}\cdot\vec{j}_{\rm eff}\big{|}^{2}}{\int_{\rm rec}{\rm d}^{3}x\,|\vec{E}_{p}|^{2}}. \tag{16}\] ## III Evanescent signals Eqs. (12) and (16) can be numerically evaluated for any choice of experimental and model parameters to determine the corresponding signal strength. However, it is useful to also have analytical expressions for simple geometries. This was done in Ref. [15] in the limit that the separation \(d\) between the emitter and receiver cavities is much greater than the size of either cavity. In this work we are interested in the opposite limit, where the cavities are very closely spaced and in the evanescent regime. As we show below, it is also possible to analytically evaluate \(\vec{j}_{\rm eff}\) and \(P_{\rm sig}\) in this limit. To begin, let us take a simple LSthinW setup consisting of two cavity volumes obtained from a single larger volume partitioned by a thin conducting surface \(\mathcal{S}\) (as shown in Fig. 2), whose thickness \(d\) is much smaller than both the cavity length and the Compton wavelength of the hidden photon, but much thicker than the EM penetration depth of the partition material. 
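The hierarchy of length scales assumed in this limit can be summarized compactly as \[\lambda_{\rm L}\ \ll\ d\ \ll\ m_{A^{\prime}}^{-1}\ \ll\ \omega^{-1}\sim L\,,\] where \(\lambda_{\rm L}\) is the EM penetration depth of the partition material and \(L\) denotes the typical cavity size; the last inequality simply restates the evanescent condition \(m_{A^{\prime}}\gg\omega\) for cavities driven near resonance.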
Consider \(\vec{j}_{\rm eff}(\vec{x})\) at a point \(\vec{x}\) fixed in the receiver cavity, within a distance \(m_{A^{\prime}}^{-1}\) of \(\mathcal{S}\) and a distance much greater than \(m_{A^{\prime}}^{-1}\) from the external walls of the receiver cavity (i.e., the top and bottom of Fig. 1). The integrand in the expression for \(j_{\rm eff}\) in Eq. (12) is exponentially suppressed for \(|\vec{x}\,^{\prime}-\vec{x}|\gg 1/m_{A^{\prime}}\), where \(\vec{x}\,^{\prime}\) is a point on \(\mathcal{S}\). Thus the \(\vec{x}\,^{\prime}\) integral is heavily weighted near \(\vec{x}\,^{\prime}_{0}\), defined as the point on \(\mathcal{S}\) that is closest to \(\vec{x}\). These coordinates are shown in Fig. 2. We thus expand \(\vec{K}_{\rm em}(\vec{x}\,^{\prime})\) around \(\vec{x}\,^{\prime}_{0}\) in the integrand of Eq. (12), which is a good approximation for \(\vec{x}\) away from the external edges of the receiver cavity and provided that \(\omega/m_{A^{\prime}}\ll 1\). Then, taking \(\mathcal{S}\) to be approximately planar near \(\vec{x}\,^{\prime}_{0}\), we may extend the integration region to an infinite plane and evaluate Eq. (12) to find \[\vec{j}_{\rm eff}(\vec{x})\simeq\frac{1}{2}\,\epsilon^{2}\,m_{A^{\prime}}\,e^{i\omega t}\,e^{-m_{A^{\prime}}|\vec{x}-\vec{x}\,^{\prime}_{0}|}\ \hat{n}\times\vec{B}_{\rm em}(\vec{x}\,^{\prime}_{0}) \tag{17}\] up to \(\mathcal{O}(\omega/m_{A^{\prime}})\), where as in Fig. 2 the \(\hat{n}\)-axis is defined along the direction normal to \(\mathcal{S}\) pointing from \(\vec{x}\,^{\prime}_{0}\) to \(\vec{x}\). For later convenience, we factorize the spatial dependence of \(\vec{j}_{\rm eff}\) into a dimensionless function \(\hat{j}\) defined as \[\vec{j}_{\rm eff}(\vec{x})\equiv\epsilon^{2}\,m_{A^{\prime}}\,e^{i\omega t}\,e^{-m_{A^{\prime}}d}\,\bar{B}_{\rm em}\,\hat{j}\left(\vec{x}\right)\,, \tag{18}\] where \(\bar{B}_{\rm em}=\sqrt{\int_{\rm em}{\rm d}^{3}x\,|\vec{B}_{\rm em}|^{2}/V_{\rm em}}\) is the amplitude of the emitter magnetic field RMS-averaged over the emitter cavity volume \(V_{\rm em}\). An example of \(\vec{j}_{\rm eff}\) computed numerically from Eq. (12) is shown in Fig. 3 for two different emitter modes, taking the emitter to be a right-cylindrical cavity. This displays the features expected from Eq. (17) (discussed below), which applies near the emitter wall and away from the side-endcap boundary (\(\rho=R\), \(z=0\)). In Fig. 4, we compare the analytic result of Eq. (17) directly to a numerical evaluation of Eq. (12) for a particular emitter field and at various hidden photon masses. For \(m_{A^{\prime}}\gtrsim 2\,\omega\), \(j_{\rm eff}\) is well-approximated by Eq. (17) to \(\lesssim 10\%\), except within \(1/m_{A^{\prime}}\) of the outer radial edge (\(\rho=R\)). In this region, ignoring contributions from surface currents along the radial edge of the emitter cavity is no longer a valid approximation, and the effective current develops a component along the \(\hat{z}\) direction. However, the contribution of such corrections to the signal power is relatively suppressed by the small volume of this region.
Figure 2: A closeup of the thin wall \(\mathcal{S}\) separating the emitter and receiver cavities. To evaluate \(\vec{j}_{\rm eff}\) at the point \(\vec{x}\) in the receiver cavity, we use the fact that the integral over \(\mathcal{S}\) in Eq. (12) is heavily weighted near the point \(\vec{x}\,^{\prime}_{0}\) in the \(m_{A^{\prime}}\gg\omega\) limit. This yields Eq. (17).
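The planar-surface integral behind Eq. (17) can be made explicit. Assuming Eq. (12) takes the Yukawa-propagator form implied by Eqs. (7) and (9) in the \(m_{A^{\prime}}\gg\omega\) limit, i.e. \(\vec{j}_{\rm eff}(\vec{x})\simeq-\frac{\epsilon^{2}m_{A^{\prime}}^{2}}{4\pi}\,e^{i\omega t}\int_{\partial V_{\rm em}}{\rm d}^{2}x^{\prime}\,\frac{e^{-m_{A^{\prime}}|\vec{x}-\vec{x}^{\prime}|}}{|\vec{x}-\vec{x}^{\prime}|}\,\vec{K}_{\rm em}(\vec{x}^{\prime})\), then for a point at perpendicular distance \(z=|\vec{x}-\vec{x}\,^{\prime}_{0}|\) from an effectively infinite plane carrying a slowly varying surface current, \[\int_{\rm plane}{\rm d}^{2}x^{\prime}\,\frac{e^{-m_{A^{\prime}}|\vec{x}-\vec{x}^{\prime}|}}{|\vec{x}-\vec{x}^{\prime}|}=2\pi\int_{z}^{\infty}{\rm d}r\,e^{-m_{A^{\prime}}r}=\frac{2\pi}{m_{A^{\prime}}}\,e^{-m_{A^{\prime}}z}\,,\] where the substitution \(r=\sqrt{z^{2}+\rho^{2}}\) has been used. Together with \(\vec{K}_{\rm em}=-\hat{n}\times\vec{B}_{\rm em}\) from Eq. (8), this reproduces both the \(\tfrac{1}{2}\,\epsilon^{2}m_{A^{\prime}}\) prefactor and the exponential fall-off of Eq. (17).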
In particular, the outer radial edge (\(\rho=R\)) contributes to \(P_{\rm sig}\) only at \(\mathcal{O}(\omega^{4}/m_{A^{\prime}}^{4})\), which is sub-dominant for an optimal choice of modes (see Eq. (20)). The effective current in Eq. (17) has three key features:
1. \(j_{\rm eff}\propto m_{A^{\prime}}\). This follows from Eq. (12), as the relevant length scale in the integral is \(1/m_{A^{\prime}}\ll 1/\omega\).
2. \(\vec{j}_{\rm eff}\) is tangential to \(\mathcal{S}\). This is because (away from the external radial edges of the cavity) if \(m_{A^{\prime}}\gg\omega\) then the only SM current within a Compton wavelength of the receiver cavity is that running along \(\mathcal{S}\).
3. \(j_{\rm eff}(\vec{x})\) only has weight within a distance \(1/m_{A^{\prime}}\) of \(\mathcal{S}\), as evident by the exponential factor in Eq. (17).
These facts have important implications for the signal power and the selection of modes. In particular, in the numerator of Eq. (16), the integral of \(\vec{E}_{p}^{*}\cdot\vec{j}_{\rm eff}\) over the receiver cavity volume involves only the components of the receiver mode electric field that are tangential to \(\mathcal{S}\) and within a distance \(1/m_{A^{\prime}}\) from \(\mathcal{S}\). These electric field components are suppressed by \(\omega/m_{A^{\prime}}\ll 1\) relative to the typical field value, due to the conducting boundary condition on \(\mathcal{S}\). Taken together, this implies that the overlap of \(\hat{j}\) with the receiver cavity mode scales as \(\int_{\rm rec}\mathrm{d}^{3}x^{\prime}\,\vec{E}_{p}^{*}(\vec{x}^{\prime})\cdot\hat{j}\,(\vec{x}^{\prime})\propto 1/m_{A^{\prime}}^{2}\). We thus choose to define a dimensionless overlap parameter \(\eta\), \[\eta\equiv\frac{m_{A^{\prime}}^{2}}{\omega_{p}^{1/2}}\,\frac{\left|\int_{\rm rec}\mathrm{d}^{3}x^{\prime}\,\vec{E}_{p}^{*}(\vec{x}^{\prime})\cdot\hat{j}\,(\vec{x}^{\prime})\right|}{\sqrt{\int_{\rm rec}\mathrm{d}^{3}x^{\prime}\,|\vec{E}_{p}(\vec{x}^{\prime})|^{2}}}\,, \tag{19}\] such that in the evanescent limit and for optimal mode choices, \(\eta\) is independent of \(m_{A^{\prime}}\) and set only by cavity and mode geometry. For an optimal configuration, \(\eta\gtrsim\mathcal{O}(1)\) as discussed in Sec. IV.2. Using this in Eq. (16), the signal power reduces to \[P_{\rm sig}\simeq Q\,\bar{B}_{\rm em}^{2}\,\frac{\epsilon^{4}\,\eta^{2}}{m_{A^{\prime}}^{2}}\,e^{-2m_{A^{\prime}}d}. \tag{20}\]
Figure 3: The spatial profile (in cylindrical coordinates) of \(\vec{j}_{\rm eff}\) outside of a cylindrical emitter cavity of radius \(R\) and length \(L=R\) for two source modes, \(\mathrm{TM}_{010}\) (left panel) and \(\mathrm{TE}_{011}\) (right panel). The hidden photon mass is \(m_{A^{\prime}}=10/R\). Only a portion of the emitter cavity is shown, with its walls indicated by the thick black line. The emitter electric field profile is shown in blue, with darker shading indicating larger field magnitude and arrows indicating direction. The relative magnitude of the resulting effective current is shown by the red shading and its direction by the red arrows.
Figure 4: Comparison of numerical and analytic evaluations of the \(\phi\)-component of the effective current profile \(\hat{j}\,(\vec{x})\) (see Eq. (18)) outside the endcap of a cylindrical emitter cavity operating in the \(\mathrm{TE}_{011}\) mode. The solid curves show the numerically-determined radial profiles computed from the integral in Eq. (12) (evaluated within a Compton wavelength of the emitter cavity) for various choices of the hidden photon mass \(m_{A^{\prime}}\). For \(m_{A^{\prime}}\gtrsim\omega\), the numerical results agree well with the analytic result of Eq. (17), shown in dotted black.
## IV LSthinW DESIGN A full design study is beyond the scope of this work. Here we highlight the key requirements, focusing on those unique to the evanescent regime. Some practical challenges, such as matching the frequencies of the cavities and improving quality factors, are shared with ongoing on-shell LSW searches [12]. A future LSthinW search can make use of their improvements. The most important design consideration is the use of a thin barrier. This is optimally as thin as possible to increase the upper mass reach, while still sufficiently thick to provide adequate shielding and to preserve the large field gradients and quality factors associated with SRF cavities. Below, we also consider the optimal choice of emitter and detector modes as well as cavity shape, as this has important qualitative differences from the on-shell case. Finally, we give some example experimental parameters and discuss the resulting sensitivity to hidden photons in the \(10^{-6}\ \mathrm{eV}-10^{-1}\ \mathrm{eV}\) mass range. ### Thin Barrier As discussed above, the largest mass that an LSthinW experiment is sensitive to is dictated by the thickness \(d\) of the barrier separating the two cavity volumes, \(m_{A^{\prime}}\sim 1/d\), which is evident by the exponential suppression of \(P_{\mathrm{sig}}\) in Eq. (20). Although thinner barriers enhance the signal for large masses, the penetration of SM EM fields into superconductors means that a minimum thickness is required to suppress noise from the strong driven fields of the emitter cavity leaking into the detection region. We conservatively estimate this minimum thickness by demanding that such leakage fields \(B_{\mathrm{leak}}\sim e^{-d/\lambda_{\mathrm{L}}}\bar{B}_{\mathrm{em}}\) be no larger than the signal field \(B_{\mathrm{sig}}\sim Q\,\epsilon^{2}\bar{B}_{\mathrm{em}}(\omega/m_{A^{\prime}})\), where \(\lambda_{\mathrm{L}}\sim 40\ \mathrm{nm}\) is the London penetration depth of niobium. This implies a minimum barrier thickness of \(d\sim 2\ \mu\mathrm{m}\) for the weakest signals that we consider in this work. In our reach estimates, we adopt an even more conservative minimum of \(d>10\ \mu\mathrm{m}\), corresponding to a maximum hidden photon mass of \(m_{A^{\prime}}\sim 0.02\ \mathrm{eV}\). The manufacture and use of a \(d\sim 10\ \mu\mathrm{m}\) barrier poses additional challenges. One option would be to utilize \(\sim 1\ \mu\mathrm{m}\) commercial niobium foils [16]. However, maintaining high-\(Q\) requires post-fabrication treatments to rid the niobium of material contaminants, and this is not simple to do for very thin surfaces, as it often requires chemical or electropolishing etching treatments that remove the outer \(\sim 100\ \mu\mathrm{m}\) of material [17]. The incorporation of such standalone niobium barriers would be additionally complicated by the fact that they must resist stress induced by vacuum and EM pressure gradients. An alternative strategy would be to fabricate a rigid barrier by sputtering a few microns of niobium onto a low-loss, insulating substrate, such as sapphire [18]. 
In this case, the high-\(Q\) of the receiver cavity could be maintained by orientating the exposed sapphire towards the emitter volume and the thin niobium towards the receiver volume. For such an arrangement, achieving large driven fields in the emitter cavity in the presence of higher loss (\(Q\sim 10^{9}\)) sapphire [19; 20; 21; 22] requires a larger amount of power to be driven and dissipated in the emitter cavity. However, for the largest field strengths and volumes that we consider (see Table 1) this corresponds to \(\lesssim 100\ \mathrm{W}\), which can be readily supplied and then dissipated through the liquid helium already required to cool the emitter cavity [23; 24]. Thermal transport through the thin barrier is another concern. Heat transmission along a standalone \(d\sim 10\ \mu\mathrm{m}\) niobium barrier or through a dielectric substrate is restricted, causing the temperature at the center of the barrier to be larger than that of the cavity's external walls which are cooled by liquid helium. If this temperature peak is too large it may quench the superconductivity of the barrier. This may be mitigated, for instance, by incorporating helium cooling channels through the substrate itself, but we leave a dedicated study of such designs to future work. To account for this, we consider two representative barrier thicknesses: \(d=0.5\ \mathrm{mm}\) (which is similar to the wall thickness of existing SRF cavities and for which the above challenges are not expected to be a concern [25]) and \(d=10\ \mu\mathrm{m}\). ### Cavity and Mode Geometry We consider a simple arrangement of two coaxial right-cylinder cavities of radius \(R\) and length \(L\) formed by partitioning a single cylinder of radius \(R\) with a planar barrier parallel to its endcaps. From the structure of the effective current in Eq. (17), we can ascertain that the optimal receiver mode has electric field components tangential to the barrier surface. This is the case for the \(\mathrm{TE}_{011}\) mode, which is azimuthal, \(\vec{E}(\vec{x})\propto J_{1}(\alpha_{11}\rho/R)\,\sin\left(\pi z/L\right)\hat{\phi}\), where the radial coordinate \(\rho\) spans \([0,R]\), \(z\) spans \([0,L]\), and \(\alpha_{11}\simeq 3.83\) is the first zero of \(J_{1}\). Eq. (17) along with \(\hat{n}=\hat{z}\) implies that to excite this receiver mode, the magnetic field of the emitter should possess radial components near the endcap. This is also satisfied by the \(\mathrm{TE}_{011}\) mode, since \(\vec{B}(z=L)\propto J_{1}(\alpha_{11}\rho/R)\,\hat{\rho}\). Hence, we expect optimal overlap when both cavities are operated in the \(\mathrm{TE}_{011}\) mode. This is also demonstrated schematically by the right panel of Fig. 3. Analytically evaluating the overlap parameter of Eq. (19) for this mode choice, we find \[\lim_{m_{A^{\prime}}\gg\omega}\eta^{2}=\frac{\pi^{5}R^{5}}{L^{2}\big{(}\pi^{2 }R^{2}+\alpha_{11}^{2}L^{2}\big{)}^{3/2}}\, \tag{21}\] such that \(\eta\simeq 1.6\) for \(R=L\) and \(m_{A^{\prime}}\gg\omega\). The dimensions of the emitter/receiver cavity also strongly impact the signal power. In particular, from Eq. (21) we see that the evanescent signal grows significantly with the aspect ratio \(R/L\) as \[\lim_{R\gg L\gg m_{A^{\prime}}^{-1}}\eta=\pi R/L. \tag{22}\] This is expected in the evanescent limit. Since \(j_{\mathrm{eff}}\) is peaked within one Compton wavelength of the barrier, decreasing the receiver length increases the fraction of the receiver volume which contains appreciable effective current. Eqs. 
(20) and (22) imply that the signal power is independent of the receiver volume provided that \(m_{A^{\prime}}\gg\omega,R^{-1},L^{-1}\). We thus consider as an example a search with \(R/L=50\). Note that such a geometry causes a suppression of the on-shell (\(m_{A^{\prime}}<\omega\)) signal, as evident in Fig. 5, due to decreased receiver volume and incoherence of the emitted hidden photon field over the emitter cavity, as now \(R\gg 1/\omega\simeq L/\pi\). Thus, such a geometry is purely optimized for evanescent hidden photons. While there is a strong overlap for a \(\text{TE}_{011}\) emitter to excite a \(\text{TE}_{011}\) receiver in the evanescent limit, this does not generally hold for matched cavity modes. This is evident in the left panel of Fig. 3, which shows \(\vec{j}_{\text{eff}}\) sourced by a \(\text{TM}_{010}\) emitter mode. In the coaxial region (\(\rho<R\), \(z>0\)), \(\vec{j}_{\text{eff}}\) is primarily radial but the mode has electric fields purely in \(\hat{z}\). Thus, \(\vec{j}_{\text{eff}}\cdot\vec{E}_{p}^{*}\) is non-zero only within \(1/m_{A^{\prime}}\) of the outer radial edge of the endcap (\(\rho=R\), \(z=0\)). As a result, for both cavities in the \(\text{TM}_{010}\) mode, \(\eta\) is suppressed by \(\mathcal{O}(\omega/m_{A^{\prime}})\ll 1\) relative to that of the \(\text{TE}_{011}\) case considered above.4 Footnote 4: Fig. 3 implies that large overlap is possible for \(\text{TM}_{010}\) using a different cavity arrangement, such as nested concentric cavities. It is natural to ask if evanescent hidden photons can be searched for in a setup minimally modified from existing RF LSW efforts, such as Dark SRF [12], simply by taking the cavity separation to be much smaller than the \(d\sim\omega^{-1}\sim 10\) cm gap currently used. The Dark SRF setup can be well-approximated as two cylindrical cavities operating in the \(\text{TM}_{010}\) mode. This was chosen to target the longitudinal \(A^{\prime}\) polarization, which provides enhanced sensitivity in the on-shell \(m_{A^{\prime}}\ll\omega\) regime [15]. However, as discussed above, this mode configuration has a suppressed overlap in the evanescent limit. Therefore, in addition to decreasing the cavity separation, a change in modes is needed for an effective search for hidden photons heavier than \(\sim\text{few}\times\mu\text{eV}\). It is also interesting to note that the CROWS LSW experiment conducted a search employing the optimal \(\text{TE}_{011}\) configuration discussed here [11]. However, their cavity separation \(d\simeq\omega^{-1}\) was too large to enable enhanced sensitivity to evanescent hidden photons. ### Hidden Photon Sensitivity The reach of an optimized LSthinW setup is estimated by the signal-to-noise ratio, \(\text{SNR}=P_{\text{sig}}/P_{\text{noise}}\), where the noise power \(P_{\text{noise}}\) is assumed to be dominated by thermal occupation of the receiver mode. We take \(\text{SNR}=5\) to determine the projected sensitivity. If the phase of the emitter cavity is not actively monitored, the noise power is given by the Dicke radiometer equation, \(P_{\text{noise}}\simeq T\sqrt{\delta\omega/2\pi t_{\text{int}}}\,,\) where \(T\) is the receiver cavity temperature, \(\delta\omega\) is the analysis bandwidth which we take to be the receiver bandwidth \(\omega/Q\), and \(t_{\text{int}}\) is the experimental integration time. Following Ref. 
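As an illustrative numerical check of Eqs. (21) and (22) for the geometries considered here, \[\eta\big|_{R=L}=\left[\frac{\pi^{5}}{\big(\pi^{2}+\alpha_{11}^{2}\big)^{3/2}}\right]^{1/2}\simeq 1.6\,,\qquad\eta\big|_{R/L=50}\simeq\pi\,\frac{R}{L}\simeq 1.6\times 10^{2}\,,\] so the pancake-like geometry gains roughly four orders of magnitude in \(P_{\rm sig}\propto\eta^{2}\) relative to the \(R=L\) case, at the cost of the suppressed on-shell response noted above.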
[15], an active monitoring of the emitter field's phase allows for enhanced sensitivity, corresponding to an effective thermal noise power of \(P_{\text{noise}}\simeq T/t_{\text{int}}\). We will consider both possibilities in our estimates below. Operating at low temperature is advantageous, and existing RF LSW searches optimize their cooling strategy by cooling the receiver much more than the emitter [12]. This may not be feasible for an LSthinW search, as the use of a thin barrier with no vacuum gap places the emitter and receiver in thermal contact. The entire system likely needs to be cooled to the temperature of the receiver cavity, and for this reason we adopt a conservative readout temperature of \(T=2\) K. The projected sensitivities of three distinct LSthinW setups are shown in Fig. 5, assuming coaxial right-cylindrical cavities operated in the \(\text{TE}_{011}\) mode. Experimental parameters for each are given in Table 1. In all cases we take \(T=2\) K and we set the magnetic field in the emitter to have a peak value along the cavity walls of \(100\) mT, slightly below the critical field of niobium. For the "LSthinW I" setup, we consider cavities of aspect ratio \(R/L=1\), a relatively thick separating barrier \(d=0.5\) mm, and generally conservative parameters regarding cavity volume, quality factor, and integration time. For "LSthinW II," we adopt the same cavity and mode geometry, but optimize the other parameters to the design goals of the existing Dark SRF collaboration [12]. Our estimates demonstrate that either of these two setups would enable sensitivity to a large range of unexplored parameter space for hidden photons of mass \(10^{-5}\) eV \(<m_{A^{\prime}}<10^{-3}\) eV. "LSthinW III" employs parameters optimized for larger hidden photon masses, \(10^{-4}\) eV \(<m_{A^{\prime}}<10^{-1}\) eV, using cavities with \(R/L=50\) and a thinner barrier of \(d=10\)\(\mu\)m. Also shown as shaded gray in Fig. 5 are existing limits on the existence of kinetically-mixed hidden photons. In Fig. 5, the sensitivity is maximized in each setup when the hidden photon mass is equal to the cavity frequency, \(\omega=\sqrt{(\pi/L)^{2}+(\alpha_{11}/R)^{2}}\,.\) The cavity frequency for LSthinW III is larger than that of LSthinW I and LSthinW II due to the smaller cavity length. The scaling \(\epsilon\propto m_{A^{\prime}}^{1/2}\) for \(m_{A^{\prime}}>\omega\) follows from Eq. (20). For \(m_{A^{\prime}}<\omega\) we find \(\epsilon\propto m_{A^{\prime}}^{-2}\), as expected from Ref. [15] for an on-shell search in the "transverse configuration." These two \begin{table} \begin{tabular}{c c c c c c c} \hline \hline LSthinW & \(d\) & \(R\) & \(L\) & \(Q\) & \(t_{\text{int}}\) & readout \\ \hline I & 0.5 mm & 10 cm & 10 cm & 10\({}^{10}\) & 4 hr & power \\ II & 0.5 mm & 10 cm & 10 cm & 10\({}^{12}\) & 1 yr & phase \\ III & 10 \(\mu\)m & 15 cm & 0.3 cm & 10\({}^{12}\) & 1 yr & phase \\ \hline \hline \end{tabular} \end{table} Table 1: Benchmark experimental parameters for the three projections shown in Fig. 5. For each, the peak magnetic field on the emitter cavity wall is fixed to \(B=100\) mT and the readout temperature is \(T=2\) K. Each setup assumes cylindrical cavities with radius \(R\), length \(L\), and quality factor \(Q\). The cavities are assumed to be aligned along a common axis, separated by a wall of thickness \(d\). \(t_{\text{int}}\) is the total integration time. 
The last column denotes whether the readout involves only a measurement of power in the receiver, or additionally employs an active monitoring of the emitter field’s phase. See Sec. IV for details. power laws suggest that the optimal sensitivity occurs for \(m_{A^{\prime}}\simeq\omega\). But there is additionally a resonant peak at \(m_{A^{\prime}}=\omega\), evident in Fig. 5. At this critical mass, the hidden photon field is sourced with wavenumber \(k=0\), giving maximally constructive interference in the integral of Eq. (9). The width of this feature is narrower for cavities of larger aspect ratio \(R/L\gg 1\), as \(R\gg\omega^{-1}\) implies a larger phase incoherence over the emitter surface for a small deviation of \(m_{A^{\prime}}\) away from \(\omega\). ## V Discussion The highest mass accessible to light-shining-through-wall experiments is dictated not by the frequency of the driven field but by the inverse of the distance separating the emission and detection regions. Thus, the upper part of the mass range that can be explored can be significantly enlarged in a "light-shining-through-_thin_-wall" (LSthinW) experiment. We have focused on thin superconducting barriers, since their small penetration depth implies that only several microns is needed for sufficient shielding. In particular, such an experiment employing superconducting cavities operating at \(\sim\) GHz frequencies could have exquisite sensitivity to hidden photons as heavy as \(\sim 0.1\) eV. The development of thin barriers, as opposed to higher-frequency sources, is a natural alternative to enlarging the mass reach to new particles and has strong synergistic overlap with other efforts, such as future versions of the ARIADNE experiment [34; 35], which will tentatively utilize \(\sim 100\)\(\mu\)m niobium shields. A unique feature of this setup is its ability to produce and detect particles of mass up to \(\sim 0.1\) eV, regardless of whether they constitute the dark matter of our Universe. Indeed, the power of this approach is highlighted by the fact that most experiments operating in this mass range are often hindered by the difficulty in operating low-loss resonators and photon detectors at meV \(\sim\) THz frequencies [36; 37; 38; 39; 40; 41]. In fact, our projections suggest that the reach of an LSthinW experiment may even exceed that of certain dark matter detectors [41], even though the latter benefit from the assumed presence of a hidden photon dark matter background. In this work, we have discussed various setups in which an LSthinW experiment using electromagnetic cavities may be realized, but dedicated design efforts are needed to fully bring this proposal to light. While we have considered simple vacuum cavities with a conducting barrier, other arrangements may be advantageous and deserve further study. For example, the mass suppression Figure 5: The projected reach of three representative LSthinW setups employing SRF cavities, as shown in red. All the searches assume coaxial right-cylindrical cavities, with the emitter cavity driven in the TE\({}_{011}\) mode and the receiver cavity tuned to have a TE\({}_{011}\) mode of matching frequency. LSthinW I (solid red) assumes readily achievable experimental parameters and employs cavities of equal aspect ratio (\(R=L=10\) cm) separated by a barrier of thickness \(d=0.5\) mm. 
LSthinW II (dashed red) and III (dotted red) use improved experimental parameters, similar to the design goals of the ongoing Dark SRF experiment (whose projected reach is shown in dotted dark gray [26]). LSthinW II and III are complementary searches, using different cavity geometries to specialize to smaller and larger \(m_{A^{\prime}}\), respectively. LSthinW II uses the same cavity shape and wall thickness as LSthinW I, whereas LSthinW III employs a pancake-like cavity (\(R=15\) cm, \(L=0.3\) cm) and a smaller barrier thickness, \(d=10\)\(\mu\)m. For more details, see Table 1 and Sec. IV.3. Also shown in shaded gray are existing constraints on kinetically-mixed hidden photons, taken from the repository Ref. [27]. These include limits derived from the CROWS LSW experiment [11], the Dark SRF LSW Pathfinder run [12], XENON1T [28], CMB spectral distortions [29; 30; 31], and solar energy loss [32; 33]. of the signal power in Eq. (20) is not fundamental, but results from the vanishing of the receiver cavity mode near the conducting barrier. It may be that the use of absorbing shield materials or novel mode structures with more support near the barrier can avoid this suppression and generate an improved scaling of \(P_{\rm sig}\propto m_{A^{\prime}}^{0}\). Finally, in addition to searching for hidden photons, the proposed LSthinW approach may also be applied to enlarge the mass reach for related experiments searching for different types of particles, such as electromagnetically-coupled axions [42; 43] and millicharged particles [44]. We leave these investigations to future work. ## Acknowledgements We would like to thank Paddy Fox, Timergali Khabi-boulline, Sam Posen, Paul Riggins, Vladimir Shiltsev, and Slava Yakovlev for valuable conversations. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under contract number DE-AC02-07CH11359. Fermilab is operated by the Fermi Research Alliance, LLC under Contract DE-AC02-07CH11359 with the U.S. Department of Energy.
2309.05137
Debugging Trait Errors as Logic Programs
Rust uses traits to define units of shared behavior. Trait constraints build up an implicit set of first-order hereditary Harrop clauses which is executed by a powerful logic programming engine in the trait system. But that power comes at a cost: the number of traits in Rust libraries is increasing, which puts a growing burden on the trait system to help programmers diagnose errors. Beyond a certain size of trait constraints, compiler diagnostics fall off the edge of a complexity cliff, leading to useless error messages. Crate maintainers have created ad-hoc solutions to diagnose common domain-specific errors, but the problem of diagnosing trait errors in general is still open. We propose a trait debugger as a means of getting developers the information necessary to diagnose trait errors in any domain and at any scale. Our proposed tool will extract proof trees from the trait solver, and it will interactively visualize these proof trees to facilitate debugging of trait errors.
Gavin Gray, Will Crichton
2023-09-10T21:12:52Z
http://arxiv.org/abs/2309.05137v1
# Debugging Trait Errors as Logic Programs ###### Abstract. Rust uses traits to define units of shared behavior. Trait constraints build up an implicit set of first-order hereditary Harrop clauses which is executed by a powerful logic programming engine in the trait system. But that power comes at a cost: the number of traits in Rust libraries is increasing, which puts a growing burden on the trait system to help programmers diagnose errors. Beyond a certain size of trait constraints, compiler diagnostics fall off the edge of a complexity cliff, leading to useless error messages. Crate maintainers have created ad-hoc solutions to diagnose common domain-specific errors, but the problem of diagnosing trait errors in general is still open. We propose a trait debugger as a means of getting developers the information necessary to diagnose trait errors in any domain and at any scale. Our proposed tool will extract proof trees from the trait solver, and it will interactively visualize these proof trees to facilitate debugging of trait errors. ## 1. Introduction Rust is a systems programming language that provides strong memory safety guarantees through ownership. A less-touted but equally-important aspect of Rust's design is its trait system. Similar to typeclasses in Haskell, traits in Rust define units of shared behavior which can bound generic types. For example, this snippet illustrates a _ToString_ trait for converting values to strings:
```
// A trait definition establishes the unit of shared behavior.
trait ToString {
    fn to_string(&self) -> String;
}
// A trait implementation associates a type with a trait. Logically, this is the fact:
//   (i32, i32): ToString.
impl ToString for (i32, i32) {
    fn to_string(&self) -> String {
        format!("({}, {})", self.0, self.1)
    }
}
// An implementation can be parametric. Logically, this is the rule:
//   Vec<T>: ToString :- T: ToString.
impl<T: ToString> ToString for Vec<T> {
    fn to_string(&self) -> String {
        let s = self.iter().map(|v| v.to_string()).collect::<Vec<_>>().join(", ");
        format!("[{s}]")
    }
}
// A trait method is normally invoked with the dot operator. Logically, this is the query:
//   ?- Vec<(i32, i32)>: ToString.
fn main() {
    let v = vec![(0, 1), (2, 3)];
    println!("{}", v.to_string());
}
```
To call a trait method like v.to_string(), the Rust compiler must determine that the type of v satisfies the conditions required to call .to_string(). As suggested by the Prolog-esque syntax in the comments above, this problem reduces to logic programming. A trait is a predicate, a non-parameterized implementation is a fact, a parameterized implementation is a rule, and a required trait bound is a query. This analogy is made explicit by Chalk (rust-lang/chalk), an implementation of Rust's trait solver within a generic logic programming framework. If trait solving is logic programming, then debugging trait errors is debugging logic programs. Considering the current popularity of Prolog, the Rust compiler goes to great lengths to obscure this connection. In fact, the compiler does a heroic amount of work to help its users debug trait errors. 
For instance, if one tries to call .to_string() on a Vec<i32>, then Rust gives a handy diagnostic that localizes the root cause to the vector's type parameter:
```
error[E0599]: the method `to_string` exists for struct `Vec<i32>`,
              but its trait bounds were not satisfied
   --> src/main.rs:105:20
    |
105 |     println!("{}", v.to_string());
    |                      ^^^^^^^^^ method cannot be called on `Vec<i32>`
    |                                due to unsatisfied trait bounds
    |
note: trait bound `i32: ToString` was not satisfied
   --> src/main.rs:93:9
    |
 93 | impl<T: ToString> ToString for Vec<T> {
    |         ^^^^^^^^  --------     ------
    |         |
    |         unsatisfied trait bound introduced here
```
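Such pinpointed diagnostics do not survive contact with heavier trait machinery, as the Bevy example of Figure 1 illustrates. The offending program appears only in the figure; a minimal sketch of the kind of ill-typed system registration at issue might look like the following (a sketch assuming Bevy-0.11-style APIs; the exact body and signature of run_timer are assumptions, not code reproduced from the paper):
```rust
use bevy::prelude::*;

// `Res<Time>` satisfies `SystemParam`, but a bare `Timer` does not, so
// `run_timer` fails to satisfy the function-system traits required by
// `add_systems`. This sketch intentionally does not compile.
fn run_timer(time: Res<Time>, timer: Timer) {
    if timer.finished() {
        println!("timer done at {:?}", time.elapsed());
    }
}

fn main() {
    App::new()
        .add_systems(Update, run_timer) // error: trait bounds not satisfied
        .run();
}
```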
Figure 1: A Rust program (top left) uses the Bevy game engine (top right) with an incorrect type parameter to run_timer. The partial proof tree--represented as a feature model diagram--for the IntoSystemConfigs trait bound (middle) results in a poor diagnostic (bottom).
The trait SystemParamFunction is implemented for a function such as run_timer only if its parameters' types, _F_e_ and _F_i_, implement SystemParam. The SystemParam trait is implemented for Res<T> if T: Resource and for Query<Q> if Q: WorldQuery. The preceding paragraph contains many details! Again, we stress that this example is still small in comparison to real-world Rust. The program is barely a "Hello World" for Bevy, and the traits used simplify many gritty details in Bevy's implementation. Nonetheless, this example is sufficient to demonstrate the diagnostic issue. Within the Rust compiler, the trait solver constructs a proof tree like the diagram in Figure 1 (middle). The diagnostic system consumes this proof tree and then emits the error shown in Figure 1 (bottom). An ideal diagnostic would point to the fact that Timer needs to implement SystemParam, but this diagnostic instead points farther up the proof tree. It says that the entire function signature does not implement IntoSystem<->, providing no further details. At a high level, the issue here is the branching point at IntoSystem<->. The diagnostic system does not know whether the user intended run_timer to implement SystemParamFunction or ExclusiveSystemParamFunction, and so the diagnostics cut off at that point in the proof tree. This differs from the ToString example, where there only exists a single implementation possibility. More generally, the diagnostic system is full of what we call "complexity cliffs." Error messages contain helpful debugging information at small levels of complexity. Once a complexity threshold is exceeded, error messages degrade in quality, akin to the infamous Prolog error: No. This trend of poor compiler diagnostics has led to libraries implementing their own debugging tools to get more precise errors. For the Bevy game engine this tool is called bevycheck (jakobhellermann/bevycheck), a simple debugging macro that statically checks that all type requirements of SystemParamFunction are met. 
The requirements themselves are straightforward: every parameter needs to satisfy the trait bound SystemParam. This crate-level debugging intervention is not only used in Bevy. Other popular trait-heavy crates, such as axum (tokio-rs/axum) and Diesel (diesel-rs/diesel), provide this same macro solution. Moreover, the Rust Foundation has started funding efforts to improve error messages for trait-heavy crates (weiznich/rust-foundation-community-grant). ## 3. A Trait Debugger We are working to address this problem by designing a _trait debugger_. The broad goal is to extract the internal state of the Rust trait solver, and then present it to the developer in the form of a broken proof tree as an aid to debugging. The hypothesis is that the proof tree can provide more granular information about the root cause of an error compared to reporting a failure of the top-level trait bound, and it can do so in a domain-general way. The advantage of proof trees over traditional diagnostics is that they can scale to larger programs and not fail at complexity cliffs. The proof tree shown in Figure 1 (middle) is represented using the notation common to feature model diagrams (Kang et al., 1990). Goals are shown in the nodes. _Mandatory_ subgoals are shown with a small filled circle above them (e.g., all subgoals of run_timer: SystemParamFunction must hold). _Alternative_ subgoals are shown as children of the same parent goal with an arc drawn through their edges (e.g., one subgoal of run_timer: IntoSystem<...> must hold). We have built an initial prototype of this debugger that produces trees similar to that shown in Figure 1 (middle), although the output is currently not quite as polished as the figure. A concern from our initial prototype is that proof trees may become too large to skim effectively, reducing their value as a debugging aid. This is more of a statement on the structure of modern Rust crates--Bevy contains thirty-four parameterized implementors for the trait SystemParamFunction, all of which are represented in the raw proof tree. Therefore, even after pruning implementation details, the proof tree will still likely need to be augmented with heuristics that suggest "starting points" for exploration. As is the case in the code of Figure 1, the function run_timer was never meant to implement ExclusiveSystemParamFunction, and the proof tree reflects this. Traditional diagnostics have a hard time moving past these barriers, but given the information in the proof tree it is easier to drill down to the root cause of the trait error: the unsatisfied bound \(\texttt{Timer: SystemParam}\). Proof trees make finding unsatisfied bounds easier, and the provenance for introduced bounds comes baked into the structure. There are several key challenges in developing this tool as proposed. The major challenge is how to compactly and interactively visualize the tree. So far we've described additional diagnostic information as being good, but with too much information developers can feel overwhelmed. Trees with too much information will lead to wasted time searching through the extra nodes and ultimately a difficult-to-use debugger. The last challenge is to maintain the facade that Rust has built: developers should not know they are debugging Prolog programs. Displayed information needs to be reported in terms of the source program, referring to source locations when appropriate. In practice we've found the last challenge particularly difficult.
The raw proof trees obtained directly from the trait solver reflect implementation details. Fixpoint iterations and performance optimizations are details we need to abstract out of the proof trees before presenting them to the user. How to properly build these abstractions while retaining the proof tree semantics is an area where we are actively working. We will demonstrate the current state of our debugger at HATRA. We are looking for any feedback regarding design decisions, suggestions for visualizing the trees, and further connections between this problem and other work. ## 4. Related Work The challenge of debugging Rust trait errors overlaps with several areas of related work: type inference diagnostics, logic program debuggers, and human factors of proof assistants. One goal of this paper is to pick out ideas from these areas which can influence the design of our trait debugger, as well as to solicit missed connections from readers. ### Diagnosing Type Errors Hindley-Milner type inference is at once a great triumph of functional programming and a source of unending pain. For 40 years, researchers have proposed increasingly sophisticated methods for diagnosing type inference errors (although none have made it to production, to our knowledge). A variety of strategies have emerged: _Fault localization._ One strategy is to blame the "right" line of code for a type error, which is usually not where the initial error is noticed by the type-checker. Wand (1986) developed an algorithm to track the provenance of unifications made by the type-checker, which could then be presented to the user. Hage and Heeren (2007) used heuristics such as a "trust factor" to sort type constraints in order from most to least problematic. Many recent systems look for sets of constraints such that the program would type-check. Pavlinovic et al. (2014) and Loncaric et al. (2016) use an SMT solver, and Zhang et al. (2015) use a Bayesian analysis. Seidel et al. (2017) use machine learning to predict blame based on a training set of ill-typed programs. A fault localization approach could help with diagnosing Rust trait errors. In the Figure 1 example, the root cause was a particular failed inference that could be identified by a heuristic like "the deepest failed inference in the proof tree." However, such a heuristic-based approach is unlikely to generalize to all possible Rust trait errors--after all, that is exactly how the compiler developers have operated for many years, and their large bag of heuristics is not sufficient for cases like Figure 1. Nonetheless, we expect that some heuristics will be valuable in filtering and ranking the information displayed in our interactive proof tree. _Interactive debuggers._ Rather than trying to find the one true answer, an alternative is to give the programmer an interface into all the information in the type system. Chitil (2001) designed an "explanation graph" with a command-line interface where users ask about the constraints and computed types of sub-expressions in a process called "algorithmic debugging." Tsushima and Asai (2013) refine this approach to not require a debugger-friendly reimplementation of type inference. Stuckey et al. (2003) refine this approach by tracking the provenance of constraints as a debugging aid. Chen and Erwig (2014) develop a similar "guided type debugging" approach that leverages counterfactual typing (Chen and Erwig, 2014) to more quickly identify a solution. We certainly envision our proof tree tool being interactive.
Like these tools, it should be able to explain the proof tree in terms of the source program with a rich mapping between the two. Unlike these tools, we intend to develop a more expressive 2D graphical interface rather than being restricted to text on the command line. _Automated repair._ Several systems for type error diagnosis attempt to identify a small change to the input program that causes it to become well-typed (Chen and Erwig, 2014; Lerner et al., 2007; Sakkas et al., 2020). This repair can then either directly solve the user's type error, or point them to the root cause. Repair may help with Rust trait errors, but it seems premature to reach directly for program synthesis until we have exhausted the obvious avenues for a diagnostic-only tool. _Domain-specific annotations._ Rather than building a fully generic diagnostic system, an alternative is to give library authors the necessary tooling to express domain-specific knowledge about common trait errors. Heeren et al. (2003) describe such an approach for the Helium subset of Haskell where library authors create domain-specific type rules with error messages tailored to the library's domain. Notably, most of the efforts in the Rust ecosystem towards addressing trait errors also have the shape of domain-specific annotations. RFC #2397 (github.com) describes a #[do_not_recommend] annotation that library authors could place on certain trait implementations. For instance, if a trait is implemented for tuples of length 32, then a library author could mark that implementation to not appear in the suggestions of diagnostics. A domain-specific approach could likely work in complement to a domain-general approach like the one we propose. ### Logic Programming While logic programming has fallen out of fashion, the sizable research program around it in the 1980s and 90s has left us many interesting threads to potentially pick back up. In particular, researchers developed a number of tools to facilitate debugging of Prolog programs. Several systems visualized "and/or trees" that represented the execution trace of a Prolog program: the Dewlap debugger (Dewar and Cleary, 1986), the Transparent Prolog Machine (Eisenstadt and Brayshaw, 1988), and cyclic AND/OR graphs (Senay and Lazzeri, 1991). Figure 2 shows examples of diagrams from the latter two systems. Figure 2: Examples of logic program trace visualizations. Top: an AORTA tree from Eisenstadt and Brayshaw (1988). Bottom: a cyclic AND/OR tree from Senay and Lazzeri (1991). Other Prolog debuggers like Opium (Ducasse, 1998) focused on abstracting data and control-flow within large execution traces. These trees provide some guidance in how to compactly visualize various aspects of a logic program trace, such as unification and backtracking. We expect to adopt some of these techniques into our visual design for the trait debugger. However, the challenge will be to ensure the visualization scales to more complex programs, in the sense that a person can read through the diagram and find the information they are looking for. ### Debugging Proof Assistants The use of tactics in modern proof assistants seems to have the same flavor of usability problem as trait errors. For example, a programmer applies a tactic to some goal, and either succeeds or simply gets a "No" from the proof assistant. In theory, similar techniques might be as useful for diagnosing a trait error as for diagnosing a tactic failure. However, we struggled to find much related work on this subject. Shi et al.
(2023) describe a "tactic preview" which can help readers of proofs identify the goals solved by a given tactic. Beyond this, we are interested in finding any additional research on tools for debugging broken proof trees.
2307.16381
Quantum State Tomography with Locally Purified Density Operators and Local Measurements
Understanding quantum systems is of significant importance for assessing the performance of quantum hardware and software, as well as exploring quantum control and quantum sensing. An efficient representation of quantum states enables realizing quantum state tomography with minimal measurements. In this study, we propose an alternative approach to state tomography that uses tensor network representations of mixed states through locally purified density operators and employs a classical data postprocessing algorithm requiring only local measurements. Through numerical simulations of one-dimensional pure and mixed states and two-dimensional pure states up to size $8\times 8$, we demonstrate the efficiency, accuracy, and robustness of our proposed methods. Experiments on the IBM and Quafu Quantum platforms complement these numerical simulations. Our study opens avenues in quantum state tomography for two-dimensional systems using tensor network formalism.
Yuchen Guo, Shuo Yang
2023-07-31T03:14:31Z
http://arxiv.org/abs/2307.16381v3
# Scalable Quantum State Tomography with Locally Purified Density Operators and Local Measurements ###### Abstract Understanding quantum systems holds significant importance for assessing the performance of quantum hardware and software, as well as exploring quantum control and quantum sensing. An efficient representation of quantum states enables realizing quantum state tomography with minimal measurements. In this study, we propose a new approach to state tomography that uses tensor network representations of mixed states through locally purified density operators and employs a classical optimization algorithm requiring only local measurements. Through numerical simulations of one-dimensional pure and mixed states and two-dimensional random tensor network states up to size \(8\times 8\), we demonstrate the efficiency, accuracy, and robustness of our proposed methods. Experiments on the IBM Quantum platform complement these numerical simulations. Our study opens new avenues in quantum state tomography for two-dimensional systems using tensor network formalism. ## I Introduction Quantum state tomography plays a fundamental role in characterizing and evaluating the quality of quantum states produced by quantum devices. It serves as a crucial element in the advancement of quantum hardware and software, regardless of the underlying physical implementation and potential applications [1; 2; 3]. However, reconstructing the full quantum state becomes prohibitively expensive for large-scale quantum systems that exhibit potential quantum advantages [4; 5], as the number of measurements required increases exponentially with system size. Recent protocols try to solve this challenge through two main steps: efficient parameterization of quantum states and utilization of carefully designed classical optimization algorithms. For one-dimensional systems with area law entanglement, matrix product state (MPS) [6; 7; 8; 9; 10; 11] provides a compressed representation. It requires only a polynomial number of parameters that can be determined from local or global measurement results. Two iterative algorithms using local measurements, singular value thresholding (SVT) [12] and maximum likelihood (ML) [13], have been demonstrated in trapped-ion quantum simulators with up to 14 qubits [14]. However, SVT is limited to pure states and thus impractical for noisy intermediate-scale quantum (NISQ) systems. Meanwhile, although ML can handle mixed states represented as matrix product operators (MPOs) [15; 16], it suffers from inefficient classical optimization. Another scheme reconstructs the quantum state by inverting local measurements, but the resulting MPO is not necessarily positive [17]. On the other hand, some approaches are also based on tensor networks (TN) but update parameters from the sampling output of global measurements across the entire system [18; 19], which is significantly more demanding than local measurements. Moreover, these approaches cannot readily incorporate error mitigation techniques that focus mainly on estimators rather than samplers [20; 21; 22; 23; 24; 25; 26; 27], especially for readout errors [28; 29]. To scale to larger systems and integrate with error mitigation techniques, TN-based methods using only local measurements are preferred. However, TN state tomography remains unexplored for higher-dimensional quantum systems. In this work, we introduce a new approach to reconstructing mixed quantum states using only local expectation values, for which readout error mitigation is feasible. 
We propose representing mixed states as locally purified density operators (LPDOs) [30] and optimizing their parameters using a variant of gradient descent for a local loss function. To validate the effectiveness of our method, we perform numerical simulations using typical quantum states: 1D critical Ising ground states, 1D gapless spin-\(\frac{1}{2}\) Heisenberg ground states subjected to various noise models, and two-dimensional (2D) random projected entangled pair states (PEPSs) [31; 32; 33]. Furthermore, we implement experiments on real quantum devices accessible through the IBM Quantum platform. ## II Quantum State Tomography with LPDO and Local Loss Function In this section, we introduce our new reconstruction approach for scalable tomography of mixed states. We begin by efficiently parameterizing mixed states using LPDOs, which have been shown efficient in simulating thermal or dissipative many-body systems in 1D [34; 35]. LPDO variants have also been applied to quantum process tomography for small systems [36]. In the LPDO form, a 1D mixed quantum state is expressed as follows \[\begin{split}\hat{\rho}=\sum_{\{\mathbf{\mu},\mathbf{\nu}\}}\sum_{\{\mathbf{\kappa}\}}\prod_{j=1}^{N}[A_{j}]_{\mu_{j-1},\mu_{j}}^{\tau_{j},\kappa_{j}}[A_{j}^{*}]_{\nu_{j-1},\nu_{j}}^{\omega_{j},\kappa_{j}}\\ |\tau_{1},\cdots,\tau_{N}\rangle\!\langle\omega_{1},\cdots,\omega_{N}|\,,\end{split} \tag{1}\] where \(\mathbf{\mu}\) and \(\mathbf{\nu}\) denote virtual indices that describe quantum entanglement, and \(\mathbf{\kappa}\) represents inner indices (also known as Kraus indices) introduced for open systems [35], as depicted in Fig. 1(a). Importantly, an LPDO is constructed to be Hermitian and positive semidefinite by design. In scenarios with weak local noise, the Kraus dimension \(d_{\kappa}\) is typically a small constant, independent of the system size \(N\). This enables a significantly more efficient tomography approach compared to directly reconstructing an MPO. Note that the ML method proposed in Ref. [13] could be adapted and integrated into this framework by replacing MPO iterations with computationally less expensive LPDO operations. However, we do not explore this further as the exponential computational cost of iteration (shown later) is not easily reduced. Next, we propose a loss function constructed from local measurements and a variant of gradient descent to update the local LPDO tensors. The loss function for our problem is chosen as \[\Theta=\sum_{i}||\hat{\sigma}_{\langle i\rangle}-\hat{\rho}_{\langle i\rangle}||_{F}^{2}\equiv\sum_{i}\Theta_{\langle i\rangle}, \tag{2}\] where \(\hat{\sigma}_{\langle i\rangle}\) and \(\hat{\rho}_{\langle i\rangle}\) are the reduced density matrices for the sites \(\{i,\cdots,i+L-1\}\) of the target state obtained from experiments (see Methods) and the reconstructed state respectively, as shown in Fig. 1(a). Expanding each term in the loss function gives \[\Theta_{\langle i\rangle}=\mathrm{Tr}\Big{[}\hat{\sigma}_{\langle i\rangle}^{2}-2\hat{\sigma}_{\langle i\rangle}\hat{\rho}_{\langle i\rangle}+\hat{\rho}_{\langle i\rangle}^{2}\Big{]}. \tag{3}\] To calculate the gradients, terms such as \[\frac{\partial\Theta_{\langle i\rangle}}{\partial A_{j}^{*}}=2\mathrm{Tr}\Bigg{[}\big{(}\hat{\rho}_{\langle i\rangle}-\hat{\sigma}_{\langle i\rangle}\big{)}\,\frac{\partial\hat{\rho}_{\langle i\rangle}}{\partial A_{j}^{*}}\Bigg{]}, \tag{4}\] need to be computed, requiring \(O(N^{2})\) computational complexity.
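To make the objects in Eqs. (1)-(3) concrete, the following minimal NumPy sketch (ours, not the authors' implementation) builds the density matrix of a small random LPDO by brute-force contraction, forms reduced density matrices by partial trace, and evaluates the local loss \(\Theta\); the bond and Kraus dimensions below are chosen only for illustration.

```python
import numpy as np

def lpdo_to_rho(tensors):
    """Brute-force contraction of Eq. (1). Each tensors[j] has shape
    (D_left, d_phys, d_kappa, D_right), with trivial (size-1) boundary bonds.
    Cost is exponential in N, so this is for tiny illustrative systems only."""
    psi = tensors[0]
    for A in tensors[1:]:
        psi = np.tensordot(psi, A, axes=([-1], [0]))
    psi = np.squeeze(psi, axis=(0, -1))            # drop boundary virtual bonds
    n = len(tensors)
    phys = list(range(0, 2 * n, 2))                # tau_1, ..., tau_N
    kraus = list(range(1, 2 * n, 2))               # kappa_1, ..., kappa_N
    d, dk = tensors[0].shape[1], tensors[0].shape[2]
    purification = np.transpose(psi, phys + kraus).reshape(d ** n, dk ** n)
    return purification @ purification.conj().T    # trace over the Kraus indices

def reduced_rho(rho, keep, n, d=2):
    """Partial trace of an n-site density matrix down to the sites in `keep`."""
    t = rho.reshape([d] * (2 * n))
    letters = 'abcdefghijklmnopqrstuvwxyz'
    bra = [letters[i] for i in range(n)]
    ket = [letters[n + i] if i in keep else letters[i] for i in range(n)]
    out = [bra[i] for i in keep] + [ket[i] for i in keep]
    k = len(keep)
    return np.einsum(''.join(bra + ket) + '->' + ''.join(out), t).reshape(d ** k, d ** k)

def local_loss(rho_model, rho_target, n, L=2, d=2):
    """Sum of the Frobenius-norm terms of Eqs. (2)-(3) over all length-L windows."""
    loss = 0.0
    for i in range(n - L + 1):
        window = list(range(i, i + L))
        diff = reduced_rho(rho_target, window, n, d) - reduced_rho(rho_model, window, n, d)
        loss += float(np.real(np.trace(diff.conj().T @ diff)))
    return loss

# Tiny example: N = 4 qubits, bond dimension D = 3, Kraus dimension d_kappa = 2.
rng = np.random.default_rng(0)
N, D, d, dk = 4, 3, 2, 2

def random_lpdo():
    shapes = [(1 if j == 0 else D, d, dk, 1 if j == N - 1 else D) for j in range(N)]
    return [rng.normal(size=s) + 1j * rng.normal(size=s) for s in shapes]

rho_target = lpdo_to_rho(random_lpdo()); rho_target /= np.trace(rho_target)
rho_model = lpdo_to_rho(random_lpdo()); rho_model /= np.trace(rho_model)
print("Theta =", local_loss(rho_model, rho_target, N))
```

The positivity of the LPDO shows up directly in the last step of lpdo_to_rho: the density matrix is formed as an outer product of the purification with itself, so it is Hermitian and positive semidefinite by construction.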
However, instead of optimizing the entire loss function directly, we update each local tensor \(A_{j}\) by only considering the adjacent terms \(\Theta_{\langle i\rangle}\) that involve the target site \(j\) in the loss function. Specifically, we update \(A_{j}\) according to the following rule \[A_{j}\to A_{j}-\eta\sum_{i=j-L+1}^{j}\frac{\partial\Theta_{\langle i\rangle}} {\partial A_{j}^{*}}, \tag{5}\] where \(\eta\) is the learning rate, automatically adjusted using the Adam optimizer [37]. Evaluating the gradient in each iteration step has a time complexity of \(O(ND^{3})\), where \(D\) is the virtual bond dimension. This approach converges to a high-quality approximation of the target state in only \(O(\log(N))\) iterative steps without encountering the issue of local minima or barren plateaus, as demonstrated below. LPDO and the local loss function together constitute our Grad-LPDO method. Importantly, for pure states, one can set \(d_{\kappa}=1\) and optimize over the MPS manifold using the same gradient method, which is treated as a special version of Grad-LPDO. ## III Numerical simulations for 1D systems In this section, we present numerical demonstrations of our Grad-LPDO method for both pure and mixed states of special interest. We begin by comparing the performance of the ML-MPS method and our proposed Grad-LPDO method for reconstructing the ground state of the 1D transverse field Ising model at the critical point \(g=1\) under open boundary condition (OBC). The Hamiltonian is given by \[H=-\sum_{\langle i,j\rangle}Z_{i}Z_{j}+g\sum_{i}X_{i}.\] We first obtain the ground state approximated by an MPS with \(D_{0}=16\) using the standard variational method [38; 10]. This MPS serves as the target state for subsequent reconstruction. We first assume that all measurements are ideal and directly construct the reduced density matrices \(\hat{\sigma}_{i}\) from the target state. Thus, any reconstruction errors are solely attributed to insufficient local measurements and inefficient classical optimization. We perform both algorithms for different system sizes \(N\), with \(D=D_{0}=16\) and \(L=2\), where the initial states are chosen as the paramagnetic ground state for \(g\rightarrow+\infty\). The hyperparameters in the Adam optimizer are set as \(\xi_{1}=\xi_{2}=0.8\) and \(\epsilon=10^{-8}\). Fig. 1(b)-(c) shows the number of iterative steps required to achieve fidelity \(f=0.9\) for different system sizes, representing the convergence speed of the algorithms. The inset of Fig. 1(b) clearly shows that the time complexity of the ML-MPS method scales exponentially with system size for a given reconstruction accuracy. In contrast, the results shown in Fig. 1(c) are highly promising, indicating that the number of gradient steps required for convergence scales only as \(O(\log N)\). In other words, the overall time complexity of our Grad-LPDO method to reconstruct a pure 1D state is \(O(N\log(N)D^{3})\). This time complexity significantly outperforms ML-MPS (and SVT-MPS with time complexity \(O(N^{4}D^{4})\)[12]), meaning that our method saves considerable time compared to ML-MPS, especially for large systems. Even for small systems, ML-MPS holds no efficiency advantage since truncating MPS in each iteration, as required in the ML method (see Methods), is generally more computationally expensive than simply contracting the environment in Grad-LPDO. 
To provide a comprehensive comparison of these two methods, we calculate the maximal fidelity achieved during iterations for different \(N\) in Fig. 1(d), which demonstrates the higher accuracy of our method. At the same time, we also consider convergence stability when comparing different methods. Typical iteration curves plotted in Fig. 1(e)-(f) reveal that Grad-LPDO consistently converges to maximal fidelity, while the convergence of ML-MPS is less stable, regardless of the update or learning rate per step. In conclusion, our Grad-LPDO method surpasses the previous ML-MPS method in three key aspects: efficiency, accuracy, and stability. Figure 1: Schematic of Grad-LPDO method and performance on 1D critical Ising ground state with \(D=D_{0}=16\) and varying \(N\). (a) Mixed state with \(N=4\) in the LPDO form and the reduced density matrix for the sites \(\left\{i,i+1\right\}\). (b)-(f) Comparison of ML-MPS and Grad-LPDO methods. (b)-(c) Number of steps to achieve \(f=0.9\), with \(\log(n)\) inset, for (b) ML-MPS and (c) Grad-LPDO. (d) Maximal fidelity during iterations for both methods. (e)-(f) Convergence stability of the two methods for different learning and update rates. (e) The ML-MPS method, where \(\epsilon\) is the update rate defined by Eq. (11). (f) Our Grad-LPDO method, where \(\eta\) is the learning rate in the Adam optimizer. We now examine the performance of our Grad-LPDO method on the ground state of the 1D gapless spin-\(\frac{1}{2}\) Heisenberg model \[H=\sum_{\left\langle i,j\right\rangle}\mathbf{S}_{i}\cdot\mathbf{S}_{j} \tag{6}\] with \(N=20\) and \(D_{0}=16\). The ZZ correlation function \(C_{ij}^{\mathrm{ZZ}}=\left\langle Z_{i}Z_{j}\right\rangle-\left\langle Z_{i}\right\rangle\left\langle Z_{j}\right\rangle\) is calculated for both the target and the reconstructed states with \(D=D_{0}=16\) and \(L=2\) in Fig. 2(a). Their high consistency indicates that Grad-LPDO can capture most of the long-range correlation and antiferromagnetic order from only local measurements. Furthermore, we compare the performance of ML-MPS and Grad-LPDO in Fig. 2(b), where the differences in correlation between the target state and the reconstructed states are shown for both methods. Unlike Grad-LPDO, ML-MPS cannot accurately reproduce the correlation between sites with intervals \(L\geq 3\), hindering its application to systems with nontrivial orders. Next, we consider Heisenberg ground states with \(D_{0}=16\) that undergo different types of local noise. Specifically, we add four types of single-qubit noise with an equal error rate \(\varepsilon=0.01\) to each qubit of the ideal state, including depolarizing (DP), bit flipping (BF), amplitude damping (AD), and phase damping (PD). Fig. 2(c) shows numerical results, where we reconstruct target states using LPDOs with \(D=16\) and different \(d_{\kappa}\), along with local measurements of length \(L=2\). The fidelity between two density matrices is defined as their inner product in operator space \(f\left(\hat{\rho}_{1},\hat{\rho}_{2}\right)\equiv\mathrm{Tr}(\hat{\rho}_{1}\hat{\rho}_{2})/\sqrt{\mathrm{Tr}(\hat{\rho}_{1}^{2})\mathrm{Tr}(\hat{\rho}_{2}^{2})}\), which can be efficiently calculated for LPDOs (see Methods). Introducing noise to the target state will generally reduce the reconstruction fidelity \(f\), which cannot be improved by only increasing \(d_{\kappa}\), as implied in Fig. 2(c).
This decrease arises because local measurements alone cannot distinguish mixtures in the reduced density matrix that arise from entanglement with unmeasured qubits (represented by virtual indices) from those that arise from quantum noise (represented by Kraus indices), as confirmed by our numerical simulations. For example, reconstructing quantum states with depolarizing or bit-flipping noise poses greater challenges, since these noise models are stochastic in nature and directly introduce mixtures across different trajectories. Furthermore, the purity of the target state \(\mathcal{P}=\mathrm{Tr}[\hat{\rho}_{0}^{2}]\) is plotted in Fig. 2(d) for different noise types and in Fig. 2(e) for depolarizing noise across varying system sizes \(N\). They show a clear dependence of the reconstruction fidelity \(f\) on \(\mathcal{P}\), supporting our earlier arguments. To overcome this challenge, we increase the measurement length \(L\) in Fig. 2(d)-(e), significantly improving the accuracy for all types of noise. With adjacent 4-site measurements, the reconstruction fidelity \(f\) exceeds 0.985 even for the most challenging depolarizing noise with up to \(N=20\) qubits. However, the experimental and numerical costs scale exponentially with \(L\), requiring a trade-off between accuracy and efficiency. Our results indicate that even for moderate-sized critical systems with typical noise levels, a small constant \(L\) suffices for high-fidelity reconstruction at an acceptable cost using modern experimental and numerical techniques. Figure 2: Results for gapless spin-\(\frac{1}{2}\) Heisenberg ground states without and with noise. (a) ZZ correlation function for the target and reconstructed states with \(N=20\) and \(D=D_{0}=16\). (b) Differences in the correlation between the target and reconstructed states for the two methods. (c)-(e) Maximal fidelity \(f\) and purity \(\mathcal{P}\) for Heisenberg ground states (\(D=D_{0}=16\)) undergoing various types of noise including depolarizing (DP), bit flipping (BF), amplitude damping (AD), and phase damping (PD) with error rate \(\varepsilon=0.01\). (c) \(N=20\) spins with four types of noise, reconstructed by LPDOs with \(L=2\) and different \(d_{\kappa}\). (d) \(N=20\) spins with four types of noise, reconstructed by LPDOs with \(d_{\kappa}=2\) and different \(L\). (e) Depolarizing noise added to systems with different \(N\), reconstructed by LPDOs with \(d_{\kappa}=2\) and different \(L\). ## IV Generalization to 2D systems We now generalize our Grad-LPDO method to pure states of two-dimensional systems on square lattices, represented as PEPSs. The key idea and procedure remain the same, where the local measurements in Eq. (2) are applied to \(L_{1}\times L_{2}\) subsystems. When updating the corresponding local tensor \(A_{j}\), gradients in Eq. (5) contain terms with local measurements covering the site \(j\). In general, the contraction of a 2D TN is computationally expensive. Here, we adopt the standard truncation method for finite-size systems [10; 39]. The 2D TN with bond dimension \(D\) is contracted layer by layer as the evolution of a 1D MPS, truncating the bond dimension to \(\chi=D^{2}\) after each contraction. We simulate random PEPS target states with fixed \(D_{0}=3\) and varying system sizes \(N\times N\), where random PEPSs with different \(D\) are chosen as initial states for the iteration. Local measurements are performed on all \(1\times 2\) and \(2\times 1\) subsystems. Fig.
3 shows the reconstruction fidelity of 100 shots for each pair of \(N\) and \(D\), with error bars giving the standard deviation. The average fidelity reaches 0.995 for random \(8\times 8\) PEPSs with \(D_{0}=3\), which are generally non-critical. ## V Experiments on IBM Quantum Platform To demonstrate our method on real quantum hardware, we conduct experiments on the IBM Quantum platform. Specifically, we use the 'ibm_nairobi' quantum computer, a superconducting processor with \(N=7\) available qubits, whose qubit configuration and noise information are shown in Fig. 4(a). The input state is the trivial product state \(\ket{\psi_{0}}=\ket{0}^{\otimes N}\); random Haar circuits are then implemented to generate the output states to be reconstructed. \(4^{L}\) (\(L=2\) here) Pauli strings are measured to estimate each reduced density matrix \(\hat{\sigma}_{i}\) for the target state, with each observable measured using 10,000 shots. To mitigate errors in measurement results, twirled readout error extinction (T-REx) [28] is employed. Other error mitigation techniques for circuit errors are not considered, as one of the main objectives of tomography is to learn about the noise information in the system. Since the target state is unknown in practice, we use the loss function \(\Theta\) defined in Eq. (2) to represent the residual error for tomography, which is highly related to the final reconstruction fidelity that cannot be directly evaluated in practical scenarios [12]. In Fig. 4(b), the iteration process of one typical circuit realization (seed 57 for the random circuit) is plotted for \(d_{\kappa}=2\) and different \(D\). The experimentally measured expectation values \(\text{Tr}[\hat{\sigma}_{\langle i\rangle}\hat{P}_{\langle i\rangle}]\) and the corresponding estimated values from the reconstructed states \(\text{Tr}[\hat{\rho}_{\langle i\rangle}\hat{P}_{\langle i\rangle}]\) are calculated for each reduced density matrix with \(d_{\kappa}=2\) and \(D=4\), where \(\hat{P}_{\langle i\rangle}\) refers to Pauli strings. These results are visualized in Fig. 4(d)-(i) to demonstrate their high consistency. Fig. 4(c) shows the residual error averaged over 64 random circuit realizations along with the standard deviation. For each circuit realization, we choose 100 random LPDOs as initial states and record the average residual error. These results confirm that LPDOs with small \(d_{\kappa}\) and \(D\) serve as good approximations for quantum states generated from noisy quantum circuits, validating the performance of our tomography scheme. ## VI Discussion In this study, we introduce a new state tomography scheme that uses the LPDO parameterization of mixed states and classical optimization to estimate unknown tensors from only local measurements. Our approach demonstrates enhanced efficiency, accuracy, and robustness compared to previous methods relying on MPS representations. In particular, we extend TN state tomography, originally developed for 1D systems, to higher spatial dimensions. Our optimization method demonstrates efficacy in alleviating the curse of dimensionality for high-dimensional systems. The findings suggest that the recently proposed process tomography method [36] and the error mitigation approach [25] may generalize to higher-dimensional circuits, enabling a deeper understanding of noise and its effects in such systems [40].
Our protocol facilitates the realization of quantum state tomography for large systems with potential quantum advantages, promoting advancements in precise quantum control and complex algorithm implementation [2; 41]. ## VII Methods ### Local measurements and reduced density matrices To obtain the reduced density matrix \(\hat{\sigma}_{\langle i\rangle}\) for the sites \(\{i,\cdots,i+L-1\}\) in experiments, one needs to implement an informationally complete set of measurements in this subsystem. Specifically, we consider all possible products of Pauli operators \(\hat{P}^{\mathbf{m}}_{\langle i\rangle}\) acting on these adjacent \(L\) sites, where \(\mathbf{m}=\{m_{i},\cdots,m_{i+L-1}\}\) with \(m_{i}\in\{I,X,Y,Z\}\). Then \(\hat{\sigma}_{\langle i\rangle}\) can be expanded as [1; 12] \[\hat{\sigma}_{\langle i\rangle}=\frac{1}{2^{L}}\sum_{\mathbf{m}}\text{Tr}\Big{[}\hat{\sigma}_{\langle i\rangle}\hat{P}^{\mathbf{m}}_{\langle i\rangle}\Big{]}\hat{P}^{\mathbf{m}}_{\langle i\rangle} \tag{7}\] since the \(\hat{P}^{\mathbf{m}}_{\langle i\rangle}\) constitute a complete and orthogonal basis in the operator space. To reconstruct each \(\hat{\sigma}_{\langle i\rangle}\), the expectation values of all \(4^{L}\) Pauli strings must be measured. Figure 3: Reconstruction fidelity for random PEPSs with bond dimension \(D_{0}=3\) and varying system sizes. Results are averaged over 100 random target and initial states. In our numerical simulations for 1D pure and mixed states and 2D PEPSs, we directly construct \(\hat{\sigma}_{\langle i\rangle}\) from the target states. In experiments on the IBM Quantum platform, we estimate \(\hat{\sigma}_{\langle i\rangle}\) by applying measurements on the target states. ### SVT-MPS method and ML-MPS method We briefly review two QST methods for 1D systems based on MPS and local measurements [12, 13]. The SVT-MPS method [12] is inspired by the SVT algorithm in computer science for matrix completion [42]. The target state can be approached iteratively by solving a local Hamiltonian at each step. Specifically, in the \(n\)-th iterative step we construct and find the dominant eigenstate of the following local Hamiltonian \[\hat{Y}_{n+1}=\hat{Y}_{n}+\delta_{n}\left(\sum_{i}\hat{\sigma}_{\langle i\rangle}-E_{n}\sum_{i}\hat{\rho}_{n\langle i\rangle}\right). \tag{8}\] Here, \(\delta_{n}\) is the 'update rate' for each step, \(\hat{\sigma}_{\langle i\rangle}\) are reduced density matrices for adjacent \(L\) sites of the target state obtained through direct measurements, while \(\hat{\rho}_{n\langle i\rangle}\) are reduced density matrices of the dominant eigenstate of \(\hat{Y}_{n}\) with eigenvalue \(E_{n}\). The authors have shown that the number of iterative steps to achieve fixed fidelity typically scales as \(O(N^{2})\), where \(N\) is the number of qubits. Additionally, in each step, a variational method is needed to solve a local Hamiltonian, involving \(O(N)\) sweeps back and forth and \(O(N)\) calculations of the environment per sweep. Updating local tensors for each site also requires calculating the dominant eigenvector of a \(D^{2}d_{p}\times D^{2}d_{p}\) matrix, with complexity at least \(O(D^{4})\) assuming sparsity. Therefore, the total computational cost of SVT-MPS is \(O(N^{4}D^{4})\), which is scalable but limited to pure states. However, since real experiments encounter mixed states with decoherence, this reconstruction scheme has limited practicality for characterizing the noise effect in quantum devices.
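As an illustration of the Pauli-basis expansion in Eq. (7) described at the start of the Methods, the following NumPy sketch (ours, not the authors' code) rebuilds a two-site reduced density matrix from its \(4^{L}\) Pauli-string expectation values and checks the round trip on a random state.

```python
import itertools
import numpy as np

PAULI = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(labels):
    """Tensor product of single-site Pauli matrices, e.g. ('X', 'Z') -> X ⊗ Z."""
    op = np.array([[1.0 + 0j]])
    for s in labels:
        op = np.kron(op, PAULI[s])
    return op

def expand_from_expectations(expectations, L=2):
    """Eq. (7): sigma = 2^-L * sum_m Tr[sigma P_m] P_m."""
    sigma = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for labels in itertools.product("IXYZ", repeat=L):
        sigma += expectations["".join(labels)] * pauli_string(labels) / 2 ** L
    return sigma

# Round trip on a random two-qubit density matrix.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
sigma_true = A @ A.conj().T
sigma_true /= np.trace(sigma_true)
exps = {"".join(l): np.trace(sigma_true @ pauli_string(l)).real
        for l in itertools.product("IXYZ", repeat=2)}
assert np.allclose(expand_from_expectations(exps), sigma_true)
```

Because the Pauli strings form an orthogonal operator basis with \(\mathrm{Tr}[\hat{P}^{\mathbf{m}}\hat{P}^{\mathbf{n}}]=2^{L}\delta_{\mathbf{mn}}\), the expansion recovers the reduced density matrix exactly once all \(4^{L}\) expectation values are known.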
The ML-MPS method directly searches for the target state that maximizes the log-likelihood function \[\log\mathcal{L}\left(\hat{\rho}\right)=\sum_{i,j}n^{j}_{\langle i\rangle}\log\left(\operatorname{Tr}\left[\hat{\Pi}^{j}_{\langle i\rangle}\hat{\rho}\right]\right) \tag{9}\] via a fixed-point iterative algorithm. Here, \(\hat{\Pi}_{\langle i\rangle}^{j}\) are local projectors labeled with \(j\) applied at adjacent \(L\) sites \(\{i,\cdots,i+L-1\}\), and \(n_{\langle i\rangle}^{j}\) are the corresponding measurement outcomes. The solution \(\hat{\rho}_{\mathrm{ML}}\) that maximizes the above function satisfies \[\hat{\rho}_{\mathrm{ML}}=\frac{1}{M}\sum_{i,j}\frac{n_{\langle i\rangle}^{j}}{\mathrm{Tr}\!\left[\hat{\Pi}_{\langle i\rangle}^{j}\hat{\rho}_{\mathrm{ML}}\right]}\hat{\Pi}_{\langle i\rangle}^{j}\hat{\rho}_{\mathrm{ML}}\equiv\mathcal{R}(\hat{\rho}_{\mathrm{ML}})\hat{\rho}_{\mathrm{ML}}, \tag{10}\] which corresponds to the fixed-point equation \[\hat{\rho}=\mathcal{R}(\hat{\rho})\hat{\rho}\mathcal{R}(\hat{\rho}). \tag{11}\] In practice, one can replace \(\mathcal{R}\) by \((\mathcal{I}+\epsilon\mathcal{R})/(1+\epsilon)\) with \(\epsilon\ll 1\). Furthermore, under the assumption of a pure state, we only need to iterate on the pure state manifold \(\ket{\psi}=\mathcal{R}\ket{\psi}\). To implement this, we construct the MPO representation of \(\mathcal{R}\) in each iteration and truncate the resulting MPS \(\mathcal{R}\ket{\psi}\). Truncation can be done variationally by minimizing the error \(e=(\bra{\psi}-\bra{\psi^{\prime}})(\ket{\psi}-\ket{\psi^{\prime}})\) or more efficiently by using SVD from site to site in canonical form, which requires \(O(ND^{3})\) operations. In the following table, we summarize the computational complexity and application scope of the previous two methods and our Grad-LPDO method, as stated above. \begin{table} \begin{tabular}{c c c} \hline Method & Computational cost & Applicable states \\ \hline \hline SVT-MPS & \(O(N^{4}D^{4})\) & pure states (MPS) \\ \hline ML-MPS & exponential in \(N\) & pure (MPS) or mixed (MPO) states \\ \hline Grad-LPDO & \(O(N\log(N)D^{3})\) & pure and mixed states (LPDO) \\ \hline \end{tabular} \end{table} Figure 4: Experiments on IBM Quantum platform. (a) Configuration of qubits and noise information for device 'ibm_nairobi'. (b) The iteration process of one circuit realization (\(\mathrm{seed}=57\)) for \(d_{\kappa}=2\) and different \(D\). (c) Averaged residual loss \(\Theta\) across all circuit realizations for different \(D\) and \(d_{\kappa}\). (d)-(i) Expectation values for \(L=2\) local observables obtained from experiments and reconstruction for \(d_{\kappa}=2\) and \(D=4\). ### Fidelity between two states The fidelity between two normalized pure states is defined as \[f\left(\ket{\psi},\ket{\phi}\right)=\left|\langle\psi|\phi\rangle\right|^{2}. \tag{12}\] This is usually generalized for two mixed states as \[f\left(\hat{\rho}_{1},\hat{\rho}_{2}\right)=\left(\mathrm{Tr}\sqrt{\sqrt{\hat{\rho}_{1}}\,\hat{\rho}_{2}\sqrt{\hat{\rho}_{1}}}\right)^{2} \tag{13}\] with normalized \(\mathrm{Tr}[\hat{\rho}_{1}]=\mathrm{Tr}[\hat{\rho}_{2}]=1\). However, this definition cannot be directly estimated for two mixed states in their LPDO form, which prevents direct benchmarking of our tomography method for mixed states in large systems. Therefore, we adopt an alternative definition \[f\left(\hat{\rho}_{1},\hat{\rho}_{2}\right)\equiv\mathrm{Tr}(\hat{\rho}_{1}\hat{\rho}_{2})/\sqrt{\mathrm{Tr}(\hat{\rho}_{1}^{2})\mathrm{Tr}(\hat{\rho}_{2}^{2})}, \tag{14}\] which is the inner product in the operator space and equals the overlap between two superoperators \(\ket{\rho_{1}}\) and \(\ket{\rho_{2}}\). In particular, this alternative definition of fidelity reduces to Eq. (12) for pure states. ### Noise models In our numerical simulations for mixed states, the noise added to each state includes four types.
The depolarizing noise is defined as \[\mathcal{E}\left(\hat{\rho}\right)=\left(1-\frac{4}{3}\varepsilon\right)\hat{\rho}+\frac{1}{3}\varepsilon\sum_{i=0}^{3}\sigma_{i}\hat{\rho}\sigma_{i}. \tag{15}\] The bit flipping noise is defined as \[\mathcal{E}\left(\hat{\rho}\right)=\left(1-\varepsilon\right)\hat{\rho}+\varepsilon\sigma_{x}\hat{\rho}\sigma_{x}. \tag{16}\] The amplitude damping noise is defined by the Kraus operators \(E_{0}=\ket{0}\!\bra{0}+\sqrt{1-\varepsilon}\ket{1}\!\bra{1}\) and \(E_{1}=\sqrt{\varepsilon}\ket{0}\!\bra{1}\) with the operator-sum representation \[\mathcal{E}\left(\hat{\rho}\right)=E_{0}\hat{\rho}E_{0}^{\dagger}+E_{1}\hat{\rho}E_{1}^{\dagger}. \tag{17}\] Phase damping noise is defined similarly, with the Kraus operators \(E_{0}=\ket{0}\!\bra{0}+\sqrt{1-\varepsilon}\ket{1}\!\bra{1}\) and \(E_{1}=\sqrt{\varepsilon}\ket{1}\!\bra{1}\). ## Data availability The datasets generated and analyzed during the current study are available from the corresponding author upon reasonable request.
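For reference, here is a small NumPy sketch (ours, not the authors' code) of the four single-qubit channels in Eqs. (15)-(17), written as Kraus maps \(\hat{\rho}\to\sum_{k}E_{k}\hat{\rho}E_{k}^{\dagger}\); the final loop only checks that each channel preserves the trace.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def apply_channel(rho, kraus_ops):
    return sum(E @ rho @ E.conj().T for E in kraus_ops)

def depolarizing(eps):
    # (1 - 4*eps/3) rho + eps/3 * sum_i sigma_i rho sigma_i, Eq. (15)
    return [np.sqrt(1 - eps) * I] + [np.sqrt(eps / 3) * P for P in (X, Y, Z)]

def bit_flip(eps):
    # (1 - eps) rho + eps X rho X, Eq. (16)
    return [np.sqrt(1 - eps) * I, np.sqrt(eps) * X]

def amplitude_damping(eps):
    # Kraus operators defined above Eq. (17)
    E0 = np.array([[1, 0], [0, np.sqrt(1 - eps)]], dtype=complex)
    E1 = np.array([[0, np.sqrt(eps)], [0, 0]], dtype=complex)
    return [E0, E1]

def phase_damping(eps):
    E0 = np.array([[1, 0], [0, np.sqrt(1 - eps)]], dtype=complex)
    E1 = np.array([[0, 0], [0, np.sqrt(eps)]], dtype=complex)
    return [E0, E1]

# Sanity check: every channel is trace preserving.
rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)
for channel in (depolarizing, bit_flip, amplitude_damping, phase_damping):
    out = apply_channel(rho, channel(0.01))
    assert np.isclose(np.trace(out).real, 1.0)
```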
2309.11142
Prototype of a robotic system to assist the learning process of English language with text-generation through DNN
In the last ongoing years, there has been a significant ascending on the field of Natural Language Processing (NLP) for performing multiple tasks including English Language Teaching (ELT). An effective strategy to favor the learning process uses interactive devices to engage learners in their self-learning process. In this work, we present a working prototype of a humanoid robotic system to assist English language self-learners through text generation using Long Short Term Memory (LSTM) Neural Networks. The learners interact with the system using a Graphic User Interface that generates text according to the English level of the user. The experimentation was conducted using English learners and the results were measured accordingly to International English Language Testing System (IELTS) rubric. Preliminary results show an increment in the Grammatical Range of learners who interacted with the system.
Carlos Morales-Torres, Mario Campos-Soberanis, Diego Campos-Sobrino
2023-09-20T08:39:51Z
http://arxiv.org/abs/2309.11142v1
Prototype of a robotic system to assist the learning process of English language with text-generation through DNN ###### Abstract In the last ongoing years, there has been a significant ascending on the field of Natural Language Processing (NLP) for performing multiple tasks including English Language Teaching (ELT). An effective strategy to favor the learning process uses interactive devices to engage learners in their self-learning process. In this work, we present a working prototype of a humanoid robotic system to assist English language self-learners through text generation using Long Short Term Memory (LSTM) Neural Networks. The learners interact with the system using a Graphic User Interface that generates text according to the English level of the user. The experimentation was conducted using English learners and the results were measured accordingly to International English Language Testing System (IELTS) rubric. Preliminary results show an increment in the Grammatical Range of learners who interacted with the system. Keywords:Robotic Systems Natural Language Processing Text Generation Long Short Term Memory Networks. ## 1 Introduction As Artificial Intelligence (AI) becomes more equipped to comprehend human communication, more institutions will adopt this technology for areas where Natural Language Processing (NLP) would make a difference. AI technology is already being used in smart home and office assistants, customer service, healthcare, and human robotics, among others. There are multiple aspects of AI and NLP that create the opportunity of having machines offer engaging, interactive capabilities. However, the current state of the art in NLP lacks reasoning and empathy capabilities, making complex interactions difficult. One way to exploit the engagement potential of NLP technology is its application to assistive technology. A particularly interesting field is the use of such systems in interactive robotics. Humanoid robots are useful for tedious and risky errands, including tasks that can be exhausting for human beings. Jobs that require a lot of concentration and feedback, like tutoring and guidance, can benefit from incorporating autonomous robotic systems that let students interact while learning about a specific field. Robotic systems will require the capacity to understand human lexis to achieve these goals, making natural language handling even more significant. In the educational context, there are systems capable of teaching or assisting individuals in a self-learning process, such as Conversational Intelligent Tutoring Systems. However, they are still not advanced enough to automatically provide knowledge that helps students learn a language without the need of human assistance [2]. Also, there have been interesting studies showing that interactive robotic systems are beneficial for learning [3]. These characteristics point to a synergy opportunity: a robotic system that incorporates an NLP component can be helpful in the self-learning process [20]. This article presents a functional prototype of a robotic system to assist the English language learning process through text-generation using Deep Neural Networks (DNN). A humanoid robot was designed and manufactured to promote learners' engagement with the assisting tool. The interaction was conducted using a Graphical User Interface (GUI) incorporated in the robot.
A text-generation component was included to allow the users to interact with the system and generate language using different English levels. The experimentation was conducted with English learners and measured using the International English Language Testing System (IELTS) rubric. Preliminary results show an improvement of the subjects' current English level through regular usage of the system. However, there is a need for further and deeper experimentation to generalize the findings in this work. The article is structured as follows: Section 2 describes the state of the art of robotic systems implemented to assist self-learning; Section 3 presents the research methodology; Section 4 describes the experimental work carried out, presenting its results in Section 5. Finally, conclusions and lines of experimentation for future work are provided in Section 6. ## 2 Background A humanoid robot is a robotic system whose features are designed to resemble human anatomy. These robots are usually presented and utilized as research tools in scientific fields that aim to understand human body structure and behavior. It has been proposed that robotics will be helpful in various education scenarios [3]. Previous studies indicate that robotics is providing benefits as a teaching tool, in particular in the STEM fields [16] and in English learning [10]. Robotic systems also provide a learning environment that seeks to improve the interdisciplinary process of learning, promoting the engagement of students in their learning activities [9, 17]. There are examples where a robot that assists the learning process is appropriate for language skill development, as it allows richer interaction than digital platforms [15, 17]. A significant challenge in incorporating robots as a tool to assist the self-learning of a language is to design an engaging experience tightly related to the language the learner is using. NLP is particularly well suited to close this gap. NLP has evolved from simple classification methods like logistic regression to more complex statistical language methods and DNN [14]. Neural Networks are the dominant paradigm in NLP and have increased the research of end-to-end systems for understanding human language, leading to complex applications such as conversational chatbots [21]. Most current NLP models make extensive use of transformers, which are topologies that use an encoder-decoder architecture incorporating an attention mechanism [26]. Many state-of-the-art results make use of this architecture trained with vast amounts of data. Models like BERT [7], T5 [22] and GPT-3 [6] are examples of big transformers delivering state-of-the-art results for various NLP tasks. Nevertheless, the field of NLP is still underdeveloped in terms of using small amounts of data to fine-tune big transformer models. One way to deal with low-quantity data for NLP tasks is using RNNs. These models are effective for sequence prediction tasks [12], as they store information about the current feature based on previous information, giving the model forecasting and conditioned-output capabilities [19]. Recurrent architectures learn the relative importance of different parts of the sequence; nevertheless, transformers substitute recurrent mechanisms with attention mechanisms [26], which allows the capture of longer-range dependencies while reinforcing training.
There exist studies that favor traditional models like Conditional Random Fields (CRF) and LSTM networks over big transformer models in settings where the amount of data is not enough to perform fine-tuning, or the language specificity makes generalization difficult [13, 23]. Additionally, LSTM runs faster, making it well suited for real-time systems interaction [4]. Language models (LMs) have also been used for text generation, either with large transformers [25] or with LSTMs as in [5, 18]. In this research, an LM is generated using an LSTM trained on a specific dataset, and it is used to predict the succeeding word. The predicted output word is then appended to the existing input words and given as new input. This process is continuously repeated by shifting the window to generate text. In the presented work, a humanoid robotic system was designed and manufactured to help engage English language students in their self-learning process. A text-generation module to expose users to a variety of vocabulary and sentences was developed through the experimentation, selection, and fine-tuning of LSTM models, transformers, and encoder-decoder architectures. The best model is selected to perform text-generation using a short seed text as shown in [24]. ## 3 Methodology This section presents the tools, methodologies, and development approaches used for corpus creation, text-generation module training, humanoid robotic system design, and the system integration to allow students to interact with it. ### Corpus creation The dataset consisted of different English sentences divided into three categories: basic, intermediate, and advanced. A human expert IELTS evaluator assisted in the creation of sentences with different levels of English proficiency, considering variation in grammatical range and lexical resources according to each level. The corpus is structured in sentences, divided by punctuation signs that are later cleaned and omitted so that the text-generation model processes individual words. It contains 4,785 sentences and 150,000 words. ### Text generation module Most advanced models for text-generation make use of deep learning models, including LSTM networks and transformer architectures [8]. Different DNN models were trained using the dataset described in the previous section to develop the text-generation component. The researched models were: Simple LSTM model, BERT fine-tuned model, Encoder-Decoder LSTM model, and Bidirectional LSTM model. To process the text, the input sentences were tokenized and passed through the input layer of each model, then to an embedding layer, and subsequently fed to the RNN substructure that processes the tokens. Finally, a softmax layer is used to predict the probability of the next word. The general architecture of the networks is depicted in Figure 1. Figure 1: General architecture of the text-generation network. Each model was implemented using the Keras framework and trained using the same dataset split with 80% for training and 20% for testing. Also, at training time, a development set proportion of 10% was used for Keras to compute validation loss and accuracy. After experimenting with the mentioned models, the model with the best performance accuracy is selected and fine-tuned to perform the text-generation. ### Robotic system design The methodology used to design the humanoid robotic system consisted of three main phases: requirement definition, specification, and design.
In the requirement definition phase, an analysis of the functionality requirements of the robot was made, and the functional structures were defined. Then, in the specification stage, the robot's specifications and general guidelines for the project were established. In the design stage, specifications and guidelines were measured quantitatively, including the kinematics analysis and the definition of mechanical structures. To favor student engagement with the robot, it was decided to use an anthropomorphic system bearing kinematics considerations. Regardless, the presented robotic system does not yet include actuated mechanical components; the mechanical design was made so that mechanical actuators can be added later to let the system move and increase interaction with users. The parameters that represent the kinematics configuration in general terms were based on the Denavit-Hartenberg [11] motion equations. After the design stage was done, the system was drawn using the 3D drawing software Fusion 360. The manufacturing stage consists of printing and assembling a 3D sketch of the entire robotic system with the appropriate parameters obtained from the previous analysis. ### System implementation The implementation includes an embedded system that captures the user's speech and uses Google's Speech to Text (STT) web service to get the transcription of the user utterance. The embedded system sends the transcription to a web service implemented in Flask to consume the best text-generation model found in the experimentation. The implemented service uses the transcription as a seed to predict the following text using a fixed number of 5 words. After the model predicts the text, the Flask server sends the predicted text to the embedded system using a webhook. The embedded system then uses Google's Text to Speech (TTS) service to generate an audio file with the predicted text and play it using a speaker. The system is attached to the robot's body, and the user initiates the interaction. Alternatively, a Graphic User Interface (GUI) was implemented using the Gradio library [1], which can consume the service using a tablet incorporated into the robot. The GUI was intended to include users with speech or hearing disabilities. The communication architecture is depicted in Figure 2. ## 4 Experimentation This section shows the methods used for the text-generation module training, the manufacturing process of the robotic system, and the experimental process to measure the system's effectiveness in assisting the self-learning process of English students. The implemented mechanisms are illustrated as described in Section 3 of the document. ### Corpus data The corpus consisted of sentences with 3 different English levels: elemental (IELTS accuracy level 1-2.5), pre-intermediate (IELTS accuracy level 3-4.5), and upper intermediate (IELTS accuracy level 6+). Each set contained different sequence-to-sequence compound-complex sentences. This was recommended by the IELTS evaluator to optimize three specific levels of English to tackle fluency levels in different scenarios. The corpus included 171,461 tokens, 150,356 words, and 4,785 sentences. ### Text-generation module The different models were trained using the corpus described in Section 4.1, divided into random partitions for training, validation, and test. Four different models were trained: Simple LSTM model, BERT fine-tuned model, Encoder-Decoder LSTM model, and a Bidirectional LSTM model. Each model was trained for 20 epochs, and the validation metrics were reported using the validation set.
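Stepping back to the System implementation subsection above, the Flask service that serves the text-generation model could look roughly like the following sketch; the route name, payload fields, and the stubbed model call are illustrative assumptions, not the authors' code.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate_text(seed: str, n_words: int = 5) -> str:
    """Stub for the trained text-generation model. In the real system this
    would call the bidirectional LSTM described in Section 3 and extend the
    seed by n_words predicted words."""
    return seed + " ..."

@app.route("/generate", methods=["POST"])   # hypothetical route name
def generate():
    payload = request.get_json(force=True)
    seed = payload.get("transcription", "")      # text produced by the STT step
    generated = generate_text(seed, n_words=5)   # fixed 5-word continuation
    return jsonify({"generated": generated})     # returned to the embedded system

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```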
Different models were iterated using dropout regularization (_dropout_) with different probability parameters. Once the best model was obtained in the validation set, it was evaluated in the test data to report the metrics presented in Section 5.1. The models were implemented using Tensorflow 2.0 and Keras on a Debian GNU/Linux 10 (buster) x86_64 operating system, supplied with an 11 GB Nvidia GTX 1080 TI GPU. Figure 2: System communication architecture. After the first experiments were conducted, the best-performing model found was the Bidirectional LSTM, measured in terms of accuracy on the validation set. Once the best model was found, further experimentation was done using a grid-search strategy to find the best hyper-parameters of the model, resulting in the following topology: LSTM layer (100 units), Dropout layer (0.6 drop rate), LSTM layer (100 units), Dense layer (100 units, ReLU activation), Dense layer (125 units, softmax activation). The best parameters found were the following: Embedding vocabulary-size: 70, dropout layer: 0.6, activation function: softmax, trainable parameters: 180,275, loss function: categorical cross entropy, batch size: 150. ### Robotic system manufacturing The whole manufacturing design was approached under engineering methods to allow time-optimization and cost reductions to be considered. The process involves the following stages: Material printing (through a 3-D printing machine, segments of the material were printed for further treatment and assembly), Material purification (through chemical components, the segments of material are purified with a specific epoxy designed to extract impurities while adding brightness), and Assembly of materials (through engineering glue, segments are assembled properly). Each of the previous stages was divided into three segments: a head-manufacturing segment, an arm-manufacturing segment, and a body-manufacturing segment, each respecting the previously presented stages. Final configurations of the robot using the tablet and embedded system are presented in Figure 3. ### System Evaluation To evaluate the system's effectiveness in helping learners, the learners were evaluated using an IELTS rubric before interacting with the system. After that, the learners interacted with the system for 5 days, and a new evaluation using the same rubric was made to assess the performance of the students. The evaluation was conducted with three subjects, one for each English level in the corpus. Figure 3: Robot configurations using embedded system and tablet. ## 5 Results This section shows the results obtained from the experimentation described in Section 4. The improvement of the subjects is analyzed from 250 recorded minutes of training with the system by each subject, including quantitative and qualitative evaluation from IELTS instructors. The system's performance was measured to determine the progress of the subjects. ### LSTM text-forecast model with encoder-decoder attention mechanism. Four different models were considered and evaluated to obtain the one with the best performance. Table 1 shows the accuracy obtained with the four different models when evaluated with the test dataset. The most suitable model that provided results to be used on experimental subjects was the Bidirectional LSTM model. Figure 4 shows the training accuracy and loss for the 20 epochs of training of the Bidirectional LSTM model.
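For concreteness, the topology and hyper-parameters listed above can be sketched in Keras as follows; this is our reconstruction, not the authors' code, and the sequence length, the interpretation of the embedding size, and whether both recurrent layers are wrapped as bidirectional are assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 125   # width of the softmax output layer reported above
EMBED_DIM = 70     # reading of "embedding vocabulary-size: 70" (assumption)
SEQ_LEN = 10       # context window length (not stated in the paper)

model = models.Sequential([
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=EMBED_DIM, input_length=SEQ_LEN),
    layers.Bidirectional(layers.LSTM(100, return_sequences=True)),
    layers.Dropout(0.6),
    layers.Bidirectional(layers.LSTM(100)),
    layers.Dense(100, activation="relu"),
    layers.Dense(VOCAB_SIZE, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

def generate(model, seed_ids, n_words=5):
    """Sliding-window generation as described in Section 2: predict the next
    word id, append it to the context, and repeat n_words times."""
    ids = list(seed_ids)
    for _ in range(n_words):
        context = np.array(ids[-SEQ_LEN:])[None, :]
        if context.shape[1] < SEQ_LEN:               # left-pad short seeds
            pad = SEQ_LEN - context.shape[1]
            context = np.pad(context, ((0, 0), (pad, 0)))
        probs = model.predict(context, verbose=0)[0]
        ids.append(int(np.argmax(probs)))
    return ids
```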
\begin{table} \begin{tabular}{c c} \hline Model Type & Accuracy \\ \hline \hline Simple LSTM & 80\% \\ \hline BERT fine-tuned & 80\% \\ \hline Encoder-Decoder LSTM & 89\% \\ \hline Bidirectional LSTM & 95\% \\ \hline \end{tabular} \end{table} Table 1: Model accuracy results. Figure 4: Accuracy and loss validation for the Bidirectional LSTM model. ### Fluency improvement on subjects This section presents the outcome of the fluency analysis in each of the three experimental subjects after 250 minutes of interaction (50 minutes per day for five consecutive days) with the robotic system. Grammatical range and accuracy are marked by the use of a determined number of grammatical structures (6 types) at a percentage rate of accuracy and error (1-100%). The assigned instructors recorded the number of grammatical sentences used in terms of accuracy percentage. After the elementary-level training, an increase in grammatical range and accuracy, lexical resources, and fluency is observed, while pronunciation and language-idiomatic terminology do not show improvement. From the pre-intermediate level training, a sustained increase across all dimensions was observed, except for pronunciation. The upper-intermediate level attempted to evaluate full understanding of complex ideas generated from the advanced corpus used for training. The idea is to assess a different set of more compound-complex sentences generated by the robotic system. The results before and after the training are shown in figure 5. Figure 5: IELTS metrics comparison before and after training. ### Qualitative results The qualitative data obtained in this section was collected from IELTS instructors who evaluated and listened to a set of questions from one specific context of coherence for each subject, to determine a mark in grammatical range and accuracy based on the IELTS rubric. Finally, instructors who listened to the same ideas in the second interview attached the written feedback shown in figure 6. The results express that the instructors perceived a noticeable enhancement in the English abilities of the subjects after the interaction with the robot. Figure 6: Qualitative feedback from IELTS instructor after training. ## 6 Conclusions and future work This work presented the design, development, and manufacturing of a humanoid robotic system to assist English language students in a self-learning process. The robotic system was developed using a three-phase methodology (requirement analysis, specification, and design) which yields good results, since the system is articulated and ready to add further interaction using actuators. Various models were tested to implement the text-generation module; a particularly interesting observation is related to the relatively poor results (80% accuracy) obtained when using a fine-tuned BERT model. This occurs due to the relatively small amount of data used to perform the fine-tuning; in this regard, the bidirectional LSTM model performs better, achieving 95% accuracy on the test set. The bidirectional LSTM text-generation model was useful for predicting text using a seed given by the user; nevertheless, noticeable irregular fluctuations were reported in the validation accuracy and loss chart, which may be caused by irregularities in the English levels used within the corpus. The experimentation was carried out with three English students of elementary, pre-intermediate, and upper-intermediate English levels, and their progress was measured according to the IELTS rubric.
After 250 minutes of training, comparative results demonstrated an average improvement of 4% in their grammatical range, 4% in grammatical accuracy, and 3.33% in their fluency. No difference was observed in their pronunciation abilities. Quantitative and qualitative data obtained from the experimentation showed a positive result on how a robotic system can provide aid in tackling a specific ability of a foreign language. In this case, the main improvements were reported in terms of fluency and grammatical range skills. Qualitative results show a favorable opinion both from IELTS instructors and students. In general, they perceived the system as a beneficial tool for the progress of the students. The experimental results were limited by time constraints and the reduced number of subjects, so further research is needed to generalize the observed results. The future work regarding this project includes: robust experimentation using more subjects and more structured training sessions, revision of other learning techniques and their overall effect on English language improvement, and experiments with variations in the composition of the corpus to measure its impact on the learning process. Also, interesting research can be conducted regarding pronunciation improvement using a more controlled spoken interaction with the users, and regarding the effect of dynamic movement by adding actuators to the robot and measuring the impact on the self-learning process.
2309.07046
Prediction of Van Hove singularity systems in ternary borides
A computational search for stable structures among both $\alpha$ and $\beta$ phases of ternary ATB4 borides (A= Mg, Ca, Sr, Ba, Al, Ga, and Zn, T is 3d or 4d transition elements) has been performed. We found that $\alpha$-ATB4 compounds with A=Mg, Ca, Al, and T=V, Cr, Mn, Fe, Ni, and Co form a family of structurally stable or almost stable materials. These systems are metallic in non-magnetic states and characterized by the formation of the localized molecular-like state of 3d transition metal atom dimers, which leads to the appearance of numerous Van Hove singularities (VHS) in the electronic spectrum. The closeness of these VHS to the Fermi level can be easily tuned by electron doping. For the atoms in the middle of the 3d row (Cr, Mn, and Fe), these VHS led to magnetic instabilities and new magnetic ground states with a weakly metallic or semiconducting nature. The magnetic ground states in these systems appear as an analog of the spin glass state. Experimental attempts to produce MgFeB4 and associated challenges are discussed, and promising directions for further synthetic studies are formulated.
Yang Sun, Zhen Zhang, Andrew P Porter, Kirill Kovnir, Kai-Ming Ho, Vladimir Antropov
2023-09-13T15:57:19Z
http://arxiv.org/abs/2309.07046v1
# Prediction of Van Hove singularity systems in ternary borides ###### Abstract A computational search for stable structures among both \(\alpha\) and \(\beta\) phases of ternary ATB\({}_{4}\) borides (A = Mg, Ca, Sr, Ba, Al, Ga, and Zn, T is \(3d\) or \(4d\) transition elements) has been performed. We found that \(\alpha\)-ATB\({}_{4}\) compounds with A = Mg, Ca, Al, and T = V, Cr, Mn, Fe, Ni, and Co form a family of structurally stable or almost stable materials. These systems are metallic in non-magnetic states and characterized by the formation of the localized molecular-like state of \(3d\) transition metal atom dimers, which leads to the appearance of numerous Van Hove singularities (VHS) in the electronic spectrum. The closeness of these VHS to the Fermi level can be easily tuned by electron doping. For the atoms in the middle of the \(3d\) row (Cr, Mn, and Fe), these VHS led to magnetic instabilities and new magnetic ground states with a weakly metallic or semiconducting nature. The magnetic ground states in these systems appear as an analog of the spin glass state. Experimental attempts to produce MgFeB\({}_{4}\) and associated challenges are discussed, and promising directions for further synthetic studies are formulated. + Footnote †: Email: [email protected] (Y.S.); [email protected] (K.K.); [email protected] (V.A.) ## I Introduction The electronic density of states (DOS), in the vicinity of the Fermi level \(E_{f}\) (\(N(E_{f})\)), is crucial for the understanding of many properties of metallic systems. A significant \(N(E_{f})\) value typically leads to a broad spectrum of unusual and exciting electronic, magnetic, and structural properties. However, such large values of \(N(E_{f})\) simultaneously destroy the stability of the material, creating difficulties in their experimental synthesis. In some cases, losing initial stability is not a destructive factor, as it can transform the system into a stable magnetic, superconducting, or, for instance, charge density state [1; 2; 3; 4]. A physical reason for developing large \(N(E_{f})\) was discussed in 1953 when Van Hove demonstrated the crucial role of topology in the electronic or phonon band structure [5]. He has shown that peaks of the DOS are determined by so-called Van Hove critical points or singularities (VHS), i.e., points \(\mathbf{k}\) in the Brillouin zone (BZ) of a band with energy dispersion \(\varepsilon(\mathbf{k})\) where \(\nabla_{\mathbf{k}}\varepsilon(\mathbf{k})=0\); for 2D systems, an ordinary VHS with logarithmically diverging DOS occurs at such a saddle point \(\mathbf{k}\). Thus, the energy surface area and the energy-band dispersion are closely related to peaks in the DOS. These VHSs are expected to play an important role in any properties of metallic materials where electrons at the Fermi level are involved. The significance of such VHS becomes stronger in systems with lower dimensions [6; 7; 8; 9; 10]. For instance, for 2D materials, the needed VHS is in the middle of the band, where the number of carriers is large. Such VHS 2D materials have been a hot topic in many areas of solid-state physics, especially after the discovery of high-temperature superconductivity in cuprates [11].
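As a textbook illustration of such a saddle-point singularity (the standard 2D square-lattice tight-binding band with hopping \(t\) and lattice constant \(a\); this model is an illustration only and is not a model of the borides considered below), the dispersion \[\varepsilon(\mathbf{k})=-2t\left(\cos k_{x}a+\cos k_{y}a\right)\] satisfies \(\nabla_{\mathbf{k}}\varepsilon(\mathbf{k})=0\) at the saddle points \(\mathbf{k}_{s}=(\pi/a,0)\) and \((0,\pi/a)\); near \((\pi/a,0)\) the expansion \(\varepsilon(\mathbf{k}_{s}+\mathbf{q})\simeq ta^{2}\left(q_{y}^{2}-q_{x}^{2}\right)\) has curvatures of opposite sign, and the DOS diverges logarithmically, \[N(E)\propto\ln\frac{W}{\left|E-\varepsilon(\mathbf{k}_{s})\right|},\] with \(W\) of the order of the bandwidth.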
The VHS scenario was proposed for the different types of superconductivity, phase separation, magnetic and charge instabilities, and their coexistence [12; 13; 14]. Strong effects of VHS have been reported in many other systems. For instance, the anomalies of an anisotropic thermal expansion near points of electronic topological transition (induced by the corresponding VHS) have been discussed in Ref. [15]. In general, the effect of the proximity of the Fermi level to VHS on the kinetic and lattice properties of metals and alloys was studied in Ref. [16]. The stress-driven Lifshitz transition was found in Sr\({}_{2}\)RuO\({}_{4}\), where the uniaxial pressure lowered the saddle-point singularity below the \(E_{f}\), which caused enhancement of the superconducting critical temperature [17; 18; 19]. Such measurements strongly imply that fermiology plays an ultimate role in the mechanisms of many different orderings. For chemical applications, for instance, surface VHS can also serve as a capacitor for the electrons to enhance the contribution of the systems to O\({}_{2}\) absorption through electron transfer [20]. Below, we will focus on materials crystallizing in the YCrB\({}_{4}\) structural type (referred to below as 114). This type of metal boride was first discovered in a series of RE-T-B\({}_{4}\) systems (RE=rare earth, T=transition metals) in the 1970s [21; 22; 23]. The 114 families attracted much interest in studying \(f\)-electron magnetism [24; 25]. In the 2000s, the transition metal site in the RE-T-B\({}_{4}\) was populated by Al atoms to form RE-Al-B\({}_{4}\) with heavier and smaller RE elements (RE=Tm, Yb, Lu) [26; 27; 28; 29]. YbAlB\({}_{4}\) stimulated great research interest as the first Yb-based heavy fermion superconductor with quantum criticality [30; 31]. It is also predicted to be an ultra-high-temperature ceramic with outstanding thermal and structural properties [32]. While previous studies of 114 systems mainly focused on the properties caused by the RE, the structure is not limited to the RE-based compounds. If the element at the A site can form the network to maintain the T dimer and match the (5,7)-membered boron rings, one would expect other stable phases in the 114 structures. Moreover, relatively isolated T dimers may create certain localized states. Such dimers can also be close to the magnetic threshold. However, we are not aware of observed magnetism in experimentally known systems with 3\(d\) dimers, and previous electronic structure studies ignored magnetism in similar borides [33, 34]. Searching for such new stable systems can be a heavy burden for direct experimental synthesis. Recently, computational screening using electronic structure calculations has demonstrated its effectiveness in discovering new materials [35, 36, 37, 38]. Below, we perform such a computational search for the stable systems in 114 structural families, including the possibility of magnetism. The paper is organized as follows. After reviewing the structures of these systems, we perform computational screening and identify \(\alpha\)-ATB\({}_{4}\) systems with A=Mg, Ca, Al, and T=V, Cr, Mn, Fe, Ni, and Co as a possible structurally stable family with VHS of different strengths (below, we call these VHS systems). Then, we show how the change of type of 3\(d\) atoms can be used to tune the strength of VHS, which generally can lead to instability of the paramagnetic Fermi-liquid state of density functional theory and the formation of a new quantum state.
This can be a new electronic, magnetic, or structural state. We focus on analyzing only magnetic instabilities and demonstrate the richness of possible magnetic ground states of these systems. Furthermore, we discuss our comprehensive experimental synthetic studies of these materials. While it did not yield the desired MgFeB\({}_{4}\) phase, it allowed us to formulate directions for further synthetic endeavors for ATB\({}_{4}\) systems. ## II Results and Discussion ### 114 structures 114 systems have two polymorphs, i.e., the \(\alpha\)-phase with the space group \(Pbam\) and the \(\beta\)-phase with the space group \(Cmmm\). RE-T-B\({}_{4}\) mainly adopts the \(\alpha\)-phase while RE-Al-B\({}_{4}\) can have both \(\alpha\) and \(\beta\) phases [39]. The structure and properties of the \(\alpha\) and \(\beta\) phases are usually similar [32]. Figure 1 shows their atomic packing. Both phases consist of a boron layer and a metal layer. The boron layer comprises a combination of (5,7)-membered rings. The metal layer shows a hexagonal framework of larger A atoms. Two smaller T atoms form a dimer in the center of the A framework. Along the out-of-plane direction, the A site corresponds to the center of the 7-membered ring in the boron layer, while the T site aligns to the center of the 5-membered ring. The intradimer separation in the \(a\)-\(b\) plane is the closest distance between the T-site atoms (\(\sim\)2.3-2.6 Å). The second nearest distance, between the interlayer dimers along the \(c\) direction, is \(\sim\)40% longer than the intradimer distance. The third nearest distance, between the dimers in the plane, is \(>\)6 Å. Therefore, the intradimer interaction between T atoms is expected to be strongest, while the interaction along the \(c\)-direction should be somewhat weaker. Whether or not T atoms can interact in the plane remains to be determined. The main difference between the \(\alpha\) phase and the \(\beta\) phase is in the orientation of the in-plane networks within the 3\(d\)-atom layer. Figure 1: **Crystal structure of (a-c) \(\alpha\)-ATB\({}_{4}\) and (d-f) \(\beta\)-ATB\({}_{4}\).** (a, d) The network in the boron layer forms (5,7) rings. (b, e) The network in the metal layer. Metal A forms a 6-membered-ring framework, and T forms a dimer. (c, f) Side view of metal and boron layers. Red shows the A site; blue shows the T site; green shows B atoms. ### Phase stability The computational screening of stable phases was performed on both \(\alpha\) and \(\beta\) phases of ATB\({}_{4}\). We consider typical 3\(d\) and 4\(d\) transition metal elements for the T site, including V, Cr, Mn, Fe, Co, Ni, Zr, Nb, and Mo. For the A site, we consider Mg, Ca, Sr, Ba, Al, Ga, and Zn, which have relatively large ionic radii. Figure 2 shows the energy (\(E_{d}\)) of these ATB\({}_{4}\) phases above the convex hull formed by the compounds in existing A-T-B phase diagrams from the Materials Project database [40]. Compounds with 3\(d\) transition metal elements are generally more stable than 4\(d\) ones at the T site. This can be related to size effects: 3\(d\) elements fit the 5-membered boron rings more efficiently. We identified two stable phases as new ground states, MgMnB\({}_{4}\) and MgFeB\({}_{4}\). Phonon calculations confirm that MgMnB\({}_{4}\) and MgFeB\({}_{4}\) are dynamically stable (see Supplementary Fig. S1). In addition to the ground states, many low-energy metastable phases can be identified from Fig. 2.
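For concreteness, the two stability measures used in this screening can be sketched as follows; the definitions follow the Methods section, while the function and variable names are illustrative rather than taken from the actual workflow.

```python
# Minimal bookkeeping sketch of the stability screening quantities
# (definitions follow the Methods section; names and the eV/atom unit
# convention are assumptions made for this illustration).

def formation_energy_per_atom(e_atb4, e_a, e_t, e_b):
    """E_f = E - 1/6 E(A) - 1/6 E(T) - 4/6 E(B), with all energies per atom."""
    return e_atb4 - (1.0 / 6.0) * e_a - (1.0 / 6.0) * e_t - (4.0 / 6.0) * e_b

def energy_above_hull(e_f, e_hull):
    """E_d: formation energy relative to the convex hull at the ATB4 composition.
    E_d = 0 marks a new stable ground state; E_d below ~0.2 eV/atom is used to
    flag potentially synthesizable metastable phases."""
    return e_f - e_hull
```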
Phonon calculations also show the metastable CaMnB\({}_{4}\) and MgCoB\({}_{4}\) are dynamically stable (see Supplementary Fig. S1). If using a criterion of \(E_{d}\sim 0.2\) eV/atom [41] to classify these metastable phases, one can see the stable and metastable phases almost only consist of \(3d\) transition metals at T sites, with the A site only occupied by Mg, Ca, or Al. The \(\alpha\) phases always show lower energy for these compounds than the \(\beta\) phases, while their energy differences are relatively small (Fig. 2). In addition to the two stable phases, the experiments might achieve these metastable phases with \(E_{d}<0.2\) eV/atom. For instance, the LiNiB compound with \(E_{d}=0.21\) eV/atom was recently synthesized from high-temperature reactions using the hydride synthetic method [42; 43]. ### Electronic structures of nonmagnetic states We now focus on stable or metastable \(\alpha\)-ATB\({}_{4}\) systems: A is Mg, Ca, or Al, and T is V, Cr, Mn, Fe, Ni, or Co. Their non-magnetic (NM) electronic densities of states (DOS) are shown in Fig. 3. A significant amount of VHS near the Fermi level can be identified for many systems, especially for the atoms of the middle of the \(3d\)-band (Cr, Mn, Fe), where the number of carriers is large. By comparing the DOS among different compounds, we find a few recurring features: First, the states near the Fermi level mainly belong to transition metal atoms forming strongly bonding dimers, with a minor contribution from B atoms and almost no contribution from Mg, Ca, or Al atoms. Second, the DOS for the different elements follows "rigid band" behavior. This is illustrated by Supplementary Fig. S2, which shows the integrated partial DOS for Mn in MgMnB\({}_{4}\). When we shift the Fermi level up (or down), imitating the addition (Fe) or removing (Cr) 1 electron, the resulting PDOS is very similar to the actual calculational PDOS for MgFeB\({}_{4}\) or MgCrB\({}_{4}\), respectively. From Fig. 3, one can also see that when moving from V to Cr systems and further to Mn and Fe, the Fermi level is situated at or near some VHSs. These VHSs are very strong for Cr and Mn atoms, are somewhat weaker for Fe, and are inefficient for Co, Ni, and V-based 114 systems. When the A site is occupied by Al (right column in Fig. 3), the localized states of transition metals remain; however, relative positions of Fermi levels to the localized peaks are different from the Mg-based or Ca-based compounds in the same row. It is related to the various electronic populations for these systems as Al has one additional valence electron relative to the Mg or Ca system. Thus, the DOSs for the discussed family of compounds contain numerous localized dimer states above and below the Fermi level and follow rigid band behavior under doping. This observation can be verified experimentally (pending successful synthesis) as such well-defined localized states of \(3d\) atomic dimers can be seen directly by spectroscopic experiments (optics in particular). VHS corresponds to the regions of the BZ where flat bands are located. In Fig. 4, we show the location of such flat bands for nonmagnetic Mg-based systems. The Figure 3: **The non-magnetic density of states for \(\alpha\)-ATB\({}_{4}\) (A=Mg, Ca, and Al; T=V, Cr, Mn, Fe, Co, Ni).** Each column has the same A element, while each row has the same T element. Contributions of states from A (red curve), T (blue curve), and B (green curve) are shown. The tick indicates the systems with a magnetic ground state. 
Figure 2: **Stability of non-RE ATB\({}_{4}\) phases (spin-polarized calculations).** The left panel shows the energy above the convex hull (\(E_{d}\)) for the \(\alpha\) phase. The right panel shows the energy difference between \(\beta\) and \(\alpha\) phases. The vertical axis labels indicate specific A-T element combinations. The dashed line indicates the 0.2 eV/atom threshold to identify the metastable phases. dispersionless electronic structure at the Z-U-R-T path is persistent in all three compounds. Note that the Z-U-R-T path is parallel to the layer plane. These flat bands are the manifestation of localized electrons in the planes. According to the projected orbitals on the band structures in Supplementary Fig. S3, these flat bands have dominantly \(3d\) orbital characters of transition metals with a minor contribution from B's \(p\) orbital. These localized states correspond to forming quasimolecular isolated dimer states of \(3d\) atoms. The position of VHS in these systems is only sometimes precisely at the Fermi level in nonmagnetic calculations. For instance, in AlCrB\({}_{4}\), the Fermi level is practically in the gap, while the low \(N(E_{f})\) can also be seen for Al(V, Fe, Co)B\({}_{4}\) and (Mg, Ca)CoB\({}_{4}\), though the VHS is close to the Fermi level in all these cases. If the rigid band scenario is valid, one can predict that under suitable hole (electron) doping, one can situate the VHS at the Fermi level and study how the corresponding electronic fluctuations would change the ground state. Since the density functional theory allows us to study magnetic instabilities in these cases, we now switch to analyzing such instabilities induced by the VHS in magnetic states. ### Magnetic states The ferromagnetic instability is defined by fulfillment of the Stoner criteria, which indicates that for systems with \(3d\) atoms, if the value of \(N_{\rm T}(E_{f})\) is around 1 eV\({}^{-1}\), the system is close to such instability. Figure 3 and Table 1 demonstrate that such instability exists in several of our systems. Of course, VHS in DOS can only provide information about FM instability. We must analyze corresponding Fermi-level singularities of spin susceptibility for the arbitrary magnetic instability. Below, we avoid this step by performing a direct self-consistent search of different magnetic collinear states. In the 114 structure, one can consider three types of collinear magnetic order between T atoms. To clarify them, we use \(<\)**ijk\(>\)** notation to represent the magnetic orders, where **i**, **j**, and **k** are either F (ferromagnetic) or A (antiferromagnetic). **i** represents the magnetic order within the dimer between two T atoms. **j** represents the magnetic order between two dimers in the \(a\)-\(b\) plane, while **k** represents the magnetic order between layers of dimers along the \(c\) direction. With this notation, there are eight different FM and AFM structures. Figure 5 shows all magnetic configurations for each compound with their magnetic energy and corresponding magnetic moments. In Mg- or Ca-based compounds, Cr, Fe, and Mn show magnetic states, while no magnetism is found in V, Co, and Ni systems. Different dopants result in different magnetic ground states. For instance, the ground states of MgCrB\({}_{4}\), MgMnB\({}_{4}\), and MgFeB\({}_{4}\) are \(<\)AAA\(>\), \(<\)AFF\(>\) and \(<\)FFA\(>\), respectively. The energy of these states is about 40-80 meV/T lower than the NM state, which is like that of FM fcc Ni (60meV/Ni). 
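As a purely illustrative piece of bookkeeping (not the workflow used for the calculations), the eight collinear \(<\)**ijk\(>\)** configurations defined above can be enumerated as follows; mapping the resulting sign patterns onto the initial magnetic moments of a concrete ATB\({}_{4}\) cell is left to the DFT input generation.

```python
# Enumerate the eight collinear <ijk> configurations: i is the order within a
# T-T dimer, j the order between dimers in the a-b plane, k the order between
# dimer layers along c. The sign triplets are relative to a reference spin.
from itertools import product

def collinear_configs():
    configs = {}
    for i, j, k in product("FA", repeat=3):            # 2^3 = 8 configurations
        configs[f"<{i}{j}{k}>"] = (
            +1 if i == "F" else -1,                     # second atom of the dimer
            +1 if j == "F" else -1,                     # neighbouring in-plane dimer
            +1 if k == "F" else -1,                     # dimer in the next layer along c
        )
    return configs

print(sorted(collinear_configs()))                      # ['<AAA>', '<AAF>', ..., '<FFF>']
```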
The magnetic moments on transition metal atoms are typically close to 1 \(\mu_{\rm B}\) in the ground states, larger than that in FM fcc Ni (0.6 \(\mu_{\rm B}\)). Thus, we expect these magnetic dimers' thermal stability to be very high as the Stoner temperature is expected to be like the one of Ni (\(>\)2500 K). Ca-based compounds show the same magnetic ground states as Mg-based compounds, suggesting very similar isoelectronic behavior (like the rigid band behavior of DOS discussed earlier). The magnetic behavior of Al systems appears to be similar to both Mg and Ca systems with the corresponding electronic population shifted by +1: AlCrB\({}_{4}\) has the same magnetic ground state as (Mg, Ca)MnB\({}_{4}\), and AlMnB\({}_{4}\) behaves as (Mg, Ca)FeB\({}_{4}\). The case of AlCrB\({}_{4}\) is less trivial due to the stable AFM insulating \(<\)AFF\(>\) state. Next, we analyze the metastable magnetic states in Fig. 5. In MgCrB\({}_{4}\) and CaCrB\({}_{4}\) all eight magnetic states can be stabilized. Nevertheless, Cr magnetism does not look localized. For all FM Cr dimers (**i**=F) the magnetic moments are much smaller, with the energies of \(<\)FFA\(>\) and \(<\)FAA\(>\) states being very close to the NM Figure 4: **Band structure for non-magnetic MgCrB\({}_{4}\), MgMnB\({}_{4}\), and MgFeB\({}_{4}\).** The inset in the left panel shows the bulk Brillouin zone. Red highlights the bands with localized states. state. Thus, individual Cr moment has a substantial degree of itineracy. However, the total magnetic moment of Cr dimer behaves like a very localized spin formation. Moreover, these AFM Cr dimers form AFM coupled ladders along the z-direction (**i**=A and **k**=A). The energy of the AFM ladder is well separated by \(\sim\)35 meV from the FM ladder (**i**=F and **k**=A). In Mg and Ca systems, such AFM Cr spin ladders weakly interact in plane as \(<\)AAA\(>\) and \(<\)AFA\(>\) spin configurations are nearly degenerate with very similar moments and energies. We can identify the magnetic ground state in these systems as weakly interacting AFM spin ladders, analog to spin glass systems. In MgMnB\({}_{4}\), CaMnB\({}_{4}\) and isoelectronic AlCrB\({}_{4}\), corresponding Mn (Cr) atoms in dimer also form stable spin antiparallel order but form FM ladders as a ground state (**i**=A and **k**=F). While FM dimers have been found to be metastable their magnetic moments are much smaller, and the energies of such configurations appear close to the NM state. Thus, the atomic magnetic moments of Mn atoms in these compounds are also not localized and have rather significant itineracy, similar to Cr moments. However, Mn atoms in the dimer form very localized, AFM-coupled spin formation and the moments on Mn atoms in this formation practically do not depend on configuration relative to the nearest dimers. Overall, we expect that in Cr and Mn systems, the low-temperature spin disorder will be a disorder between weakly coupled spin ladders first, then at intermediate temperatures between more strongly interacting spin dimers inside each ladder; and at higher temperatures, the disorder should appear between spins inside dimer. This high-temperature disorder would also ultimately lead to the local moment disappearance on each atom. To some extent, such behavior would support the idea of spin clusters' disorder of Sokoloff [44]. In MgFeB\({}_{4}\), CaFeB\({}_{4}\), and isoelectronic AlMnB\({}_{4}\), no AFM configurations between Fe atoms in the dimers are found (Fig. 5a). 
It only allows the formation of FM dimers. In these systems, we predict the formation of stable AFM spin ladders along the z-direction. These ladders, however, weakly interact in plane so one can also expect that disordered spin ladders exist in plane at low temperatures. Figure 6 shows the spin-polarized DOS for magnetic ground states in Mg- and Al-based compounds. The DOS of Ca-based compounds are like Mg-based compounds, which can be seen by comparing the first column, i.e., \(<\)AAA\(>\)-MgCrB\({}_{4}\) and \(<\)AAA\(>\)-CaCrB\({}_{4}\). Compared to the NM DOS in Fig. 3, the magnetism strongly reduces the \(N(E_{f})\), stabilizing the new ground states. The magnetic ground states all show very low \(N(E_{f})\), representing either semiconducting or weakly metallic states. In all cases, fluctuations are suppressed in the newly ordered ground magnetic state. The actual situation can be even more insulating as GGA/LDA methods traditionally tend to produce a metallic state that does not exist experimentally and underestimate the energy gap in semiconductors. Corresponding studies using methods like LDA+U or GW can be applied if the experimental data indicate larger band gaps in these materials. Overall, while all nonmagnetic states appear to be metallic, magnetic interactions of density functional theory drive these metals to semiconductors or insulators. In this sense, the physics appears as the Slater metal-insulator transition proposed more than 70 years ago [45]. However, while the idea is theoretically very attractive, numerous experimental studies claimed that electron-electron interaction is more important for most materials than magnetic interactions. Thus, most known materials with metal-insulator transition follow strongly correlated scenarios or Mott behavior. Systems with Slater's metal-insulator transition are unique, and our prediction must be verified experimentally. The absence of magnetic states for Ni, Co, and V systems is most likely related to the fact that significant VHS near the Fermi level is not formed for the atoms of the beginning and the end of the \(3d\) row. Figure 5: **The energy difference and magnetic moment for different magnetic states.** The symbols indicate different magnetic orders. The lower panel shows the magnetic order corresponding to different \(<\)**ijk\(>\)** notations. Different colors show different elements: Mg (blue), Ca (red), and Al (green). The compounds with V, Co, Ni, and AlFeB\({}_{4}\) do not have magnetic solutions. ### Superconductivity We also examine the electron-phonon coupling (EPC) strength in the non-magnetic 114 systems, as metal borides can show phonon-mediated superconductivity [46]. We employ a recently developed frozen-phonon method to efficiently screen strong EPC candidates to compute the zone-center EPC strength [47]. This method can identify the strong EPC candidates in MgB\({}_{2}\) and many metal borides because the zone-center EPC strongly correlates with these materials' full Brillouin zone EPC [47; 48]. In Fig. 7, we plot the zone-center EPC, \(\lambda_{\Gamma}\), for the non-magnetic ground states of \(\alpha\)-ATB\({}_{4}\) compounds. We reference the zone-center EPC of MgB\({}_{2}\) computed in [47]. It shows that these 114 phases do not possess any strong EPC. Therefore, in these 114 systems, significant electron-phonon superconductivity should not be expected. However, we cannot dismiss the possibility of a different type of superconductivity pairing in these materials.
While compounds with Cr and Mn can be eliminated from consideration of any superconductivity due to the presence of localized magnetic dimer states, the corresponding Fe (with Al) and all Co and Ni systems indeed represent metallic systems with the average value \(N(E_{f})\) close to the one in iron pnictides superconductors. Besides, the DOSs for all these systems (Fig. 3) show certain electronic singularities near the Fermi levels, suggesting the possibility of different instabilities in the electronic, magnetic, or structural subsystems. Moreover, as we discussed above, it is evident that the closeness of the Fermi level to VHS (amplitude of electronic fluctuations) in these systems can be tuned by electronic (hole) doping. It is seen from the DOSs of MgCoB\({}_{4}\) and CaCoB\({}_{4}\) or AlFeB\({}_{4}\) and AlCoB\({}_{4}\) (Fig. 3) where significant VHS exist right below and above the Fermi level. The Fermi surface of MgCoB\({}_{4}\) is shown in Fig. 8. It forms anisotropic ellipsoids along the \(c\) direction, i.e., the ladder direction of spin dimers. Such strong anisotropy in the Fermi surface can accompany superconductivity such as those observed in MgB\({}_{2}\) and cuprates [49; 50]. Doping in the structure can move the VHS peak closer to the Fermi level, increasing \(N(E_{f})\) by 2-3 times. Thus, one can expect that the amplitude of electronic charge and spin fluctuations can be effectively manipulated in the needed way. Such tuning of the strength of spin fluctuations near the quantum critical point in these layered boride systems could represent a convenient playground for searching for spin fluctuation-mediated superconductivity [51]. ### Synthesis exploration Due to exciting, predicted properties, we focused on the MgFeB\({}_{4}\) compound for synthetic exploration. Several factors make the synthesis of the MgFeB\({}_{4}\) phase challenging. Boron is a refractory element that requires a higher temperature to activate. Methods such as arc melting are commonly used to synthesize transition metal borides. In contrast, magnesium is a reactive element with high vapor pressure, \(T_{\rm boiling}\) = 1100 \({}^{\circ}\)C. This reactivity difference precludes the use of arc-melting or other high-temperature methods. We have recently shown the power of the mixing of refractory materials method when two refractory components are mixed by forming a binary compound via arc melting. The resulting compound is introduced into a reaction with more active components simultaneously and in close spatial proximity [52; 53; 54; 55]. This method cannot be applied to Fe-B mixing due to the absence of boron-rich Fe borides. Melting of Fe+4B Figure 8: **Band structure, density of states, and Fermi surface of MgCoB\({}_{4}\).** Figure 7: **Zone-center EPC strength for non-magnetic \(\alpha\)-ATB\({}_{4}\) phases.** Green is MgB\({}_{2}\) as a reference [47]. resulted in a mixture of FeB+3B, preventing homogeneous Fe-B mixing. The phase diagram shows that Mg and Fe metals are also immiscible in solid or liquid state [56]: Mg and Fe do not form binary compounds; the solubility of Mg in solid Fe below 1526 \({}^{\circ}\)C is less than 0.6 at.%; above 1526 \({}^{\circ}\)C the segregation into two immiscible liquids of almost pure (\(>\)98 at.%) Mg and Fe takes place. Our preliminary attempts to perform a reaction of elements in a wide temperature range of 600-1000 \({}^{\circ}\)C were unsuccessful; they all produced a mixture of MgB\({}_{2}\), iron borides, and unreacted boron. 
Partial mixing of elements may be achieved when binary MgB\({}_{2}\) is used as a source of boron and magnesium. _Gillan et al._ have shown that the reaction of MgB\({}_{2}\) and metal chlorides may produce corresponding binary metal borides, i.e., MgB\({}_{2}\)+FeCl\({}_{2}\) = MgCl\({}_{2}\)+FeB [57]. The formation of stable MgCl\({}_{2}\) was a thermodynamic driving force for this reaction. In our syntheses, the ternary Mg-Fe boride was a target, so we attempted a reaction of MgB\({}_{2}\)+Fe. Yet, MgB\({}_{2}\) is a relatively inert precursor, and at temperatures of 750-850 \({}^{\circ}\)C, the reaction of MgB\({}_{2}\) and elemental Fe is very slow, presumably due to high kinetic activation barriers. Hence, we attempted a hydride reaction using MgH\({}_{2}\) as a source of Mg. Unlike ductile Mg, MgH\({}_{2}\) can be effectively mixed with Fe and B, and the released hydrogen may improve boron reactivity. The hydride approach's success was demonstrated by synthesizing several compounds in the Li-Ni-B system [58, 59, 42, 43]. To guide hydride syntheses, we performed an _in-situ_ powder X-ray diffraction (PXRD) study (Fig. 9a). The formation of novel diffraction peaks was observed upon heating in 560-820 \({}^{\circ}\)C range. However, they cannot be assigned to known or predicted ternary (including \(\alpha\)-MgFeB\({}_{4}\)), binary phases, or elements in the Mg-Fe-B system. A set of intense unindexed peaks appeared at 560 \({}^{\circ}\)C (assigned as an \(\alpha\)-unknown phase) and was present until 760 \({}^{\circ}\)C (Fig. 9b). Additionally, a second set of intense unknown peaks (assigned as \(\beta\)-unknown phase) formed at 690 \({}^{\circ}\)C and disappeared at 820 \({}^{\circ}\)C. Upon further heating, the formation of FeB and FeSi was observed. An _in-situ_ study was performed in a sealed silica capillary that can react with Mg at high temperatures as a Si or O source. The unknown phases were observed only in the _in-situ_ measurements at a narrow temperature range. Upon further heating, those phases decomposed such that the final products of the experiment contain no unknown phases, preventing elemental and spectroscopic analysis. Nevertheless, from _in-situ_ data we can conclude that the unknown phases had limited thermal stability, and reactions at temperatures above 820 \({}^{\circ}\)C are expected to produce binary FeB. Further identification of these unknown phases solely from powder diffraction data is challenging. Our _ex-situ_ hydride reactions do not yield new phases in any appreciable amounts. Some possibilities for this could be that the unknown phase incorporates silicon from the silica container, whereas for the _ex-situ_ reactions, niobium ampules are used for the reaction container. Another possibility is that this phase is metastable, and the slower heating ramp in _ex-situ_ reaction allows for this phase to decompose. The thermodynamics calculations in Supplementary Note 1 suggest the MgFeB\({}_{4}\) is thermodynamically stable in an extensive temperature range. However, the synthesis is likely hindered by the formation of intermediate phases. These intermediates consume much reaction energy, leaving little driving force to form the target MgFeB\({}_{4}\). Such interplay between thermodynamics and kinetics represents a major challenge to control the solid-state synthesis [60, 61]. These experimental results suggest that non-traditional solid-state methods may be required to synthesize MgFeB\({}_{4}\). 
The two main challenges are the immiscibility of Mg and Fe and the upper limit of \(\sim\)820 \({}^{\circ}\)C synthetic temperature, which is relatively low for borides. When Fe was replaced with Ni, which is miscible with Mg, we were able to readily form a known ternary MgNi\({}_{3}\)B\({}_{2}\) phase (Fig. 9c). Further potential methods to produce target 114 borides discovered by the current computational study include flux reactions and designing of special precursors with pre-built functionality [62, 63]. The latter approach was shown to be successful in the production of 1D B-P polymeric chains [64]. Flux reactions need to be carefully designed to dissolve all consisting elements (Mg, Fe, and B) yet not to react with container materials. Mg from flux readily reacts with silica, while salt fluxes react with Nb or Ta ampoule materials at elevated temperatures. Further synthetic explorations are currently underway. Figure 9: **Experimental attempts to synthesize 114 phases.** (a) _In-situ_ powder X-ray diffraction investigation of 1.3MgH\({}_{2}\) + Fe + 4B reaction in the 25-900\({}^{\circ}\)C temperature range. Unindexed phases first appear at 560\({}^{\circ}\)C for \(\alpha\)-phase and 690\({}^{\circ}\)C for \(\beta\)-phase. (b) PXRD between the _in-situ_ reaction’s temperature range of 560-900\({}^{\circ}\)C. The red triangles represent unindexed peaks for the unknown phase \(\alpha\) and the yellow stars represent unindexed peaks for the unknown phase \(\beta\). (c) PXRD pattern, collected with Cu-\(K_{\alpha}\) radiation, of ternary MgNi\({}_{3}\)B\({}_{2}\) produced due to the reaction of MgB\({}_{2}\) and Ni in salt flux. Minor phase peaks of MgB\({}_{2}\) are represented by (*). ## III Conclusion In summary, using electronic structure calculations, we identified new structurally stable compounds among ternary borides of \(\alpha\)-ATB\({}_{4}\) type (A=Mg, Ca, Al; T=V, Cr, Mn, Fe, Ni, and Co). These predicted systems are characterized by numerous Van Hove singularities in their electronic spectrum formed by the emergence of highly localized quasimolecular states of 3\(d\) atomic dimers. Obtained VHS are stronger for electronic occupations near half filling of the 3\(d\) band and weaker for the states near the beginning and end of this band. The presence of such VHS, in turn, creates favorable conditions for the development of different types of fluctuations and the appearance of new quantum states. In our case, we analyzed magnetic instabilities. We found the formation of spin glass systems with spin dimers in magnetic semiconducting (or weakly metallic) states that the experiment can verify. The systematic appearance of VHS as a function of the electronic population should be verified by spectroscopic experiments as these localized states are well separated by 1-2 eV. Overall, we demonstrated how to search for the systems with developing Van Hove singularities. The proposed variation of the type of 3\(d\) atoms represents a convenient tuning of the VHS strength, which can lead to instability of the original paramagnetic Fermi liquid and the formation of new magnetic states. While producing MgFeB\({}_{4}\) experimentally was found to be challenging, we formulated a promising direction for further synthetic studies. The discovery of these materials would lead to an opportunity to study VHS systems and the possible formation of new quantum states, including unusual magnetic orders, superconducting states, charge density waves, and phase separation effects. 
## IV Methods **First-principles calculations.** Density functional theory (DFT) calculations were carried out using the projector augmented wave (PAW) method [65] implemented in the VASP code [66, 67]. The exchange and correlation energy is treated with the generalized gradient approximation (GGA) and parameterized by the Perdew-Burke-Ernzerhof formula (PBE) [68]. A plane-wave basis was used with a kinetic energy cutoff of 520 eV. The convergence criterion was set to 10\({}^{-5}\) eV for the total energy and 0.01 eV/Å for ionic relaxation. The Monkhorst-Pack sampling scheme was adopted for Brillouin zone sampling with a k-point grid of 2\(\pi\)\(\times\) 0.033 Å\({}^{-1}\) for the structure optimization. Energy differences among different magnetic configurations and electronic density of states are computed with a denser k-point grid of 2\(\pi\)\(\times\) 0.022 Å\({}^{-1}\). Phonon calculations were performed with the density functional perturbation theory [69] implemented in the VASP code and the Phonopy software [70]. **Phase stability calculations.** The phase stability is evaluated by the formation energy from spin-polarized calculations. The formation energy \(E_{f}\) of ATB\({}_{4}\) is calculated as \(E_{f}=E-\frac{1}{6}E(\mathrm{A})-\frac{1}{6}E(\mathrm{T})-\frac{4}{6}E( \mathrm{B})\), where \(E\) is the total energy of bulk ATB\({}_{4}\), and \(E\)(A), \(E\)(T), and \(E\)(B) are the total energies of the A, T, and B ground-state bulk phases, respectively. \(E_{d}\) is defined by the formation energy differences with respect to the three reference phases forming the Gibbs triangle on the convex hull (if \(E_{d}=0\), it indicates that the ATB\({}_{4}\) is a new stable phase, and the existing convex hull should be updated). The reference phases in the convex hulls are obtained from the Materials Project database [40] and the OQMD database [71]. These reference phases are fully relaxed and the energies are re-calculated with the same DFT setting used in our high-throughput calculations. **Electron-phonon coupling calculations.** The zone-center electron-phonon coupling (\(\lambda_{\Gamma}\)) was calculated using the difference between the screened (\(\omega\)) and unscreened (\(\widetilde{\omega}\)) zone-center phonon frequencies [47] as \[\lambda_{\Gamma}=\frac{\widetilde{\omega}^{2}-\omega^{2}}{4\omega^{2}} \tag{1}\] The screened phonon frequency was computed by fully self-consistent (SC) calculations in the displaced atomic configurations using the tetrahedron method with Blöchl corrections. To compute the unscreened phonon frequency \(\widetilde{\omega}\), the identical calculation was first performed in the equilibrium configuration, followed by the calculations with the displaced atoms, but with partial occupations fixed as the one in the equilibrium configuration. The detailed workflow of this method can be found in [47]. **Synthesis.** For experimental synthetic attempts, MgB\({}_{2}\) powder (Alfa Aesar, 99%), Fe powder (JT Baker, 99.5%), Cr powder (Alfa Aesar, 99.5%), Mn powder (Alfa Aesar, 99.95%), Ni powder (Alfa Aesar, 99.996%), Co powder (Alfa Aesar, 99.998%), MgH\({}_{2}\) powder (Alfa Aesar, 99%), MgCl\({}_{2}\) powder (Alfa Aesar, 99%), amorphous B powder (Alfa Aesar, 98%), and Nb ampoules were used. The total sample weight was 250 mg. A desired mixture of the ball-milled precursors was loaded into a Nb ampoule, which was weld-shut under an Ar atmosphere in the glovebox.
The sealed Nb ampoules were enclosed in a silica ampoule, which was evacuated and sealed using a hydrogen-oxygen torch. Various reactions were attempted: the reaction of the three elements (Mg+Fe+4B), the reaction of magnesium boride with metal (2MgB\({}_{2}\)+Fe), as well as hydride reactions (MgH\({}_{2}\)+Fe+4B). The typical heating profile consists of a 10-hour ramp up to the desired temperature followed by isothermal annealing of 72 hours. After turning off the furnace, samples were allowed to cool back to room temperature. **Powder X-ray Diffraction (PXRD).** After annealing, all samples were exposed to air and ground into fine powder using an agate mortar. PXRD characterization was performed using a Rigaku MiniFlex600 powder diffractometer with Cu-\(K_{\alpha}\) radiation and a Ni-\(K_{\beta}\) filter (\(\lambda=1.54059\) Å). _In-situ_ PXRD was performed at beamline 17-BM at the Advanced Photon Source, Argonne National Laboratory (\(\lambda=0.24110\) Å). A mixture of MgH\({}_{2}\), Fe, and B in a 1.3:1:4 ratio was ball-milled and loaded into a silica capillary with a 0.5 mm inner diameter and 0.7 mm outer diameter. The silica capillary was evacuated and flame-sealed such that the total length of the capillary was 50 mm. The capillary was placed vertically in a cell with resistive heating elements and aligned with the X-ray beam. Diffraction data were collected every 60 seconds as the sample was heated and cooled. Due to the reactor design, the thermocouple is slightly removed from the sample. We estimate that the error in the temperature is around 20-30 \({}^{\circ}\)C. The reported temperature is the measured temperature. ## V Acknowledgments Y.S. acknowledges support from the Fundamental Research Funds for the Central Universities (20720230014). The work at Iowa State University was supported by National Science Foundation Award No. DMR-2132666. Shaorong Fang from the Information and Network Center of Xiamen University is acknowledged for his help with high-performance computing.
2309.17255
Knowledge Graphs for the Life Sciences: Recent Developments, Challenges and Opportunities
The term life sciences refers to the disciplines that study living organisms and life processes, and include chemistry, biology, medicine, and a range of other related disciplines. Research efforts in life sciences are heavily data-driven, as they produce and consume vast amounts of scientific data, much of which is intrinsically relational and graph-structured. The volume of data and the complexity of scientific concepts and relations referred to therein promote the application of advanced knowledge-driven technologies for managing and interpreting data, with the ultimate aim to advance scientific discovery. In this survey and position paper, we discuss recent developments and advances in the use of graph-based technologies in life sciences and set out a vision for how these technologies will impact these fields into the future. We focus on three broad topics: the construction and management of Knowledge Graphs (KGs), the use of KGs and associated technologies in the discovery of new knowledge, and the use of KGs in artificial intelligence applications to support explanations (explainable AI). We select a few exemplary use cases for each topic, discuss the challenges and open research questions within these topics, and conclude with a perspective and outlook that summarizes the overarching challenges and their potential solutions as a guide for future research.
Jiaoyan Chen, Hang Dong, Janna Hastings, Ernesto Jiménez-Ruiz, Vanessa López, Pierre Monnin, Catia Pesquita, Petr Škoda, Valentina Tamma
2023-09-29T14:03:34Z
http://arxiv.org/abs/2309.17255v4
# Knowledge Graphs for the Life Sciences: Recent Developments, Challenges and Opportunities+ ###### Abstract The term _life sciences_ refers to the disciplines that study living organisms and life processes, and include chemistry, biology, medicine, and a range of other related disciplines. Research efforts in life sciences are heavily data-driven, as they produce and consume vast amounts of scientific data, much of which is intrinsically relational and graph-structured. The volume of data and the complexity of scientific concepts and relations referred to therein promote the application of advanced knowledge-driven technologies for managing and interpreting data, with the ultimate aim to advance scientific discovery. In this survey and position paper, we discuss recent developments and advances in the use of graph-based technologies in life sciences and set out a vision for how these technologies will impact these fields into the future. We focus on three broad topics: the construction and management of Knowledge Graphs (KGs), the use of KGs and associated technologies in the discovery of new knowledge, and the use of KGs in artificial intelligence applications to support explanations (explainable AI). We select a few exemplary use cases for each topic, discuss the challenges and open research questions within these topics, and conclude with a perspective and outlook that summarizes the overarching challenges and their potential solutions as a guide for future research. In the remainder of this paper we will focus on three broad topic areas in which graph-based technologies have been used extensively, and we illustrate each area with some specific projects or use cases that guide our discussion and summary of the challenges that have been encountered. * The construction and management of KGs to represent life science knowledge; * The use of KGs and associated technologies in the discovery of new knowledge; * The use of KGs in artificial intelligence applications to support explanations (eXplainable AI or XAI). We then provide a summary of the general challenges across the topics, that include intrinsic characteristics of KGs (_e.g._, scalability, evolution, heterogeneity) and their operational aspects in the real world (_e.g._, human interaction, personalization, distributed setting, and representation learning). We present the challenges by means of use cases and the current research efforts that address them. It is worth mentioning that while we aim to focus on the life sciences, many of the topics and challenges discussed in this work, especially those of KG construction and management in Section 3, are general and applicable to KGs in other domains such as finance, e-commerce, material, and urban management [113, 32], etc. The KG-based problem modeling and solving approaches in life science knowledge discovery could be applicable for addressing many other use cases and problems in a broader domain of AI for scientific discovery [177, 61]. In the next section, we introduce several different categories of KGs as they have been used in life sciences. Thereafter in Sections 3-5, each of the above topics is described in a dedicated section together with a survey of recent advances. Finally, in Section 6 we synthesize the overarching challenges and trends into a perspective on the outlook for the future.
Figure 1: An overview illustration of definitions (upper right, in grey), topics (left column, in blue), use cases (middle), and challenges (bottom right, in green) for the research of KGs in the life sciences. ## 2 Knowledge Graphs in the Life Sciences KGs represent semantically-described real-world entities, typically through ontologies (vocabularies or schemas) [69, 62] and the data instantiating them, and thus provide descriptions of the entities of interest and their interrelations, by means of links to ontology classes describing them, organized in a graph [160]. KGs have been widely adopted in the life sciences, as can be seen in the composition of the Linked Open Data Cloud3, where life sciences represent one of the largest subdomains. A prominent example is the KG representing annotations regarding proteins by means of terms in the Gene Ontology describing different protein functions [4]. Footnote 3: [http://cas.lod-cloud.net](http://cas.lod-cloud.net) Whilst KGs are becoming increasingly popular in different domains including the life sciences, there is no single accepted definition of KG [44]. A KG can be formally described as a directed, edge-labeled graph \(\mathcal{G}=(V,E)\), where \(V\) refers to the _vertices_ or _nodes_, representing real-world entities of interest (_e.g._, proteins, genes, compounds, cellular components, but also pathways, biological processes and molecular functions, to name a few) while \(E\) refers to the edges in the graph, representing relationships or links between the entities in \(V\) (_e.g._, binds, associates, etc.). These may be represented as statements about entities in the form of RDF4 triples: (subject, predicate, object). Footnote 4: Resource Description Framework: [https://www.w3.org/RDF/](https://www.w3.org/RDF/) However, this formal definition only focuses on the components of KGs, but does not pose any constraint on what a KG should model or represent, and how. This is particularly true in life sciences, where the term _Knowledge Graph_ has been used to refer to diverse graph data structures, typically interconnected, but often isolated. Many of the everyday tasks faced by researchers in this domain require the systematic processing and integration of data and knowledge from data sources that are characterized by heterogeneous syntaxes and structures, formats, entity notation, schemas and scope, _e.g._, ranging from molecular mechanisms to phenotypes. Researchers in this area have been early adopters of Semantic Web and linked data approaches as a means to facilitate knowledge integration and processing to support tasks including semantic search, clinical decision support, enrichment analysis, data annotation and integration. However, a recent analysis of life science open data has identified several stand-alone data sources that exist in isolation, are not interlinked with other sources, and are schema-less (or use unpublished schemas), with limited reuse or mappings to other data sources [89]. Therefore, we can define a life sciences KG, following [132], as a data resource integrating one or more possibly curated sources of information into a graph whose nodes represent entities and edges represent relationships between two entities. This definition is consistent with other definitions found in the literature, _e.g._, [137]. These considerations underlie the reasons why KGs in life sciences can be of different types, and can be categorized across different dimensions.
One of the most critical dimensions (in terms of support for complex queries and integration) is the categorization of KGs into schema-based and schema-less knowledge bases. In turn, the expressivity of the schema provides a further categorization criterion, depending on whether schemas are modelled as simple taxonomies (_e.g._, the NCBI taxonomy [156] included in the UMLS Metathesaurus [10]), RDFS5 vocabularies or (fully axiomatized) OWL ontologies. In particular, this paper refers to this broad definition of KGs, which we then divide into: Footnote 5: RDF Schema: [https://www.w3.org/TR/rdf-schema/](https://www.w3.org/TR/rdf-schema/) * Schema-less KGs composed of only relational facts in the form of RDF triples. Examples include the PharmaGKB dataset, an integrated online knowledge resource capturing how genetic variation contributes to variation in drug response [182]. Note that many semantic networks (defined in Appendix A) could be assigned to this category as their triples form a multi-relational graph. * Schema-based KGs composed of relational facts and their schema (meta information) in _e.g._, RDFS, OWL, and constraint languages such as SHACL6. Examples include Wikidata with its property constraints, and DBpedia with its DBpedia ontology. Whilst Wikidata and DBpedia are general-purpose KGs, they also include large-scale life science knowledge. * Simple ontologies representing taxonomies. Notable examples include the tree structure of the UMLS Semantic Network7 and the International Classification of Diseases, version 10 (ICD-10) [184]. Footnote 7: [https://uts.nlm.nih.gov/uts/umls/semantic-network/root](https://uts.nlm.nih.gov/uts/umls/semantic-network/root) * Expressive OWL ontologies, with complex axioms beyond simple taxonomies. OWL ontologies may be composed of a TBox and an ABox. Depending on the expressivity of the axioms modeled in the ontology, _i.e._, the basic statements that an OWL ontology expresses, OWL ontologies can fall into one of the previous categories: for instance, an OWL ontology with just an ABox can be seen as the case above of a KG composed of relational facts alone. In this final category we include fully axiomatized OWL ontologies, _e.g._, with complex classes and property restrictions. Notable examples of these ontologies include SNOMED CT [2], the Gene Ontology [4, 29], and the Food Ontology (FoodOn)8. Footnote 8: [http://foodon.org](http://foodon.org) ## 3 Knowledge Graph Construction and Management The adoption of KGs in the life sciences is motivated by the need for standardisation of taxonomies and vocabularies to support the integration, exchange and analysis of data. More recently, richly annotated data is also being used in combination with machine learning methods for many applications, including helping to overcome issues related to the sparsity of data and helping to select promising candidates for reducing expensive and time-consuming physical experiments [65]. Graph-based machine learning approaches such as Graph Neural Networks have been applied to a number of life science tasks [50], including drug repurposing [122] and predicting polypharmacy side effects [198]. 
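To make the statement-level view of these categories concrete, the following minimal sketch (using the Python rdflib library) shows a few relational facts of the form (subject, predicate, object) together with simple schema-level (RDFS) statements; the protein, disease, and relation names are placeholders invented for this illustration rather than identifiers taken from a real resource.

```python
# Minimal illustration of KG statements of the form (subject, predicate, object)
# using rdflib; entities and relation names are placeholders for this example.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/lifesci/")
g = Graph()

# Schema-less style: plain relational facts (the edges of the graph).
g.add((EX.ProteinP53, EX.associatedWith, EX.LiFraumeniSyndrome))
g.add((EX.ProteinP53, EX.encodedBy, EX.GeneTP53))

# Schema-based style: typing statements plus a small RDFS vocabulary.
g.add((EX.ProteinP53, RDF.type, EX.Protein))
g.add((EX.Protein, RDFS.subClassOf, EX.Biomolecule))
g.add((EX.ProteinP53, RDFS.label, Literal("Cellular tumor antigen p53")))

for s, p, o in g:          # iterate over the edge-labeled graph G = (V, E)
    print(s, p, o)
```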
Given the diverse nature of the knowledge and tasks supported by KGs, the focus of state-of-the-art approaches has been the description of how individual KGs are developed within the specific domain [192], typically in terms of the specific approaches used for the development of the KG (_e.g._, data extraction process, relation extraction and entity discovery), rather than on the overall development process. More recently, some efforts have focused on providing an overview of development approaches and pipelines for the construction of KGs in the life sciences, and beyond [132, 166]. The process of constructing a KG depends heavily on: * The type of data sources integrated and annotated by the KG, _e.g._, CSV files, public and proprietary data sources, structured databases, full-text publications, etc. * The granularity of the KG to be constructed, _e.g._, schema-less KG, simple or expressive ontology. * The usability expectations in downstream applications, _e.g._, the ability to customize and manipulate the graph to support different use cases, or the ease of consumption as input to machine learning methods [52]. A recent systematic review [166] surveyed different KG development approaches to determine a general development framework. The review identified six main phases that are common across different KG development approaches: 1. Data source selection. 2. Ontology construction. 3. Knowledge extraction. 4. Knowledge ingestion and validation. 5. KG storage and inspection. 6. KG maintenance and evolution. In the remainder of this section we will present the individual phases and the role they play in a KG development process by means of two use cases, where we illustrate the construction of KGs and discuss how these support knowledge integration and validation (Section 3.2). We then present some recent technical developments in Section 3.3, while Section 3.4 discusses open challenges for the construction and management of KGs. ### 3.1 Knowledge Graph Construction Phases This section provides more details on the phases involved in the KG construction process, with the aim of identifying recent trends, rather than providing an exhaustive literature survey. These phases are discussed in order of execution, however the _ontology construction_ phase can occur either together with the data source selection (if an ontology covering the domain of interest already exists or can be constructed through a set of given requirements) or as part of the _knowledge ingestion and validation_ phase, where an ontology is built semi-automatically from the available data or through modularization and alignment of existing ontologies. #### Data source selection This phase identifies the data sources that are to be integrated by the KG, which in turn affects the choice of knowledge extraction techniques. Generally, life science KGs ingest knowledge from structured, semi-structured and unstructured data sources. By _structured_ we refer to data modeled according to an existing structure, _e.g._, data in tables or public or proprietary reference (relational) databases such as UniProt [30] or ChEMBL [51]. Semi-structured data refer to, _e.g._, XML documents [118], whereas unstructured data refer to data that do not conform to a given structure, _i.e._, free-text sources, such as scientific publications from PubMed9. Data ingested from manually curated databases [132] and semi-structured sources constitute the foundation of a KG [52], generally defining the entities and some of the relations in the KG. 
This data is then further enriched by performing text mining on large-scale free text sources, in order to extract relationships, which is the objective of the _knowledge extraction_ phase. Footnote 9: [https://pubmed.ncbi.nlm.nih.gov](https://pubmed.ncbi.nlm.nih.gov) #### Ontology construction The aim of this phase is to define a common, consensus-based, controlled vocabulary to describe the data in an _ontology_[148]. The existence of a common structure, or schema, supports querying, integration and reasoning tasks over the KG. Traditional ontology engineering approaches are divided into top-down or bottom-up. Top-down approaches are based on more or less formal ontology engineering methodologies [46, 97, 133] or common practices [3] to build ontologies from a description of the domain elicited from domain experts [131], and/or by reusing or extending existing ontologies [83]. Ontology engineering methodologies define the ontology development process in terms of requirement analysis, entity and property definitions, ontology reuse, validation and population. In contrast, bottom-up approaches utilize semi-automatic data driven techniques, _e.g._, ontology learning from text [112], and can be used to refine and validate an ontology. These approaches are discussed in more detail when presenting the _knowledge ingestion and validation_ phase. Whilst general purpose ontology engineering methodologies have evolved to be used in the development of KGs [141], a considerable number of ontologies in the life science domain have been built as part of the Open Biological and Biomedical Ontologies (OBO) Foundry effort,10 which defines a set of development principles for biological and biomedical ontologies and provides a suite of high-quality, interoperable, free and open source tools that support ontology development [117]. Footnote 10: [https://obofoundry.org](https://obofoundry.org) #### Knowledge extraction Knowledge extraction refers to the identification of entities and their relations from the data sources, which is a crucial step in the development of a KG [166]. _Entity extraction_ identifies entities from the various data sources selected using Natural Language Processing (NLP) approaches and text mining techniques to analyse and extract relevant information from large text corpora [180, 105, 72]. Named entity recognition (NER) supports the identification of named entities in text, such as drug names, diseases, or chemical compounds, and their classification according to pre-defined entity types [129]. NER approaches in the life sciences are typically based on labour intensive tasks such as the definition of generic (_e.g._, orthographic, morphological, or dictionary-based) and specific rules that are typically defined by experts, and are not easily applicable to other corpora [197]. There are a number of issues hindering these approaches: a) the pace of scientific discovery and the identification of new entities; b) the large number of synonyms and term variations associated with an entity; and c) entity identifiers that are composed of a mixture of letters, symbols and punctuation, often in large sentences [103]. More recent approaches have proposed the use of supervised machine learning methods (_e.g._, conditional random fields, or Support Vector Machines, SVMs, neural networks, and neural language models in particular) [114, 87, 36] either in isolation, or combined in hybrid approaches to improve accuracy [151]. 
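To make the entity extraction step more tangible, the following minimal sketch assumes the Hugging Face `transformers` library; the model identifier is an assumption used only for illustration, and any token-classification model fine-tuned for biomedical NER could be substituted. It tags drug and disease mentions in a short sentence.

```python
from transformers import pipeline

# The model name below is an assumption for illustration; any token-classification
# model fine-tuned for biomedical named entity recognition could be used instead.
ner = pipeline(
    "token-classification",
    model="d4data/biomedical-ner-all",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

text = "Patients treated with imatinib showed fewer relapses of chronic myeloid leukemia."

for entity in ner(text):
    # Each result carries the recognized span, its predicted entity type and a confidence score
    print(f"{entity['word']:<35} {entity['entity_group']:<20} {entity['score']:.2f}")
```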
Entity recognition generates entities that are isolated and not linked [166]. The goal of _Relation extraction_ is to discover relationships of interest between a pair of entities, thus describing their interaction. Relation extraction is a necessary step for entities defined in semi-structured or unstructured sources, whereas structured data sources are characterized by explicitly identifiable relationships. Typical approaches for relation extraction include rule-based [76, 147, 146], supervised [108, 49] and unsupervised approaches [100, 132]. Rule-based relation extraction identifies keywords (based on existing ontologies or expert defined dictionaries) and grammatical patterns to discover relations between entities. Supervised relationship extraction methods utilize publicly available pre-labelled datasets (_e.g._, BioInfer [143] or BioCreative II [99]) to construct generalized patterns that separate positive examples (sentences implying the existence of a relationship) from negative ones. Supervised approaches include SVMs, Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) [7, 132]. Unsupervised relation extraction methods [115] have emerged to address the lack of scalability of supervised relation extraction methods, due to the high cost of human annotation. Unsupervised methods involve some form of clustering or statistical computation to detect the co-occurrence of two entities in the same text [132]. More recently, end-to-end approaches (End-to-End Relation Extraction - RE) have been used to tackle both tasks simultaneously. In this scenario, a model is trained simultaneously on both the NER and Relation Extraction objectives [75]. Furthermore, rule-based approaches can be combined with relation classification using specialized pre-trained language models adapted for life science domains, _e.g._, BioBERT [104], SapBERT [110], and RoBERTa-PM [106], to name a few. There is also a recent trend to probe and prompt pre-trained language models to extract relations (_e.g._, disease-to-disease, disease-to-symptoms) [189, 165]. #### Knowledge ingestion and validation The aim of this phase is to ingest the entities and relationships extracted in a previous phase, which models knowledge from different sources. These entities and relations can be incomplete, ambiguous or redundant, and need to be appropriately aligned and integrated, and finally annotated according to the ontology constructed in phase 2. Knowledge integration or fusion can critically improve the quality of data by performing _entity resolution_, _i.e._, the detection of different descriptions of the same real-world entity (also called entity matching, deduplication, entity linkage or entity canonicalization), prior to ingesting them in the KG. This reconciliation step is particularly crucial in the life sciences, where duplication can be caused by data modelled using different vocabularies or ontologies, or when data is extracted from literature sources that are rapidly changing. The severity of the ambiguity depends on the number of ontologies available for the domain. For instance, the number of gene vocabularies is far smaller than the number of disease vocabularies that could be present in the ingested datasets. Linking these entities requires costly alignment processing; in particular the alignment of disease entities is especially problematic given the number of different coding systems, whose conversion is often not trivial [52]. 
We further explore this issue in two of the use cases presented in Section 3.3, where we explore the problem of aligning vocabularies and ontologies through the use of mapping repositories and instance matching in automated clinical coding. Entities are assigned unique identifiers (URI or IRI) that support the definition of bespoke namespaces, and support integration by reusing identifiers in related namespaces. Entity resolution is based on clustering similar entities together in a _block_, where similarity measures are used to detect duplicates [166]. Typical methods include sorted neighborhoods and traditional blocking; and machine learning methods are commonly used for similarity computation, _e.g._, feature vector computation [95]. This phase may also include the bottom-up construction of the ontology for those applications where a top-down approach is not feasible. Bottom-up approaches extract the relevant knowledge first, and then they construct the data schema / ontology based on the extracted data, typically using (semi-)automated methods, based on machine learning. Ontologies define the structure of the knowledge graph, which supports querying and data analytics. In bottom-up ontology development the structure of the knowledge graph is determined based on the extracted knowledge, thus providing a structure for this knowledge [70]. Often the construction of ontologies (either bottom-up or top-down) relies on the ability to correctly align and reuse entities defined across different domains and KGs. Furthermore, reuse of (or conformance to) existing upper level ontologies, _e.g._, BFO (Basic Formal Ontology) [3] provides the basis for the consistent and unambiguous formal definition of entities and relations that prevents errors in coding and annotation. The alignment of ontologies in life sciences and other domains is an active area of research, and we provide an overview of recent technical developments and challenges in Section 3.3. Whilst bottom-up approaches, especially those based on alignment, are becoming more viable, especially given the support of language models, such as BERT [64], their performance is not always adequate for the task, as discussed in the second challenge in Section 3.4. Knowledge enrichment and completion improve the KG quality by performing reasoning (KG materialization), inference [57] and optimization. Reasoning and inference support the assertion of new relations based either on logical reasoning (_e.g._, [130, 172]) or machine learning techniques (_e.g._, statistical relational learning or through embedding based link predictors for new concepts [35, 36, 67, 77] and node classifiers, also called KG refinement [137]). The extent and type of logical inferences depends on the expressivity of the ontology built in phase 2, or in a bottom-up fashion in this phase, together with any associated mappings. Description Logic formalisms, such as OWL, use logic-based reasoning for detecting and correcting incorrect assertions and ontology alignments [25]. #### KG storage and inspection KGs need to be accessible to support a variety of different tasks, beyond the mere integration of different knowledge sources, and thus KG storage management [166, 144, 179] is an active area of research. Current KG storage mechanisms are divided into relation based stores (_e.g._, [1]) and native graph stores (_e.g._, [199]). 
Relational KG stores, either based on relational databases or on NoSQL databases and/or triple stores such as Jena TDB11, have reached a considerable level of maturity and have been optimized to avoid common problems (_e.g._, a large number of null values in columns) and to improve query performance [144]. Graph databases store nodes, edges and properties of graphs natively, and support query and graph mining tasks. Examples of state-of-the-art implementations include Neo4J12, GraphDB13, and RDFox14. The evolution of the performance of these systems has been the object of systematic studies [9], whereas [170] explicitly focuses on biomedical use cases.

Footnote 11: [https://jena.apache.org/documentation/tdb/index.html](https://jena.apache.org/documentation/tdb/index.html)

Footnote 12: [https://neo4j.com](https://neo4j.com)

Footnote 13: [https://graphdb.ontotext.com](https://graphdb.ontotext.com)

Footnote 14: [https://www.oxfordsemantic.tech/product](https://www.oxfordsemantic.tech/product)

Storage management has implications for the ways KGs support expressive queries over nodes and edges as well as visualization, in order to support data analysis, navigation and the discovery of related knowledge [95, 164]. Graph databases often provide built-in tools for visualization, _e.g._, Neo4J, whereas different JavaScript libraries (_e.g._, SigmaJS15) are available for developing visualization front ends. Support for complex queries is either built into a graph database or triple store through the SPARQL query language [142, 199], or provided through proprietary query languages such as Cypher [48], supported by Neo4J.

Footnote 15: [https://github.com/jacomyal/sigma.js](https://github.com/jacomyal/sigma.js)

#### Knowledge maintenance and evolution

Given the rapid scientific development in the life sciences, and the consequent continuous update of ontologies for this domain, artefacts annotated with these ontologies can become outdated very quickly and require some form of update (also called ontology extension). These update mechanisms need to be automated to ensure that they scale to the size of KGs. Automatic update approaches are based on the periodical detection and extraction of new knowledge that is then mapped to existing entities and relations in the KG [185]. Update mechanisms are typically based on the detection of _changes_ [123] that can affect an ontology, _e.g._, the addition, removal or modification of meta-entities (_i.e._, entities, relations and their definitions). These changes include renaming concepts and properties, setting domain and range restrictions, or setting a subsumption relation. To date, the most comprehensive account of ontology change is given in [47], where change is described for different sub-fields, _e.g._, ontology alignment, matching and mapping, morphisms, articulation, translation, evolution, debugging, versioning, integration and merging, each with different requirements and implications. The study [139] further investigates the impact of biomedical ontology evolution on materialization. Currently available tools and methodologies use (semi-)automated methods to perform many of the operations that trigger a change in an ontology and the consequent creation of a new version [55, 64]. Different ontology management platforms and portals mandate different principles and frameworks for handling ontology versioning (_e.g._, the OBO Foundry16 or BioPortal17), but these are typically implemented by ontology developers with limited tool support.
Section 3.3 presents an example of automated ontology extension that relies on machine learning to cope with the scale of data. Footnote 16: [http://www.obofoundry.org/principles/fp-004-versioning.html](http://www.obofoundry.org/principles/fp-004-versioning.html) Footnote 17: [https://bioportal.bioontology.org](https://bioportal.bioontology.org) ### Examples of Life Science KG Construction In this section we provide two examples of life science KGs that illustrate in practice the phases composing the generic KG construction process discussed in Section 3; namely a KG for Pharmacogenomics, PGxLOD [120], and one for Ecotoxicological Analysis, TERA [126, 127]. **Alignment for Knowledge Validation: An Example of Pharmacogenomics.** As mentioned in Section 3, the task of aligning knowledge in KGs supports several downstream applications and domains. For instance, pharmacogenomics studies the influence of genetic factors on drug response phenotypes (_e.g._, expected effect, side effect). Hence, pharmacogenomics is of interest for personalized medicine. The atomic knowledge unit in pharmacogenomics is a ternary relationship between a drug, a genetic factor, and a phenotype. Such a relationship states that a patient being treated with the specified drug while having the specified genetic factor may experience the described phenotype. Semantic Web and KG technologies have been employed in this application domain, for example by building ontologies in which patients and pharmacogenomic knowledge are represented, and then using deductive reasoning mechanism to conditionally recommend genetic testing before drug prescription [155]. However, the knowledge relevant to pharmacogenomics is scattered across several sources including reference databases such as PharmGKB, and the biomedical literature. Additionally, this knowledge may lack sufficient validation to be implemented in clinical practice. For example, some relationships may have only been observed in smaller cohorts of patients or in non-replicated studies. Hence, there is a need to align different sources of pharmacogenomic knowledge to detect additional evidence validating (or moderating) a knowledge unit. To this aim, the PGxLOD KG was proposed [120]. Automatic knowledge extraction approaches were applied on semi-structured and unstructured data from PharmGKB and the biomedical literature to represent their knowledge in the KG. Then, matching approaches were developed to align knowledge units from various sources [119, 121]. The resulting alignments outlined some agreements between PharmGKB and the biomedical literature, which was expected since PharmGKB is manually completed by experts after reviewing the literature. Interestingly, this automatic knowledge extraction pipeline could guide the manual review process by automatically pointing out studies confirming or mentioning a pharmacogenomic knowledge unit. **Knowledge Integration: An Example of Ecotoxicological Analysis.** In ecotoxicological analysis, data and knowledge from different domains such as chemistry and biology are often needed. These are usually located in different sources such as spreadsheets or CSV files for local experimental results, open databases for public research results, and ontologies for domain knowledge. Thus knowledge integration becomes a critical and fundamental challenge before real analysis can be conducted. 
In the study by Myklebust _et al._ [126, 127], which aims to predict adverse biological effects of chemicals on species, a toxicological effect and risk assessment KG named TERA was constructed for knowledge integration. TERA includes three sub-KGs: _(i)_ the Chemical sub-KG, which is constructed by integrating the vocabulary MeSH (Medical Subject Headings) with selective knowledge from the two chemical databases PubChem and ChEMBL, utilizing the chemical mappings in Wikidata; _(ii)_ the Taxonomy sub-KG, which is constructed by integrating EOL (Environment Ontology for Livestock) and the NCBITaxon ontology, utilizing NCBI-EOL mappings in Wikidata; and _(iii)_ the ECOTOX sub-KG, which is composed of RDF triples transformed from experimental risk results and is aligned with the other two sub-KGs by the ontology alignment system LogMap [81] and the chemical mappings in Wikidata. Another example of knowledge integration is for drug repurposing, where the KG Hetionet18 is created by integrating 29 public resources, including biomedical KGs and other types of data [68].

Footnote 18: [https://github.com/hetio/hetionet](https://github.com/hetio/hetionet)

### What has been done: recent technical developments

Given the many existing ontologies in the life sciences, _e.g._, the ontologies available in the OBO Foundry collection or in BioPortal [134], KG construction usually involves the reuse, alignment, and enrichment of state-of-the-art ontologies. The existing ontologies in the life sciences also need to be updated in light of new discoveries in the field; this is broadly a key issue in the management, maintenance, and evolution of ontologies. We select a few promising use cases below to highlight some recent developments that support KG construction in the life sciences.

**Repositories of Ontologies and Mappings.** Ontologies and their mappings play a central role in semantically enabled products and services consumed by life science companies, academic institutions and universities, as highlighted by the Pistoia Alliance ontology mapping project [59].19 Ontology mappings are essential in knowledge graph construction tasks to bridge the knowledge provided by different ontologies and expand their coverage. Ontology mappings can also play a key role when identifying the right ontologies to be reused, as they enable the retrieval of the relevant (overlapping) ontologies for the domain of interest. For this reason, a number of notable efforts in the life sciences have created large repositories of ontologies and mappings to serve the research within the community. Prominent examples include the UMLS Metathesaurus [10], BioPortal [134, 154], MONDO [174], and the EBI services: OLS [176], OxO [85] and the RDF platform [86]. The UMLS Metathesaurus is a comprehensive effort for integrating biomedical ontologies through mappings. In its 2023AA version, it integrates more than two hundred vocabularies, with more than 3 million unique concepts and more than 15 million concept names. BioPortal is a repository containing more than 1,000 biomedical ontologies and more than 79 million lexically computed mappings among them (as of July 13, 2023). The Mondo Disease Ontology (MONDO) is a manually curated effort to harmonize and integrate disease conceptualizations and definitions across state-of-the-art ontologies (_e.g._, HPO [98], DO [157], ICD, SNOMED CT, etc.). The services provided by the European Bioinformatics Institute (EBI) also deserve a special mention.
The Ontology Lookup Service (OLS) has become a reference to explore the latest versions of more than two hundred ontologies via its graphical interface or programmatically via its API. OxO is a repository of ontology mappings and cross-references extracted from the OLS and UMLS. OxO allows users to visually traverse the graph of mappings to identify additional potential mappings beyond direct ones (_i.e._, multi-hop mappings). Finally, the EBI RDF platform provides a unified KG with all the RDF resources at the EBI. Complementary to the efforts from the life sciences, the Semantic Web has also contributed to the systematic evaluation of mappings in public repositories (_e.g._, [82, 45]) and mappings produced by automated ontology mapping systems (_e.g._, the Ontology Alignment Evaluation Initiative (OAEI) [140]). Automatically generated mappings of high quality have the potential to be integrated within the aforementioned repositories and hence, the OAEI has always had a special focus on life science test cases with evaluation tracks like Anatomy [40], LargeBio [84], Phenotype [60] and the newly created track BioML [65]. The Simple Standard for Sharing Ontological Mappings (SSSOM) [116] represents a joint effort between the life sciences and Semantic Web communities to facilitate the exchange of mappings across different parties and repositories, while keeping the provenance and other relevant characteristics of the mappings. **Ontology Extension.** Ontology extension in life sciences aims to connect new concepts and their relations to an ontology from updated sources, _e.g._, scientific papers in PubMed and chemical information in PubChem20. Manual ontology extension, while essential for the development of gold standard resources, is not scalable to the full scope of large domains due to its high cost and low efficiency, and sometimes is even unfeasible as human beings may not be able to review the quantities of new information at the rate they become available. Thus machine-learning-based, automated methods are needed. One recent example is the use of deep learning, specifically a Transformer-based model, to categorize new chemical entities within the ChEBI ontology21[54]. In addition, recent studies have explored enriching SNOMED CT by mining new concepts from texts [36] and placing them into the ontology [111, 35]. A new concept can be identified by NIL entity linking, _i.e._, exploring unlinkable mentions, usually through setting a "linkable" score threshold or through classification [36]. Resolution and disambiguation of NIL mentions with clustering can help to represent NIL entities [67, 93]. For concept placement, similar to the aforementioned CHEBI ontology extension [54], machine learning, especially in the form of Transformer-based deep learning, has been applied to predict subsumption relations between a new concept and the existing concepts. Complex concepts in OWL ontologies that contain logical operators (_e.g._, existential quantifier and conjunction in SNOMED CT) can be supported in subsumption prediction [24] and new concept placement [35]. Another group of studies use post-coordination or formalising a new term with existing concepts and attributes [17, 94], which is similar to composing subsumption axioms with complex concepts. The methods include using lexical features [94], word embeddings and KG embeddings [17]. 
Pre-trained and Large Language Models, through fine-tuning, zero-shot and few-shot prompting, have the potential to support the mining [36] and placement of new concepts (_e.g._, by subsumption prediction [24, 66]).

Footnote 20: [https://pubchem.ncbi.nlm.nih.gov/](https://pubchem.ncbi.nlm.nih.gov/)

Footnote 21: [https://www.ebi.ac.uk/chebi/](https://www.ebi.ac.uk/chebi/)

**Instance Matching: Automated Clinical Coding.** A main source for constructing patients' KGs is Electronic Health Records (EHR). Using medical ontologies as backbones, it is possible to add a layer of data by instance matching (or patient matching) through _Clinical Coding_. Clinical coding is the task of transforming medical information in EHR into structured codes described in medical ontologies [37], _e.g._, ICD and SNOMED CT. Recent approaches mainly formulate the problem as a multi-label classification problem. Various neural network architectures have been proposed, and knowledge plays a key role in enhancing the neural architectures [37, 80]. Pre-trained language models, _e.g._, BERT [33], have been applied to clinical coding and have gradually achieved better results with adapted modelling methods and more advanced language models, _e.g._, PLM-ICD [71] with RoBERTa-PM [106], according to studies [37, 43, 79]. Other studies formulate the task as a Named Entity Recognition and Linking (NER+L) problem, extracting concepts and linking them to the ontologies [37]. Overall, the recent progress in clinical coding, along with the advent of Large Language Models (LLMs), suggests a trend in this area towards constructing patients' KGs from EHR. However, there is still room for improvement in knowledge integration to better address explainability (see Section 5 for more details) and in zero-shot learning problems, _i.e._, for classifying into rare codes or concepts [37, 43, 80]. There are also further recent examples of instance matching with EHR data, including the works [16, 168].

### What are the challenges?

KG construction and management often play a fundamental role in supporting the life sciences with computation. There are still quite a few technical challenges, and many of the current tools and algorithms can be improved by modern machine learning and AI techniques. Here we present some critical and fundamental technical challenges.

* **How to construct a customized KG?** For a specific application, we often need to extract relevant data and knowledge from multiple sources, and at the same time integrate the extracted knowledge from different sources. Considering a case study of personal health assistance, a customized KG with knowledge of at least exercise (sports), food, disease and medicine is required, while fine-grained knowledge of these aspects will lie in different domain KGs. The key challenge for integrating different ontology modules lies in estimating the semantic similarity and discovering the equivalence of two knowledge elements with their contexts considered, as well as in the subsequent refinement, such as KG completion and knowledge representation canonicalization. Adequate tool support that minimizes manual curation while enabling user involvement when required is also paramount (_e.g._, [107]).
* **How to ensure adequate performance using machine learning-based approaches for automated KG construction?** At the TBox level, state-of-the-art alignment between classes (especially for subsumption relations) does not yet achieve sufficiently good performance, as reflected in recent biomedical ontology alignment benchmarking [65]. At the ABox level, predicting missing facts for practical KG construction requires high precision (_e.g._, beyond 90% or 95%), but only a few relations can be populated with a precision above 80% using prompt learning with BERT, as evaluated in [175]. This is also the case when associating patients' EHR (as part of the ABox) with clinical codes or concepts in medical ontologies, where the micro \(F_{1}\) score is below 60% [37]. Learning subsymbolic representations (defined in Appendix A) of KGs and data sources may help address this challenge. Transformer-based language models have achieved great performance in recent years. Among them, pre-trained language models such as BERT have been applied to KG construction with promising performance (see, _e.g._, the package DeepOnto [64]), while the more recent and more powerful generative language models such as the GPT series [14] have not yet been widely applied at the time of writing, especially in the life science domain.
* **How to ensure reliable semi-automated deep learning-based KG construction with human interaction?** Many tasks in the KG life cycle unavoidably rely on human experts to achieve consensus on reliable knowledge; on the other hand, as the automated KG construction process becomes increasingly opaque with deep learning methods, it is important to ensure trustworthiness and reliability [193]. Apart from enhancing performance metrics with novel methods, results with a certain degree of explainability are needed, for example, highlighting the key parts of the data input when they are used as sources for KG construction. We discuss other aspects of explainability with KGs, concerning life science knowledge discovery and healthcare decision making, in Section 5. A human-in-the-loop learning design for explainable KG construction may ensure the use of experts' knowledge for the task across the KG life cycle, which still remains a challenge for future research [193].

## 4 Life Science Knowledge Discovery

Research into AI technologies, including machine learning and KG-based reasoning, to accelerate the pace of scientific discovery is an emerging and rapidly developing field. The challenge lies in assisting scientists to uncover new knowledge and solutions, such as discovering novel therapeutic opportunities, identifying candidate drug molecules to treat complex diseases or, alternatively, new uses for existing drugs, and supporting more personalized predictions. Knowledge Graphs are powerful tools for representing complex biomedical knowledge, including molecular interactions, signalling pathways, disease co-morbidities, and more. Overviews of graph representation learning in biomedicine for healthcare applications and polypharmacy tasks are presented in [109] and [53], respectively. In graph representation learning, the graph's topology is leveraged to create compact vector embeddings. Through nonlinear transformations, high-dimensional information about a node's graph neighborhood is distilled into low-dimensional vectors, where similar nodes are embedded close together in the vectorial space.
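To illustrate the flavour of such embedding models, the sketch below implements the scoring function of TransE, one of the translational KG-embedding methods referred to later in this section; the vectors are random placeholders standing in for embeddings that would normally be learned from the graph, and the entity and relation names are invented.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
dim = 64

# Placeholder embeddings; in practice these are learned by making observed triples
# score better than corrupted (negative) triples.
entities = {name: rng.normal(size=dim) for name in ["DrugX", "ProteinP53", "DiseaseY"]}
relations = {name: rng.normal(size=dim) for name in ["targets", "associated_with"]}

def transe_score(head: str, relation: str, tail: str) -> float:
    """TransE plausibility score: a smaller ||h + r - t|| indicates a more plausible triple."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return float(np.linalg.norm(h + r - t))

# Toy link-prediction step: rank candidate tails for the query (DrugX, targets, ?)
ranking = sorted(entities, key=lambda tail: transe_score("DrugX", "targets", tail))
print(ranking)
```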
Embeddings have been shown to be valuable for handling numerous relations in a KG while efficiently exploiting relation sparsity using vector computations. These optimized representations are subsequently used to train downstream models for various tasks, such as predicting property values of specific nodes (_e.g._, protein function), predicting links between nodes (_e.g._, binding affinity between molecules and protein targets), or performing classification tasks (_e.g._, predicting the toxicity profile of a candidate drug, or the risk of readmission for a patient). It is worth mentioning that, among the existing works for life science knowledge discovery, different kinds of KGs have been exploited. Schema-less KGs can be used to model different kinds of interactions between instances such as proteins and drugs; taxonomy-like simple ontologies are often used to represent concepts and their hierarchy, such as protein functions defined in the Gene Ontology, chemical compounds, species, and diseases; expressive OWL ontologies and schema-based KGs can be used to model complex logical relationships between concepts, besides simple interactions between instances. Such diverse knowledge representation capabilities make KGs more flexible in modeling the input data and prediction targets of different knowledge discovery tasks than the plain graphs and tabular data widely used in earlier, purely machine learning-based methods. In the following, we present some typical use cases where machine learning techniques (including graph representation learning and language models) are applied over KGs built from diverse sources and domain ontologies to facilitate life science discovery.

### What has been done: use cases and their recent developments

**Therapeutics and Drug Discovery: Learning a representation using multi-modal and heterogeneous knowledge.** Drug discovery entails exploring an extremely large space of potential drug candidates. AI can help to accelerate this process by narrowing down the most promising candidates before expensive experimentation. The key to leveraging predictive and generative models for candidate solution generation lies in learning an effective multi-modal representation of protein targets, molecules and diseases, among others. Recent research has focused on applying language models over large databases of proteins or molecules for self-supervised representation learning, such as ESM [150] and ProteinBERT [11] for protein sequences, or MoLFormer for molecules represented in the simplified molecular-input line-entry system (SMILES) [153]. These models have exhibited remarkable success in tasks such as predicting protein interactions, binding affinity between drugs and targets, and protein functions and structures. However, these existing pre-trained sequence-based models often neglect to incorporate background knowledge from diverse sources, for example, biological structural knowledge. Nonetheless, recent research indicates that incorporating existing expressive factual knowledge can improve results in downstream machine learning tasks. To enhance Protein Language Models (PLMs), approaches such as OntoProtein [194] and KeAP [196] use a KG of protein sequences augmented with textual annotations from the Gene Ontology (GO). OntoProtein was the first to inject Gene Ontology descriptions into a sequence PLM for protein interaction, function and contact prediction.
OntoProtein proposes to reconstruct masked amino acids while minimizing the embedding distance between the contextual representation of proteins and the associated knowledge terms. Similarly, ProtST [188] uses a dataset of protein sequences augmented with textual property descriptions from biomedical texts and jointly trains a PLM with a biomedical language model. Knowledge Graphs are suitable data models for expressing heterogeneous knowledge and facilitating end-to-end learning [183]. An entity in a KG can have multiple attributes with different modalities (where each modality provides extra information about the entity) as well as relations to and from entities in other sources. Graph Neural Networks (GNNs) have been used to capture inter-dependencies and diverse types of interactions between heterogeneous entity types and multimodal attributes in KGs [102]. They achieve this by iteratively aggregating information from neighbouring nodes (through a process called message passing) and employing scoring functions to optimize the learned embeddings for downstream tasks. Otter-Knowledge [102] incorporates a heterogeneous KG (schema-based, containing concepts and their attributes) from diverse sources and modalities, _i.e._, each node has a particular modality that qualifies its type (text, image, protein sequence, molecule, etc.), and initial embeddings for each node are computed based on their modality. A GNN is then used to enrich protein and molecule representations and train a model to produce the final node embeddings. The model is able to produce representations for entities that were not seen during training and achieves state-of-the-art results in the Therapeutic Data Commons (TDC) benchmarks [74] for drug-target binding affinity prediction. TxGNN [73] uses a GNN pre-trained on a large heterogeneous, multi-relational KG of diseases and therapeutic candidates constructed from various knowledge bases. TxGNN obtains a signature vector for each disease based on its neighboring proteins, exposures and other biomedical entities to compute disease similarities and predict drug indications/contraindications for poorly characterized diseases.

**Protein Function Prediction with the Gene Ontology.** Conducting physical experiments to identify protein functions is time- and resource-consuming. With the development of machine learning, protein function prediction (the task of annotating a given protein with multiple, potentially hierarchical classes, _i.e._, functions, defined in GO) has been widely investigated in recent years [195, 173]. A large part of these works, such as GOLabeler [191], focuses on feature extraction, feature ensembles, and automatic feature learning for proteins. For example, GOLabeler [191] utilizes five different kinds of protein sequence information, while DeepGraphGO [190] builds a network of proteins and learns protein features via a Graph Neural Network. Recent methods attempt to further exploit the inter-function (class) relationships defined in GO for better performance. For example, DeepGOZero [101] and HMI [187] use formal semantics, including the class hierarchy, class disjointness axioms and complex class restrictions in OWL, as additional constraints for training the multi-label classifier for protein function prediction. Protein function prediction is a representative multi-label classification problem in which complex relationships between the labels are defined in a KG and can be used for performance augmentation.
Such a setting is quite common in machine learning applications in the life sciences, for example in the above-mentioned automated clinical coding, where the codes' semantics are modeled by the ICD ontology, and in ecotoxicological effect prediction, where the multiple affected species to be predicted for a chemical form a taxonomy.

**Predictions for Healthcare using Ontologies with Clinical Data.** Digital Healthcare involves predictions using clinical data and ontologies, including diagnosis predictions (_e.g._, rare diseases) and procedure predictions (_e.g._, ICU readmissions). A related concept is personalized medicine, which is achieved through the matching and fusion of knowledge from diverse sources and plays a significant role in these prediction tasks. This often involves matching multiple ontologies [158], integrating curated databases (_e.g._, pharmacogenomics, molecule and protein knowledge bases), and mining knowledge from the scientific literature [186] as well as person-centered clinical knowledge extracted from EHR or claims data, with distinguishing risk factors or cohort demographics (_e.g._, age and gender); this can enhance predictions related to adverse effects [125] or rare diseases for which there are not enough labeled datasets [2]. For example, SHEPHERD [2] incorporates a multi-relational KG (extracted from PrimeKG [20]) of diseases, phenotypes and genes, and leverages simulated patient data to discover novel connections between patients' clinical, phenotype and gene information to accelerate the diagnosis of rare diseases. Knowledge-guided learning is achieved by training a GNN to represent each patient's subgraphs of phenotypes in relation to other gene, phenotype, and disease associations within the KG, such that embeddings are informed by all of the existing biomedical knowledge captured in the network topology. The approach in [16] constructs a KG (using expressive OWL ontologies) to predict ICU (intensive care unit) readmission risk by enriching EHR data with semantic annotations from various biomedical ontologies in BioPortal. These predictions are based on KG embeddings, such as RDF2vec, OPA2vec, and TransE, and classical machine learning methods, such as Logistic Regression, Random Forest, Naive Bayes and Support Vector Machines. Drawing from the Health & Social Person-centric Ontology (HSPO) [167], which focuses on multiple clinical, social and demographic facets of a patient or cohort, the approach presented in [168] builds a person-centric KG (an expressive OWL ontology with TBox and ABox) from structured and unstructured data in EHR. Subsequently, a representation learning approach using GNNs is used to predict readmissions to the ICU.

### What are the challenges?

We present four of the open challenges to unlock the full potential of methods to advance knowledge discovery for the life sciences using KGs, based on the use cases above.

* **How to incorporate the semantics from a KG in machine learning?** Many life science knowledge discovery tasks are modeled as a machine learning classification problem whose input and output labels have additional valuable information in one or multiple external KGs. The challenge lies in extracting this information, optionally encoding it into vector representations, and injecting that knowledge into machine learning models and pre-trained language models. Doing this effectively remains an important open challenge, especially for protein-related pre-trained language models [194, 188, 196].
Besides improving the accuracy in knowledge discovery, injecting semantics from KGs can also contribute to making the model more explainable (see Section 5), but to this end, much research is still required. * **How to deal with the long-tail phenomenon in machine learning with KGs?** In machine learning classification for real-world life science knowledge discovery, the candidate labels often exhibit a long-tailed distribution, _i.e._, a small ratio of them are common with a large number of training samples available, while most of them are infrequent or even have never appeared before. For example, imbalance in training data may occur for rare diseases or adverse drug effects that affect only a small portion of the population [2, 73, 38]. KGs sometimes have encoded the relationships of the labels, and could be used to help train the model for predicting those long-tailed labels or enable the inference of such labels. * **How to create an efficient multi-modal representation of knowledge to enable discovery?** Most current state-of-the-art methods build learned graph representations based on isolated modalities. Multimodal KGs can explicitly capture labelled nodes and edges, each with well-defined meanings, across heterogeneous node types, relations and modalities (such as text, images, protein sequences, molecules fingerprints, diseases and more) [20, 102]. Incorporating KGs with multiple modalities for representation learning requires computationally scalable methods to compute the initial embeddings for each modality, as a preliminary step to learn computable representations of large knowledge. Furthermore, robust learning techniques are needed for generalizing the learned representations to nodes with unseen or missing modalities, thereby enabling the discovery of new knowledge. An example would be inferring properties of proteins for which only the sequence is known. * **How to efficiently utilize and fuse heterogeneous datasets, such as human-curated domain knowledge bases, scientific literature and person-centered health records, for knowledge discovery?** State of the art shows that representations can be enhanced by incorporating richer information available across different sources [73, 102, 158]. Bringing in more data during training is needed to learn representations that can be applied to a broader range of downstream prediction tasks. However, learning from large and diverse KGs requires addressing challenges such as alignment, noise handling, balancing rich expressive knowledge with scalability and dealing with knowledge inconsistency. Moreover, more robust learning methods are needed for generalizing the learned representation to multiple downstream tasks (_e.g._, knowledge-aware transfer, zero-shot and few-shot learning [23]). An important aspect in this regard is addressing the disparity between all of the knowledge accessible during pre-training and the knowledge accessible or relevant for the downstream fine-tuning [73, 102]. ## 5 Knowledge Graphs for Explainable AI Machine Learning (ML) and Artificial Intelligence (AI) methods are widely employed to tackle complex problems in many domains, including life sciences such as chemistry or biomedicine. Yet many of those methods operate as a "black-box", not enabling domain experts to understand the reasoning behind their predictions [92]. This is a major concern, especially for applications in areas with a potential impact on human lives, or areas with legally enforced accountability or transparency [145]. 
Moreover, understanding the workings of AI methods is also crucial in the context of scientific applications, such as those described in Section 4, where explaining the prediction process can help elucidate natural phenomena [41]. One way to address this issue is to employ the methods of eXplainable Artificial Intelligence (XAI). Although this is a topic long explored in the AI research community, there is still no widely accepted definition of explainability, with many terms being used interchangeably, such as interpretability, comprehensibility, understandability and transparency [8]. Barredo _et al._ define explainability as the ability of a model to make its functioning clearer to an audience [8]. A slightly different definition is given in an earlier survey [56]: "an interface between humans and a decision maker that is at the same time both an accurate proxy of the decision maker and comprehensible to humans". Both definitions focus on the audience, _i.e._, for _whom_ the model is explainable, but the second suggests that an explanation is another artefact produced by a model or alongside the model.

There are two distinguishable audiences in the context of the life sciences: scientists (researchers) and healthcare practitioners [169]. For the first group, the explanation is used as a guide to understanding within life sciences research for scientific discovery. As a result, the explanation may exist in the well-bounded context of a hypothesis or research project. On the other hand, practitioners are involved directly in decisions with an impact on healthcare. They need to consider the output of the model in an open context, and sometimes also to explain the output to a patient who is not a domain expert.

A number of approaches for XAI emerge from the literature and broadly contain two parts: (1) transparent box design, which includes algorithms such as decision trees, where models can be directly interpreted by users and therefore explaining an output amounts to simply following the decision paths that relate input to output; and (2) post hoc interpretability, which provides an explanation of a black-box model using additional methods such as probing, perturbing, or constructing surrogate models for general ML or AI methods [92, 169].

Utilization of KGs can greatly enhance XAI, as KGs are well suited to improving a model's interpretability, explainability, and understandability. Some methods are directly built around KGs and thus take full advantage of them; examples include methods that use paths [163], predict links, or perform reasoning [34]. Other methods can be enhanced using the KG (_e.g._, [128]). Yet the enhancement effect greatly depends on where KGs are employed and iteratively applied: _pre-model_ (_e.g._, KG construction, potentially multi-modal), _in-model_ (_e.g._, integrating the KG with machine learning models), and _post-model_ (_e.g._, reviewing and updating the KG by domain experts, to be applied in the next iteration to enhance machine learning models and their explainability) [145]. For example, in in-model use, a model can be pre-trained using a KG; one example is the pre-trained language model SapBERT [110], which utilises synonyms in the UMLS Metathesaurus to further pre-train a BERT language model. This can not only be beneficial for performance [194], but can also potentially enhance post-model explanation, since the trained features are aligned with the KG [145].
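As a small illustration of the path-based, KG-centred explanation methods mentioned above, the sketch below is a toy example using the networkx library; the drug, gene, pathway and adverse-reaction entities are invented. It collects the paths connecting a drug to a predicted adverse reaction so that they can be shown to a domain expert as supporting evidence.

```python
import networkx as nx

# Toy fragment of a biomedical KG; all nodes and edges are invented for illustration.
kg = nx.DiGraph()
kg.add_edge("DrugX", "GeneA", relation="inhibits")
kg.add_edge("GeneA", "PathwayB", relation="participates_in")
kg.add_edge("PathwayB", "LiverInjury", relation="implicated_in")
kg.add_edge("DrugX", "ProteinC", relation="binds")
kg.add_edge("ProteinC", "LiverInjury", relation="associated_with")

def explanation_paths(graph: nx.DiGraph, source: str, target: str, cutoff: int = 4):
    """Verbalize every simple path from source to target as a chain of labeled edges."""
    for path in nx.all_simple_paths(graph, source, target, cutoff=cutoff):
        yield " ; ".join(
            f"{u} --{graph.edges[u, v]['relation']}--> {v}"
            for u, v in zip(path, path[1:])
        )

for chain in explanation_paths(kg, "DrugX", "LiverInjury"):
    print(chain)
```

Such path chains are one of the simplest forms of post-model, KG-based explanation; the use cases below show richer variants of this idea.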
### What has been done: use cases and recent developments

**Explainable AI for Healthcare Practice.** The utilization of AI in healthcare practice raises the concern of leaving life-critical decisions to black-box models [145, 169]. For example, in the field of precision medicine, which aims at tailoring drug treatments and dosages to each patient, clinicians require more information from a model than a simple binary decision [8]. The interpretability and explainability of AI models are thus essential characteristics for making outputs understandable and transparent. This would reinforce both clinicians' and patients' trust in models by complementing (and not substituting) clinicians' explanations [21, 145, 169]. This direction has been envisioned for several healthcare scenarios. Explainable AI models could support experts in finding clinical trials that are appropriate based on patient history [169]. Counterintuitive or unreliable predictions that could have serious consequences could be explained, and thus prevented [169, 15, 91]. Some also envision such models being used to explain and debunk healthcare-related misinformation [145]. As aforementioned, it is noteworthy that different kinds of explanations should be employed depending on the target audience, _e.g._, scientific explanations for evidence or trace-based explanations for treatment [21].

**Explainable AI for Knowledge Discovery.** As introduced in Section 4, KGs can support knowledge discovery in the life sciences, including the explainability of the process and of the discovered units. In this view, Ristoski and Paulheim [149] explain that ontologies, linked data, and KGs are used in the interpretation step of a data mining process, _e.g._, for interpreting sequential patterns in patient data [78], or to describe subgroups in a semantic subgroup discovery process [171]. KGs can also serve as the basis both for the knowledge discovery process and for the interpretation process. For example, Linked Open Data connecting drugs and adverse reactions can be analyzed with Hidden Conditional Random Fields to predict adverse drug reactions, where the paths from selected drugs to outcomes visually explain the prediction [88]. Similarly, Bresso _et al._ [13] leverage features extracted from KGs (interpretable features such as paths, neighbors, and path patterns) and white-box models (_e.g._, decision trees) to reproduce expert classifications of drugs causing (or not causing) specific adverse drug reactions. The rules extracted from the decision trees contain features that provide explanations for the molecular mechanisms behind these adverse reactions according to experts. Sousa _et al._ [161] employ KGs to explain both protein-protein interaction predictions and gene-disease association predictions based on shared semantic aspects.

**Explainable AI for KG Construction.** The final use case considers the situation in which XAI is applied to KGs themselves. We discussed the challenge of supporting human intervention in KG construction in Section 3.4. Recent KG construction increasingly relies on data-driven, deep learning-based methods to automatically induce knowledge from data. The deep learning models are opaque, and thus the process requires explainability; otherwise, the resulting KG may not be accountable enough to be used in downstream applications. _Trustworthy KG engineering_ is proposed in [193] to highlight the importance of embedding explainable AI and human intervention in the KG life cycle.
XAI methods have been applied to many NLP-related tasks (entity and relation extraction, entity resolution, link prediction, etc.) in KG construction from texts. These XAI methods rely either on feature-based or on knowledge-based explanations. While feature-based explanations try to infer explanations from the data or the models' interpretation of the data, knowledge-based explanations aim to interpret the process with rules, reasoning paths, and structured contextual information. Rules and paths have mainly been used for explanation, especially for link prediction, a task comprehensively surveyed in [193].

### What are the challenges?

* **How to integrate KGs for better XAI, especially with recent deep learning and language model based methods?** KGs may provide better data provenance for the model output. This can ensure explainability when communicating the model to domain experts in data science applications [8]. In terms of recent generative LLMs, life science KGs, with careful curation based on scientific publications, may help to provide provenance data for the answers generated by LLMs. Studies need to understand to what extent, and how, LLMs can be applied to induce knowledge (_e.g._, by probing LLMs with biomedical ontologies [66]), which may then provide a foundation for creating better approaches to integrating KGs with LLMs. Another area is neuro-symbolic methods, which may provide models that are inherently more interpretable (see further discussion in Section 6.1). Also, language models (especially LLMs) are capable of generating fluent text and can thus potentially serve as generators of textual explanations from symbolic knowledge for XAI. Meanwhile, a key issue is the hallucination of LLMs, and KGs may support better prompting, fine-tuning and interpretable inference of LLMs for higher decisiveness and trustworthiness [136].
* **How to evaluate XAI methods that involve KGs?** How can we measure the quality of explanations and ensure that they correspond to the needs of users? The majority (around 70%) of XAI studies for KG construction do not evaluate the quality of the explanations, or only informally visualize or comment on a limited number of cases to show the intuitive outcome [193]. Also, an XAI method needs to consider the target audience, as explanations are ultimately received by a group of humans [8]. For instance, only a small number of current approaches to XAI for KG construction involve a user study, human evaluation or task-specific metrics [193]. Evaluating the quality of explanations requires some form of ex-post expert evaluation, and well-defined metrics are needed for this task. An example is [58], which uses a combination of users' scores for each predicted explanation in a KG link prediction task where there are multiple possible explanations. More expert-validated and automated evaluation methods and associated metrics are required for KG-related XAI.

## 6 Discussion and Conclusion

In this work, we have summarized the recent developments of KG research in the life sciences on three important topics: KG Construction and Management, Life Science Knowledge Discovery, and KGs for XAI. While each topic has its specific challenges, there are some common challenges and trends for life science KG research in general.

### Overall challenges and trends

More scalable and efficient knowledge retrieval, query and reasoning systems, including life science KGs and mapping repositories, are still worthy of investigation and development.
**Evolution and Quality Assurance of KGs.** KGs need to be updated as new data and knowledge emerge, and the schema and facts can easily become outdated or less useful for existing applications in life sciences. In terms of KG construction, we discussed ontology extension as a use case to address the evolution issue or the emergence of new concepts and relations, and also instance matching to add new instances to the KG. Updating KGs is also a prerequisite for life science knowledge discovery, and knowledge discovery methods should be able to support the evolution of KGs with, _e.g._, the capabilities of continuous learning and zero-shot learning. Quality assurance is another issue for KGs, including the tasks of knowledge error detection and correction, knowledge completion, knowledge canonicalization, etc. On the one hand, more effective KG quality assurance methods and systems should be developed, including schema and constraint languages for quality verification and learning-based models for prediction (_e.g._, [25] combines both for fact correction); on the other hand, knowledge discovery methods should be made robust to noisy KGs by investigating, _e.g._, robust KG embeddings and multi-modal representation learning.

**Heterogeneity in KGs: Multi-domain and Multi-modality.** KGs contain heterogeneous information, which brings challenges to their construction, representation, and reasoning. Different schemas and datasets in KGs can differ in their scopes and domain focus. Integrating data from different domains to build _multi-domain_ KGs is difficult, with challenges in, _e.g._, ontology and data matching. Besides, recent studies have explored integrating different modalities to construct _Multi-modal_ KGs [27, 124, 178], for instance text [135], images [181], etc. One challenge to address is how to learn effective machine learning models over multi-modal KGs fused from different sources (patients' records, curated knowledge bases, and scientific literature) to support scientific discovery as well as KG construction and management. Another challenge is developing accurate and efficient knowledge representation approaches for texts and images in multi-modal KG construction. For example, careful consideration should be given to when to simply use an annotation property to associate an image with an entity, and when to use a property with specific semantics to connect an image and an entity.

**Human Interaction and Explainability with KGs.** In KG construction, human experts are required for many sub-tasks and to provide oversight [193]. In life science knowledge discovery, human experts are necessary to finally validate the predicted new knowledge. The whole process of interacting with KGs in the life sciences requires explainability, especially when sub-symbolic models (_e.g._, pre-trained language models) are used. How to generate clear explanations for human interaction and how to evaluate the quality of explanations remain challenges, as does how to achieve consensus regarding scientific understanding with automatically discovered knowledge when organizing knowledge in life science [131]. The recent growth of _neuro-symbolic methods_ suggests that they can support explainability [90, 91, 152]. A recent survey [91] summarizes XAI in bioinformatics with a chapter on knowledge-based explanations, whereas Karim [90, Chapter 8] provides a neuro-symbolic framework for KG construction and utilisation for medical experts' decision making in the cancer domain.
The approach presented in [152] is another recent example of neuro-symbolic integration for image classification with KG-based XAI in the cultural heritage domain.

**Personalized and Customized KGs.** A key challenge for KG construction, as we discussed in Section 3, is customisation to construct application-oriented KGs, where relevant sub-KGs have to be extracted from large-scale KGs (_a.k.a._ modularization) and integrated with other knowledge and data from different sources. Besides, many life science KGs are about individuals, _e.g._, patients in healthcare applications, where Personal Health KGs are required to integrate and compute over instance-level (or patient-level) information [124]. An example is the Personal Health KG in [22] that supports dietary recommendations for users, where the construction and population of the KG require reusing and integrating existing ontologies, dietary guidelines, and time-series patient data. Other examples of KGs integrating patients' EHR data [168, 16] are presented in Section 4.1. In personal KG construction, personal data should be protected. KG scalability should also be considered so that such KGs can be used on small devices such as cellphones. This remains a major challenge that has rarely been considered when using KGs in the life sciences.

**Distributed KGs.** The value of healthcare data for improving clinical knowledge and standards of care, and the potential of semantic technologies to further enhance it, are well recognized. However, responsible use of healthcare data at the global level (beyond each healthcare provider and even each country) must take into account both legal and ethical issues in data sharing, privacy and security. Distributed knowledge graphs can mitigate these issues by allowing for access control and privacy protection. Furthermore, distributed knowledge graphs can also address the challenges of scientific data ownership and stewardship by enabling the decentralized publishing of high-quality data. Several approaches for federated querying and embedding of knowledge graphs have been proposed in recent years [26, 138, 159]; however, wide adoption of semantic technologies in healthcare is still lacking, with a proliferation of terminological standards and a disconnection between data and meaning.

**Representation Learning with KGs: Symbolic and Sub-symbolic Integration.** Across the topics and use cases, we see the importance of transforming symbolic knowledge into sub-symbolic representations or combining both representations. The combination of neural and traditional symbolic representation methods leads to a trend of neural-symbolic approaches in the field [12]. Recently, pre-trained and large language models have provided new methods to transfer self-supervised learning from vast corpora to support KG construction, _e.g._, OntoGPT [18] and OntoLAMA [66]. LLMs are especially good at representing the texts of life science publications in sub-symbolic spaces for semantic understanding. KGs may also provide a layer of explainability by validating the output of LLMs. A recent survey [136] proposes a roadmap for integrating LLMs and KGs. OntoProtein [194] is a recent example of how to integrate KGs into the process of pre-training LLMs in the bioinformatics domain, thus achieving improved results on protein-related knowledge discovery tasks.
Also, geometry-informed representations of more formal KGs, especially those in hyperbolic spaces or using complex geometric structures, _e.g._, [19, 101], can usually capture the structure of the KG with low-dimensional vectors. Graph Neural Networks may also support the encoding of KG structures in a more explainable way with logical rules [31].

### Conclusion

Knowledge Graphs have become a popular and effective method to represent heterogeneous concepts, relations, and data in life sciences. They require scalable solutions to represent and reason over heterogeneous data, as well as constant updates. Throughout this work, we covered the main topics of KG research and their corresponding use cases in multiple life science domains such as protein analysis, drug discovery, ecotoxicology, and healthcare, and summarized the corresponding challenges. As new methods in knowledge representation appear, for instance the recent trends of human-in-the-loop, sub-symbolic knowledge representations, pre-trained and large language models, and neuro-symbolic integration, we envisage deeper applications of KGs to life science processes that support the construction of more applicable KGs and the discovery of more reliable scientific knowledge, with better support for explainability and human interaction. KGs in combination with other modern machine learning and natural language processing techniques will become a foundation for AI for the life sciences.

## Appendix A Terms in Knowledge Graphs and Life Sciences

Below we provide a list of key terms used in this paper, as well as their definitions and explanations. Note that the definitions mainly reuse the original sentences of the referenced sources.

**Description Logics**: a family of knowledge representation languages that can be used to represent knowledge of an application domain. DLs differ from their predecessors, such as semantic networks and frames, in that they are equipped with logic-based semantics, the same semantics as that of classical first-order logic. Most ontologies are implemented in OWL, whose semantics are given by the Description Logic \(\mathcal{SROIQ}\). [6]

**TBox** and **ABox**: the two components of domain knowledge in Description Logics, _i.e._, a terminological part called the TBox and an assertional part called the ABox, with the combination of a TBox and an ABox being called a knowledge base (KB). The TBox represents knowledge about the structure of the domain (similar to a database schema), while the ABox represents knowledge about a concrete situation (similar to a database instance). [6]

**Semantic Networks**: a graph structure for representing knowledge in patterns of interconnected nodes and arcs [162]. We use the term to denote a graph of concepts and relations without formal semantics.

**Gene Ontology**: The Gene Ontology (GO) knowledgebase provides a comprehensive, structured, computer-accessible representation of gene function, for genes from any cellular organism or virus [5, 29].

**SNOMED-CT**: Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) is a structured clinical vocabulary. It has a general and comprehensive coverage of clinical terms to support electronic healthcare systems and clinical applications. [39, 28]

**UMLS (UMLS Metathesaurus** and **UMLS Semantic Networks**): Unified Medical Language System (UMLS) is a repository of biomedical vocabularies developed by the US National Library of Medicine.
The UMLS is composed of three "knowledge sources": a Metathesaurus, a semantic network, and a lexicon. The UMLS Metathesaurus is a comprehensive effort for integrating biomedical ontologies through mappings. The UMLS Semantic Networks define the types or categories, or Semantic Types, of all Metathesaurus concepts and their relationships, or Semantic Relations. [10, 28]

**ChEBI**: Chemical Entities of Biological Interest (ChEBI) is a database and ontology containing information about chemical entities of biological interest. [63]

**Symbolic vs. subsymbolic representations**: Rooted in cognitive science, symbolic systems of human cognition are related to the representation and manipulation of symbols; sub-symbolic or connectionist systems are most generally associated with the metaphor of a neuron, _e.g._, perceptrons as an early system [96]. In terms of AI, symbolic systems include logic-based and knowledge representations, while subsymbolic systems typically include neural networks and deep learning based methods [42]. Neural language models and pre-trained language models [87] are also classified under subsymbolic systems.

**Pre-trained and Large Language Models**: Neural language modelling is the task of using neural network approaches to predict words from their prior contexts in a sequence. Pre-training is the process of learning some sort of representation (usually neural embedding based) of meaning for words or sentences by processing very large amounts of text (or other data in a sequence form, _e.g._, proteins and KG facts). This results in pre-trained language models. The dominant architecture for neural language modeling is the Transformer, underlying models such as BERT, its domain-specific versions, and later large variants like the GPT series. Pre-trained language models of very large size have recently been coined Large Language Models (LLMs). [87]

**Neuro-symbolic representations**: the integration of neural networks and symbolic representations to design AI models that base their predictions on both data and knowledge. [42]

## Appendix B Authors' Contributions

All authors participated in the planning and discussions of this work. JH and HD wrote the abstract and "Introduction". VT, JC and EJR contributed to "Knowledge Graphs in the Life Sciences". VT contributed to the main part of "Knowledge Graph Construction and Management", with contributions of use cases from JC, HD, PM, EJR, and JH. VL and JC contributed to "Life Science Knowledge Discovery". PM, PS, HD, and CP contributed to "Knowledge Graphs for Explainable AI". HD, JC, and CP contributed to "Discussion and Conclusion" based on discussions with other team members. All authors contributed to the final revision of this paper.
2309.05651
Testing external photoevaporation in the $σ$-Orionis cluster with spectroscopy and disk mass measurements
The evolution of protoplanetary disks is regulated by an interplay of several processes, either internal to the system or related to the environment. As most of the stars and planets have formed in massive stellar clusters, studying the effects of UV radiation on disk evolution is of paramount importance. Here we test the impact of external photoevaporation on the evolution of disks in the $\sigma$ Orionis cluster by conducting the first combined large-scale UV to IR spectroscopic and mm-continuum survey of this region. We study a sample of 50 targets located at increasing distances from the central, OB system $\sigma$ Ori. We combine new VLT/X-Shooter spectra with new and previously published ALMA measurements of disk dust and gas fluxes and masses. We confirm the previously found decrease of $M_{\rm dust}$ in the inner $\sim$0.5 pc of the cluster. This is particularly evident when considering the disks around the more massive stars ($\ge$ 0.4 $M_{\odot}$), where those located in the inner part ($<$ 0.5 pc) have $M_{\rm dust}$ about an order of magnitude lower than the more distant ones. About half of the sample is located in the region of the $\dot{M}_{\rm acc}$ vs $M_{\rm disk}$ expected by models of external photoevaporation, namely showing shorter disk lifetimes. These are observed for all targets with projected separation from $\sigma$ Ori $<$ 0.5 pc, proving that the presence of a massive stellar system affects disk evolution. External photoevaporation is a viable mechanism to explain the observed shorter disk lifetimes and lower $M_{\rm dust}$ in the inner $\sim$0.5 pc of the cluster. Follow-up observations of the low stellar mass targets are crucial to confirm the dependence of the external photoevaporation process with stellar host mass. This work confirms that the effects of external photoevaporation are significant down to impinging radiation as low as $\sim 10^{4}$ G$_0$.
K. Maucó, C. F. Manara, M. Ansdell, G. Bettoni, R. Claes, J. Alcala, A. Miotello, S. Facchini, T. J. Haworth, G. Lodato, J. P. Williams
2023-09-11T17:42:52Z
http://arxiv.org/abs/2309.05651v1
# Testing external photoevaporation in the \(\sigma\)-Orionis cluster with spectroscopy and disk mass measurements

###### Abstract

Context: The evolution of protoplanetary disks is regulated by an interplay of several processes, either internal to the system or related to the environment. As most of the stars and planets, including our own Solar System, have formed in massive stellar clusters that contain OB-type stars, studying the effects of UV radiation on disk evolution is of paramount importance. Aims: Here we test the impact of external photoevaporation on the evolution of disks in the mid-age (\(\sim\)3-5 Myr) \(\sigma\)-Orionis cluster by conducting the first combined large-scale UV to IR spectroscopic and mm-continuum survey of this region. Methods: We study a sample of 50 targets located at increasing distances from the central, massive OB system \(\sigma\)-Ori. We combine new spectra obtained with VLT/X-Shooter, used to measure mass accretion rates and stellar masses, with new and previously published ALMA measurements of disk dust and gas fluxes and masses. Results: We confirm the previously found decrease of \(M_{\rm dust}\) in the inner \(\sim\)0.5 pc of the cluster. This is particularly evident when considering the disks around the more massive stars (\(\geq\) 0.4 \(M_{\odot}\)), where those located in the inner part (\(<\) 0.5 pc) of the cluster have \(M_{\rm dust}\) about an order of magnitude lower than the more distant ones. About half of the sample is located in the region of the \(\dot{M}_{\rm acc}\) vs \(M_{\rm disk}\) plane expected by models of external photoevaporation, namely showing shorter disk lifetimes than expected for their ages. These shorter disk lifetimes are observed for all targets with projected separation from \(\sigma\)-Ori \(<\) 0.5 pc, proving that the presence of a massive stellar system affects disk evolution. Conclusions: External photoevaporation is a viable mechanism to explain the observed shorter disk lifetimes and lower \(M_{\rm dust}\) in the inner \(\sim\)0.5 pc of the \(\sigma\)-Orionis cluster, where the effects of this process are more pronounced. Follow-up observations of the low stellar mass targets are crucial to constrain disk dispersion time scales in the cluster and to confirm the dependence of the external photoevaporation process on stellar host mass. This work confirms that the effects of external photoevaporation are significant down to impinging radiation fields as low as \(\sim 10^{4}\) G\({}_{0}\).

## 1 Introduction

Protoplanetary disks, made of gas and dust, are the byproduct of the star formation process and are the places where planets form. Their evolution is mediated by the interplay of several physical processes most likely acting simultaneously, which makes understanding disk evolution challenging (Manara et al., 2023, for a review). The standard theory is framed in the steady-state viscous paradigm, where the transfer of angular momentum in the disk drives its evolution and results in accretion onto the central star (e.g., Hartmann et al., 2016). Dispersal mechanisms, such as winds and outflows, also contribute to the evolution through the depletion of disk material (e.g., Frank et al., 2014; Ercolano & Pascucci, 2017; Winter & Haworth, 2022; Pascucci et al., 2022). Mass loss processes can have an internal origin, such as inside-out clearing produced by the ionizing radiation of the host star, or come from an external source, for example, the local environment.
Dynamical interactions between stars and external photoevaporation, driven by high-energy radiation fields from massive OB stars, are among the most commonly discussed processes affecting disk evolution in clustered environments (e.g., Winter et al., 2018; Reiter & Parker, 2022; Cuello et al., 2023).

Given the variety of properties found in planetary systems in our Galaxy, the way forward for understanding disk evolution must include the analysis of general disk and host star properties measured in a large statistical sample of systems at different evolutionary stages and in different environments. This makes it possible to identify correlations between the parameters (e.g., disk mass, disk radii, mass accretion rates) and their possible connection with the age of the region or its environment.

Thanks to the availability of sensitive, wide-band optical spectrographs, such as the X-Shooter instrument on the Very Large Telescope (VLT), and radio interferometers, like the Atacama Large Millimeter and sub-millimeter Array (ALMA), it is now possible to measure some of these general properties (Miotello et al., 2023, for a review). In particular, the mass accreted onto the central star per unit time (\(\dot{M}_{\rm acc}\)), drawn from UV-optical spectra, and the disk mass (\(M_{\rm disk}\)), from ALMA observations, have proven to be very useful for this task (Manara et al., 2023). For instance, surveys of young stars in different star-forming regions (SFRs) have found a tentative trend of decreasing \(\dot{M}_{\rm acc}\) with age (e.g., Sicilia-Aguilar et al., 2010; Antoniucci et al., 2014; Briceno et al., 2019; Manzo-Martinez et al., 2020), predicted by viscous evolution models (e.g., Lynden-Bell & Pringle, 1974; Hartmann et al., 1998). This observational trend, however, has large uncertainties, mainly due to unreliable age estimates for individual stars (e.g., Soderblom et al., 2014) and correlated uncertainties between stellar properties and estimated individual ages (Da Rio et al., 2014). Finally, an unexpectedly large fraction of high accretors are found in old (\(>\)5 Myr) regions (Ingleby et al., 2014; Manara et al., 2020, 2021; Testi et al., 2022).

Furthermore, measurements of \(M_{\rm disk}\) (estimated from dust emission and assuming a gas-to-dust ratio of 100) are now available for large samples of disks (e.g., Ansdell et al., 2017, 2016; Pascucci et al., 2016; Barenfeld et al., 2016; Grant et al., 2021; van Terwisga & Hacar, 2023), which, in combination with \(\dot{M}_{\rm acc}\), have allowed us to connect what is happening in the innermost regions (\(\lesssim\)1 au) with outer disk properties and thus test disk evolution models. According to viscous evolution, \(\dot{M}_{\rm acc}\) should positively correlate with \(M_{\rm disk}\) (predicted from the gas mass) in such a way as to expect a tighter correlation at older ages (Rosotti et al., 2017; Lodato et al., 2017; Somigliana et al., 2022). The \(M_{\rm disk}-\dot{M}_{\rm acc}\) relation has now been empirically established for nearby SFRs, although with a (puzzling) large spread regardless of the age of the region (e.g., Manara et al., 2016, 2020; Mulders et al., 2017), pointing to a deviation from the purely viscous evolution theory, possibly toward a more important role of MHD winds in driving accretion in the disk (Manara et al., 2023; Tabone et al., 2022, Somigliana et al. subm.).
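To make the expected viscous scaling explicit (a standard result quoted here for orientation, not the specific model fitted in this work), the Lynden-Bell & Pringle (1974) self-similar solution with viscosity \(\nu \propto R\) gives

\[M_{\rm disk}(t) = M_{0}\left(1+\frac{t}{t_{\nu}}\right)^{-1/2}, \qquad \dot{M}_{\rm acc}(t) = \frac{M_{0}}{2t_{\nu}}\left(1+\frac{t}{t_{\nu}}\right)^{-3/2},\]

so that \(M_{\rm disk}/\dot{M}_{\rm acc} = 2(t_{\nu}+t) \approx 2t\) at ages much longer than the viscous timescale \(t_{\nu}\). A disk whose mass reservoir has been depleted by external photoevaporation therefore shows a ratio, i.e., an apparent disk lifetime, well below the value expected for its age.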
However, these empirical studies have mainly focused on nearby (\(<\) 300 pc) low-mass SFRs that distinctly lack OB stars (e.g., Taurus, Andrews et al., 2013), and do not represent the environment where most planets have formed or the birth environment of our Solar System (e.g., Lada & Lada, 2003; Fatuzzo & Adams, 2008; Adams, 2010; Winter et al., 2020). Given the increasing relevance attributed to environmental factors in modulating disk evolution and planet formation, several authors have now included the effects of external photoevaporation by massive stars in models of viscous disk evolution (e.g., Clarke, 2007; Anderson et al., 2013; Facchini et al., 2016; Haworth et al., 2018; Sellek et al., 2020; Coleman & Haworth, 2022). The ratio \(M_{\rm disk}/\dot{M}_{\rm acc}\) has gained particular attention as a proxy of disk evolution, and as a possible discriminant between external effects and other internal disk evolution mechanisms. Rosotti et al. (2017) showed that externally irradiated disks have a \(M_{\rm disk}/\dot{M}_{\rm acc}\) significantly lower than the value expected for a given system age, due to the radical disk mass depletion characteristic of this scenario. External truncation in multiple stellar systems leads to a similar decrease of \(M_{\rm disk}/\dot{M}_{\rm acc}\) (Zagaria et al., 2022).

An ideal region to test these predictions is the \(\sigma\)-Orionis cluster. Its intermediate age (\(\sim\)3-5 Myr; Oliveira et al., 2004; Hernandez et al., 2014) makes it young enough to remain bound, yet old enough for its central OB system (\(\sigma\)-Ori, Caballero, 2007) to have left its imprint. In contrast to the most extreme known examples of externally irradiated disks, the Orion proplyds (O'dell et al., 1993), where EUV photons drive mass loss and shape the proplyds in close proximity (\(<\)0.03 pc) to \(\theta^{1}\) Ori C (Johnstone et al., 1998), the dispersal of disks in \(\sigma\)-Orionis is controlled by far-UV (FUV) radiation (e.g., Adams et al., 2004; Facchini et al., 2016; Haworth et al., 2018), as a result of the lower mass of its OB system (compared to \(\theta^{1}\) Ori C) and the larger separation of the stars from the center, while still depleting the disks close to \(\sigma\)-Ori. This was shown in the ALMA survey of \(\sigma\)-Orionis (Ansdell et al., 2017), which found a dearth of massive (\(M_{\rm dust}\)\(>\) 3\(M_{\oplus}\)) disks close (\(<\) 0.5 pc) to the central OB stars, and a smooth distance-dependent trend in the disk dust mass distribution, in line with previous results in the NGC 2024 and Orion Nebula Clusters (Mann et al., 2014, 2015), and in other less massive regions in Orion (van Terwisga & Hacar, 2023). This observed depletion of disk masses in \(\sigma\)-Orionis was later reproduced using external photoevaporative models (Winter et al., 2020). However, several other effects are at play, including dynamics in the clusters, and this trend could be coincidental (Parker et al., 2021).

Just measuring disk dust masses is not enough to firmly assess the effects of external photoevaporation on disk evolution in massive star-forming regions. Two additional observational probes can be used. The ratio of forbidden emission lines is also a way to detect signs of externally photoevaporated disks. Rigliaco et al. (2009) used this probe to claim that the SO587 disk in \(\sigma\)-Orionis is currently being externally photoevaporated, a claim that has recently also been supported by photoevaporative models (Ballabio et al., 2023).
Additional forbidden emission line data analyzed by Gangi et al. (2023) for 3 targets in the \(\sigma\)-Orionis cluster are however still not conclusive tell-tale tests of external photoevaporation, due both to the strong nebular contamination and the small sample. The other observational proxy of external photoevaporation, the correlation between \(M_{\rm disk}\) and \(\dot{M}_{\rm acc}\), has not yet been well established due to the lack of accurate mass accretion rates for sources with detected sub-mm fluxes. Previous estimates of accretion rates for \(\sigma\)-Orionis members were obtained either for a small sub-sample of very low-mass stars (Rigliaco et al., 2012) or using indirect tracers such as U-band photometry (Rigliaco et al., 2011) or the H\(\alpha\) line from low-resolution spectroscopy (Mauco et al., 2016). Therefore, this latter proxy is for the first time used in this work for the \(\sigma\)-Orionis cluster. Here we present the results of the first large-scale spectroscopic survey of disk-bearing stars in the \(\sigma\)-Orionis cluster in which mass accretion rates are analyzed together with - new and previously published - disk masses. Our main objective is to study, for the first time, the relationship between \(\dot{M}_{\rm acc}\) and \(M_{\rm disk}\), and to further constrain the dependence of \(M_{\rm disk}\) with the distance from the massive system \(\sigma\)-Ori. After describing the sample in Sect. 2 and the observations and data reduction in Sect.3, we present our results on stellar parameters, and disk mass estimates in Sect. 4. We discussed the implications of our findings in the context of external photoevaporation in Sect. 5. Finally, we summarize our conclusions in Sect. 6. ## 2 Sample The \(\sigma\)-Orionis cluster is located in the Orion OB1 association, which is one of the largest and nearest OB associations spanning over 200 deg\({}^{2}\) on the sky (see the review in Reipurth, 2008). Their OB stars were first recognized by Garrison (1967) as a group of 15 B-type stars around the massive hierarchical triple system \(\sigma\)-Ori, whose most massive component is an O9.5V star (Caballero, 2007; Simon-Diaz et al., 2015), shaping the photodissociation region known as the Horsehead Nebula (e.g., Abergel et al., 2003; Pety et al., 2005) and setting the UV field strength in the cluster (see Fig. 11). In the last decades, several hundred low-mass stars and brown dwarfs have been already identified as part of the cluster (e.g., Reipurth 2008). The disks around the low-mass stars were first identified using _Spitzer_ photometry (Hernandez et al. 2007; Luhman et al. 2008) and then followed with _Herschel_(Mauco et al. 2016) and, more recently, imaged with ALMA at 1.3 mm (Ansdell et al. 2017) and followed down to the brown dwarf limit (Damian et al. 2023a,b). The low reddening toward its center (E(B-V) \(\lesssim\) 0.1 mag, e.g., Brown et al. 1994; Bejar et al. 1999; Sherry et al. 2008) makes it an excellent natural laboratory to study protoplanetary disk evolution in the entire range of stellar masses and in the context of externally irradiated disks in moderate-to-high UV environments. Our X-Shooter sample consists of 50 disk-bearing stars in the \(\sigma\)-Orionis cluster with ALMA observations (Ansdell et al. 2017) and located at different projected distances from \(\sigma\)-Ori (see Fig. 1). Of the 50 stars observed with X-Shooter, 43 have been detected by ALMA. The sample includes the objects studied in Rigliaco et al. 
(2012, 2009), and mainly consists of late-K and M spectral type (SpT) stars at different evolutionary stages based on the classification of their spectral energy distribution (SED, Luhman et al. 2003), as reported by Hernandez et al. (2007); Rigliaco et al. (2011); Mauco et al. (2016). Our sample includes five disks with central cavities or transition disks (TD), one class I star (SO1153), which in the Luhman et al. (2003) classification points to a strong IR excess rather than to an embedded object (this source is visible at UV-optical wavelengths), and the rest are class II stars. The list of the observed targets is reported in Table 1. The _Gaia_ EDR3 astrometric solutions for the sample are generally good, with low renormalized unit weight errors (RUWE). Only 8 targets (SO397, SO490, SO563, SO583, SO587, SO736, SO823, SO897) have RUWE values \(>\) 1.4, considered an appropriate nominal limit for _Gaia_ EDR3 (Gaia Collaboration et al. 2021). For all targets, we assumed the individual distances inverting the parallaxes from Gaia EDR3 (arithmetic distances, Gaia Collaboration et al. 2021). We then estimated the average distance to the cluster, considering only sources with RUWE \(<\) 1.4, and found a median distance of 401 pc. This is compatible with the values reported by Damian et al. (2023b). Therefore, for all our targets we assumed their arithmetic distances unless the values were unreliable - RUWE \(>\) 1.4 and/or distance differing more than 60 pc from the mean distance to the region (target SO936) - or not available (targets SO435, SO562, and SO1155), in which case we assumed the median distance to the members of the region. Distances for the sample are also listed in Table 1. Through this analysis, we found four targets, namely SO73, SO299, SO411, and SO848, whose distances are lower than the median by \(\sim\)40 pc and yet have RUWE values \(<\)1.4. These can be possible members of the more sparse Orion OB1a sub-association in front of \(\sigma\)-Orionis (Briceno et al. 2019). For SO411 this seems to be the case based on its proper motions (Perez-Blanco et al. 2018), however, for the rest of these stars we cannot know for certain. Therefore, we have included them in the analysis assuming their arithmetic distances from _Gaia_, and we have pointed them out whenever they appear as outliers from the main population. Similarly, the star SO828 with a distance of 449.5 pc (i.e., \(\sim\)50 pc away from the median distance to the members of the region) is treated in the same way. ## 3 Observations, and data reduction ### Spectroscopy with VLT/X-Shooter Observations were carried out between October 2019 and February 2020 (Pr.Id. 0104.C-0454(A), PI Ansdell) and between November 2021 and January 2022 (Pr.Id. 108.22CB.001, PI Ansdell) in Service Mode at the ESO Very Large Telescope (VLT). The X-Shooter instrument (Vernet et al. 2011) was used for all observations. This instrument acquires spectra simultaneously in three arms: UVB (\(\lambda\sim 300-550\) nm), VIS (\(\lambda\sim 500-1050\) nm), and NIR (\(\lambda\sim 1000-2500\) nm). All the stars were observed with a nodding pattern using a set of narrow slits (1.0"-0.4"-0.4" in the UVB-VIS-NIR arms, respectively, yielding the highest spectral resolution (\(\sim\) 5400, 18400, 11600, respectively). For flux calibrating the spectra, a short (\(\sim\)1 min to 10 min depending on target brightness) exposure in stare with a set of wide slits (5.0") prior to the science exposure was taken. 
Data reduction was done using the X-Shooter pipeline v.3.2.0 (P104 data) and v.3.5.0 (P108 data) (Modigliani et al. 2010) run within the ESO Reflex environment (Freudling et al. 2013), using the same procedure as in previous similar analyses (e.g., Alcala et al. 2017; Manara et al. 2020; Venuti et al. 2019). The pipeline runs the classical reduction steps, including bias subtraction, flat-fielding, wavelength calibration, flexure and atmospheric dispersion correction, background removal (in stare mode) or combination of spectra obtained in a nodding cycle, and the extraction of the 1D spectrum. Telluric correction was then performed on the high-resolution spectra with the molecfit tool (Smette et al. 2015), which models the telluric absorption lines in the observed spectra using information on the atmospheric conditions in the night. Finally, the high-resolution spectra were rescaled to those obtained with the wider slit in order to account for slit losses and obtain absolute flux calibration. This methodology leads to accurate flux calibration of the spectra (e.g., Manara et al. 2021). Particular care was taken in the case of the resolved binary system SO1267, where the two traces of the two targets, separated by 1.4", were manually extracted using the IRAF software. Throughout this paper, the source indicated as SO1267 refers to SO1267A. For the targets observed on nights with humidity higher than \(\sim\)40% or with PWV\(\sim\)9.5 mm, we used the flux standard observed in the closest night with optimal conditions, to avoid introducing incorrect shapes in the NIR arm of the spectra. Finally, for SO844 and SO1154 we rescaled the narrow-slit spectra to non-simultaneous photometric data, since the wide-slit spectra had unreliable fluxes lower than the narrow-slit ones, possibly due to the presence of thin cirrus at the time of the observations.

Figure 1: Spatial distribution of \(\sigma\)-Orionis sources (points) and massive OB stars (star symbols) in the cluster. The massive, multiple system \(\sigma\)-Ori is indicated in cyan, while the rest of the B-type stars are in gray. The color bar shows the incident FUV field strength (in terms of the dimensionless parameter \(G_{\rm o}\)) due to the massive stars. Black circles show projected distances of 0.5, 1.2, and 2.0 pc.

### ALMA cycle 4 data

In this paper we use new, higher-sensitivity Band 6 Cycle 4 ALMA observations obtained with eight Execution Blocks (EBs) on 29, 30 October 2016, 2, 3 November 2016, 14 May 2017, 2, and 4 July 2017 (Project ID: 2016.1.00447.S; PI: Williams). The array configuration used between 40 and 44 12-m antennas, with baselines of \(\sim\)20-2650 m in July 2017, leading to a spatial resolution of \(\sim\)0.18", and shorter baselines of \(\sim\)15 - 1125 m in May 2017 and in 2016, with corresponding spatial resolution \(\sim\)0.26". The correlator setup included two broad-band continuum windows centered on 234.3 and 216.5 GHz with bandwidths of 1.875 GHz and channel widths of 31.25 and 1.129 MHz, respectively. The bandwidth-weighted mean continuum frequency was 225.77 GHz (1.33 mm). The spectral windows covered the \({}^{12}\)CO (230.538 GHz), \({}^{13}\)CO (220.399 GHz), and C\({}^{18}\)O (219.560 GHz) \(J=2-1\) transitions at velocity resolutions of 0.079 - 0.096 km/s. These spectral windows had bandwidths of 58.59 MHz and channel widths of 60.6 kHz - 0.071 MHz. The raw data were pipeline calibrated at NRAO using the CASA package (version 4.7.2).
The pipeline calibration included: absolute flux calibration with observations of J0522-3627 or J0423-0120; bandpass calibration with observations of J0510+1800 or J0522-3627; and gain calibration with observations of J0532-0307. We estimate an absolute flux calibration error of \(\sim\)10% based on the amplitude variations of gain calibrators over time. The imaging of the continuum and line data was performed similarly to what was done by Ansdell et al. (2017), cleaning with a Briggs robust weighting parameter of 0.5. We find a median 1.33 mm continuum RMS of 50 \(\mu\)Jy, while the median \({}^{12}\)CO RMS is 11 mJy in 0.5 km s\({}^{-1}\) channels. The achieved RMS for the Representative Window centered on \({}^{13}\)CO (\(J=2-1\)) (220.399 GHz) is 9.5 mJy beam\({}^{-1}\) with a channel width of 0.096 km/s and a 0.30\(\times\)0.22 arcsec beam, while the requested sensitivity was 3.3 mJy beam\({}^{-1}\) over 1.0 km s\({}^{-1}\) and a beam size of 0.22 arcsec. The achieved continuum RMS is \(4.5\times 10^{-2}\) mJy beam\({}^{-1}\) with a bandwidth of 3.4 GHz and a 0.27\(\times\)0.19 arcsec beam. Continuum and \({}^{12}\)CO images are shown in Figs. 16 and 17, respectively, in Appendix C.1.

## 4 Results

### Stellar and accretion properties

X-Shooter provides absolute flux calibrated spectra with sufficient spectral resolution and wavelength coverage to simultaneously characterize stellar, accretion, wind, jet, and ionization properties of young stellar objects (e.g., Bacciotti et al., 2011; Rigliaco et al., 2012; Alcala et al., 2014; Frasca et al., 2017; Manara et al., 2016, 2021). The continuum regions needed to determine stellar and accretion parameters range from \(\lambda\sim\) 300-364 nm (the Balmer continuum) to \(\lambda\sim\) 700 nm (where several molecular bands are present). Various absorption lines along the spectrum are required to constrain stellar spectral type and photospheric parameters (e.g., Manara et al., 2013a).

Figure 2: Hertzsprung-Russell diagram for \(\sigma\)-Orionis disks (orange circles) including those from R12. Sources from other SFRs are shown by gray symbols. Isochrones for 1, 3, 5, 10, 30, and 100 Myr from Siess et al. (2000) are overplotted. Evolutionary tracks are from Baraffe et al. (2015).

Figure 3: Ratio between the accretion and stellar luminosities vs effective temperature. \(\sigma\)-Orionis sources are indicated by orange circles, while stars in other young SFRs by gray symbols. The dotted and dashed lines represent the locus of the chromospheric emission defined by Manara et al. (2013b, 2017a). Downward triangles indicate the non-accretors identified in this work.

In order to derive the stellar and accretion properties of the targets, we follow the same fitting procedure as Manara et al. (2013a). In short, we model the spectra by adding a photospheric template spectrum plus a slab model to match the observed, dereddened spectrum. The grid of Class III photospheric templates includes targets with SpT from G- to late M taken from Manara et al. (2013b, 2017a), different slab models, and extinction values (\(A_{V}\)), assuming the Cardelli et al. (1989) reddening law (\(R_{V}\) = 3.1). The output from the models is the excess luminosity due to accretion (\(L_{\rm acc}\)), given by the integrated flux of the best-fit slab models, and the stellar luminosity (\(L_{\star}\)), which is estimated by measuring the normalization of the Class III templates that best match the observations. Distances were estimated as described in Sect. 2. In Fig.
D.5 in the appendix, we show the best-fit spectrum of each of our targets. We note that, as expected, \(A_{V}\) is typically low, reaching values above or equal to 1.0 mag only in 7 targets. For the sake of comparison with other star-forming regions, we considered the same assumptions as Manara et al. (2023) and derived all the stellar and accretion parameters in a similar way. Therefore, we measure all luminosities (\(L_{\star}\), \(L_{\rm acc}\)) using the new _Gaia_ distances and obtain \(T_{\rm eff}\) from SpT using the calibration by Herczeg & Hillenbrand (2014). In Table 2 we list the stellar and accretion parameters estimated for our sample, including those from the Rigliaco et al. (2012) sample, which are recalculated with the same assumptions that we just stated, including rescaling the distance from 360 pc to the _Gaia_-based ones. Using the \(T_{\rm eff}\) and \(L_{\star}\) from the best-fit we were able to locate each target on the Hertzsprung-Russell diagram (HRD), as shown in Fig. 2. The targets in the \(\sigma\)-Orionis cluster are located in the region of the HRD consistent with their expected age (3-5 Myrs). Three targets are located at lower \(L_{\star}\) with respect to the bulk of the population at the same \(T_{\rm eff}\), namely SO500, SO848, and, SO1154. SO500 is a known brown dwarf (Rigliaco et al. 2011) and its location on the HRD is in line with other substellar objects. For SO1154, partial obscuration of the star by a highly inclined disk could explain their positions on the HRD. A highly inclined disk can add gray extinction and make the star under-luminous, resulting in more uncertain estimates of \(L_{\star}\) and of the mass accretion rate (Alcala et al. 2014). This target is the one with the highest measured \(A_{V}\)=1.8 mag, supporting the hypothesis of (partial) obscuration by a disk. Finally, SO848 could either be also a highly inclined disk, or a foreground object, as discussed in Sect. 2. In order to check the estimates from the fit, we compared the values of \(L_{\rm acc}\) obtained with the fitting procedure described above, with those derived from the luminosity of 10 emission lines, namely CaK, H\(\delta\), H\(\gamma\), H\(\beta\), HeI587 nm, H\(\alpha\), HeI667 nm, Pay, Pa\(\beta\), Br\(\gamma\), using the relations between line and accretion luminosity by Alcala et al. (2017). The mean value of \(L_{\rm acc}\) derived from the emission lines is generally in agreement with the one obtained by fitting the continuum in the X-Shooter spectrum with no dependence on the wavelengths of the lines, pointing toward correctly estimated \(A_{V}\). Figure 3 shows the ratio between accretion and stellar luminosities as a function of the effective temperature, which is a diagram used to check whether the measured accretion luminosity is larger than typical chromospheric emission (Manara et al. 2013a). Assuming the locus of chromospheric emission defined by Manara et al. (2017a), we found 6 non-accreting targets in our sample (downward triangles). As shown in Fig. D.5 these sources exhibit negligible UV excess, in line with their non-accreting nature. The rest of accreting targets have similar \(L_{\rm acc}\)/\(L_{\star}\) values at any given \(T_{\rm eff}\) as those found in other star-forming regions, in line with previous results. After locating the targets in the HRD, we derive \(M_{\star}\) using the non-magnetic models of Baraffe et al. (2015) for colder stars (\(T_{\rm eff}\leq\)3900 K), and of Feiden (2016) for hotter stars (\(T_{\rm eff}>\)3900 K). 
For targets having stellar properties outside of the range of values sampled by these models, we used the Siess et al. (2000) models instead. Finally, the \(\dot{M}_{\rm acc}\) is obtained from the classic relation \(\dot{M}_{\rm acc}=1.25\times L_{\rm acc}R_{\star}/(GM_{\star})\) from Hartmann et al. (1998), using \(L_{\rm acc}\) from the fit. The stellar and accretion parameters of the sample are found in Table 2. The relation between \(\dot{M}_{\rm acc}\) and \(M_{\star}\) is shown in Fig. 4 (top panel). Given the expected uncertainties on both quantities (error bar), the \(\sigma\)-Orionis disks seem to populate the same parameter space as the one covered by other young SFRs like Lupus, and Chameleon I, and even by the older (5-10 Myr; Pecaut & Mamajek 2016) Upper-Scorpius (USco). This will be further discussed in Sect. 5.2. ### Disk masses The disk masses are estimated through their submm ALMA flux at 1.3 mm (band 6) from cycle 3 (C3, Ansdell et al. 2017) and, when available, from our new, deeper ALMA observations taken in cycle 4 (C4) and reported here (Sect. 3.2). ALMA continuum fluxes are estimated as in Ansdell et al. (2017), that is by fitting point-source models to the visibility data using the _uvmodelfit_ routine in _CASA_. More information on the ALMA data is Figure 4: _Top_: Mass accretion rates vs stellar mass. The expected uncertainties are indicated by the error bars at the top left. _Bottom_: Disk masses vs stellar mass. All the targets from our sample and from Rigliaco et al. (2012) are plotted. Downward triangles indicate upper limits. The vertical dashed line indicates a \(M_{\star}\)= 0.4 \(M_{\odot}\). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Name & RA\({}_{2000}\) & Dec\({}_{2000}\) & Distance & \(d_{\rm p}\) & Log \(G_{o}\) & Disk type \\ & hh:mm:ss.s & dd:mm:ss.s & [pc] & [pc] & & \\ \hline SO73 & 05:37:30.95 & -02:23:42.8 & 359.2\({}^{+2.4}_{-4.2}\) & 2.32 & 2.34 & – \\ SO299 & 05:38:00.97 & -02:26:07.9 & 355.5\({}^{+4.4}_{-4.3}\) & 1.52 & 2.70 & TD \\ SO341 & 05:38:06.74 & -02:30:22.8 & 409.0\({}^{+4.4}_{-4.3}\) & 1.31 & 2.83 & II \\ SO362 & 05:38:08.27 & -02:35:56.3 & 402.3\({}^{+4.8}_{-4.6}\) & 1.07 & 3.01 & II \\ SO397 & 05:38:13.20 & -02:26:08.8 & 401.0 & 1.47 & 2.73 & II \\ SO411 & 05:38:14.12 & -02:15:59.8 & 365.5\({}^{+2.2}_{-2.2}\) & 2.28 & 2.35 & TD \\ SO467 & 05:38:21.19 & -02:54:11.1 & 383.3\({}^{+9.0}_{-8.6}\) & 2.13 & 2.41 & – \\ SO490 & 05:38:23.58 & -02:20:47.6 & 401.0 & 1.88 & 2.52 & II \\ SO500 & 05:38:25.44 & -02:42:41.3 & 409.2\({}^{+5.4}_{-3.72}\) & 0.98 & 3.09 & II \\ SO518 & 05:38:27.26 & -02:45:09.7 & 399.0\({}^{+4.0}_{-3.9}\) & 1.18 & 2.93 & II \\ SO520 & 05:38:27.51 & -02:35:04.2 & 402.6\({}^{+6.5}_{-6.3}\) & 0.52 & 3.64 & II \\ SO540 & 05:38:29.16 & -02:16:15.7 & 406.0\({}^{+3.6}_{-3.5}\) & 2.38 & 2.32 & II \\ SO562 & 05:38:31.41 & -02:36:33.8 & 401.0 & 0.39 & 3.88 & II \\ SO563 & 05:38:31.58 & -02:35:14.9 & 401.0 & 0.39 & 3.88 & II \\ SO583 & 05:38:33.68 & -02:44:14.2 & 401.0 & 1.01 & 3.06 & II \\ SO587 & 05:38:34.06 & -02:36:37.5 & 401.0 & 0.32 & 4.06 & II \\ SO646 & 05:38:39.03 & -02:45:32.2 & 404.6\({}^{+6.8}_{-6.6}\) & 1.13 & 2.96 & II \\ SO662 & 05:38:40.27 & -02:30:18.5 & 401.2\({}^{+3.4}_{-3.3}\) & 0.68 & 3.41 & II \\ SO682 & 05:38:42.28 & -02:37:14.8 & 409.8\({}^{+4.8}_{-4.7}\) & 0.17 & 4.63 & II \\ SO687 & 05:38:43.02 & -02:36:14.6 & 412.8\({}^{+4.3}_{-4.2}\) & 0.06 & 5.52 & II \\ SO694 & 05:38:43.87 & -02:37:06.8 & 392.3\({}^{+9.6}_{-9.2}\) & 0.13 & 4.85 & – \\ SO697 & 05:38:44.23 & -02:40:19.7 & 
404.5\({}^{+2.4}_{-2.4}\) & 0.51 & 3.66 & II \\ SO726 & 05:38:47.46 & -02:35:25.2 & 403.9\({}^{+7.0}_{-6.8}\) & 0.10 & 5.03 & II \\ SO736 & 05:38:48.04 & -02:27:14.2 & 401.0 & 1.03 & 3.05 & II \\ SO739 & 05:38:48.19 & -02:44:00.8 & 433.3\({}^{+2.3}_{-2.03}\) & 1.02 & 3.06 & II \\ SO774 & 05:38:52.01 & -02:46:43.7 & 403.3\({}^{+3.4}_{-3.3}\) & 1.28 & 2.86 & II \\ SO818 & 05:38:58.32 & -02:16:10.1 & 405.4\({}^{+4.2}_{-4.1}\) & 2.37 & 2.32 & TD \\ SO823 & 05:38:59.11 & -02:47:13.3 & 401.0 & 1.37 & 2.79 & II \\ SO844 & 05:39:01.37 & -02:18:27.5 & 415.5\({}^{+3.8}_{-3.7}\) & 2.18 & 2.39 & II \\ SO848 & 05:39:01.94 & -02:35:02.9 & 356.3\({}^{+18.0}_{+16.3}\) & 0.46 & 3.75 & II \\ SO859 & 05:39:02.98 & -02:41:27.2 & 407.9\({}^{+6.6}_{-6.4}\) & 0.84 & 3.22 & II \\ SO897 & 05:39:07.61 & -02:32:39.1 & 401.0 & 0.77 & 3.29 & TD \\ SO927 & 05:39:11.51 & -02:31:06.5 & 413.6\({}^{+4.8}_{-4.7}\) & 1.0 & 3.07 & II \\ SO984 & 05:39:18.83 & -02:30:53.1 & 409.6\({}^{+3.2}_{-3.1}\) & 1.18 & 2.92 & II \\ SO1036 & 05:39:25.20 & -02:38:22.0 & 395.0\({}^{+3.5}_{-3.4}\) & 1.19 & 2.92 & II \\ SO1075 & 05:39:29.35 & -02:27:21.0 & 390.0\({}^{+8.6}_{-8.2}\) & 1.60 & 2.66 & II \\ SO1152 & 05:39:39.38 & -02:17:04.5 & 398.3\({}^{+3.9}_{-3.8}\) & 2.71 & 2.21 & – \\ SO1153 & 05:39:39.82 & -02:31:21.8 & 396.6\({}^{+4.3}_{-4.2}\) & 1.68 & 2.62 & I \\ SO1154 & 05:39:39.83 & -02:33:16.0 & 401.0 & 1.64 & 2.64 & – \\ SO1155 & 05:39:39.90 & -02:43:09.0 & 401.0 & 1.81 & 2.55 & – \\ SO1156 & 05:39:40.17 & -02:20:48.0 & 403.8\({}^{+2.6}_{-2.6}\) & 2.42 & 2.30 & II \\ SO1248 & 05:39:51.73 & -02:22:47.2 & 398.4\({}^{+7.9}_{-7.6}\) & 2.47 & 2.28 & – \\ SO1260 & 05:39:53.63 & -02:33:42.7 & 386.3\({}^{+6.4}_{-6.2}\) & 1.95 & 2.49 & II \\ SO1266 & 05:39:54.21 & -02:27:32.6 & 399.1\({}^{+11.0}_{-10.4}\) & 2.24 & 2.37 & II \\ SO1267 & 05:39:54.29 & -02:24:38.6 & 400.5\({}^{+5.3}_{-5.2}\) & 2.42 & 2.30 & – \\ SO1274 & 05:39:54.60 & -02:46:34.0 & 407.3\({}^{+2.7}_{-2.7}\) & 2.42 & 2.30 & II \\ SO1327 & 05:40:01 \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline Name & SpT & \(T_{\rm eff}\) & \(A_{V}\) & \(L_{\star}\) & log \(L_{\rm acc}\) & \(M_{\star}\) & log \(M_{\rm acc}\) & F\({}_{\rm mm}\) & \(M_{\rm dust}\) & F\({}_{{}^{2}CO}\) \\ & & [K] & [mag] & [\(L_{\odot}\)] & [\(L_{\odot}\)] & [\(M_{\odot}\)]/yr & [mJy] & [\(\,\)M\({}_{\odot}\)] & [mJy] \\ \hline SO73 & M3 & 3410 & 1.0 & 0.2 & -1.13 & 0.29 & -7.89 & \(0.53\pm 0.13\) & \(1.6\pm 0.4\) & \(<66.0\) \\ SO299 & M3.5 & 3300 & 0.2 & 0.22 & -2.62 & 0.24 & -9.26 & \(1.01\pm 0.14\) & \(3.0\pm 0.4\) & \(<66.0\) \\ SO341 & M0 & 3900 & 0.8 & 0.55 & -1.18 & 0.59 & -8.14 & \(1.19\pm 0.13\) & \(3.5\pm 0.1\) & \(<\)34.35 \\ SO362 & M3 & 3410 & 1.4 & 0.6 & -0.7 & 0.3 & -7.23 & \(0.56\pm 0.13\) & \(1.6\pm 0.1\) & \(<\)34.02 \\ SO397 & M4.5 & 3085 & 0.0 & 0.24 & -2.62 & 0.19 & -9.06 & \(<0.4\) & \(<1.6\) & \(<69.0\) \\ SO411 & G4 & 5516 & 0.6 & 11.67 & -0.4 & 2.65 & -7.66 & \(5.16\pm 0.13\) & \(17.1\pm 0.1\) & \(130.35\pm 18.02\) \\ SO467 & M5.5 & 2920 & 0.3 & 0.07 & -3.18 & 0.1 & -9.57 & \(0.61\pm 0.13\) & \(2.1\pm 0.5\) & \(<66.0\) \\ SO490 & M5.5 & 2920 & 0.0 & 0.1 & -3.01 & 0.13 & -9.41 & \(<0.4\) & \(<1.6\) & \(<72.0\) \\ SO500 & M6 & 2860 & 0.0 & 0.02 & -3.84 & 0.06 & -10.22 & \(<0.4\) & \(<1.6\) & \(<63.0\) \\ SO518 & K6 & 4115 & 1.6 & 0.48 & -0.69 & 0.8 & -7.86 & \(0.52\pm 0.13\) & \(2.1\pm 0.1\) & \(96.61\pm 18.57\) \\ SO520 & M4.5 & 3085 & 0.1 & 0.23 & -2.01 & 0.18 & -8.45 & \(0.52\pm 0.14\) & \(2.0\pm 0.5\) & \(<69.0\) \\ SO540 & K6 & 4115 & 0.5 & 0.57 & -1.84 & 0.77 & -8.96 
& \(10.69\pm 0.29\) & \(46.4\pm 0.3\) & \(1306.92\pm 45.33\) \\ SO562 & M5.5 & 2920 & 0.3 & 0.26 & -1.44 & 0.15 & -7.7 & \(0.71\pm 0.13\) & \(2.9\pm 0.1\) & \(<\)33.66 \\ SO563 & M0 & 3900 & 0.6 & 0.36 & -1.27 & 0.64 & -8.36 & \(0.18\pm 0.04\) & \(0.7\pm 0.1\) & \(<33.0\) \\ SO583 & K4 & 4375 & 1.0 & 4.06 & -0.69 & 1.18 & -7.62 & \(1.9\pm 0.13\) & \(7.1\pm 0.1\) & \(68.95\pm 12.52\) \\ SO587 & M4.5 & 3085 & 0.0 & 0.35 & -3.91 & 0.21 & -10.31 & \(<0.1\) & \(<0.4\) & \(<33.6\) \\ SO646 & M3.5 & 3300 & 0.0 & 0.12 & -2.9 & 0.25 & -9.66 & \(<0.4\) & \(<1.6\) & \(<69.0\) \\ SO662 & K7 & 4020 & 0.3 & 0.68 & -3.79 & 0.64 & -10.77 & \(1.54\pm 0.14\) & \(8.8\pm 0.2\) & \(<33.99\) \\ SO682 & M0 & 3900 & 0.7 & 0.76 & -2.02 & 0.57 & -8.89 & \(0.41\pm 0.14\) & \(1.0\pm 0.1\) & \(<\)30.78 \\ SO687 & M1 & 3720 & 0.8 & 0.73 & -1.21 & 0.44 & -7.94 & \(0.28\pm 0.04\) & \(1.1\pm 0.1\) & \(<32.1\) \\ SO694 & M5.5 & 2920 & 0.1 & 0.16 & -2.51 & 0.12 & -8.82 & \(0.61\pm 0.14\) & \(2.2\pm 0.5\) & \(<69.0\) \\ SO697 & K6 & 4115 & 0.2 & 0.97 & -3.11 & 0.67 & -10.05 & \(0.16\pm 0.04\) & \(0.6\pm 0.1\) & \(<33.9\) \\ SO726 & M0 & 3900 & 0.6 & 0.56 & -2.19 & 0.59 & -9.15 & \(0.18\pm 0.04\) & \(0.7\pm 0.1\) & \(<33.4\) \\ SO736 & K7 & 4020 & 0.1 & 1.49 & -1.48 & 0.55 & -8.23 & \(0.45\pm 0.14\) & \(2.8\pm 0.1\) & \(<\)35.88 \\ SO739 & M6.5 & 2815 & 0.1 & 0.1 & -3.06 & 0.1 & -9.35 & \(0.52\pm 0.14\) & \(2.3\pm 0.6\) & \(<69.0\) \\ SO774 & K7 & 4020 & 0.0 & 0.49 & -2.75 & 0.7 & -9.84 & \(0.76\pm 0.14\) & \(3.4\pm 0.1\) & \(104.2\pm 15.91\) \\ SO818 & K7 & 4020 & 0.4 & 0.29 & -2.11 & 0.78 & -9.36 & \(1.97\pm 0.15\) & \(7.5\pm 0.6\) & \(514.0\pm 58.0\) \\ SO823 & K7 & 4020 & 1.5 & 0.32 & -2.43 & 0.77 & -9.66 & \(0.17\pm 0.04\) & \(0.6\pm 0.1\) & \(<32.2\) \\ SO844 & M1 & 3720 & 0.7 & 0.62 & -1.37 & 0.44 & -8.14 & \(2.85\pm 0.14\) & \(15.3\pm 0.1\) & \(172.14\pm 16.73\) \\ SO848 & M4 & 3190 & 0.0 & 0.02 & -3.51 & 0.17 & -10.47 & \(0.52\pm 0.14\) & \(1.5\pm 0.4\) & \(<66.0\) \\ SO859 & M3 & 3410 & 0.6 & 0.41 & -1.72 & 0.29 & -8.31 & \(2.49\pm 0.14\) & \(9.7\pm 0.6\) & \(<69.0\) \\ SO897 & K6 & 4115 & 0.6 & 0.85 & -1.34 & 0.7 & -8.33 & \(1.71\pm 0.14\) & \(6.8\pm 0.1\) & \(78.54\pm 15.28\) \\ SO927 & M0 & 3900 & 0.6 & 0.33 & -1.92 & 0.65 & -9.03 & \(1.41\pm 0.15\) & \(8.0\pm 0.1\) & \(75.95\pm 10.77\) \\ SO984 & K7 reported in Appendix C.1, which includes the comparison between ALMA fluxes from C3 and C4 observations in Fig. C.1. The measured fluxes are reported in Table 2. In total, we have 6 new continuum detections from the C4 observations. These continuum fluxes were converted to dust masses taking into account the same assumptions as Manara et al. (2023) namely, following Ansdell et al. (2016), we used a prescription for the opacity, \(\kappa_{\nu}=2.3(\nu/230\rm GHz)cm^{2}/g\), taken from Beckwith et al. (1990). We used a single dust temperature, \(T_{\rm dust}=20\) K, which has been empirically demonstrated to be a good disk-average value (Tazzari et al., 2021). The total disk mass is then obtained by multiplying the \(M_{\rm dust}\) by a gas-to-dust ratio of 100. We rescaled the dust masses of Ansdell et al. (2017), which were estimated assuming \(d=385\) pc. The rescaled dust masses and their errors are reported in Table 2. The dependence of \(M_{\rm dust}\) with the stellar mass is reported in Fig. 4, and shows a similar trend of increasing dust mass with stellar mass as in other star-forming regions, although with a large spread at \(M_{\star}\)\(>\)0.4 \(M_{\odot}\) (vertical dashed line). 
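The conversions adopted above, and the FUV field strengths \(G_{0}\) listed in Table 1, can be illustrated with a minimal sketch (assuming Python with astropy; the FUV luminosity below is an illustrative value for a late-O star rather than the exact one adopted in Appendix A, and small differences with respect to the tabulated numbers can arise from the exact constants used):

```python
import numpy as np
import astropy.units as u
import astropy.constants as const

def planck_bnu(nu, T):
    """Planck function B_nu(T) per steradian, in cgs units."""
    x = (const.h * nu / (const.k_B * T)).decompose().value
    return (2 * const.h * nu**3 / const.c**2 / np.expm1(x)).to(
        u.erg / (u.cm**2 * u.s * u.Hz))

def dust_mass(flux, distance, nu=225.77 * u.GHz, t_dust=20 * u.K):
    """Optically thin dust mass, M_dust = F_nu d^2 / (kappa_nu B_nu(T_dust)),
    with kappa_nu = 2.3 (nu / 230 GHz) cm^2/g and T_dust = 20 K."""
    kappa = 2.3 * (nu / (230 * u.GHz)).decompose().value * u.cm**2 / u.g
    return (flux * distance**2 / (kappa * planck_bnu(nu, t_dust))).to(u.M_earth)
    # total disk mass: multiply the dust mass by the assumed gas-to-dust ratio of 100

def fuv_g0(d_proj, l_fuv=6e4 * u.L_sun):
    """FUV field in Habing units at projected distance d_proj, assuming pure
    geometric dilution and no extinction; l_fuv is an illustrative FUV
    luminosity for a late-O star (an assumption, not the value of Appendix A)."""
    habing = 1.6e-3 * u.erg / (u.cm**2 * u.s)
    f_fuv = l_fuv / (4 * np.pi * d_proj**2)
    return (f_fuv / habing).decompose().value

# A 1 mJy source at 400 pc corresponds to a few Earth masses of dust, and a
# disk at 0.5 pc (projected) from sigma Ori sees G_0 of order 10^3-10^4.
print(dust_mass(1.0 * u.mJy, 400 * u.pc))
print(np.log10(fuv_g0(0.5 * u.pc)))
```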
We do not attempt a fit of the relation as in Ansdell et al. (2017), as we will describe in Sect. 5.2 how we think that, in \(\sigma\)-Orionis, the spread is possibly a consequence of external photoevaporation. We do not attempt to derive disk gas masses from the new detections of \({}^{12}\)CO in the C4 data. However, we will use the fluxes of \({}^{12}\)CO, measured as in Ansdell et al. (2017) using a curve-of-growth method on the moment 0 maps for the detected targets. In total, the C4 data lead to 13 new \({}^{12}\)CO detections. More information is provided in Appendix C.1. ## 5 Discussion ### Dependence of disk mass with projected separation (and UV flux) from \(\sigma\)-Ori As discussed in Ansdell et al. (2017), a dearth of massive (\(M_{\rm dust}>3\rm M_{\oplus}\)) disks close (\(<0.5\) pc) in projected distance to the central O9 star \(\sigma\)-Ori was found in the \(\sigma\)-Orionis region, together with a shallow distance-dependent trend in disk dust mass. This result, similarly found in Mann et al. (2014, 2015) for other clusters in Orion, suggested that external photoevaporation may be a viable mechanism for disk depletion. In this work, we have included deeper ALMA data with 6 new detections (see Sect. 3.2). The updated \(M_{\rm dust}\) distribution as a function of the projected separation from \(\sigma\)-Ori is shown in Fig. 5. We confirm the lack of any disk more massive than \(\sim 3M_{\oplus}\) in the inner \(\sim\)0.5 pc of the cluster, and again a shallow distance-dependent trend of \(M_{\rm dust}\). The new detections further reinforce the limit in the inner part of the cluster, with detections of disks as low mass as less than \(1M_{\oplus}\), and even more stringent upper limits. This strengthens the claim that many disks close to the ionizing star \(\sigma\)-Ori have extremely low masses due to its irradiation. To further quantify the level at which \(\sigma\)-Ori affects the stars, we calculate the FUV radiation field strength due to the central OB system (see Appendix A for details). This is dominated by the radiation of \(\sigma\)-Ori alone. The top axis of Fig. 5 reports this FUV radiation strength expressed in terms of the Habing unit \(G_{0}\) (\(G_{0}=1.6\times 10^{-3}\) erg cm\({}^{-2}\) s\({}^{-1}\), Habing, 1968). The range of FUV values for this region is between \(10^{2}\) and \(10^{5}\)\(G_{0}\), lower than what is usually observed in the Orion Nebula Cluster (e.g., Winter and Haworth, 2022), but still significant. Indeed, previous findings suggested that even moderate FUV fields (\(\geq 2\times 10^{3}G_{0}\)) can drive significant disk mass loss (Facchini et al., 2016; Kim et al., 2016; Haworth et al., 2018), consistent with the observed trend. In particular, the radiation received by a disk at a projected separation of \(\sim\)0.5 pc from \(\sigma\)-Ori is \(\sim 10^{4}G_{0}\), and in this range, the disks are found to have severely lower disk masses than at larger distances. However, the most massive disks (\(M_{\rm dust}\)\(\gtrsim 10M_{\oplus}\)) are found only at projected distances larger than \(\sim\)1 pc, corresponding to FUV fields of \(\sim 10^{3}G_{0}\). Moreover, the CO detections, reported in Fig. 5 as blue circles, are found only at projected distances larger than 0.5 pc, although in a much higher fraction than that reported by Ansdell et al. 
(2017), mainly thanks to the deeper observations of C4 that were focused on the disks around higher-mass stars (\(M_{\star}\)\(\gtrsim 0.5\) M\({}_{\odot}\)) as they tend to have brighter millimeter emission. At this lower FUV field strength than the Orion Nebula Cluster or other massive regions, \(\sigma\)-Orionis is thus offering us the unique possibility to study external photoevaporation even at \(\sim\)3-5 Myr, where the effects are clearly detectable but the disks are not yet (all) dispersed. We note that the observed distance-dependent depletion of disks has been reproduced using external photoevaporative models (Winter et al., 2020), although with overestimated (by a factor of 2) disk dust masses. Although, according to Parker et al. (2021), this could be coincidental, it is interesting to report on this new observational result to further constrain the models. Additional information to further support the external photoevaporation hypothesis is then discussed in the next sections. ### Relations with stellar host mass Thanks to large surveys of young stars performed in various SFRs, global stellar and disk properties have been estimated revealing different relations between the various parameters. Among the well-established ones is that of the \(\dot{M}_{\rm acc}\) vs \(M_{\star}\), with a steeper-than-linear relation roughly as a power law with exponent \(\sim\)2 (e.g., Hillenbrand et al., 1992; Muzerolle et al., 2003; Natta et al., 2006), and reported spreads in \(\dot{M}_{\rm acc}\) values of about 1-2 dex (e.g., Alcala et al., 2014; Manara et al., 2016, 2017, 2023; Venuti et al., 2014, 2019; Hartmann et al., 2016). Recently, evidence of a double power-law fit of this relation has also been seen (Alcala et al., 2017; Manara et al., 2017), with a very steep relation for the lowest-mass stars (\(M_{\star}\)\(<0.2-0.3\)\(M_{\odot}\)) with slope \(\sim\)4.5 followed by a flatter relation (slope\(\sim\)1) at higher \(M_{\star}\). The distribution of the measured \(\dot{M}_{\rm acc}\) as a function of the \(M_{\star}\) for \(\sigma\)-Orionis sources are shown in the top panel of Fig. 4. These values reveal a great similarity with those found in other SFRs, like Lupus (Alcala et al., 2017), Chamaeleon I (Manara et al., 2017), and even the older USco SFR (Manara et al., 2020). A flatter dependence of \(\dot{M}_{\rm acc}\) on \(M_{\star}\) seems to be present at the highest \(M_{\star}\) even in our sample, suggesting that the broken power-law could be a better fit to the data, in line with previous studies. The similar range of \(\dot{M}_{\rm acc}\) as in other typically younger SFRs is at odds with the usually assumed decline of \(\dot{M}_{\rm acc}\) with age, a prediction of viscous evolution (e.g., Hartmann et al., 1998). This is however nowadays observed in several regions, from Orion OB1 (Ingleby et al., 2014; Manara et al., 2017; Pittman et al., 2022), to TWA (Venuti et al., 2019), \(\eta\)-Cha (Rugel et al., 2018), or even in the 30 Dor region (De Marchi et al., 2017). 
The reason why disks can sustain such high accretion rates for a time that is not compatible with the amount of mass accreted over their lifetime and with the total mass available in the disk is still the subject of discussion (e.g., Hartmann et al., 2006). It is possibly related to episodic accretion or other mechanisms (Manara et al., 2020) but, in our specific case, it could be a selection effect due to a combination of enhanced accretion caused by external photoevaporation (Rosotti et al., 2017) and the focus on just the disks that are not yet fully dispersed. Similarly to other star-forming regions, a large scatter of \(\dot{M}_{\rm acc}\) at any \(M_{\star}\) is observed for the \(\sigma\)-Orionis sources. Such a spread has been demonstrated not to be due to accretion variability or other sources of uncertainty (e.g., Manara et al. 2023, for a review) and its origin remains an open question. As also shown in Rigliaco et al. (2012) and Winter et al. (2020), we find a positive correlation between \(\dot{M}_{\rm acc}\) and \(M_{\star}\) and no correlation of \(\dot{M}_{\rm acc}\) with proximity to \(\sigma\)-Ori. The bottom panel of Fig. 4 shows another correlation that is also well established empirically for individual regions, the \(M_{\rm dust}\) vs \(M_{\star}\) relation. Several works surveying different SFRs have shown that \(M_{\rm dust}\) directly depends on \(M_{\star}\) with a slope around 1.8-2.7, with the larger values describing the older Upper Scorpius region (Ansdell et al. 2016, 2017; Barenfeld et al. 2016; Pascucci et al. 2016; Manara et al. 2023); the relation holds down to the brown dwarf regime (e.g., Testi et al. 2016; Sanchis et al. 2021; Rilinger and Espaillat 2021). The steepening with age has been interpreted as faster evolution of dust around low-mass stars, whether as a result of more efficient conversion of millimeter grains into larger centimeter grains or more efficient radial drift. Interestingly, the dispersion around the relation is very similar in all the regions (\(\sim\)0.8 dex). In the case of the \(\sigma\)-Orionis cluster, we find similar results to Ansdell et al. (2017), with sources populating a similar locus on this plane as in other SFRs. We also find a large scatter in \(M_{\rm dust}\) for a given stellar mass, particularly large around the more massive stars (\(M_{\star}\geq 0.4\)\(M_{\odot}\)) in our sample. Since the dispersion is present for all regions, regardless of age and environment, it has been acknowledged as an inherent property of disk populations resulting from the range of disk initial conditions and has been explained theoretically by invoking a mixture of both the initial conditions and the evolutionary process (Pascucci et al. 2016; Pinilla et al. 2020). However, we think that the origin of this dispersion at high stellar masses (\(M_{\star}\geq 0.4\)\(M_{\odot}\)) is possibly related to the effects of the massive star \(\sigma\)-Ori on the surrounding disks, as we discuss in the next subsection. #### 5.2.1 The effect of stellar mass on the disk mass depletion In the middle and right panels of Fig. 5, we show the distribution of \(M_{\rm dust}\) as a function of projected separation from \(\sigma\)-Ori for stars with \(M_{\star}\geq 0.4\)\(M_{\odot}\) and \(M_{\star}<0.4\)\(M_{\odot}\), respectively. Dashed lines indicate the median values of \(M_{\rm dust}\) for sources inside and outside a projected distance of 0.5 pc from the position of \(\sigma\)-Ori. Since SpT estimates are available from Hernandez et al.
(2014) for a sub-sample of stars with \(M_{\rm dust}\) upper limits (gray triangles on the left panel) and without X-Shooter spectra (i.e., without stellar mass estimates), we have added them as white downward triangles on these panels. Our SpT estimates are in good agreement, within the uncertainties, with those reported in Hernandez et al. (2014). The only three targets that deviate more than expected are two strong accretors (SO562, SO1075) and one highly extincted star (SO823). We assigned the objects with SpT earlier than M2 to the higher-mass panel, and those with later SpT to the lower-mass panel. The choice is motivated by the correspondence between SpT and \(M_{\star}\) found in the X-Shooter sample. The \(M_{\rm dust}\) medians taking into account these additional values are shown with a gray dashed line, while those estimated from the X-Shooter sample alone are shown with an orange dashed line. Looking at Fig. 5 we note that, within the inner 0.5 pc from \(\sigma\)-Ori, the more massive (\(M_{\star}\geq 0.4\)\(M_{\odot}\)) stars in \(\sigma\)-Orionis (middle panel) show \(M_{\rm dust}\) about an order of magnitude lower than the more distant ones considering only the targets with measured \(M_{\star}\) (orange dashed lines), or about 4 times lower when including those where only the SpT is measured (gray dashed lines), although, in this case, the median inside 0.5 pc is more uncertain given the less stringent upper limits. By contrast, low-mass stars (\(M_{\star}<0.4\)\(M_{\odot}\)) have an apparently constant distribution of \(M_{\rm dust}\), regardless of their distance from the ionizing stars (right panel). Even though this trend will still hold when including the additional (\(\sim\)19) upper limits shown in the left panel for which no SpT nor \(M_{\star}\) estimate exists (as these upper limits are of the same order as our detections), this apparent flatness in the \(M_{\rm dust}\) distribution for the low-mass stars in our sample is surely affected by the low-number statistics in this stellar mass range, mainly due to the distance of the cluster (d = 401 pc), which makes it harder to survey low-mass stars than in closer star-forming regions. It could be, therefore, that there are more low-mass stars inside 0.5 pc that were not targeted in the ALMA surveys because they were not part of the initial _Spitzer_ catalogs. If these are fainter at mm-wavelengths than our targets, the few low-mass objects that are detected in close proximity to \(\sigma\)-Ori could represent the high upper tail of the low-mass distribution.

Figure 5: Disk dust mass (\(M_{\rm dust}\)) as a function of projected separation from \(\sigma\)-Ori. _Left:_ Considering the whole sample of disks with ALMA observations. _Middle:_ Considering the more massive (\(M_{\star}\geq 0.4\)\(M_{\odot}\)) stars in our \(\sigma\)-Orionis sample. _Right:_ Considering the less massive ones (\(M_{\star}<0.4\)\(M_{\odot}\)). Dashed lines show the \(M_{\rm dust}\) median inside and outside 0.5 pc for our X-Shooter sample (orange) and also including upper limits with reported SpT in Hernández et al. (2014) (gray). Orange points are continuum detections, downward triangles are 3\(\sigma\) upper limits, and \({}^{12}\)CO detections (3\(\sigma\)) are indicated by an additional blue circle. The \({}^{12}\)CO fluxes are listed in Table 2.
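As an illustration of how the medians quoted above can be computed, the following sketch splits a small, purely hypothetical table of targets by stellar mass and projected separation; the column names, example values, and the face-value treatment of upper limits are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical input table; in practice this would hold the measured
# M_dust (or 3-sigma upper limits), stellar masses, and projected
# separations from sigma Ori for each target.
targets = pd.DataFrame({
    "m_star": [0.6, 0.3, 1.0, 0.2, 0.8, 0.5],    # M_sun (illustrative)
    "sep_pc": [0.3, 0.4, 1.2, 2.0, 0.2, 1.5],    # projected separation
    "m_dust": [0.8, 1.5, 12.0, 2.0, 0.5, 9.0],   # M_earth (illustrative)
})

def median_in_out(df, sep_cut=0.5):
    """Median M_dust inside and outside a projected-separation cut."""
    inner = df.loc[df["sep_pc"] < sep_cut, "m_dust"].median()
    outer = df.loc[df["sep_pc"] >= sep_cut, "m_dust"].median()
    return inner, outer

for label, sel in [("M* >= 0.4 Msun", targets["m_star"] >= 0.4),
                   ("M* <  0.4 Msun", targets["m_star"] < 0.4)]:
    inner, outer = median_in_out(targets[sel])
    print(f"{label}: median M_dust inside 0.5 pc = {inner:.1f} M_earth, "
          f"outside = {outer:.1f} M_earth")
```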
Deeper ALMA observations on these additional targets along with spectroscopic follow-up are needed in order to probe the apparent flatness of the \(M_{\rm dust}\) distribution of the low-mass stars in \(\sigma\)-Orionis. At the same time, the lower median \(M_{\rm dust}\) for the low-mass stars compared to the more massive ones in the outer part of the cluster (beyond 0.5 pc), is due to the known steep dependence of \(M_{\rm dust}\) with \(M_{\star}\) just discussed. It is possible, therefore, to ascribe the differences in the outer part of the cluster to other (internal) effects related to the evolution of disks as well (Pascucci et al., 2016; Pinilla et al., 2020). The large difference between the median \(M_{\rm dust}\) for the more massive stars inside and outside projected distances of 0.5 pc from \(\sigma\)-Ori points, instead, to environmental factors, like external photoevaporation, affecting the closest disks to \(\sigma\)-Ori, decreasing significantly their \(M_{\rm dust}\), as discussed in Sect. 5.1. We note that this discrepancy holds even considering the additional upper limits for targets without \(M_{\star}\) estimates from the spectroscopy presented in this work (gray dashed lines). Note as well that this discrepancy can be even larger if the two outliers (SO823 and SO1155, see Sect. 2) are not taken into account. Although the disks in the low-mass sample are in general less massive, as expected due to their faster dust evolution, the median \(M_{\rm dust}\) within the innermost region of the cluster is still lower for the high-mass star sample than for the low-mass star regime (see Fig. 5). It is worth discussing, therefore, why such an effect is observed. A possible solution to this puzzling result could be that the effects of external photoevaporation depend on the stellar mass of the host star in a more complex fashion than what is typically assumed. Indeed, for the fact that the gravitational potential is stronger for higher-mass stars, it is usually assumed that photoevaporation is more effective around lower-mass stars. This however is a very simplistic assumption, since it is known that the disk radii depend on the stellar mass as well, albeit indirectly through the already mentioned dependence of the continuum flux with the disk radii, and the fact that the disk masses are measured from the continuum flux. If the relation between the disk radii and the stellar mass is not linear, then external photoevaporation should affect the disks in a different way depending on the (unperturbed) disk radius. External photoevaporation would result in a lower disk mass obtained as a result of eroding the disk in the outer regions, at disk radii (\(R_{\rm disk}\)) larger than the gravitational radius, defined as \(R_{\rm grav}=(GM_{\star})/c_{s}^{2}\) in an isothermal system, where \(c_{s}\) is the sound speed (Winter & Haworth, 2022), or even down to 0.15 - \(R_{\rm grav}\)(Adams et al., 2004), although with lower mass-loss rates. If disks are eroded by this process, we expect the disk radii to be typically smaller than \(R_{\rm grav}\). Unfortunately, the spatial resolution of our observations (\(\sim\)0.2"\(\sim\)80 au, see Sect. 3.2) is not sufficient to properly resolve the disks. 
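Even without spatially resolved disks, the gravitational radius introduced above is straightforward to evaluate. The sketch below computes \(R_{\rm grav}=GM_{\star}/c_{s}^{2}\) and the ratio \(R_{\rm disk}/R_{\rm grav}\) for an assumed sound speed of 1 km/s; the stellar mass and disk radius used in the example are purely illustrative.

```python
G = 6.674e-8          # cm^3 g^-1 s^-2
M_sun = 1.989e33      # g
au = 1.496e13         # cm

def r_grav_au(m_star_msun, c_s_kms=1.0):
    """Gravitational radius R_grav = G M_star / c_s^2, in au."""
    c_s = c_s_kms * 1e5                      # km/s -> cm/s
    return G * m_star_msun * M_sun / c_s**2 / au

def disk_to_grav_ratio(r_disk_au, m_star_msun, c_s_kms=1.0):
    """Ratio R_disk / R_grav used to gauge susceptibility to external
    photoevaporation (mass loss can remain significant down to
    ~0.15 R_grav; Adams et al. 2004)."""
    return r_disk_au / r_grav_au(m_star_msun, c_s_kms)

# Illustrative example: a 0.5 M_sun star with an (assumed) 40 au gas disk
m_star, r_disk = 0.5, 40.0
print(f"R_grav ~ {r_grav_au(m_star):.0f} au, "
      f"R_disk/R_grav ~ {disk_to_grav_ratio(r_disk, m_star):.2f}")
```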
However, we obtain indirect estimates of the disk radii using the measured continuum flux, known to correlate with the disk dust radii (Tripathi et al., 2017; Andrews et al., 2018; Long et al., 2022), and the measured \({}^{12}\)CO fluxes, which can be related to the disk gas sizes under the assumption that the emission is optically thick (e.g., Zagaria et al., 2023; Toci et al., 2023; Trapman et al., 2023). In the cases where the \({}^{12}\)CO is not detected, it is possible to extrapolate the gas radii from the dust radii assuming a ratio of 3, found here for the targets with both continuum and \({}^{12}\)CO detections, and typically found in other star-forming regions (e.g., Ansdell et al., 2018). We note that this procedure is based on several assumptions and, in particular, the latter is most probably not valid in the case of external photoevaporation, which mainly affects the gaseous component of the disk, where we expect a lower gas-to-dust radii ratio. Assuming \(c_{s}=1\) km/s (which corresponds to \(\sim\)120 K in a 1000 \(G_{0}\) environment) as representative for our sample, we can compare the inferred disk radii with the inferred gravitational radii (\(R_{\rm disk}/R_{\rm grav}\)) for each target. Although with many caveats, this analysis results in disk radii that are always smaller than the gravitational radii for all the stars in the cluster, with the lowest ratios (\(R_{\rm disk}/R_{\rm grav}<0.1\)) for disks around stars with \(M_{\star}>0.4\,M_{\odot}\) and with projected separation from \(\sigma\)-Ori smaller than 0.5 pc, whereas the ratios are larger in the outer part of the cluster. This is in line with the expectations of the imprint of external photoevaporation, with a stronger effect on the inner regions of the cluster. As shown in Adams et al. (2004), the mass-loss rate of externally irradiated disks can still be significant even for disk radii much smaller than the gravitational radius, in particular for \(R_{\rm disk}/R_{\rm grav}>0.15\), as we found in the outer part of the cluster. This reinforces the claim that even at intermediate FUV radiation fields (1-1000 \(G_{0}\)) the effects of this process can have a significant impact on the evolution of protoplanetary disks (van Terwisga & Hacar, 2023). However, we note that the dependence of \(M_{\rm disk}\) on the distance from \(\sigma\)-Ori is not as steep as would be expected from the results of van Terwisga & Hacar (2023). The disks around the lowest-mass stars, however, seem to have a constant ratio \(R_{\rm disk}/R_{\rm grav}\sim\)0.4 regardless of the distance to \(\sigma\)-Ori, which is a consequence of the flat distribution of fluxes (and disk dust masses) with projected distance from \(\sigma\)-Ori shown in Fig. 5. Despite the several assumptions of our approach, namely the dependence of the continuum and gas emission on the disk radii, the ratio between gas and dust disk radii, and the sensitivity of the \(R_{\rm disk}/R_{\rm grav}\) ratio to the value assumed for \(c_{s}\), our analysis points to a different dependence of the effect of external photoevaporation on the stellar host mass. This is particularly evident in the dependence of \(M_{\rm disk}\) on the projected distance from \(\sigma\)-Ori (Fig. 5). Our findings suggest that the large spread in the \(M_{\rm disk}\)-\(M_{\star}\) relation observed for disks around stars with \(M_{\star}>0.4\,M_{\odot}\) is an effect of the environment in the \(\sigma\)-Orionis cluster.
If confirmed, this would shed new light on the evolution of the \(M_{\rm disk}\)-\(M_{\star}\) relation with age, which is mainly driven by the large scatter (e.g., Manara et al., 2023), leading to an interpretation where, at least for the mid-aged \(\sigma\)-Orionis region, the steepening of the relation is an effect of external photoevaporation. Further work is needed to properly measure disk radii in these systems, particularly around low-mass stars, to confirm whether they are less affected by external photoevaporation, or whether the different behavior with respect to the disks around higher-mass stars is due to other processes. ### \(\dot{M}_{\rm acc}\)-\(M_{\rm disk}\) plane as a proxy of Disk Evolution According to the disk viscous evolution framework, \(\dot{M}_{\rm acc}\) should directly correlate with \(M_{\rm disk}\) (e.g., Hartmann et al., 1998; Rosotti et al., 2017; Lodato et al., 2017; Mulders et al., 2017; Manara et al., 2023). The viscous quasi-steady state is characterized by the condition \(M_{\rm disk}\sim\dot{M}_{\rm acc}\tau\), with \(\tau\) as the viscous time-scale at the outer radius of the disk (Rosotti et al., 2017). One property of this paradigm is that \(\tau\) is of the order of the system age, independent of the initial conditions and the assumptions on disk viscosity (Jones et al., 2012; Lodato et al., 2017). Therefore, the ratio \(M_{\rm disk}/\dot{M}_{\rm acc}\), the so-called "disk lifetime" (\(t_{\rm disk}\)), can be used as a proxy of disk evolution (Manara et al., 2016, 2023; Rosotti et al., 2017). The dependence between \(M_{\rm disk}\) and \(\dot{M}_{\rm acc}\) has been explored extensively in the literature and found to be almost linear, albeit with a very large scatter (e.g., Manara et al., 2016, 2020, 2023; Mulders et al., 2017). The origin of the observed scatter at all ages is still unclear, although it points toward particular conditions in the viscous framework (Lodato et al., 2017), or to the necessity to include other mechanisms to explain the observations. Both Rosotti et al. (2017) and Zagaria et al. (2022) suggest that external disturbances, such as external photoevaporation or multiplicity, lead to shorter disk lifetimes, that is, a higher \(\dot{M}_{\rm acc}\) than the value expected from viscous evolution for the measured disk mass. Zagaria et al. (2022) found that multiplicity can explain the high accretors found in the Upper Scorpius region (Manara et al., 2020). The data presented in this work allow us, for the first time, to test whether the \(\dot{M}_{\rm acc}\)-\(M_{\rm disk}\) relation can be used to confirm the effect of external photoevaporation on disks close to a massive star. Fig. 6 shows the \(\dot{M}_{\rm acc}\)-\(M_{\rm disk}\) plane for our \(\sigma\)-Orionis disk sample. We highlight the \(t_{\rm disk}=3\) Myr and 5 Myr lines (dashed), representative of the age of the cluster (Oliveira et al., 2004; Hernandez et al., 2014), for reference. We observe that the majority of the targets are located at shorter disk lifetimes than the age of the region, in line with the expectations from external photoevaporation models (Rosotti et al., 2017). In particular, 28 targets (\(\sim\) 54%) lie above the 1 Myr line, 17 targets (\(\sim\) 34%) are between the 1 Myr and 10 Myr lines, while the remaining five targets (\(\sim\) 10%) are below the 10 Myr line, and they are mainly non-accreting objects. This points toward confirming the effect of external photoevaporation on the evolution of these disks.
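The disk-lifetime proxy used here is a simple ratio; the sketch below evaluates \(t_{\rm disk}=M_{\rm disk}/\dot{M}_{\rm acc}\) for an illustrative (assumed) combination of dust mass, gas-to-dust ratio, and accretion rate.

```python
M_sun = 1.989e33      # g
M_earth = 5.972e27    # g

def disk_lifetime_myr(m_dust_mearth, mdot_acc_msun_yr, gas_to_dust=100.0):
    """Disk lifetime t_disk = M_disk / Mdot_acc, in Myr.

    M_disk is taken as the dust mass times an assumed gas-to-dust
    ratio (100 in the text); Mdot_acc is in M_sun / yr.
    """
    m_disk_msun = m_dust_mearth * gas_to_dust * M_earth / M_sun
    return m_disk_msun / mdot_acc_msun_yr / 1e6

# Illustrative example: 3 M_earth of dust and Mdot_acc = 1e-9 M_sun/yr
print(f"t_disk ~ {disk_lifetime_myr(3.0, 1e-9):.2f} Myr")
```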
We note, however, that the distribution of data on the \(\dot{M}_{\rm acc}\)-\(M_{\rm disk}\) plane is similar to what is observed in other SFRs. According to Zagaria et al. (2022), most of the stars in the Lupus, Chamaeleon I, and USco SFRs that have a higher \(\dot{M}_{\rm acc}\) given their \(M_{\rm disk}\) can be explained by multiplicity (tidally truncated disks), with the bulk of the binary population being clustered around \(M_{\rm disk}/\dot{M}_{\rm acc}\) = 0.1 Myr. Unfortunately, we do not have multiplicity information for our \(\sigma\)-Orionis sample to further test this scenario, but we have indicated in the plots the stars with RUWE values greater than 1.4, which may point to possible binaries in the cluster. Interestingly, most of the targets with high RUWE also have short disk lifetimes, suggesting that binarity might play a role also in the \(\sigma\)-Orionis cluster in the observed spread in the \(\dot{M}_{\rm acc}\)-\(M_{\rm disk}\) relation. To further check whether the short disk lifetimes could instead be related to the presence of the massive \(\sigma\)-Ori star, we show in Fig. 7 how \(t_{\rm disk}\) depends on the projected distance to the massive \(\sigma\)-Ori system. As shown, _all_ objects within 0.5 pc from \(\sigma\)-Ori (red circles) have \(t_{\rm disk}<0.5\) Myr, while disks further out can reach higher values. Outliers, having \(t_{\rm disk}<0.05\) Myr at 1 pc or beyond, correspond to objects whose distances deviate by more than \(\sim\)40 pc from the median (SO73, SO848), strong accretors (SO1155, SO362), and/or edge-on disk candidates (SO518). The low disk lifetimes of the disks closest to the OB stars, along with the distance-dependent trend in disk dust mass shown in Fig. 5, provide robust evidence that, at least within 0.5 pc from the center, the disks are actively being externally photoevaporated. The dependence of the disk lifetime on the projected separation from \(\sigma\)-Ori further suggests that, despite the similar distribution on the \(\dot{M}_{\rm acc}\)-\(M_{\rm disk}\) plane as in other regions, the large spread observed in our \(\sigma\)-Orionis sample also supports the outside-in depletion of these disks. As stated in Lodato et al. (2017), from disk population synthesis models a tighter \(M_{\rm disk}\)-\(\dot{M}_{\rm acc}\) correlation is expected at older ages, so the fact that these sources show a spread similar to that of other, younger SFRs even at these intermediate ages implies a more significant deviation of these stars from purely viscous evolution.

Figure 6: Distribution of the \(\dot{M}_{\rm acc}\)–\(M_{\rm disk}\) plane in \(\sigma\)-Orionis. The triangles indicate the upper limit on \(M_{\rm disk}\), while the vertical arrows correspond to the upper limits on the \(\dot{M}_{\rm acc}\). The dotted and dashed lines are the isochrones at some relevant ages. The ones in bold are related to the estimated age of the cluster, i.e. 3-5 Myr (Oliveira et al., 2004). Values for other star-forming regions (Manara et al., 2023) are shown as gray symbols for comparison.

Figure 7: Distribution of \(t_{\rm disk}\) as a function of projected separation from \(\sigma\)-Ori. The triangles indicate the upper/lower limits on \(\dot{M}_{\rm acc}\) and \(M_{\rm disk}\). The vertical dotted line is located at 0.5 pc from \(\sigma\)-Ori. Targets within 0.5 pc are highlighted with an additional red outline. The dashed lines are related to the estimated age of the cluster, i.e. 3-5 Myr (Oliveira et al., 2004).
Enlarging the sample on the low disk mass side, by adding additional disk detections to the available spectroscopic data, would constrain quantitatively how many disks are consistent with the effects of external photoevaporation, or whether other effects must be considered in order to explain the observations, such as the effects of dust evolution (e.g., Sellek et al.2020) or binarity (e.g., Zagaria et al.2022). ## 6 Conclusions We conducted the first large-scale survey with both, UV-to-NIR spectroscopy with X-Shooter, and mm-interferometry with ALMA, for disk-bearing stars in the mid-age \(\sigma\)-Orionis cluster. We have derived the stellar and accretion properties of 50 targets, and shown new ALMA detections to complement the data presented by Ansdell et al. (2017). This has allowed us to test the effect of external photoevaporation from the massive star \(\sigma\)-Ori on the surrounding population of disks. Our main conclusions are: * The disks in the \(\sigma\)-Orionis cluster show similar values and spread in the \(\dot{M}_{\rm acc}-M_{\star}\) and \(M_{\rm disk}-M_{\star}\) relations as those in surveys of protoplanetary disks in other young SFRs. No correlation of \(\dot{M}_{\rm acc}\) with proximity to \(\sigma\)-Ori was found, in agreement with previous works. * We confirm the trend of decreasing \(M_{\rm dust}\) at shorter distances from the massive star \(\sigma\)-Ori, as expected from external photoevaporation. Disks around more massive stars show a more pronounced reduction in their masses if they are located in the inner 0.5 pc of the cluster than disks in the outer regions. They were also found to have the smallest \(R_{\rm disk}/R_{\rm grav}\) at these separations, which corresponds to a value of FUV radiation of \(\sim 10^{4}G_{0}\). This effect is less pronounced in the lowest mass stars, either due to a stellar mass-dependent effect of external photoevaporation or to observational biases. Due to the low number statistics, the conclusions for the low-mass regime are still to be firmly established. Our results stress the need to develop a deeper understanding of disk evolution around very low-mass stars in clustered environments. * Half of the sample lies in the expected region for externally irradiated disks on the \(\dot{M}_{\rm acc}\) vs \(M_{\rm disk}\) plane, showing disk lifetime (\(t_{\rm disk}\)) lower than expected given the age of the system. This implies that external photoevaporation may be a viable mechanism for disk depletion in the cluster. * We found a tentative increasing trend of \(t_{\rm disk}\) with projected separation from the massive OB stars. Within the first 0.5 pc, sources have very low \(t_{\rm disk}\) (\(\leq 0.5\) Myrs). This strengthens the claim that outside-in depletion plays an important role in the evolution of disks, particularly those that are in close proximity (\(<0.5\) pc) to the central OB system \(\sigma\)-Ori. While this work has shown the power of combining information on disk properties with measurements of stellar and accretion parameters as a function of projected separation from the massive OB-system \(\sigma\)-Ori, the final tell-tale test of external photoevaporation in this region is to detect the photoevaporating winds in these targets. 
A detailed study on wind tracers and mass-loss diagnostic (e.g., optical forbidden emission lines) of these sources using X-Shooter and high-resolution spectra can potentially confirm the above result and put better constraints on disk dispersal mechanisms in clustered environments (e.g., Hasegawa et al.2022). This has been attempted in a limited number of targets (Rigliaco et al.2009; Gangi et al.2023), and it will be assessed in a future paper (Mauco et al., in prep). ###### Acknowledgements. We thank the anonymous referee for the critical review of our work that improved the presented study. Funded by the European Union (ERC, WANDA, 101039452). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union of the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. This work benefited from discussions with the ODYSENUS team (HST AR-16129), [https://sites.bu.edu/odysseus/](https://sites.bu.edu/odysseus/). This research received financial support from the project PRIN-INAF 2019 "Spectroscopically Tracing the Disk Dispersal Evolution" (STRADE) and from the Large Grant IN2 2022 YODA (YSOs Outflows, Disks and Accretion; towards a global framework for the evolution of planet-forming systems). TH is funded by a Royal Society Dorothy Hodgkin Fellowship. S.F. is funded by the European Union under the European Union's Horizon Europe Research & Innovation Programme 101076613 (UNVELI). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union of the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. This paper makes use of the following ALMA data: ADJS/ADJACALMA/2016.10447.5 ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUINRAAO and NAOJ. This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 83283 (DUSTUSTUSTERS). This work was partly supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Ref no. FOR 2634/1 TE 1024/1-1. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular, the institutions participating in the _Gaia_ Multilateral Agreement.
2309.15442
Template Model Inspired Task Space Learning for Robust Bipedal Locomotion
This work presents a hierarchical framework for bipedal locomotion that combines a Reinforcement Learning (RL)-based high-level (HL) planner policy for the online generation of task space commands with a model-based low-level (LL) controller to track the desired task space trajectories. Different from traditional end-to-end learning approaches, our HL policy takes insights from the angular momentum-based linear inverted pendulum (ALIP) to carefully design the observation and action spaces of the Markov Decision Process (MDP). This simple yet effective design creates an insightful mapping between a low-dimensional state that effectively captures the complex dynamics of bipedal locomotion and a set of task space outputs that shape the walking gait of the robot. The HL policy is agnostic to the task space LL controller, which increases the flexibility of the design and generalization of the framework to other bipedal robots. This hierarchical design results in a learning-based framework with improved performance, data efficiency, and robustness compared with the ALIP model-based approach and state-of-the-art learning-based frameworks for bipedal locomotion. The proposed hierarchical controller is tested in three different robots, Rabbit, a five-link underactuated planar biped; Walker2D, a seven-link fully-actuated planar biped; and Digit, a 3D humanoid robot with 20 actuated joints. The trained policy naturally learns human-like locomotion behaviors and is able to effectively track a wide range of walking speeds while preserving the robustness and stability of the walking gait even under adversarial conditions.
Guillermo A. Castillo, Bowen Weng, Shunpeng Yang, Wei Zhang, Ayonga Hereid
2023-09-27T07:06:02Z
http://arxiv.org/abs/2309.15442v1
# Template Model Inspired Task Space Learning for Robust Bipedal Locomotion ###### Abstract This work presents a hierarchical framework for bipedal locomotion that combines a Reinforcement Learning (RL)-based high-level (HL) planner policy for the online generation of task space commands with a model-based low-level (LL) controller to track the desired task space trajectories. Different from traditional end-to-end learning approaches, our HL policy takes insights from the angular momentum-based linear inverted pendulum (ALIP) to carefully design the observation and action spaces of the Markov Decision Process (MDP). This simple yet effective design creates an insightful mapping between a low-dimensional state that effectively captures the complex dynamics of bipedal locomotion and a set of task space outputs that shape the walking gait of the robot. The HL policy is agnostic to the task space LL controller, which increases the flexibility of the design and generalization of the framework to other bipedal robots. This hierarchical design results in a learning-based framework with improved performance, data efficiency, and robustness compared with the ALIP model-based approach and state-of-the-art learning-based frameworks for bipedal locomotion. The proposed hierarchical controller is tested in three different robots, Rabbit, a five-link underactuated planar biped; Walker2D, a seven-link fully-actuated planar biped; and Digit, a 3D humanoid robot with 20 actuated joints. The trained policy naturally learns human-like locomotion behaviors and is able to effectively track a wide range of walking speeds while preserving the robustness and stability of the walking gait even under adversarial conditions. ## I Introduction Robust bipedal robot locomotion presents a challenging problem for robotics research due to the complexity of high dimensional models, unilateral ground contacts, and nonlinear and hybrid dynamics. Common methods applied in bipedal locomotion rely on solving optimization problems using the robot's full-order or reduced-order model to find feasible trajectories that realize stable walking gaits. In general, using full-order models results in computationally expensive problems that cannot be solved in real-time [1, 2]. To reduce the computation time, reduced-order models are used to capture the dynamics of the full-order system and plan trajectories for the robot's center of mass (CoM) and end-effectors. However, the assumptions made on reduced-order models such as a constant CoM height limit their performance on dynamic locomotion behaviors and their accuracy to predict the behavior of the real robot under certain conditions. Recently, the angular momentum-based linear inverted pendulum (ALIP) has been presented as an improved alternative to the Linear Inverted Pendulum (LIP) to predict the evolution of the model's state, demonstrating in simulation and hardware experiments that the angular momentum about the contact point can be more accurately predicted than the CoM velocity [3, 4]. With the recent success of deep learning in tackling challenging control problems, machine learning-based approaches have exploited advances in physics simulators and computing power to learn locomotion policies through more structured learning frameworks. Learning from motion references has become a popular choice to exploit large amounts of data to train walking policies. 
The data is obtained from motion capture systems, public motion data sets, or even from video clips, and it is used as goal references in the reward design [5, 6, 7]. Other learning methods rely on using optimization to obtain a single feasible reference trajectory [8, 9], or libraries of reference trajectories [10, 11] to guide the learning. However, these approaches require large amounts of data, and the learned policy often lacks interpretability and control over the parameters of the walking gait. This makes it difficult to adjust the policy during the sim-to-real transition. As an alternative, more complex frameworks have been proposed to combine learning algorithms with model-based controllers. The authors in [12] take insights from the Hybrid Zero Dynamics (HZD) to learn joint trajectories for planar robots. In [13], an HZD-based approach is used to learn a policy that satisfies Control Barrier Functions (CBF) defined on the reduced-order dynamics. In this work, we propose a hierarchical RL-based approach to address bipedal locomotion in underactuated and fully actuated robots. At the HL stage, RL is used to train policy that learns task space commands for different walking speeds. At the LL, a model-based nonlinear controller is implemented to track the trajectories generated by the HL planning. Several RL-based approaches have been already proposed to exploit hierarchical structures. In [11, 14], a task space policy is trained to walk at different speeds. However, the method relies on solving a series of optimization problems using the Spring Linear Inverted Pendulum to create a gait library that is used as a reference for the reward and the target end-effector positions. The policy learns residual terms that are added to the task space references [14] or joint space references [11]. Different from these approaches, our method directly learns a set of task space actions that completely characterize the dynamic walking gait without the need for previously computed reference trajectories. Moreover, we use different state and action spaces that significantly simplify the complexity of the learning problem and can be generalized to both unactuated and underactuated robots. In our previous work [15], a cascade structure is implemented to compensate the learned trajectories with feedback regulators to increase the robustness of the walking gait [16, 17]. Although the method was successfully tested in hardware, the interpretability of the learned policy was limited by the complex structure imposed over the input-output mapping of the RL policy and the addition of compensation terms on top of the learned joint trajectories. On the one hand, the integration of the feedback regulators improves the robustness and sim-to-real transfer of the learned policy. On the other hand, it makes it difficult to identify the actual contribution of the learned policy to the robustness of the walking gait. This results in a policy limited to naturally exploiting the state and action spaces and a restricted walking speed range with the robot Digit, e.g., \(v_{x}\in[-0.5,0.5]\) m/s. In this work, we propose a more efficient and clean framework that completely decouples the HL learning policy from the LL controller with better insights into the selection of the state and action spaces that results in improved sample efficiency and interpretability of the policy. We demonstrate the proposed framework is general for 2D and 3D bipedal robots and can be applied even in the case of underactuated robots. 
Moreover, we show that the learned policy achieves enhanced performance and robustness compared with our previous work [15]. The main contributions of this paper are as follows: 1) **A simple, efficient, and general** hierarchical learning framework that fully decouples the HL planner from the LL feedback controller. Different from other task space learning approaches, our method **(i)** uses a reduced-order state for the RL, **(ii)** learns to walk from scratch, and **(iii)** computes a set of task space actions that fully characterize dynamic walking gaits. The selection of inputs and outputs is general to bipedal robots of different morphology and degrees of freedom. We show results for actuated and underactuated 2D (Rabbit, Walker2D) and 3D robots (Digit). 2) **Insightful design of the RL state space**. We use the ALIP state and speed tracking information to design a reduced-order state space for the RL that captures the complex dynamics of bipedal locomotion while simplifying the learning process. 3) **Enhanced flexibility of the policy** to naturally exploit the nonlinear dynamics of bipedal locomotion. By including the desired step length, torso orientation, and CoM's height in the action space of the RL, the policy is not restricted to particular behaviors. This allows the policy to learn natural behaviors seen in dynamic locomotion without enforcing them during training. 4) **A robust locomotion controller** that accurately tracks a wide range of walking speeds, even under external disturbances and challenging terrains, with inclinations up to 20 degrees for both underactuated and fully-actuated robots. ## II Preliminaries and Problem Formulation ### _Bipedal locomotion as a hierarchical problem_ In general, the bipedal locomotion problem can be characterized as a hybrid system determined by a collection of phases of continuous dynamics with discrete events between the transitions of the continuous phases. Formally, the hybrid system model for biped locomotion can be defined as \[\Sigma:\left\{\begin{array}{ll}\dot{x}=f(x)+g(x)u+\omega(x,u)&x\in\mathcal{X }\setminus\mathcal{H}\\ x^{+}=\Delta(x^{-})&x^{-}\in\mathcal{H},\end{array}\right. \tag{1}\] where \(x\in\mathcal{X}\subseteq\mathbb{R}^{n}\)denotes the robot states, \(u\in\mathcal{U}\subseteq\mathbb{R}^{m}\) is a vector of actuator inputs. and \(\omega\in\Omega\subseteq\mathbb{R}^{w}\) a vector of disturbances and uncertainties. The switching surface \(\mathcal{H}\) is typically the hyper-surface of points corresponding to the height of the swing leg above the ground being zero, and the reset map \(\Delta:\mathcal{H}\rightarrow\mathcal{X}\) denotes the post-impact state values \(x^{+}\) immediately after switching as a function of the pre-impact state values \(x^{-}\) right before switching. The control of the bipedal locomotion system described by equation (1) can be formulated as a hierarchical control problem composed by a HL planner and a LL tracking controller. This cascade structure is presented in Fig. 1. The high level policy \(\pi_{y}\) generates trajectories to realize walking gaits according to design parameters and HL commands, e.g., average walking speed, robustness, terrain slope, etc. The LL policy \(\pi_{m}\) computes the actuator inputs to track the desired trajectories commanded by the HL planner. The general structure presented in Fig. 1 can characterize most controller formulations used in both model-based and model-free state-of-the-art methods for bipedal locomotion. 
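To make the two-level structure of Fig. 1 concrete, the sketch below shows one way such a hierarchy could be organized in code, with a slow planner policy \(\pi_{y}\) producing task-space targets and a fast tracking policy \(\pi_{m}\) computing actuator inputs. The rates, interfaces, and placeholder policies are illustrative assumptions, not the implementation used in this work.

```python
import numpy as np

class HierarchicalController:
    """Illustrative two-rate hierarchy: a high-level planner pi_y queried
    at a low rate and a low-level tracking controller pi_m at a high rate."""

    def __init__(self, planner, tracker, hl_dt=0.03, ll_dt=0.001):
        self.planner = planner      # pi_y: (robot state, command) -> task-space targets
        self.tracker = tracker      # pi_m: (robot state, targets) -> actuator inputs
        self.hl_dt = hl_dt          # planner period (e.g., tens of ms)
        self.ll_dt = ll_dt          # control period (e.g., 1 ms)
        self._targets = None
        self._time_since_plan = np.inf

    def step(self, robot_state, command):
        # Re-plan only at the slower high-level rate.
        if self._time_since_plan >= self.hl_dt:
            self._targets = self.planner(robot_state, command)
            self._time_since_plan = 0.0
        self._time_since_plan += self.ll_dt
        # Track the latest task-space targets at every control tick.
        return self.tracker(robot_state, self._targets)

# Placeholder policies (assumptions for illustration only).
planner = lambda x, cmd: {"step_length": 0.2 * cmd["vx"], "base_height": 0.9}
tracker = lambda x, targets: np.zeros(4)   # zero torques as a stand-in

ctrl = HierarchicalController(planner, tracker)
tau = ctrl.step(robot_state={}, command={"vx": 0.5})
```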
Once the HL trajectories have been generated by the policy, classic model-based control approaches such as feedback linearization, inverse dynamics QP, operational task space controllers, or simple PD controllers can be used to track the desired HL commands. The choice of the LL control policy \(\pi_{m}\) will mostly depend on the action space of the HL policy, e.g., joint space versus task space. ### _Reduced order models for HL planning_ Reduced order models have become a powerful tool for the design of HL planners for bipedal locomotion since they allow using simple dynamical models to characterize biped walking behaviors. Recently, the ALIP model has gained attention because of its advantages over LIP to predict the evolution of the state space. The learning-based approach proposed in this work is heavily inspired by recent results using ALIP as a step planner [3, 4]. Fig. 1: Hierarchical structure for bipedal locomotion **Angular Momentum-based Linear Inverted Pendulum (ALIP):** Considering the states \(\{x,L^{y}\}\), where \(x\) is the CoM position in \(x\) direction and \(L^{y}\) is the pitch component of the angular momentum about the contact point, the ALIP dynamics is given by \[\begin{bmatrix}\dot{x}\\ \dot{L}^{y}\end{bmatrix}=\begin{bmatrix}0&1/(mH)\\ mg&0\end{bmatrix}\begin{bmatrix}x\\ L^{y}\end{bmatrix}, \tag{2}\] where \(m\) is the total mass and \(H\) is the constant CoM height. The main advantage of using the ALIP model over the LIP model is that the evolution of the angular momentum about the contact point is closer to its behavior on the full-order robot's model and the actual hardware [3]. #### Ii-B1 Limitations Although ALIP does better work describing the actual behavior of the system than LIP, both are linear models subject to assumptions such as point mass body, constant CoM height, and the angular momentum about the CoM being zero during the walking gait. In addition, the prediction of the state at the end of the step depends on the step duration \(T\). This implies that an accurate prediction would depend on the perfect timing of the touchdown event, which could only happen in ideal conditions, e.g., perfect tracking of the LL controller, point-contact foot, and non-irregular walking surfaces. To analyze this effect, we compare the predicted value of \(L^{y}\) scaled by \(mH\) at the end of the step with the actual \(L^{y}\) for the five-link bipedal robot Rabbit in Fig. 2. We show the evolution of \(L^{y}\) in simulation using the MuJoCo physics engine [18]. To simulate ideal conditions on the model as closely as possible, we set the geometry of the robot's links to be very thin (to emulate point contact with the ground) and use a LL feedback linearization controller with high gains to encourage better tracking and accurate touchdown timing. For the "non-ideal" conditions, we use the real geometry and dynamic properties of the robot's links (as described in [19]), and we use an inverse dynamics QP controller [20]. The results show that under non-ideal conditions, the prediction of \(L^{y}\) at the end of the step differs significantly from its actual value. ### _Task-space LL controller_ Several approaches have been proposed in the literature to design task space controllers that consider the full-order model of the legged robot. 
Considering a mechanical system with configuration space \(\mathcal{Q}\) and generalized coordinates \(q\in\mathcal{Q}\), the equations of motion formulated using the method of Lagrange are given by: \[D(q)\ddot{q}+H(q,\dot{q}) =Bu+J^{T}(q)\lambda \tag{3}\] \[J(q)\ddot{q}+\dot{J}(q,\dot{q})\dot{q} =0, \tag{4}\] where \(D(q)\) is the inertia matrix, \(H(q,\dot{q})=C(q,\dot{q})\dot{q}+G(q)+F\) is the vector sum of the Coriolis, centripetal, gravitational, and additional non-conservative forces, \(B\) is the actuation matrix, and \(J(q)\) is the Jacobian of the holonomic constraints. We note that the system (3) can be expressed in the general form (1). Let \(x=\left(q^{T},\dot{q}^{T}\right)^{T}\in T\mathcal{Q}=\mathcal{X}\), then \[f(x) =\left[\begin{array}{c}\dot{q}\\ -D^{-1}(q)\left(J^{T}(q)\lambda-H(q,\dot{q})\right)\end{array}\right] \tag{5}\] \[g(x) =\left[\begin{array}{c}0\\ D(q)^{-1}B\end{array}\right]. \tag{6}\] The task space feedback controller tracks a set of desired trajectories of the form: \[y(x)=y^{a}(x)-y^{d}(\tau(x)), \tag{7}\] where \(y^{a}\) and \(y^{d}\) are smooth functions, and \(y^{d}\) characterizes the desired behavior of the system. Under the assumption that \(y(x)\) has relative degree 1 or 2, nonlinear control methods can be applied to find a control law that drives \(y(x)\) to zero, which implies that the outputs converge to their target values. ## III Method This section presents the methodology for the design of the proposed learning-based hierarchical controller for bipedal locomotion. First, we introduce the overall structure of the framework. Then, we describe the learning-based HL and model-based LL components of the framework. ### _Hierarchical structure for bipedal locomotion_ The proposed learning-based framework combines the capabilities of model-based and model-free methods into a hierarchical structure to realize robust locomotion controllers for underactuated and fully-actuated bipedal robots. Inspired by the success of reduced-order models for the online generation of HL trajectories, we use reinforcement learning to train an HL policy that maps a reduced state space inspired by the ALIP model to a set of task space commands to generate online task space trajectories for the robot's base and end-effectors. For the LL task space controller, we use well-known model-based inverse dynamics controllers to guarantee the tracking performance of the system's outputs. The proposed hierarchical structure is presented in Fig. 3. By combining the learning-based HL planner with the model-based LL controller, we obtain a robust controller capable of accurately tracking a wide range of walking speeds while preserving a good tracking performance for the task space trajectories. This significantly increases the flexibility and safety of the policy when compared with pure learning-based controllers. Fig. 2: Prediction of \(L^{y}\) at the end of each step under ideal (top) and non-ideal (bottom) conditions.
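For reference, the step-to-step prediction underlying Fig. 2 can be written in closed form from the ALIP dynamics (2); the sketch below evaluates it for illustrative values of the mass, CoM height, and step duration (these numbers are assumptions, not a specific robot's parameters).

```python
import numpy as np

def alip_step_prediction(x0, Ly0, m=32.0, H=0.8, T=0.35, g=9.81):
    """Closed-form solution of the ALIP dynamics (2) over one step.

    Predicts the CoM position x and the pitch angular momentum L^y about
    the contact point at the end of a step of duration T, starting from
    (x0, Ly0).  The values of m, H, and T are illustrative only.
    """
    omega = np.sqrt(g / H)
    ch, sh = np.cosh(omega * T), np.sinh(omega * T)
    x_T = ch * x0 + sh * Ly0 / (m * H * omega)
    Ly_T = m * H * omega * sh * x0 + ch * Ly0
    return x_T, Ly_T

# Example: CoM 5 cm behind the contact point, walking forward at ~0.5 m/s
x0 = -0.05
Ly0 = 32.0 * 0.8 * 0.5          # L^y ~ m * H * v_x
x_T, Ly_T = alip_step_prediction(x0, Ly0)
print(f"x(T) = {x_T:.3f} m,  L^y(T)/(mH) = {Ly_T / (32.0 * 0.8):.3f} m/s")
```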
Specifically, at time \(t\), an agent (i.e., the motion planner) takes an action \(a_{t}\in\mathcal{A}\) at state \(s_{t}\in\mathcal{S}\), transits into the next state \(s_{t+1}\in\mathcal{S}\) according to the transition probability \(\mathsf{P}(s_{t+1}|s_{t},a_{t})\) and receives a reward \(r(s_{t},a_{t},s_{t+1})\). Moreover, \(\xi\) denotes the distribution of the initial state \(s_{0}\in\mathcal{S}\), and \(\gamma\in(0,1)\) denotes the discount factor. The stochastic transition of the MDP process captures the random sampling of initial states in the policy training and dynamics uncertainty due to model mismatch and random interactions with the environment (e.g., early ground impacts). #### Iv-B1 Reduced-Order State Space Several works have already proposed using a reduced state of the robot as the observation space of the learning algorithm. However, the choice of the reduced state is made based on trial and error or empirical observations of the policy performance. In this work, we leverage recent results on the effectiveness of using angular momentum about the contact point to regulate the walking speed of biped robots [3]. Inspired by the ALIP model, we select the state \[s=(x,y,L^{x},L^{y},e_{\bar{v}^{x}},e_{\bar{v}^{y}},v_{x}^{d},v_{y}^{d}, \alpha). \tag{9}\] where \((x,y,L^{x},L^{y})\) is the ALIP state composed by the robot's base \(x\) and \(y\) position and the angular momentum about the contact point along the \(x\) and \(y\) axes, \((e_{\bar{v}_{x}},e_{\bar{v}_{y}})\) is the error between the average velocity the robot's base \((\bar{v_{x}},\bar{v_{y}})\) and the desired robot's velocity \((v_{x}^{d},v_{y}^{d})\), and \(\alpha\) is the terrain slope measured in radians. We denote that we use the robot's base position instead the CoM because of practical convenience for future hardware experiments. The CoM estimation on complex robots may result in noisy measurements, while the base position with respect to the contact point can be easily computed using forward kinematics. We assume the slope of the terrain is known by the learning agent. This assumption is reasonable since most of the bipedal robots available for research and commercial applications are equipped with perception systems to map the surrounding environment. Even in the absence of perception systems, proprioceptive approaches could be used to accurately estimate the terrain slope based on the orientation of the robot's base and feet, as we have shown in simulation and hardware in our previous work [15]. #### Iv-B2 Task Action Space The action \(a\in\mathcal{A}\) is chosen to be \[a=(p_{\text{sw},T}^{x},p_{\text{sw},T}^{y},q_{\phi},h^{d}) \tag{10}\] where \(p_{\text{sw},T}^{x},p_{\text{sw},T}^{y}\) correspond to the position of the swing foot w.r.t. the robot's base at the end of the swing phase \(T\), i.e., the landing position of the swing foot, \(q_{\phi}\) is the absolute torso pitch angle, and \(h^{d}\) is an offset added to the nominal height of the robot's base w.r.t. the stance foot. This selection of the action space encourages the flexibility of the policy to exploit the natural nonlinear dynamics of the biped robot and enhance the robustness of the policy under big disturbances, sudden speed changes, and walking at high speeds, as it will be shown in section V. The HL actions \(a\) are used to generate smooth task space trajectories for the robot's floating base and end-effectors. 
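Before these actions are turned into continuous trajectories (described next), the sketch below illustrates how the reduced-order observation (9) and the task-space action (10) could be assembled; the variable names and the example values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def build_observation(p_base, L_contact, v_avg, v_des, slope_rad):
    """Reduced-order RL observation, Eq. (9): base position (x, y) w.r.t.
    the stance foot, angular momentum about the contact point (L^x, L^y),
    average-velocity tracking error, desired velocity, and terrain slope."""
    e_v = np.asarray(v_avg) - np.asarray(v_des)
    return np.concatenate([
        p_base[:2],            # x, y of the base w.r.t. the contact point
        L_contact[:2],         # L^x, L^y about the contact point
        e_v,                   # average-velocity error (x, y)
        v_des,                 # commanded velocity (x, y)
        [slope_rad],           # terrain slope
    ])

def unpack_action(a):
    """Task-space action, Eq. (10): desired swing-foot landing position
    (relative to the base), torso pitch, and base-height offset."""
    return {"p_sw_T": a[:2], "torso_pitch": a[2], "height_offset": a[3]}

# Illustrative example
obs = build_observation(p_base=np.array([-0.02, 0.01, 0.9]),
                        L_contact=np.array([0.5, 11.0, 0.0]),
                        v_avg=[0.45, 0.02], v_des=np.array([0.5, 0.0]),
                        slope_rad=0.0)
act = unpack_action(np.array([0.18, 0.10, 0.05, 0.0]))
print(obs.shape, act)
```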
The trajectory for \(p_{\text{sw}}^{x,y}\) is generated using a minimum jerk trajectory of a straight line segment connecting \(p_{\text{sw},0}^{x,y}\) with \(p_{\text{sw},T}^{x,y}\). The swing foot position at the beginning of the walking step, \(p_{\text{sw},0}^{x,y}\), is computed using Forward Kinematics (FK) and updated at every touch-down event. The desired position for the swing foot at landing, \(p_{\text{sw},T}^{x,y}\), is updated by the HL policy at a frequency of 30 Hz. The vertical trajectory of the swing foot position w.r.t. the robot's base is generated using a 5th-order Bézier polynomial parameterized by the vertical position of the foot at the beginning \((p_{\text{sw},0}^{z})\) and the end \((p_{\text{sw},T}^{z})\) of the step, and the foot clearance at mid-step, \(p_{\text{sw},T/2}^{z}\). For flat terrain, we have \[p_{\text{sw},T}^{z}=-h^{d}. \tag{11}\] For sloped terrain, we update \(p_{\text{sw},T}^{z}\) as \[p_{\text{sw},T}^{z}=-h^{d}+p_{\text{sw},T}^{x}\tan(\alpha)-p_{\text{off}}^{z}, \tag{12}\] where \(\alpha\) is the terrain slope and \(p_{\text{off}}^{z}=0.005\text{ m}\) is a small offset added to guarantee that the swing foot makes contact with the ground. The Neural Network chosen to parameterize the HL policy is a Recurrent Neural Network with 2 hidden layers, each layer with 128 units for the case of 2D robots and 256 units for 3D robots. The hidden layers use the ReLU activation function, and the output layer is bounded by the sigmoid activation function and a scaling factor to constrain the maximum value of the HL commands. ### _Low-level task space controller_ The LL task space controller is designed using standard techniques from the nonlinear systems control literature. In particular, we implement two types of model-based controllers: i) Feedback Linearization (FL), and ii) Inverse Dynamics with a QP formulation (ID-QP). We evaluate the performance of the HL policy with different LL controllers and show that the learned policy is robust to any choice of the LL controller. The purpose of this evaluation is to demonstrate the versatility of the task space-based HL planner to adapt to different LL control structures without affecting the performance of the learned policy. This also provides more flexibility for the designer to use any LL control approach at their convenience. For instance, FL is easy to implement and requires less computation time, but it is known to be hard to implement on real hardware. Therefore, FL could be used during the training process of the HL policy, while any suitable ID-QP formulation could be used for hardware experiments. For more details, we refer the reader to [20, 21], where several QP formulations for bipedal locomotion are proposed with successful applications to real hardware. In this work, we use the most basic case of the ID-QP formulation in [20] for the 2D robots and the Task Space Inverse Dynamics (TSID) formulation in [20] for the 3D robot Digit.

Fig. 3: Overall structure of the proposed learning-based framework. The HL policy maps a reduced-order state to task space trajectories that are tracked by the LL policy.

### _Learning procedure_ The reinforcement learning algorithm we use in this work is an implementation of the Proximal Policy Optimization [22] algorithm with parallel experience collection, input normalization, and fixed covariance. The algorithm shares the same code base as the implementations in [11] and [9].
For each episode, the initial state of the robot is set randomly from a normal distribution about an initial pose corresponding to the robot standing in the double support phase. One iteration of the HL policy corresponds to the interaction of the learning agent with the environment. The HL policy takes the reduced-order state \(s\in\mathcal{S}\) and computes an action \(a\in\mathcal{A}\) that is converted in desired task space trajectories \(y^{d}\) at the time \(t\). The reference trajectories are then sent to the LL task space controller. The LL control loop runs at a frequency of 1 KHz, while the HL planner runs at 33 Hz. The maximum length of each episode is 300 steps, which corresponds to 9 seconds of simulated time. The episode has an early termination if any of the following conditions are violated: \[|q_{\phi}|<1\text{rad},\quad h<0.5\text{m}. \tag{13}\] The simple reward function (14) adopted in this work is designed to keep track of the target walking speed while realizing a stable walking gait. In particular, the terms \(r_{v_{x}},r_{v_{y}}\) encourage the tracking of the longitudinal and lateral target speeds. Since the torso pitch angle is part of the learning action space, the term \(r_{L_{\text{CoM}}}\) encourages the policy to avoid excessive changes in the torso orientation without explicitly restricting the torso pitch angle. Finally, the term \(r_{a}\) encourages the policy to avoid excessive variations between the last action and the current action. This avoids unnecessary overshooting in the commanded actions that may produce risky behaviors during the walking gait. The weighted reward function is given as: \[\mathbf{r}=\mathbf{w}^{T}[r_{v_{x}},r_{v_{y}},r_{L_{\text{CoM}}},r_{a}]^{T}, \tag{14}\] where \[r_{v_{x}} =\exp\big{(}-\left\lVert\bar{v}_{x}-v_{x}^{d}\right\rVert^{2}\big{)} \tag{15}\] \[r_{v_{y}} =\exp\big{(}-\left\lVert\bar{v}_{y}-v_{y}^{d}\right\rVert^{2}\big{)}\] (16) \[r_{L_{\text{CoM}}} =\exp\big{(}-\left\lVert L_{\text{CoM}}\right\rVert^{2}\big{)}\] (17) \[r_{a} =\exp\big{(}-\left\lVert a_{k}-a_{k-1}\right\rVert^{2}\big{)}. \tag{18}\] and \(\mathbf{w}^{T}\) is a vector of weights corresponding to each reward term. For 2D robots we use \(\mathbf{w}^{T}=[0.6,0,0.2,0.2]\) while for 3D robots we use \([0.3,0.3,0.2,0.2]\). ## IV Illustration example In this section, we show the proposed method can be generalized to both underactuated and fully actuated robots without any changes to the structure of the HL planner policy. Moreover, we demonstrate the framework can be applied in 2D and 3D bipedal robots. We use 3 different robots. **Rabbit** is a five-link, planar underactuated bipedal robot with point feet and four actuated joints, two in the hips and two in the knees. Despite its simple mechanical structure, Rabbit still provides a suitable representation of biped locomotion, which is the reason it has been considered as a test bed for advanced control theory in the field of legged robots [19]. **Walker2D** is a seven-link, planar, fully actuated bipedal robot with 6 actuated joints, two in the hips, two in the knees, and two in the ankles. The additional degrees of freedom at the ankles enable the robot to realize human-like walking gaits and balancing. Schematics of the Walker2D and Digit are shown in Fig. 4. Rabbit shares the same design and structure as Walker2D without the feet and ankle joints. **Digit** is a 3D fully actuated bipedal robot with 30 DoF and 20 actuated joints. 
Each leg has six actuated joints corresponding to the motors located on the robot's hip, knee, and ankle and two passive joints corresponding to the robot's shin and tarsus joints. In addition, it has four actuated joints per arm corresponding to the shoulder and elbow joints. Fig. 4 shows the kinematic structure of Digit.

Fig. 4: Schematics of the robots Walker2D and Digit.

### _Task-space outputs for the LL controller._

The set of task-space outputs of relative degree 2, introduced in equation (7) to characterize the walking gait of the biped robot, is defined as follows: \[y_{2}^{a}(q):=\left[\begin{array}{c}q_{\phi}\\ h\\ p_{sw}^{x}\\ p_{sw}^{y}\\ p_{sw}^{z}\\ \phi_{sw}\end{array}\right]\rightarrow\left(\begin{array}{c}\text{torso pitch angle}\\ \text{base height}\\ \text{swing foot }x\\ \text{swing foot }y\\ \text{swing foot }z\\ \text{swing foot pitch}\end{array}\right) \tag{19}\]

This selection of outputs is common in the field of bipedal locomotion. The first five outputs (\(q_{\phi},h,p_{sw}^{x},p_{sw}^{y},p_{sw}^{z}\)) are valid for both underactuated and fully actuated robots. However, for an underactuated robot, it is not possible to control the horizontal position of the robot's base or CoM. Therefore, the evolution of the base velocity is indirectly controlled by the HL learned policy through the planning of the touchdown position. In the case of fully actuated robots, we also consider the sixth output, which controls the swing foot pitch angle so that the foot stays parallel to the walking surface. This contributes to reducing disturbances at the touchdown event. Although we could add an additional output to control the horizontal position and velocity of the robot's base, we prefer to rely on the HL planning to control the robot's speed indirectly. The objectives of this design choice are twofold: i) to devise a general framework for both underactuated and fully actuated robots that shares the same structure for the HL policy independently of the particular design of the bipedal robot, and ii) to simplify the design of the controller and avoid the limitations of the ankle torque when controlling the robot's base position. Although this effect may not be significant for quasi-static locomotion gaits, it does matter when realizing agile and dynamic locomotion.

## V Simulation results

In this section, we show the performance of the learned HL policy under different testing scenarios with three different robot models, including Rabbit, Walker2D, and Digit. Moreover, we analyze the contribution of the HL policy to the robustness of the walking gait, and we compare our method with similar model-based and model-free approaches.

### _Speed tracking for different velocity profiles._

We test the learned policy for tracking a velocity profile in different directions. Fig. 5 shows the velocity tracking performance of the learned HL policy for the robots Rabbit and Walker2D. To evaluate the robustness of the policy under different LL controllers, we test the same policy with the ID-QP controller and the FL controller with different tracking gains. The results show the policy effectively tracks walking speeds in the range \([-1,1]\) m/s, even with aggressive changes in the velocity profile. To highlight the contribution of the choice of action space, we present in Fig. 6 the actions computed by the HL policy for different commanded velocities. When a steep change in the desired velocity is commanded, the policy uses the torso orientation to compensate for variations in the robot's speed and angular momentum.
We note that these behaviors are not enforced during the training process but arise naturally from the insightful design of the proposed framework. Interestingly, some of these strategies are also observed in human locomotion [23].

Fig. 5: Velocity tracking performance of the learned policy with different LL controllers. The prefix R/W is used to differentiate a policy for Rabbit or Walker2D.

Fig. 6: Contribution of the policy actions for different speeds.

### _Comparison with ALIP model-based approach_

To assess the advantages of the proposed HL planner with respect to pure model-based approaches, we compare the performance of our learning-based controller and the ALIP-based controller. Fig. 7 (top) shows that the HL-RL policy (ours) outperforms the model-based controller in tracking a velocity profile, especially for high speeds. We also show the variation of \(L_{\text{CoM}}\) (bottom) to illustrate the trade-off the policy learns between minimizing \(L_{\text{CoM}}\) and tracking \(v_{x}^{d}\). For small speeds, \(L_{\text{CoM}}\) is quite similar for both controllers. For high speeds, the policy learns to prioritize speed tracking over minimizing \(L_{\text{CoM}}\). This behavior is encouraged by the selection of weights in the reward function (14).

### _Comparison with other RL-based approaches_

To highlight the contribution of the proposed framework in terms of speed tracking for 3D bipedal robots and sample efficiency, we compare our method with our previous work in [15] and the end-to-end learning approach presented in [9]. Although there are no end-to-end learning approaches implemented on the Digit robot in the literature, there are several works that have done so with the robot Cassie, which shares the same leg morphology as Digit. Therefore, we choose to implement the method in [9] with Digit as it focuses on speed tracking, which makes it more comparable to our method. Moreover, the framework in [9] is the base for several SOTA end-to-end learning approaches for bipedal locomotion [14, 24, 25]. Finally, the code implementation for [9] is publicly available online, which makes the comparison as fair as possible in terms of the reproducibility of their work. In Fig. 8, we present the comparison results for speed tracking on flat ground with the robot Digit using our learned HL policy, the HZDRL controller in [15], and the end-to-end RL policy in [9]. We observe that the HZDRL controller fails, i.e., the robot falls, for speeds higher than 0.5 m/s, while the end-to-end RL controller fails to track low speeds accurately and realizes a non-smooth walking motion that causes higher variance in the speed profile. This effect may be caused by the reference trajectory used to guide the learning. In our implementation of [9], we use a reference trajectory corresponding to Digit walking forward at a speed of 0.8 m/s. For more details, the reader can refer to [9]. Our learned policy can successfully track the desired walking speed in a significantly wider range compared with the other methods. Additional testing with Digit demonstrates our controller can handle desired walking speeds in the range \(v_{x}\in[-1.0,1.5]\) m/s and \(v_{y}\in[-0.5,0.5]\) m/s, including combinations of both, i.e., diagonal walking, as can be seen in the accompanying video submission. In terms of sample efficiency, Fig. 9 shows our approach uses fewer data samples to successfully train a robust policy when compared with the end-to-end learning approach.
Although this is not a surprising result, as several authors have studied the effect of task-space learning on sample efficiency [14, 26], to the best of our knowledge our method is the only task-space approach that learns to walk from scratch and has been successfully implemented on 3D bipedal robots without the need for previously computed reference trajectories, e.g., the gait library used in [14].

### _Robustness on challenging terrain_

We test our controller on terrains with slopes up to 20 degrees for Rabbit and Walker2D and 10 degrees for Digit. Fig. 10 shows a grid plot with the Root Mean Squared Error between \(\bar{v}_{x}\) and \(v_{x}^{d}\) for Digit with \(v_{x}^{d}\in[-1.0,1.0]\) m/s, \(v_{y}^{d}\in[-0.5,0.5]\) m/s, and \(\alpha\in[-10,10]\) degrees. We note that during training we only use \(\alpha\in[0,10]\). Yet, we test the policy on an extended range of slopes to highlight the robustness and interpolation capability of the policy in scenarios not seen during training. In this scenario, we define the robustness of the policy by its capability to keep track of \(v_{x}^{d},v_{y}^{d}\) under challenging conditions. The results show that the learned policy keeps good tracking of \(v_{x}^{d}\) in almost all conditions, except for combinations where the robot is walking backward at high speed on very steep terrain. A higher error in the tracking of \(v_{y}^{d}\) is caused by the higher variance of the instantaneous velocity along the \(y\) axis, which is expected in bipedal locomotion as the robot's torso tends to oscillate more about the sagittal plane. We also notice there is a higher error when tracking positive speeds along the \(y\) axis. This effect may be caused by a bias created during training by exposing the policy to more samples with negative \(v_{y}\) commands.

Fig. 7: Comparison of HL-RL with ALIP-based controller.

Fig. 8: Comparison of speed tracking performance with SOTA RL-based approaches.

Fig. 9: Comparison of sample efficiency between traditional end-to-end RL and our proposed method. The higher variance in the reward is caused by the effect of the exploration noise in the task-space actions, which allows the policy to explore a more diverse set of behaviors during training.

Fig. 10: Robustness of the learned policy under different conditions of terrain slope and target speed.

### _Robustness against disturbances._

The trained policy is also subjected to extensive tests against external disturbances \(F_{x}\in[-80,60]\) N applied in both forward and backward directions with duration \(t\in[0.15,3]\) s. Fig. 11 shows the policy reacts effectively to all the applied disturbances without falling while maintaining tracking of the desired walking speeds. In some scenarios, the policy uses interesting combinations of the task space outputs to realize effective strategies to reject the disturbances, e.g., bending forward/backward to absorb the impact. The results are similar for Walker2D and Digit. More details can be seen in the accompanying video submission.

## VI Conclusion

In this work, we present a simple and effective learning-based hierarchical approach to realize robust locomotion controllers for bipedal robots. The design of the HL policy is inspired by the reduced-order state of the ALIP model and a set of task-space commands that includes the step length, torso orientation, and height. This insightful choice of state and action spaces results in a compact policy that learns effective strategies for robust and dynamic locomotion.
We show that the HL planner is agnostic to the choice of LL controller and that the approach applies to both underactuated and fully actuated 2D and 3D robots. Finally, we show that the learned policy tracks a wide range of speeds even under challenging conditions, such as external forces and slopes of up to 20 degrees. Future work will focus on hardware experiments with the robot Digit and on extending the proposed hierarchical framework to integrate different behaviors such as balancing, climbing stairs, and walking over stepping stones.
2309.05889
Systemization of Knowledge (SoK)- Cross Impact of Transfer Learning in Cybersecurity: Offensive, Defensive and Threat Intelligence Perspectives
Recent literature highlights a significant cross-impact between transfer learning and cybersecurity. Many studies have been conducted on using transfer learning to enhance security, leading to various applications in different cybersecurity tasks. However, previous research is focused on specific areas of cybersecurity. This paper presents a comprehensive survey of transfer learning applications in cybersecurity by covering a wide range of domains, identifying current trends, and shedding light on under-explored areas. The survey highlights the significance of transfer learning in addressing critical issues in cybersecurity, such as improving detection accuracy, reducing training time, handling data imbalance, and enhancing privacy preservation. Additional insights are provided on the common problems solved using transfer learning, such as the lack of labeled data, different data distributions, and privacy concerns. The paper identifies future research directions and challenges that require community attention, including the need for privacy-preserving models, automatic tools for knowledge transfer, metrics for measuring domain relatedness, and enhanced privacy preservation mechanisms. The insights and roadmap presented in this paper will guide researchers in further advancing transfer learning in cybersecurity, fostering the development of robust and efficient cybersecurity systems to counter emerging threats and protect sensitive information. To the best of our knowledge, this paper is the first of its kind to present a comprehensive taxonomy of all areas of cybersecurity that benefited from transfer learning and propose a detailed future roadmap to shape the possible research direction in this area.
Sofiya Makar, Ali Dehghantanha, Fattane Zarrinkalam, Gautam Srivastava, Abbas Yazdinejad
2023-09-12T00:26:38Z
http://arxiv.org/abs/2309.05889v1
###### Abstract

Recent literature highlights a significant cross-impact between transfer learning and cybersecurity. Many studies have been conducted on using transfer learning to enhance security, leading to various applications in different cybersecurity tasks. However, previous research is focused on specific areas of cybersecurity. This paper presents a comprehensive survey of transfer learning applications in cybersecurity by covering a wide range of domains, identifying current trends, and shedding light on under-explored areas. The survey highlights the significance of transfer learning in addressing critical issues in cybersecurity, such as improving detection accuracy, reducing training time, handling data imbalance, and enhancing privacy preservation. Additional insights are provided on the common problems solved using transfer learning, such as the lack of labelled data, different data distributions, and privacy concerns. The paper identifies future research directions and challenges that require community attention, including the need for privacy-preserving models, automatic tools for knowledge transfer, metrics for measuring domain relatedness, and enhanced privacy preservation mechanisms. The insights and roadmap presented in this paper will guide researchers in further advancing transfer learning in cybersecurity, fostering the development of robust and efficient cybersecurity systems to counter emerging threats and protect sensitive information. To the best of our knowledge, this paper is the first of its kind to present a comprehensive taxonomy of all areas of cybersecurity that benefited from transfer learning and propose a detailed future roadmap to shape the possible research direction in this area.

## 1 Introduction

Transfer learning (TL) is a popular Machine Learning (ML) method to reuse a pre-trained model's knowledge in a new model, reducing training time. It is widely studied and applied in various fields such as mechanical engineering, medicine, molecular biology, Industrial Internet of Things (IIoT), robotics, and cloud computing. ML is the ability of a computer to learn from data without explicit programming. It utilizes various learning techniques such as multi-task, active, and online learning. One of these techniques is TL. Transfer learning aims to enhance conventional machine learning methods by leveraging knowledge acquired from one or multiple source tasks and applying it to improve learning in a related target task [1]. Thus, TL greatly reduces the training time and the data required for efficient training. The formal definition of TL is as follows. Given some observations from the source domain(s) and their corresponding task(s), and some observations from the target domain(s) and their corresponding task(s), TL applies the knowledge from the source domain(s) to increase the performance of the decision functions of the target domain(s). A domain \(D\) is defined in Eq. 1 as a set of inputs \(X\) from the corresponding feature space \(\mathcal{X}\) and the marginal probability distribution \(P(X)\). \[D=(X,P(X)) \tag{1}\] A task \(T\) is defined in Eq. 2 as labels \(Y\) from the corresponding label space \(\mathcal{Y}\) and the decision function \(f\), which is learned by a model from data. \[T=(Y,f) \tag{2}\] Many models output the predicted conditional probability distributions of inputs. Therefore, the decision function is usually defined as in Eq. 3. \[f(X)=P(Y|X) \tag{3}\] Figure 1 provides a visualization of the TL process defined above.
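To make the definitions above concrete, the following minimal PyTorch sketch illustrates a common instance of TL: a feature extractor pre-trained on a source domain (here, ImageNet) is reused, and only a new decision function \(f(X)=P(Y|X)\) (the final layer) is fitted on the target task. The dataset, class count, and training details are illustrative placeholders and do not refer to any specific study surveyed below.

```python
import torch
import torch.nn as nn
import torchvision

# Source knowledge: a CNN pre-trained on ImageNet (source domain and task).
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# Freeze the transferred layers so only the new decision function is learned.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a new decision function for a hypothetical
# binary target task (e.g., benign vs. malicious samples).
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on target-domain data; only the new head is updated."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with dummy target-domain data (batch of 4 RGB images, 224x224).
dummy_images = torch.randn(4, 3, 224, 224)
dummy_labels = torch.randint(0, 2, (4,))
print(train_step(dummy_images, dummy_labels))
```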
The recent literature highlights a significant cross-impact between TL and cybersecurity [7, 8]. On the one hand, TL can be implemented to improve the performance of various cybersecurity tasks. On the other hand, security methods can be integrated with TL to improve the security of TL-based models. A thorough survey of TL and cybersecurity can drive further research by highlighting current trends and under-explored areas. This paper first examines prior surveys, highlighting their limitations and motivating the need for a new survey. Next, we review the literature to answer the following critical research questions:

* **RQ1**: What are the main applications and the state of the art of TL in cybersecurity (Section 3)?
* **RQ2**: What are the unexplored areas and possible future research directions for TL in cybersecurity (Section 4)?
* **RQ3**: What are the specific benefits of TL in the field of cybersecurity, and how do they contribute to enhancing the performance, efficiency, and resilience of cybersecurity systems (Section 5)?

To address these questions, we conduct a comprehensive survey of TL approaches applied in various cybersecurity fields. Such a detailed and specific survey can pave the way for further study in this field and greatly assist researchers. The literature already contains several surveys on TL and its implementations in different security domains.

Figure 1: Definition of TL

| Survey Paper | Year | Pros | Cons |
| --- | --- | --- | --- |
| [14, 2] | 2022 | A detailed survey on one of the application areas of TL in security; an overview of possible future challenges is provided. | Very narrow focus; the paper only considers the implementation of TL in the fault diagnosis area. |
| [22] | 2021 | The authors constructed a taxonomy of TL methods in the intelligent fault diagnosis field and discuss the numerous challenges in this area; a practical analysis was performed; a framework is proposed to address challenges and inspire future research. | Very narrow focus; the paper only considers the implementation of TL in the fault diagnosis area; no conclusive future roadmap is provided. |
| [19] | 2021 | A detailed survey on privacy-preserving federated learning, including a scheme for privacy-preserving FL. | The survey is not centered around TL; federated transfer learning is only one of the classes of TL it covers. |
| [20] | 2021 | The work studies the privacy aspects of TL, including a future roadmap. | The survey is not centered around TL; federated transfer learning is only one of the classes of TL it covers. |
| [23] | 2021 | The survey explores the TL methods developed to improve the performance of self-interested agents, including a taxonomy. | Secure TL is not the primary focus of the survey; no future roadmap is provided. |

Table 1: Summary of Survey Papers

The benefits and drawbacks of the selected survey papers are summarized in Table 1 to compare with our work. As shown in the table, our paper is the first work focusing on all implementations of TL in the field of computer security. It presents the related taxonomies as well as a future roadmap. We will examine various studies in the field to answer the research questions in the following sections.
We summarize our main contributions as follows:

* This paper is a first-of-its-kind literature review on the implementations of TL in the computer security area. By examining a wide range of applications, it offers a holistic view of the potential of TL in addressing cybersecurity challenges.
* Building upon the analysis of existing research, this paper proposes a comprehensive taxonomy for TL in cybersecurity. The taxonomy categorizes TL applications based on the specific cybersecurity domains and subdomains, offering a structured framework for understanding and classifying TL approaches in cybersecurity.
* In addition to the analysis and taxonomy, this paper provides additional insights into the benefits and challenges of TL in cybersecurity. It highlights the advantages of TL and identifies its challenges. These insights contribute to a deeper understanding of the implications and potential future directions of TL in cybersecurity.
* Lastly, this paper outlines future research directions and challenges that require the attention of the cybersecurity community by highlighting the unexplored areas of TL in cybersecurity.

The remainder of this paper is structured as follows: Section 2 details the methodology used in this work, Section 3 reviews the state-of-the-art literature and develops the taxonomy of the TL-cybersecurity cross-impact, Section 4 outlines the future roadmap, and Section 5 concludes the paper.

## 2 Methodology

We have conducted a systematic literature review to develop a comprehensive overview of the research field that studies the cross-impact of TL and computer security. We performed our search by combining various keywords and inputting the resulting query into the search engine of selected digital libraries. We chose the following platforms for computer science publications: IEEE Xplore, ACM Digital Library, SpringerLink, ScienceDirect, and Google Scholar. We searched the publications based on the following keywords: ("transfer learning" AND ("malware" OR "security" OR "privacy" OR "attacks" OR "cybersecurity")). The searches were conducted on multiple occasions during the period from October 2021 to January 2022. We identified 143 papers published during or before this period that met the keyword criteria. Next, we filtered the obtained publications based on the following selection criteria: the paper must focus on both TL and a field of cybersecurity; it must be written after 2018 and published in English; and it must not be a pre-print. The research area of TL is relatively new, and only a limited number of publications are available for review, especially after filtering out all security-unrelated works. This fact stimulated us to relax most of the exclusion criteria. Out of the 143 publications yielded by the primary search, we removed 65 by applying the selection criteria, leaving us with 78 papers to review. Our search encompassed the following libraries: IEEE (39 papers), ACM (10 papers), ResearchGate (9 papers), MDPI (5 papers), Science Direct (4 papers), Springer (3 papers), and Elsevier (1 paper). Furthermore, we examined the publication year of the collected papers. Our analysis revealed the following distribution: 2016 (1 paper), 2017 (5 papers), 2018 (10 papers), 2019 (9 papers), 2020 (23 papers), 2021 (26 papers), 2022 (3 papers), and 2023 (1 paper).
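As an illustration of the filtering step described above, the sketch below applies the selection criteria to a list of candidate records. The record fields and helper names are hypothetical and only meant to make the procedure explicit; they are not the tooling actually used for the survey.

```python
from dataclasses import dataclass

@dataclass
class PaperRecord:
    # Hypothetical fields for a retrieved publication.
    title: str
    year: int
    language: str
    is_preprint: bool
    topics: set  # e.g., {"transfer learning", "malware"}

def meets_selection_criteria(paper: PaperRecord) -> bool:
    """Apply the stated selection criteria to a single record."""
    security_fields = {"malware", "security", "privacy", "attacks", "cybersecurity"}
    return (
        "transfer learning" in paper.topics
        and bool(paper.topics & security_fields)
        and paper.year > 2018
        and paper.language == "English"
        and not paper.is_preprint
    )

candidates = [
    PaperRecord("TL for intrusion detection", 2021, "English", False,
                {"transfer learning", "security"}),
    PaperRecord("A TL pre-print", 2022, "English", True,
                {"transfer learning", "malware"}),
]
selected = [p for p in candidates if meets_selection_criteria(p)]
print(len(selected))  # -> 1
```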
## 3 State-of-the-Art

This section provides an in-depth exploration of the current advancements and practical implementations of TL in the field of cybersecurity. The literature has been systematically organized into seven primary application categories, forming the foundational layer of the proposed taxonomy (Fig. 2). The literature has been further segmented through rigorous analysis into subcategories, constituting the second layer of the taxonomy. Notably, a distinct emphasis on malware analysis is observed, prompting the identification of commonalities among the relevant studies and the subsequent development of a supplementary third layer within the taxonomy.

### Privacy

Privacy is a paramount concern in today's highly digitalized world, and ensuring the protection of sensitive information is crucial for any organization. This section explores the applications of TL in privacy-related aspects of cybersecurity. Neural networks (NN) pose privacy challenges, particularly in protecting training datasets that may contain sensitive information. Jin _et al._ addressed this challenge by incorporating Fully Homomorphic Encryption (FHE) in the NN training framework [9]. FHE allows computation on encrypted data without access to the decryption key. Previous work on FHE has been largely limited to trivial applications since FHE requires vast computational resources. The authors proposed a new framework that efficiently implements the Gradient Descent algorithm in the encrypted domain, supporting multi-class classification networks with double-precision floating-point weights. By applying TL, the authors extended the functionality of their framework so that it can be applied to complicated real-world problems. Hu _et al._ identified the problem of privacy leakage in TL for recommendation systems, where existing research primarily focuses on improving target domain performance while neglecting the privacy of the source domain [10]. To address this issue, the authors proposed a privacy-aware neural representation called PrivNet. By simulating attacks during training and utilizing an adversarial game framework, PrivNet effectively protected the privacy of unseen users while enhancing the target performance. Experimental results demonstrated the successful disentanglement of knowledge beneficial for transfer without compromising privacy.

#### 3.1.1 Federated Transfer Learning

FTL is a technique that combines two important concepts: federated learning (FL) and TL. It aims to address the challenge of integrating and utilizing data from multiple organizations or domains while preserving privacy and improving the performance of machine learning models. In traditional TL, knowledge learned from a source domain is transferred to a target domain to improve the performance of the target model. However, FTL extends this concept by considering a scenario where data is distributed across multiple organizations or domains [11]. Each organization retains control over its data and does not share it directly with others due to privacy concerns and legal constraints. FTL was first introduced by Liu _et al._ in 2018 [12]. The authors aimed to improve the data integration process across different organizations for machine learning. They developed FTL to enhance statistical modelling in a data federation setting. The proposed framework requires minimal modifications to existing model structures and achieves the same level of accuracy as non-privacy-preserving TL.
It offers flexibility and can be effectively adapted to various secure multiparty machine learning tasks.

Figure 2: Taxonomy of the implementation of TL in cybersecurity

A year later, Sharma _et al._ noticed that the previous implementation suffered from excessive computational overhead, rendering it impractical [13]. To enhance efficiency and security, the authors incorporated Secret Sharing (SS) into the FTL model and extended it to handle malicious players who may deviate from the protocol. By utilizing the multi-party computation SPDZ protocol, their model achieved significant improvements in runtime and communication cost compared to previous work, reducing the execution time from 35 seconds to 0.8 seconds (semi-honest case) and 1.4 seconds (malicious case) for 500 samples. In the meantime, Gao _et al._[14] considered a different way to improve FTL. The authors addressed the limitations of existing FL approaches in handling covariate shift and feature heterogeneity without compromising privacy. They proposed a TL approach called Heterogeneous Federated Transfer Learning (HFTL) to address these challenges while preserving stringent privacy. The HFTL framework incorporates privacy-preserving multi-party learning using homomorphic encryption and secret-sharing techniques. Experimental results demonstrate the security, effectiveness, and scalability of HFTL on benchmark datasets, and its application to in-hospital mortality prediction from the MIMIC-III dataset, highlighting its significance in privacy-sensitive scenarios. Wang _et al._ proposed the feature-based federated transfer learning (FbFTL) method as an innovative approach for improving the communication efficiency of federated learning via wireless links while preserving client privacy [15]. FbFTL reduces the uplink payload by over five orders of magnitude compared to existing approaches by uploading extracted features and outputs instead of parameter updates. The system design and learning algorithm are described, and a random shuffling scheme is analyzed for privacy preservation. Experimental results demonstrate the effectiveness of FbFTL in significantly reducing the uplink payload and ensuring privacy preservation, making it a novel, communication-efficient, and privacy-preserving FTL scheme. FTL has garnered considerable interest within the academic research community, finding applications across diverse security domains including fault diagnosis (FD), image steganalysis, network analysis, and traffic classification. The subsequent sections will provide comprehensive discussions on selected implementations of FTL in these domains [16]. Overall, the ongoing advancements in privacy-preserving TL methodologies offer promising solutions for addressing privacy concerns and facilitating the development of secure and effective machine learning systems. However, further research is needed to enhance privacy preservation, improve model robustness, and explore the applicability of these techniques to various real-world scenarios.

### Authentication

Authentication is pivotal in ensuring secure access and identity verification in various domains, from online services to physical premises [17]. With the increasing reliance on digital systems and the proliferation of sensitive data, robust and reliable authentication mechanisms have become paramount in mitigating unauthorized access and protecting user privacy.
This section delves into the realm of authentication in the context of TL, exploring innovative approaches, challenges, and advancements that leverage TL techniques to enhance authentication systems. We will delve into different aspects of authentication, including TL for improving online authentication systems, securing authentication in the context of the IIoT, detecting signature forgery, and enhancing human biometrics authentication. Chen _et al._ presented TL-PHA (TL-based physical-layer authentication), a novel physical-layer authentication scheme designed for fast online user authentication in latency-sensitive applications [18]. TL-PHA utilizes the triple-pool network, a lightweight CNN architecture, for efficient online classification. Additionally, effective data augmentation methods are incorporated to enhance the training dataset. Experimental results, using both simulated channel data and real experiment data, demonstrate the superiority of TL-PHA in terms of authentication accuracy, detection rate, and training complexity compared to alternative approaches. The proposed TL-PHA scheme represents a significant advancement as the first physical-layer authentication scheme for latency-sensitive applications that employs TL. Authentication plays a crucial role in ensuring secure and reliable communication within the IIoT ecosystem [19]. With the rapid growth of IIoT applications, ensuring the authenticity and integrity of user identities has become paramount. Wang _et al._ addressed the data security and privacy challenges in IIoT applications by proposing a novel authentication mechanism based on TL-empowered Blockchain (ATLB) [20]. The proposed ATLB combines blockchain technology for privacy preservation and TL for efficient user authentication. Different blockchains are introduced to counter collusion and Sybil attacks, while user authentication accuracy is enhanced through credit-based authentication mechanisms and a guiding network-based deep deterministic policy gradient algorithm. Additionally, TL is applied to reduce training time and enable trustworthy blockchains. Experimental results demonstrate that ATLB provides accurate authentications and achieves high throughput and low latency in various IIoT scenarios. Similarly, Arumugam _et al._ discussed the challenges related to data security and privacy during the collection of real-time and automatic data from observing applications in IIoT [21]. The authors introduced an FTL model for authentication and privacy preservation using the novel supportive twin delayed DDPG (S-TD3) algorithm. The proposed approach utilizes the FTL blockchain to preserve privacy and security in industrial applications. The authentication process is based on user credit, and the S-TD3 algorithm trains local authentication models with high accuracy. TL is employed to reduce authentication model training time by transferring models from the local level to foreign users using the outer blockchains. The experimental results demonstrate accurate authentication for local and foreign users, as well as higher throughput and low latency in various IIoT scenarios. Signature forgery poses a significant threat to the integrity and security of authentication systems. It involves the act of falsely imitating an individual's signature to gain unauthorized access or manipulate sensitive information. Detecting signature forgery is crucial in ensuring the reliability and authenticity of signatures used for verification and legal purposes.
Manikantha _et al._ conducted research focused on signature forgery detection using deep learning models and Siamese architecture [22]. The paper presents a comparative study of various deep learning models using the Siamese architecture for detecting signature forgery. TL was adopted to implement base twin networks in the Siamese NN. The results demonstrate that the VGG16 model using Euclidean distance and Gaussian Naive Bayes classifier achieves a maximum accuracy of 100% on the CEDAR and Kaggle datasets (the smallest datasets). ResNet50 achieves the highest accuracy of 98.29% for detecting forgery in Chinese signatures from the ICDAR 2011 SigComp dataset. Other models like MobileNetV2 and DenseNet121 also exhibit high accuracies for specific datasets. Human biometrics, such as fingerprints, facial features, iris patterns, and voice recognition, offer a promising approach to authentication due to their unique and intrinsic characteristics. Li _et al._ focused on improving face recognition performance by leveraging deep learning techniques and TL [23]. Traditional face recognition methods often struggle with uncontrolled environmental factors and limited training samples. The authors propose a deep CNN combined with TL and sparse representation to address these challenges. The proposed method, named T-DFE (TL-based Deep Feature Extraction), aims to overcome the limitations of traditional CNNs on small sample tasks while simplifying computational complexity. The approach involves training a CNN on augmented databases and applying TL to recognize faces with limited training samples. Experimental results using different databases demonstrate the superiority of the proposed method compared to traditional approaches. T-DFE achieves higher recognition rates in various scenarios, such as ORL, IMM, and AR databases, outperforming methods like Local Binary Pattern. On a similar note, Bonazza _et al._ aimed to compare classical ML and TL approaches for low-cost real-time face authentication [24]. The authors highlight the importance of minimizing the size of biometric data in an access control context, allowing storage on remote personal media. To meet these constraints, the study focuses on lightweight versions of algorithms. The experiments compare various methods, including Random Forest, Support Vector Machines, and TL using MobileNet v1 and v2 for face authentication. The evaluation considers authentication accuracy, storage size, and training and prediction computation times. The authors found that TL allows for improved accuracy in face authentication tasks, but the resulting networks are more time-consuming and bulkier compared to classical machine learning methods. Random Forest and Support Vector Machines fulfill real-time constraints and can be stored on a remote card, making them suitable for the given constraints. Salem _et al._ presented DeepZeroID, a privacy-preserving cloud-based biometric verification system that utilizes homomorphic encryption and TL [25]. The system addresses the vulnerability of storing sensitive biometric data on the cloud by encrypting the data and performing computations on the encrypted form. A pre-trained deep neural network is used as a feature extractor, eliminating the need to train on sensitive data and reducing the risk of information leakage. Experimental results show that DeepZeroID achieves a verification F1 score of 95.47% when verifying the combined features of iris and fingerprint inputs with zero false positives. 
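A pattern common to several of the biometric systems above is to reuse a pre-trained deep network purely as a feature extractor and perform verification by comparing embeddings. The sketch below illustrates this generic pattern; the model choice, threshold, and preprocessing are our own assumptions for illustration and do not reproduce any of the cited systems.

```python
import torch
import torchvision

# Pre-trained backbone reused as a frozen feature extractor (no training on
# the sensitive biometric data itself).
backbone = torchvision.models.mobilenet_v2(weights="IMAGENET1K_V1")
backbone.classifier = torch.nn.Identity()  # keep only the pooled embedding
backbone.eval()

@torch.no_grad()
def embed(image_batch):
    """Map preprocessed face crops (N, 3, 224, 224) to unit-norm embeddings."""
    features = backbone(image_batch)
    return torch.nn.functional.normalize(features, dim=1)

def verify(enrolled_image, probe_image, threshold=0.7):
    """Accept the probe if its embedding is close enough to the enrolled one.
    The threshold is an illustrative value that would be tuned on validation data."""
    e1, e2 = embed(enrolled_image), embed(probe_image)
    similarity = torch.nn.functional.cosine_similarity(e1, e2).item()
    return similarity >= threshold, similarity

# Example with random tensors standing in for preprocessed face crops.
enrolled = torch.randn(1, 3, 224, 224)
probe = torch.randn(1, 3, 224, 224)
accepted, score = verify(enrolled, probe)
```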
Salunke _et al._ focused on the application of continuous user authentication using keystrokes and mouse movement behavioural patterns [26]. Traditional methods of authentication, such as passwords and biometrics, only provide static authentication. Continuous authentication, on the other hand, verifies the user's identity on an ongoing basis by analyzing their unique behavioural patterns. The study addresses the challenge of gathering sufficient data to establish a user's behavioural pattern, which can delay the implementation of continuous authentication for new users. To overcome this issue, TL is employed with a feed-forward neural network model. TL allows the model to leverage knowledge from previous learning tasks and improve accuracy with less data. The results showed that the TL model achieved 9.76% higher accuracy than the model trained without any previous learning. This indicates that TL can enhance accuracy even with limited data, which helps expedite the onboarding process for new users. The advancements in technology, particularly in machine learning and deep learning [27], have paved the way for innovative approaches to authentication, addressing the limitations of traditional methods [28]. TL emerged as a powerful technique for improving authentication accuracy with limited data and reducing time resources. The combination of TL and privacy-preserving techniques showcased the potential for developing secure and efficient authentication systems. By utilizing pre-trained neural networks as feature extractors and applying encryption methods, it was possible to achieve high accuracy in biometric verification while preserving user privacy. ### Forensics Digital forensics plays a critical role in investigating and analyzing digital evidence to uncover cybercrimes and support legal proceedings [29, 30]. With the proliferation of digital devices and the increasing complexity of cyber threats, the field of digital forensics faces new challenges in efficiently and accurately identifying and extracting relevant information from various types of data. By leveraging pre-existing models and knowledge, TL can address data scarcity and improve the accuracy and efficiency of forensic analysis tasks. In the context of digital forensics, TL finds applications in various areas, including text steganalysis and image forensics. Text steganalysis involves detecting hidden information within textual data, such as hidden messages or encrypted content. The use of Deep Neural Networks (DNNs) in text steganalysis has shown promising results; however, the increased complexity of these models leads to longer inference times, limiting their practicality. To address this issue, Peng _et al._ proposed a text steganalysis method based on multi-stage TL that enhances both inference efficiency and detection performance [31]. Experimental results demonstrate that the proposed method outperforms existing DNNs-based steganalysis methods in terms of detection accuracy and inference efficiency. Image steganalysis focuses on identifying hidden information within image files, often used for covert communication or data concealment. In image steganalysis, traditional deep-learning models require a large, diverse, and high-quality dataset for training. TL addresses the limitations of dataset quality, variety, and quantity. Ozcan _et al._ proposed a novel approach to increase the success rate and decrease the error rate in detecting stego and cover images [32]. 
The authors compare two series of models trained with and without TL. The experiments are conducted on two advanced steganography algorithms, HUGO and WOW, with varying payload rates. The results demonstrate that the TL-applied model outperforms the normal trained model in detecting stego images. TL allows the model to learn the fundamentals of the dataset and adapt more quickly to the operations of steganography. It improves the success rate and detection performance of the model, particularly on challenging datasets with lower payload rates. Similarly, Yang _et al._ introduced FedSteg, which employs FTL to train a secure and personalized distributed model for secure image steganalysis [33]. FedSteg addresses the challenges of aggregating scattered steganographic images and preventing the leakage of private data. It enables participants to collaborate and train a general model without sharing raw data, thus preserving user privacy. Through extensive experiments on state-of-the-art steganographic methods, FedSteg demonstrates improved performance compared to traditional non-federated steganalysis approaches. Overall, TL plays a significant role in advancing the capabilities of digital forensics by enabling the development of robust and accurate forensic models with reduced data requirements and computational overhead. It empowers forensic experts to address complex challenges in the analysis of digital multimedia content and contributes to the ongoing evolution of the field [34]. ### Cyber Threat Intelligence Cyber Threat Intelligence (CTI) is a process of locating, collecting, and analyzing data to gain knowledge about potential threats to a given system [35]. In the realm of CTI, TL has emerged as a powerful approach to enhance the effectiveness of various applications. TL offers a means to leverage knowledge and expertise obtained from existing models or domains and apply it to CTI tasks such as threat hunting, intrusion detection, analyzing the source code of public adversaries, and predicting software vulnerabilities [36]. The active search for potential threats in the network is called threat hunting. It is used to dig up the hidden adversaries that have penetrated the layer of passive defence in the organization. Activity Recognition (AR) plays a crucial role in threat hunting by enabling the identification of suspicious or malicious behaviours and activities within a system or network. Using pre-trained frameworks for AR models yields poor performance due to the significant difference between activity environments. Khan _et al._ proposed UnTran a framework that utilizes TL to generate a common feature space for both the source and target domains, enabling the recognition of unseen activities with limited labelled data in the target domain [37]. The framework combines the knowledge from a pre-trained activity model in the source domain with activity models based on raw and deep features in the target domain. The authors evaluate the UnTran framework on three real-world datasets and demonstrate its effectiveness in recognizing both seen and unseen activities in the presence of limited labelled data and imbalanced class distributions. Organizations use different intrusion detection systems to detect suspicious activities in their environment [38, 39]. ML methods have been proposed for intrusion detection in Internet-of-Things (IoT), but they often require long training times and need to learn new models from scratch when the environment changes. 
Yilmaz _et al._ addressed these issues by introducing a TL-based algorithm for intrusion detection in IoT [40]. TL is employed in two settings: transferring knowledge to generate suitable intrusion algorithms for new devices and transferring knowledge to detect new types of attacks. The results show that the TL approach outperforms traditional learning in detecting new attacks in terms of accuracy. The approach achieves better performance in various attack cases in different scenarios. Controller Area Network (CAN) bus is a standard system for in-vehicle communications. However, the CAN bus is susceptible to network-level attacks, and new types of intrusion attacks are constantly being discovered. Developing an efficient deep neural network-based detection mechanism for these attacks can be challenging without a large amount of intrusion data [41]. To address this challenge, Tariq _et al._ proposed CANTransfer, an intrusion detection method for the CAN bus that utilizes TL [42]. The authors train a Convolutional Long Short Term Memory (LSTM) based model using known intrusion data to detect new attacks. By applying one-shot learning, the model can adapt to detect new intrusions with a limited amount of new datasets. The article presents extensive experimentation with CAN datasets collected from two different vehicles, KIA Soul and Hyundai Sonata. The proposed CANTransfer method achieves a performance gain of 26.60% over the best baseline model for detecting new intrusions. As a tool to aid in CTI, a framework for automatically identifying and analyzing the source code from public adversaries' forums was developed by Ampel _et al._[43]. The study focuses on analyzing source code snippets posted on hacker forums, which are often noisy and unlabeled. To address this challenge, the authors propose a deep TL framework called Deep Transfer Learning for Exploit Labeling (DTL-EL) to collect and categorize hacker forum exploit source code. The DTL-EL framework leverages the learned representation from professionally labelled exploits to generalize better to hacker forum exploits. It classifies collected hacker forum exploits into eight predefined categories for proactive CTI. The performance of DTL-EL is compared to other models in the hacker forum literature. The authors concluded that DTL-EL's transferred layers better generalize to hacker forum source code compared to other deep learning and classical learning approaches. Yin _et al._ discussed the importance of predicting the exploitability of software vulnerabilities [44]. While vulnerability descriptions contain rich semantic information, the size of the vulnerability description corpus is often too small to train comprehensive Natural Language Processing models. To address this limitation, the paper proposes a framework called ExBERT for accurate exploitability prediction. ExBERT is an improved version of the BERT model that involves fine-tuning a pre-trained BERT model using a domain-specific corpus of vulnerability descriptions. The TL approach captures the semantic information in vulnerability descriptions more effectively than other feature extraction methods. The experimental results demonstrate that ExBERT achieves state-of-the-art performance in exploitability prediction, outperforming other approaches in accuracy, F1 score, precision, and recall. Detecting threats in non-English Dark Net Markets (DNM), particularly in countries like Russia and China, is crucial due to the significant presence of cybercriminals in these regions. 
It allows for better monitoring and understanding of cybersecurity risks originating from these countries and enables effective countermeasures. Ebrahimi _et al._ discussed this problem and addressed the challenges posed by the language barrier and limited labelled data [45]. Previous approaches have used machine translation to overcome these challenges, but translation errors can degrade threat detection performance. To improve detection, the authors proposed a deep cross-lingual TL model that learns a shared Bidirectional LSTM. TL, specifically cross-lingual knowledge transfer, is employed to capture common hacker-specific representations across languages. Deep learning techniques, such as BiLSTMs, are utilized to capture temporal patterns and extract transferable features. In conclusion, TL has emerged as a valuable approach for proactive CTI, enabling organizations to stay ahead of evolving cyber threats and protect their infrastructure effectively. ### Network Traffic Analysis Network traffic refers to the data in transit within a network during a specific period [46]. The continuous monitoring of network traffic is essential to identify any anomalous behaviour, accurately classify encrypted data packets, and promptly detect potential intrusion attacks. These activities play a critical role in maintaining the integrity, confidentiality, and smooth flow of information between servers, thereby ensuring a secure network environment[47]. TL has played a crucial role in enhancing the effectiveness of various techniques for network traffic analysis and anomaly detection. TL has been successfully implemented in models for detecting abnormal network traffic, identifying anomalies in distributed data, and classifying different types of network traffic. It has addressed challenges such as scarcity of labelled data, widely distributed data, and data privacy concerns [48]. TL has also been instrumental in detecting malicious traffic, distinguishing between normal and malicious activities, and detecting previously unseen network attacks. Furthermore, TL has been applied to traffic classification tasks, enabling the identification of different types of applications flowing through networks while maintaining user privacy. TL has proven effective in dealing with encryption technologies and successfully identifying encrypted traffic. Network traffic comes in vast quantities; therefore, it is infeasible to monitor each packet. However, traffic between specific servers is often repetitive and predictable. For this reason, security specialists and researchers focus on detecting only abnormal traffic (the anomaly), which can potentially contain malicious data. Yang _et al._ proposed a model to detect anomalies in communication network traffic [49]. A TL approach was used to deal with the scarcity of labelled data for abnormal traffic. Yehezkel _et al._ proposed normalizing autoencoder losses in various models for detecting network abnormality [50]. Usually, autoencoders are used to learn the representation of normal traffic behaviour. The autoencoder loss is then calculated to identify any abnormalities. The proposed approach normalizes the loss to make it applicable to different network settings. This model will allow transferring the loss vector to detect the abnormalities of other networks via TL. Zhao _et al._ addressed an issue of data scarcity [51]. An FTL anomaly detection model was proposed to solve this problem. 
The experiments showed that the model outperformed other anomaly detection models when dealing with sparse data. Xiong _et al._ discussed a similar problem of widely distributed data in network traffic and proposed a successful solution using deep TL [52]. Wang _et al._ explored how to detect abnormal physical nodes in network slicing [53]. A TL approach was successfully implemented in the model by exploiting similarities between the nodes. Aburakhia _et al._ developed a CNN image-based anomaly detection model that extracts deep features and performs the detection analysis on these features [54]. Since pre-trained models contain broad and deep feature representations, TL is a natural way to utilize such knowledge. IoT is the network of interconnected physical devices ranging from household "smart" devices to large-scale industrial equipment [55]. However, the components of IoT can be vulnerable to outside attacks. Therefore, monitoring the IoT network traffic for any suspicious activities is important. Tien _et al._ introduced an anomaly detection model for classifying different device types in IoT network traffic [56]. First, the model learns the significant features of the device types, and then, by applying autoencoders as a method for TL, the obtained knowledge can be applied at other sites. Wang _et al._ proposed a CNN-based anomaly detection model for industrial control system traffic analysis [57]. To improve the performance of the detection system and make it capable of identifying previously unknown attacks, the authors embedded TL into the residual CNN. Network Intrusion Detection Systems (NIDS) were developed to aid organizations in monitoring their environments for suspicious occurrences and potential compromises. Deep learning-based NIDS require vast volumes of labelled data to be trained efficiently. Singla _et al._ developed a TL-assisted NIDS to overcome the lack of labelled data [58]. The resulting model showed better performance in identifying unseen attacks compared to models trained from scratch while requiring less training data. Taghiyarrenani _et al._ discussed the same challenge and proposed another TL-based method to distinguish between normal and malicious traffic in various network systems [59]. The TL solution proved effective when compared to the baseline models. Zhao _et al._ developed a TL approach for detecting previously unseen network attacks using the knowledge gained from known attacks [60]. The results suggested that the TL-based model performs best in detecting unseen attacks compared to baselines. Traffic classification is the process of identifying the types of applications flowing in a network. Network traffic classification requires a secure processing system to maintain users' privacy. To achieve this, Majeed _et al._ introduced a novel FTL classification framework [61]. The authors implemented cross-silo security protocols to accommodate the privacy-preserving learning model. Singh _et al._ developed a classification model to detect malicious traffic from the Darknet [62]. The authors implemented TL to utilize the knowledge from ten pre-trained CNN models and achieved a 96% classification accuracy. Li _et al._ proposed a model for detecting unseen malicious network traffic [63]. The authors incorporated adaptation-regularization-based TL to deal with the issue of novel malware samples. The model outperformed other traffic classification and intrusion detection counterparts.
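Several of the detection approaches summarized in this section follow the same underlying recipe: train an autoencoder on normal traffic, flag samples whose reconstruction error exceeds a threshold, and transfer the learned encoder to a new network where labelled anomalies are scarce. The following minimal sketch illustrates that recipe in general terms; the architecture, feature dimensionality, and threshold rule are our own assumptions and do not correspond to any specific model cited above.

```python
import torch
import torch.nn as nn

class TrafficAutoencoder(nn.Module):
    """Small autoencoder over fixed-size traffic feature vectors."""
    def __init__(self, n_features=40):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, 8))
        self.decoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_on_normal_traffic(model, normal_batches, epochs=5, lr=1e-3):
    """Fit the autoencoder to reconstruct normal traffic only."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in normal_batches:
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(batch), batch)
            loss.backward()
            opt.step()
    return model

def anomaly_scores(model, batch):
    """Per-sample reconstruction error: large values indicate anomalies."""
    with torch.no_grad():
        return ((model(batch) - batch) ** 2).mean(dim=1)

# Toy example: train on synthetic "normal" traffic, then score a mixed batch.
model = TrafficAutoencoder()
normal = [torch.randn(64, 40) * 0.1 for _ in range(10)]
train_on_normal_traffic(model, normal)
scores = anomaly_scores(model, torch.cat([normal[0], torch.randn(8, 40) * 2.0]))
threshold = scores[:64].mean() + 3 * scores[:64].std()  # simple 3-sigma rule
flags = scores > threshold
# For transfer, the trained encoder weights could initialize a model for a
# different network before fine-tuning on its (scarce) data.
```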
Further research on detecting malware in interconnected network communities was done by Rong _et al._ [64]. The TL approach was implemented to cope with sparse data distributions in the training and testing datasets. One of the biggest challenges of traffic classification and analysis tasks is the encryption technology implemented to improve users' security [65]. These technologies allow malicious agents to hide their activities and evade detection. Zhang _et al._ developed a model that adopts a TL approach based on EfficientNet to successfully identify encrypted traffic [66]. The experiments showed that the proposed framework reaches 100% accuracy and recall rates while requiring a small amount of training data.

### Malware Analysis

Malicious software, commonly known as malware, constitutes a form of program meticulously crafted and unleashed by malevolent individuals to execute harmful operations, including but not limited to pilfering sensitive information, assuming system control, or encrypting files for extortion purposes. In an effort to comprehend the intricate dynamics of a given malware specimen, experts undertake the crucial task of classifying it within a specific malware family via the process of malware classification. These families exhibit shared attributes that enable the development of distinctive signatures for malware detection and attribution, consequently empowering us to combat malware with increased efficiency and efficacy. Malware classification and detection have benefited significantly from the application of TL techniques [65]. TL has been successfully implemented with various deep learning models, such as Xception, VGG16, VGG19, and ResNet-50, to achieve high accuracy rates in classifying malware samples. TL has also addressed the challenges of unbalanced malware families, limited labelled data, and the need for efficient detection of unknown malware. Furthermore, TL has been instrumental in identifying malicious domain names and improving the detection of malware in cloud services and IoT devices. It has also shown promise in trojan attack detection in hardware systems.

#### 3.6.1 Malware Classification

NN-based malware analysis is often performed by representing a malware sample as a grayscale image to be processed by a CNN architecture. One such architecture is Xception, which was designed to deal with the overfitting problem common to most CNNs. Lo _et al._ implemented TL to classify malware with the Xception model pre-trained on ImageNet (as distributed with Keras) [67]. The model achieved the highest accuracy rate reported at that time. Marastoni _et al._ compared the performance of LSTM and CNN models on the malware classification task [68]. Experiments on MALIMG, a well-known malware dataset often used to evaluate malware classification models, have shown that both models achieve similar accuracy rates. However, the LSTM model takes twice as long to perform the classifications compared to the CNN model. Pant _et al._ proposed another image-based malware classification model [69]. The authors used TL to transfer knowledge from the famous VGG16 model to create a custom framework and achieve a high validation accuracy. VGG16 is a CNN model that won an ImageNet (object detection and image classification) competition in 2014. VGG16 consists of sixteen weight layers: thirteen convolutional layers and three fully connected layers, interleaved with five max-pooling layers.
Pant _et al._ transferred the weights of the pre-trained VGG16 model and added another two dense layers and one pooling layer to perform malware classification. The VGG16-based model achieved an 88.4% accuracy rate. Prima _et al._ have also applied the VGG16 layers to the malware classification task [70]. Their model has achieved 98% accuracy on the MALIMG dataset. Unbalanced malware families affect the performance of CNN-based malware classification algorithms. Bouchaib _et al._ proposed the use of SMOTE (Synthetic Minority Oversampling Technique) to balance the dataset across families [71]. The authors implemented TL to take advantage of the knowledge from the previously trained VGG16 model, which yielded 98% accuracy. El-Shafai _et al._ used different CNN agents pre-trained on the ImageNet dataset to perform malware classifications [72]. Their VGG16-based model achieved a 99.97% accuracy rate on the MALIMG dataset. Karanja _et al._ proposed an image-based IoT malware analysis framework using TL to adopt layers from the pre-trained VGG19 model, a famous 19-level deep CNN [73]. The adapted last layer classified the malware images into the corresponding malware families. The accuracy rate on the IoTPoT dataset is 89.23%. Kumar _et al._ adopted the layers of another famous CNN agent, ResNet-50, to classify IoT malware [74]. The model showed a 99.18% accuracy rate on the MALIMG dataset. It is crucial to analyze malware and generate its signature to detect, classify, and possibly even attribute future samples of the same or closely related malware. Nahmias _et al._ introduced a novel malware signature generation method, TrustSign [75]. The model is entirely unsupervised and uses deep features transferred from a pre-trained VGG-19 framework.

#### 3.6.2 Malware Detection

It is highly challenging to detect previously unknown malware, as its samples are absent from the training datasets. Therefore, this topic has received special attention from the community. As signature-based malware detection algorithms fail to detect previously unseen malware, Zhang _et al._ proposed a novel feature extractor for portable executable files combined with the k-nearest-neighbours algorithm via TL to detect new malware samples successfully [76]. Similarly, Fu _et al._ introduced an LSTM model combined with a generative adversarial network to detect unseen malware on mobile devices [77]. In addition, the TL approach was implemented to create uniform feature spaces and deal with a lack of labelled data. The proposed models are designed for detecting various kinds of attacks and intrusions. Sameera _et al._ proposed a TL-assisted model to compensate for the lack of labelled data in zero-day attack detection [78]. TL techniques were also implemented in the malicious domain name detection model by Rajalakshmi _et al._ [79]. It is challenging to detect malware domains since they are randomly generated by domain generation algorithms. Moreover, such detection models must operate on the fly to be helpful in real-world scenarios. The authors combined state-of-the-art CNN models with ML classification algorithms via TL. The final model successfully performed detection and classification of malware-generated domains. Alshehri developed a TL-assisted image classification network to avoid the bottlenecks (various analysis obstacles) in the existing malware detection methods [80].
The author combined two rapidly developing methods, namely TL under deep neural networks and Android malware detection algorithms, to improve the existing detection frameworks. Some research was done to detect malware attacks in cloud services. Sreelatha _et al._ utilized TL techniques to apply the knowledge from source domains to identify any abnormal activity in the communications and improve the detection rates [81]. To protect the privacy of cloud tenants while using their data for cloud computing malware detection and classification, Gao _et al._ proposed a TL-assisted classifier model. The experiments showed that TL improves the detection accuracy from 94.72% to 96.9% [82]. As mentioned before, IoT remains vulnerable to outside attacks [83]. The ability to quickly and efficiently detect IoT malware is crucial in ensuring the security of the vast range of tools, equipment, and devices connected to IoT. Because of the rapidly evolving nature of IoT, the labelling process of IoT samples is highly time-consuming. Vu _et al._ proposed an autoencoder-based deep TL model to detect IoT malware that does not require fully labelled data [84]. The results show that the model improves detection rates compared to baselines. Bots remain a considerable threat to IoT. Taheri _et al._ proposed to transform network traffic data into images and train a CNN model to recognize bot activity [85]. The study showed that using TL improved the accuracy from 33.41% up to 99.98%. Trojan attack detection in hardware systems has received particular attention in the community. A Trojan is any type of malware intended to mislead the user regarding its true objective. A hardware Trojan is a malicious modification of a hardware circuit that affects the system's functionality. Sun _et al._ proposed an electromagnetic side-channel signal analysis method for identifying trojan attacks [86]. A TL network is trained on time-frequency data to classify the attacks. The experiments showed an improvement compared to a standard trojan detection method. A TL-assisted improvement for traditional trojan hardware detection tools was developed by Faezi _et al._ [87]. The proposed neural network model is pre-trained on the side-channel signals of the known trojans, and the obtained knowledge is used to report the behaviour of particular hardware chips.

### Security Evaluation

Organizations regularly evaluate their security systems to determine how well they perform against new vulnerabilities [88, 89]. The use of TL has been explored in various aspects of security analysis and defence mechanisms. TL has shown promise in profiled side-channel attacks, key recovery, CAPTCHA attacks, and improving the efficiency and accuracy of cryptographic models. However, vulnerabilities in TL frameworks have also been identified, such as attacks on the TL feature extractor, weaknesses in the Softmax layer of CNN classifiers, and backdoor attacks. To mitigate these risks, researchers have proposed defence models, backdoor detection techniques, and encryption algorithms. TL has also been applied in fault diagnosis systems to enhance performance, address data privacy concerns, and handle domain shift challenges. The TL algorithm was exploited by Garg _et al._ to perform profiled side-channel attacks with two-dimensional CNN models [90]. Side-channel analysis can help attack private keys by identifying weak spots in cryptographic algorithms.
Side-channel analysis can thus be used as an efficient technique to compromise secret keys. Thapar _et al._ proposed a novel deep-learning side-channel analysis model for more successful key recovery [91]. TL was implemented to attack devices belonging to different families. CAPTCHAs are widely used to differentiate human users from potentially malicious bots. Threat actors have been developing various techniques to attack and evade CAPTCHAs; however, most are too complex and time-consuming. Wang _et al._ conducted a quick and straightforward attack on text-based CAPTCHAs using TL to train the model on a few real-world samples [92]. A successful adversarial attack on the TL feature extractor model was performed by Abdelkader _et al._ [93]. The attack lowered the accuracy of the CNN classifier by 40%, proving the vulnerability of the current TL frameworks. In cryptography, keeping the private key secure is the highest-priority task. Encrypting malicious traffic is a popular method used by adversaries to avoid detection by security models. Cui _et al._ conducted experimental research on attacking the keys. TL was implemented to determine the initial weights of the neural network in order to reduce the training time and improve the model accuracy while dealing with a smaller amount of data [94]. On the other hand, some research focuses on improving defence mechanisms. Zhang _et al._ proposed a defence model against the white-box attack on the TL framework by fine-tuning the parameters [95]. In addition, the authors conducted a black-box attack on the target model. The possible danger of backdoor attacks was addressed by Wang _et al._ [96]. Due to the increasing popularity of TL, more examples of pre-trained (Teacher) models are available online, exposing them to backdoor attacks. The authors successfully attacked the learning system, which resulted in significant misclassification rates. Wu _et al._ explored how to defend TL models from the above-mentioned misclassification attacks [97]. The authors proposed a distilled differentiator to protect from both targeted and random attacks. Encryption algorithms can also be applied to ensure better security of TL-based models. The idea of using a private key to better protect TL systems from unauthorized access was presented by Pyone _et al._ [98]. TL allows for practical training of the large protected model utilizing small portions of the training dataset. Fault diagnosis (FD) is the process of determining which fault has occurred in a system. Because of the development of ML algorithms, we have observed a noticeable improvement in FD techniques. Chen _et al._ addressed the lack of labelled data needed for successful FD and proposed to use TL to resolve this challenge [99]. The proposed model can perform efficient diagnoses even with a limited number of fault samples. Zhang _et al._ addressed the challenges of data privacy and domain shift in collaborative machinery fault diagnosis [100]. The authors proposed an FTL method to enable joint model training while ensuring data privacy. The method involves a federated initialization stage to maintain consistent data structures in distributed feature extraction and a federated communication stage using deep adversarial learning. Additionally, a prediction consistency scheme is incorporated to enhance model robustness. Experimental results on real-world datasets demonstrate the promising potential of the proposed FTL method for practical industrial applications in fault diagnosis.
The abovementioned advancements demonstrate the potential of TL in enhancing security and improving fault diagnosis techniques in practical applications.

## 4 Future Roadmap

To pave the way for future research and advancements in TL in the field of cybersecurity, several insights and considerations have emerged from our analysis. Researchers interested in continuing this work are encouraged to broaden the scale of TL applications to include the newest research developments, as the interest in TL algorithms continues to increase [29, 101]. Exploring novel application areas and domains where TL can be effectively applied will foster the growth of TL in cybersecurity. While significant progress has been made in preserving privacy using TL-based models, privacy preservation remains an active area of research interest. The possibility of reverse-engineering the transferred knowledge from the source domain and recreating the original labels is a notable concern. Future research should focus on developing robust privacy-preserving mechanisms and techniques to ensure that sensitive information remains secure during knowledge transfer. Addressing the challenge of protecting the privacy of source domain data and preventing information leakage will be crucial for the widespread adoption of TL in computer security. Furthermore, TL has often been employed to adapt knowledge from one or more CNN layers in various classification problems. Researchers typically manually freeze and transfer these layers to align the pre-trained CNN model with their specific research problem. It is foreseeable that automatic and flexible tools will emerge to expedite the knowledge transfer process from popular CNN models. These tools will simplify the adaptation of pre-trained models to new domains, enhancing the efficiency and accessibility of TL in cybersecurity applications. Choosing a source domain that is closely related to the desired target domain is crucial for successful TL. Negative transfer, where the performance of the final model is diminished after employing TL, can occur if the two domains are not closely aligned. To address this challenge, researchers need a metric to measure the degree of relatedness between domains. Developing such metrics will be critical for guiding the selection of appropriate source domains and ensuring positive knowledge transfer. Resolving this issue will have a significant impact on the effectiveness and reliability of TL across all its application fields, including computer security. Further advancements in TL algorithms are needed to improve their robustness and effectiveness in cybersecurity applications. Developing novel TL techniques that can handle more complex scenarios, such as handling adversarial attacks or incorporating domain-specific constraints, will be crucial. Exploring hybrid approaches that combine TL with other machine learning techniques, such as reinforcement learning or generative models, could also lead to improved performance and adaptability. Finally, enhancing the explainability and interpretability of TL models in cybersecurity is of utmost importance. Developing methodologies to interpret the transferred knowledge and understand the decision-making process of TL models will improve their trustworthiness and facilitate their adoption in real-world applications. Researchers should focus on designing interpretable TL architectures and developing post hoc explainability techniques to shed light on the knowledge transfer process.
## 5 Discussion and Conclusion

This survey was conducted in response to a need, identified via an exhaustive literature search, for a comprehensive review of the convergence of TL and computer security. We have analyzed seven key sections that highlight the diverse applications and potential of TL in addressing various challenges and concerns within the cybersecurity domain. Through this qualitative analysis of existing research, we have gained valuable insights into the advancements, methodologies, and limitations of TL in different cybersecurity contexts. Moreover, this paper has provided additional insights into the methods and techniques employed in TL-based cybersecurity research. Deep learning models, such as CNNs and RNNs, have been extensively utilized, showcasing their capability to capture complex patterns and features in diverse cybersecurity datasets. Techniques like FTL have emerged to address privacy concerns while integrating data from multiple organizations or domains. Based on our research, we can conclude that TL algorithms have been successfully applied in a vast range of computer security fields. Among those, malware analysis was the most common application of TL. More specifically, TL was frequently used to transfer the knowledge from some layers of a pre-trained CNN model into a new image-based malware classification or detection model. In addition, this paper has identified the most common problems that have been successfully addressed using TL in cybersecurity. These problems include the lack of labelled data, challenges arising from different data distributions, and privacy preservation concerns. The scarcity of labelled data is a well-known challenge in the field of AI and cybersecurity. TL offers a solution by leveraging pre-existing models and knowledge gained from other domains or tasks. By transferring knowledge from these pre-trained models, the reliance on large amounts of labelled data is significantly reduced, enabling the development of robust cybersecurity systems with limited available data. Another challenge arises from the presence of different data distributions in cybersecurity datasets. TL provides a means to bridge the gap between different domains or environments by leveraging knowledge obtained from one domain and applying it to another. By transferring the knowledge and learned features from a source domain to a target domain, TL allows the model to adapt and perform effectively in the target domain despite distributional differences. Moreover, privacy preservation is a critical concern when dealing with sensitive data in cybersecurity. TL offers a way to address this challenge by utilizing pre-trained models that have already learned relevant features and patterns. By applying TL, the need to access and utilize the original sensitive dataset is eliminated, reducing privacy risks associated with data exposure. Furthermore, the use of TL can also overcome the issue of inefficient training times. By leveraging pre-trained models, TL significantly reduces the time required to train a model from scratch. The pre-existing knowledge and learned features are transferred to the current model, allowing for faster convergence and more efficient training.
2305.19620
Graphs whose mixed metric dimension is equal to their order
The mixed metric dimension ${\rm mdim}(G)$ of a graph $G$ is the cardinality of a smallest set of vertices that (metrically) resolves each pair of elements from $V(G)\cup E(G)$. We say that $G$ is a max-mdim graph if ${\rm mdim}(G) = n(G)$. It is proved that a max-mdim graph $G$ with $n(G)\ge 7$ contains a vertex of degree at least $5$. Using the strong product of graphs and amalgamations, large families of max-mdim graphs are constructed. The mixed metric dimension of graphs with at least one universal vertex is determined. The mixed metric dimension of graphs $G$ with cut vertices is bounded from above and the mixed metric dimension of block graphs computed.
Ali Ghalavand, Sandi Klavžar, Mostafa Tavakoli
2023-05-31T07:40:40Z
http://arxiv.org/abs/2305.19620v1
# Graphs whose mixed metric dimension is equal to their order

###### Abstract

The mixed metric dimension \(\mathrm{mdim}(G)\) of a graph \(G\) is the cardinality of a smallest set of vertices that (metrically) resolves each pair of elements from \(V(G)\cup E(G)\). We say that \(G\) is a max-mdim graph if \(\mathrm{mdim}(G)=n(G)\). It is proved that a max-mdim graph \(G\) with \(n(G)\geq 7\) contains a vertex of degree at least \(5\). Using the strong product of graphs and amalgamations, large families of max-mdim graphs are constructed. The mixed metric dimension of graphs with at least one universal vertex is determined. The mixed metric dimension of graphs \(G\) with cut vertices is bounded from above and the mixed metric dimension of block graphs computed.

\({}^{a}\) Department of Applied Mathematics, Faculty of Mathematical Sciences, Ferdowsi University of Mashhad, P.O. Box 1159, Mashhad 91775, Iran; [email protected], [email protected]
\({}^{b}\) Faculty of Mathematics and Physics, University of Ljubljana, Slovenia; [email protected]
\({}^{c}\) Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia
\({}^{d}\) Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia

**Key words:** resolving set; mixed resolving set; strong product of graphs; cut vertex; chemical graphs; block graphs

## 1 Introduction

The metric dimension is an extremely prolific and at the same time interesting area of graph theory, for several reasons. The main reason is certainly that the theory is extremely useful in other areas of science, for instance in computer science, chemistry, social networks, and biology, see the respective papers [8, 4, 18, 17]. For more information on the metric dimension and its applications see the recent survey [16]. On the other hand, various applications also give rise to certain modifications of the basic concept, which leads to further intensive research to obtain additional insight into the classical topic and into the relations between its variants. For more information on this point of view of the metric dimension see the other recent survey [7]. We also refer to a recent application of the local metric dimension to delivery services from [6]. A very interesting version of the metric dimension was introduced in 2017 by Kelenc, Kuziak, Taranenko, and Yero [5], namely the mixed metric dimension, as follows. Let \(G=(V(G),E(G))\) be a graph. Then two elements \(x,y\in V(G)\cup E(G)\) are _resolved_ by a vertex \(v\in V(G)\) if \(d_{G}(x,v)\neq d_{G}(y,v)\), where \(d_{G}\) stands either for the shortest-path distance between vertices, or for the distance between an edge and a vertex. The latter distance is, for an edge \(x=ww^{\prime}\) and a vertex \(v\), defined by \(d_{G}(x,v)=\min\{d_{G}(w,v),d_{G}(w^{\prime},v)\}\). A set of vertices \(W\subset V(G)\) is a _mixed resolving set_ for \(G\) if any two elements (vertices or edges) \(x,y\in V(G)\cup E(G)\) are resolved by a vertex of \(W\). We note that \(V(G)\) is always a mixed resolving set for \(G\). A mixed resolving set of the smallest cardinality is a mixed metric basis; its cardinality is the mixed metric dimension \(\mathrm{mdim}(G)\). After the seminal paper, the mixed metric dimension was investigated in many papers, cf. [2, 9, 10, 11, 12, 14, 15]. Let \(G\) be a graph and \(x\in V(G)\). Then a neighbor \(y\) of \(x\) is a _maximal neighbor of \(x\)_ if \(y\) is adjacent to all neighbors of \(x\).
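These definitions are easy to test computationally on small graphs; the following brute-force sketch (using networkx, purely illustrative and far too slow for anything but very small graphs) checks whether a vertex set is a mixed resolving set and computes \(\mathrm{mdim}(G)\) by exhaustive search.

```python
from itertools import combinations
import networkx as nx

def dist_to_vertex(G, x, v, d):
    """Distance from a vertex or an edge x to a vertex v (edges: min over endpoints)."""
    if x in G:                       # x is a vertex
        return d[x][v]
    w, wp = x                        # x is an edge
    return min(d[w][v], d[wp][v])

def is_mixed_resolving(G, W):
    d = dict(nx.all_pairs_shortest_path_length(G))
    elems = list(G.nodes) + list(G.edges)
    for x, y in combinations(elems, 2):
        if all(dist_to_vertex(G, x, v, d) == dist_to_vertex(G, y, v, d) for v in W):
            return False             # x and y are not resolved by any vertex of W
    return True

def mdim(G):
    """Smallest cardinality of a mixed resolving set (exponential-time brute force)."""
    for k in range(1, G.number_of_nodes() + 1):
        for W in combinations(G.nodes, k):
            if is_mixed_resolving(G, W):
                return k

print(mdim(nx.path_graph(5)))        # 2: for trees, mdim equals the number of leaves
print(mdim(nx.complete_graph(4)))    # 4: complete graphs are max-mdim
```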
Denoting the order of \(G\) by \(n(G)\), we recall the following result which is the main source of inspiration for this article.

**Theorem 1.1**: [5, Theorem 3.8] _If \(G\) is a graph, then \(\mathrm{mdim}(G)=n(G)\) if and only if every vertex of \(G\) has a maximal neighbour._

Let us say that a graph \(G\) with \(\mathrm{mdim}(G)=n(G)\) is a _max-mdim graph_. Theorem 1.1 thus characterizes max-mdim graphs. The main purpose of this article is to take a closer look at this class of graphs. We proceed as follows. In the rest of the introduction some further definitions are listed and a result is stated to be used later on. In the next section we prove that a max-mdim graph \(G\) with \(n(G)\geq 7\) contains a vertex of degree at least \(5\). This implies that if \(G\) is a chemical graph, then \(\mathrm{mdim}(G)\leq n(G)-1\). Afterwards we apply the strong product and amalgamations to construct large families of max-mdim graphs. In particular, the strong products \(P_{n}\boxtimes K_{2}\) are max-mdim graphs with \(\Delta=5\), where \(\Delta\) is the largest number of neighbors of a vertex in \(P_{n}\boxtimes K_{2}\). We also determine the mixed metric dimension for graphs with universal vertices. In the concluding section we consider the mixed metric dimension of graphs \(G\) with cut vertices and prove an upper bound on their mixed metric dimension. As a consequence we determine the mixed metric dimension of block graphs. In this paper, we consider finite, simple and connected graphs. Let \(G\) be a graph. The degree of \(v\in V(G)\) will be denoted by \(\deg_{G}(v)\). The (open) neighborhood of \(v\) will be denoted by \(N_{G}(v)\). Then \(\deg_{G}(v)=|N_{G}(v)|\). A pendant vertex of \(G\) is a vertex with degree one. A vertex of degree \(n(G)-1\) is a _universal vertex_. The minimum and the maximum degree of \(G\) are respectively denoted by \(\delta(G)\) and \(\Delta(G)\). The number of cut vertices of a graph \(G\) is denoted by \(\zeta(G)\) and the set of all cut vertices by \(\mathrm{CV}(G)\), so that \(|\mathrm{CV}(G)|=\zeta(G)\). A _block_ of a graph is a nonseparable maximal subgraph of the graph. A graph is \(2\)-_connected_ if it has no cut vertices. Note that if \(G\) is \(2\)-connected, then \(\zeta(G)=0\). \(G\) is a _block graph_ if each block of \(G\) is complete. To conclude the introduction, we state the following result which implicitly follows from [5, Theorem 3.8].

**Lemma 1.2**: _If \(W\) is a mixed resolving set for a graph \(G\), and \(v\in V(G)\) has a maximal neighbor, then \(v\in W\)._

## 2 Classes of max-mdim graphs and a maximum degree bound

In this section we prove that max-mdim graphs necessarily contain a vertex of degree at least \(5\) as soon as their order is at least \(7\). Then we use the strong product and amalgamations to construct large families of max-mdim graphs. In particular, the strong products \(P_{n}\boxtimes K_{2}\) are max-mdim graphs with \(\Delta=5\). We also determine the mixed metric dimension for graphs with universal vertices. Since every vertex of a complete graph has a maximal neighbour, by Theorem 1.1, complete graphs are max-mdim graphs. Moreover, if \(G\) is obtained from a complete graph by removing a matching, then \(G\) is also a max-mdim graph provided that it contains at least two universal vertices. In particular, \(K_{4}-e\) is a max-mdim graph. Another small example is shown in Fig. 1.
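The criterion of Theorem 1.1 is a purely local condition and is straightforward to verify computationally; a small illustrative sketch (again using networkx) applied to the examples just mentioned:

```python
import networkx as nx

def has_maximal_neighbor(G, x):
    """True if some neighbor of x is adjacent to every other neighbor of x."""
    Nx = set(G[x])
    return any(Nx - {y} <= set(G[y]) for y in Nx)

def is_max_mdim(G):
    """Criterion of Theorem 1.1: every vertex has a maximal neighbor."""
    return all(has_maximal_neighbor(G, x) for x in G)

K4_minus_e = nx.complete_graph(4)
K4_minus_e.remove_edge(0, 1)           # the two universal vertices are 2 and 3
print(is_max_mdim(K4_minus_e))         # True, so mdim(K4 - e) = n(K4 - e) = 4
print(is_max_mdim(nx.cycle_graph(6)))  # False: cycles are not max-mdim
```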
In our first theorem we prove that a max-mdim graph contains a vertex of degree at least \(5\) as soon as it is not very small.

**Theorem 2.1**: _If \(G\) is a (connected) max-mdim graph with \(n(G)\geq 7\), then \(\Delta(G)\geq 5\)._

**Proof.** Suppose \(\Delta=2\). If \(\delta(G)=1\), the support vertex of a pendant vertex does not admit a maximal neighbor. Otherwise \(G\) is a cycle which is not a max-mdim graph. Suppose \(\Delta=3\). Let \(x\) be a vertex of \(G\) with \(\deg(x)=3\) and let \(N_{G}(x)=\{x_{1},x_{2},x_{3}\}\). Without loss of generality assume that \(x_{1}\) is a maximal neighbor of \(x\), so that \(x_{1}x_{2},x_{1}x_{3}\in E(G)\) and \(\deg_{G}(x_{1})=3\). As \(n(G)\geq 7\) and \(G\) is connected, \(x_{2}\) or \(x_{3}\) is also of degree \(3\). Assume without loss of generality that \(N_{G}(x_{2})=\{x,x_{1},x_{2}^{\prime}\}\). Now, no matter which neighbor of \(x_{2}\) is its maximal neighbor, we get \(\deg_{G}(x)\geq 4\) or \(\deg_{G}(x_{1})\geq 4\), which is not possible. Suppose \(\Delta=4\). Let \(x\) be a vertex of \(G\) with \(\deg(x)=4\) and let \(y\) be its maximal neighbor. Then \(N_{G}(x)\setminus\{y\}=N_{G}(y)\setminus\{x\}\), say \(N_{G}(x)\setminus\{y\}=N_{G}(y)\setminus\{x\}=\{x_{1},x_{2},x_{3}\}\). As \(n(G)\geq 7\), there is another vertex of \(G\); without loss of generality assume it is adjacent to \(x_{1}\), and denote it by \(x_{1}^{\prime}\). Consider now a maximal neighbor of \(x_{1}\). It cannot be \(x\) or \(y\) because then \(x\) or \(y\) would be adjacent to \(x_{1}^{\prime}\) and hence \(x\) or \(y\) would be of degree at least \(5\). For the same reason, a maximal neighbor of \(x_{1}\) cannot be \(x_{1}^{\prime}\). If there were another neighbor \(x_{1}^{\prime\prime}\) of \(x_{1}\), it also cannot be a maximal neighbor of \(x_{1}\). So \(x_{1}\) must have a maximal neighbor among the already introduced vertices and thus the fourth neighbor of \(x_{1}\) is from \(\{x_{2},x_{3}\}\). If \(x_{1}x_{3}\in E(G)\), then \(x_{3}\) is the only candidate for a maximal neighbor of \(x_{1}\) and therefore \(x_{1}^{\prime}x_{3}\in E(G)\), while if \(x_{1}x_{2}\in E(G)\), then we must have \(x_{1}^{\prime}x_{2}\in E(G)\). But in both cases we get isomorphic graphs, see Fig. 2, where the labeling shown corresponds to the second case.

Figure 1: A max-mdim graph

Figure 2: The graph \(G_{6}\)

Hence, if \(\Delta=4\), then \(G\) necessarily contains the graph \(G_{6}\) from Fig. 2 as an induced subgraph. Since \(n(G)\geq 7\), there is another vertex, say \(x_{1}^{\prime\prime}\), and we may assume without loss of generality that \(x_{1}^{\prime}x_{1}^{\prime\prime}\in E(G)\). We now infer that none of the vertices \(x_{1}\), \(x_{2}\), \(x_{1}^{\prime\prime}\), or a possible fourth new neighbor of \(x_{1}^{\prime}\) can be a maximal neighbor of \(x_{1}^{\prime}\). So the only possibility is that \(x_{1}^{\prime}x_{3}\in E(G)\) so that \(x_{3}\) would be a maximal neighbor of \(x_{1}^{\prime}\). But then \(x_{3}x_{1},x_{2}x_{1}\in E(G)\) which means that \(x_{1}\) would be of degree at least \(5\). \(\Box\)

Note that the proof of Theorem 2.1 implies that the graph \(G_{6}\) from Fig. 2 is the unique max-mdim graph with \(n(G)=6\) and \(\Delta(G)=4\). In view of the applicability of the mixed metric dimension in chemistry [14], we recall that a graph \(G\) is called a _chemical graph_ if \(\Delta(G)\leq 4\). Theorem 2.1 clearly has the following application.

**Corollary 2.2**: _If \(G\) is a chemical graph with \(n(G)\geq 7\), then \({\rm mdim}(G)\leq n(G)-1\)._

The graphs in Figs. 1 and 2 motivate us to recall the definition of the _strong product_ \(G\boxtimes H\) of graphs \(G\) and \(H\).
Its vertex set is \(V(G\boxtimes H)=V(G)\times V(H)\), and vertices \((g_{1},h_{1})\) and \((g_{2},h_{2})\) are adjacent if \(h_{1}=h_{2}\) and \(g_{1}\) is adjacent to \(g_{2}\), or \(g_{1}=g_{2}\) and \(h_{1}\) is adjacent to \(h_{2}\), or \(g_{1}\) is adjacent to \(g_{2}\) and \(h_{1}\) is adjacent to \(h_{2}\). A standard reference for the strong product is the book [3]. The metric dimension of strong products was investigated in [13], and the local metric dimension of strong products in [1]. We now use this graph operation to significantly increase the variety of max-mdim graphs.

**Proposition 2.3**: _If \(G\) is a graph, then \(G\boxtimes K_{2}\) is a max-mdim graph._

**Proof.** Let \(n=n(G)\), let \(V(G)=\{v_{1},\ldots,v_{n}\}\) and \(V(K_{2})=\{0,1\}\). Then by the definition of the strong product, \((v_{i},1)\) is a maximal neighbor of \((v_{i},0)\), and \((v_{i},0)\) is a maximal neighbor of \((v_{i},1)\) for each \(i\in[n]\). Therefore, \(G\boxtimes K_{2}\) is a max-mdim graph by Theorem 1.1. \(\Box\)

The special case \(P_{n}\boxtimes K_{2}\) of Proposition 2.3 gives an infinite family of max-mdim graphs \(G\) with \(\Delta(G)=5\). Hence Theorem 2.1 cannot be improved in general. Another source for max-mdim graphs is the following construction. Let \(G\) and \(H\) be disjoint graphs, \(e_{G}\in E(G)\) and \(e_{H}\in E(H)\). Then the graph \(A(G,e_{G};H,e_{H})\) is obtained from the disjoint union of \(G\) and \(H\) by identifying the edges \(e_{G}\) and \(e_{H}\). ("A" stands here for an amalgamation.) Actually, this identification can be done in two ways, but for our purposes any of these will do.

**Proposition 2.4**: _If \(G\) and \(H\) are max-mdim graphs, and \(e_{G}\) and \(e_{H}\) are edges whose endpoints are maximal neighbors for each other, then \(A(G,e_{G};H,e_{H})\) is a max-mdim graph._

**Proof.** Set \(A=A(G,e_{G};H,e_{H})\). Let \(e_{G}=gg^{\prime}\) and \(e_{H}=hh^{\prime}\). If \(x\in V(G)\setminus\{g,g^{\prime}\}\), then its maximal neighbor in \(G\) is also a maximal neighbor of \(x\) in \(A\). Similarly, a maximal neighbor of \(y\in V(H)\setminus\{h,h^{\prime}\}\) is a maximal neighbor of \(y\) in \(A\). Finally, \(g=h\) is a maximal neighbor of \(g^{\prime}=h^{\prime}\) in \(A\), and \(g^{\prime}=h^{\prime}\) is a maximal neighbor of \(g=h\). \(\square\)

Using Proposition 2.4, we can state the following result.

**Theorem 2.5**: _If \(n\geq t\geq 5\), then there exists a max-mdim graph \(G\) with \(n(G)=n\) and \(\Delta(G)=t\)._

**Proof.** Let \(H_{r}=P_{r}\boxtimes K_{2}\), \(r\geq 4\), and let \(H_{r}^{-}\) be the graph obtained from \(H_{r}\) by removing a vertex of degree \(3\). Let \(\Lambda_{k,r}=A(H_{r},e;K_{k},f)\), where \(e\) is an edge of \(H_{r}\) both of whose endpoints are of degree \(3\), and \(f\) is an arbitrary (but fixed) edge of \(K_{k}\). The graph \(\Lambda_{k,r}^{-}=A(H_{r}^{-},e;K_{k},f)\) is defined analogously. See Fig. 3 where the graphs \(H_{5}\), \(H_{5}^{-}\), \(\Lambda_{5,5}\), and \(\Lambda_{5,5}^{-}\) are presented. By Proposition 2.4, each of the graphs \(H_{r}\), \(H_{r}^{-}\), \(\Lambda_{k,r}\), and \(\Lambda_{k,r}^{-}\) is a max-mdim graph. If \(n-t+1\) is even, then the graph \(\Lambda_{t-1,\frac{n-t+3}{2}}\) is of maximum degree \(t\), while if \(n-t+1\) is odd, the graph \(\Lambda_{t-1,\frac{n-t+4}{2}}^{-}\) is of maximum degree \(t\).
To complete the argument note that \(n(\Lambda_{t-1,\frac{n-t+3}{2}})=(t-1)+2\left(\frac{n-t+3}{2}-1\right)=n\) and \(n(\Lambda_{t-1,\frac{n-t+4}{2}}^{-})=(t-1)+2\left(\frac{n-t+4}{2}-1\right)-1=n\). \(\square\)

At the beginning of the section we have observed that a graph obtained from a complete graph by removing a matching is a max-mdim graph provided that it contains at least two universal vertices. This fact generalizes as follows.

**Proposition 2.6**: _If \(G\) is a graph, then the following holds._

* _If_ \(G\) _has at least two universal vertices, then_ \(G\) _is a max-mdim graph._
* _If_ \(G\) _has exactly one universal vertex, then_ \(\mathrm{mdim}(G)=n(G)-1\)_._

**Proof.** (i) Let \(x\) and \(y\) be arbitrary universal vertices of \(G\). If \(u\in V(G)\setminus\{x,y\}\), then \(x\) (or \(y\) for that matter) is a maximal neighbor of \(u\). Moreover, \(x\) is a maximal neighbor of \(y\), and \(y\) is a maximal neighbor of \(x\). By Theorem 1.1, \(G\) is a max-mdim graph. (ii) Assume now that \(x\) is the unique universal vertex of \(G\). Let \(W\) be a mixed resolving set for \(G\). By Lemma 1.2 we get \(\mathrm{mdim}(G)\geq n(G)-1\). To complete the argument we claim that \(V(G)\backslash\{x\}\) is a mixed resolving set for \(G\). To do this, let \(\{a,b\}\subseteq V(G)\cup E(G)\). If \(a\in V(G)\backslash\{x\}\) and \(b=ax\), then \(\deg_{G}(a)\leq n(G)-2\). Thus \(V(G)\backslash N_{G}[a]\neq\emptyset\) and \(2=d_{G}(a,v)\neq d_{G}(ax,v)=1\) for each \(v\in V(G)\backslash N_{G}[a]\). Otherwise, there exists \(u\in V(G)\backslash\{x\}\) such that \(d_{G}(a,u)=0\) and \(d_{G}(b,u)\geq 1\), or \(d_{G}(a,u)\geq 1\) and \(d_{G}(b,u)=0\). Therefore, \(V(G)\backslash\{x\}\) is a mixed resolving set for \(G\). \(\square\)

## 3 Graphs with cut vertices

In this section we consider the mixed metric dimension of graphs \(G\) with cut vertices and bound their mixed metric dimension from above by \(n(G)-\zeta(G)\). This of course implies (as we already know) that no graph with a cut vertex is a max-mdim graph. As a consequence we determine the mixed metric dimension of block graphs.

**Theorem 3.1**: _If \(W\) is a mixed resolving set of a graph \(G\), then the following holds._

* _If_ \(v\) _is a cut vertex of_ \(G\)_, then_ \(W\backslash\{v\}\) _is a mixed resolving set of_ \(G\)_._
* \(\mathrm{mdim}(G)\leq n-\zeta(G)\)_. Moreover, equality holds if and only if each vertex from_ \(V(G)\setminus\mathrm{CV}(G)\) _has a maximal neighbor in_ \(G\)_._

**Proof.** (i) If \(v\not\in W\), then we have nothing to prove, hence assume in the remainder that \(v\in W\). Let \(G_{1},\ldots,G_{k}\), \(k\geq 2\), be the components of \(G-v\), and for each \(i\in[k]\) select a neighbor \(v_{i}\) of \(v\) in \(G_{i}\). Since \(v\) is a cut vertex, \(d_{G}(vv_{i},x)=d_{G}(v,x)\) for each \(x\in V(G)\backslash V(G_{i})\). As also \(d_{G}(vv_{i},v)=d_{G}(v,v)=0\), there must be a vertex in \(W\cap V(G_{i})\) that distinguishes \(vv_{i}\) and \(v_{i}\). For each \(i\in[k]\) select such a vertex \(w_{i}\). Consider now two arbitrary elements \(a\) and \(b\) from \(V(G)\cup E(G)\). Assume first that \(a\) and \(b\) belong to some \(G_{i}\). If \(d_{G}(a,v)=d_{G}(b,v)\), then \(a\) and \(b\) are necessarily resolved by some vertex from \(W\cap V(G_{i})\). On the other hand, if \(d_{G}(a,v)\neq d_{G}(b,v)\), then \(a\) and \(b\) are resolved by each \(w_{j}\), where \(j\neq i\). Assume next that \(a\) lies in \(G_{i}\) and \(b\) belongs to \(G_{j}\), where \(i\neq j\).
If \(d_{G}(a,v)=d_{G}(b,v)\), then \(a\) and \(b\) are resolved by some vertex from \(W\setminus\{v\}\). Assume that \(d_{G}(a,v)\neq d_{G}(b,v)\), and without loss of generality let \(d_{G}(a,v)>d_{G}(b,v)\) hold. Then we claim that \(a\) and \(b\) are resolved by \(w_{j}\). Indeed, suppose on the contrary that \(d_{G}(a,w_{j})=d_{G}(b,w_{j})\). Then

\[d_{G}(a,w_{j})=d_{G}(a,v)+d_{G}(v,w_{j})=d_{G}(b,w_{j})\leq d_{G}(b,v)+d_{G}(v,w_{j})\,,\]

which in turn implies that \(d_{G}(a,v)\leq d_{G}(b,v)\), a contradiction. We have thus proved that each pair of elements from \(V(G)\cup E(G)\) is resolved by some vertex from \(W\setminus\{v\}\), hence (i) holds. (ii) Since a mixed metric basis is a mixed resolving set of smallest cardinality, the inequality \(\mbox{mdim}(G)\leq n-\zeta(G)\) follows immediately from (i). To prove the equality part, suppose first that each vertex from \(V(G)\setminus\mbox{CV}(G)\) has a maximal neighbor. Then Lemma 1.2 together with the already proved inequality \(\mbox{mdim}(G)\leq n-\zeta(G)\) yields \(\mbox{mdim}(G)=n-\zeta(G)\). Conversely, suppose that \(\mbox{mdim}(G)=n-\zeta(G)\) and suppose on the contrary that \(v\in V(G)\setminus\mbox{CV}(G)\) has no maximal neighbor in \(G\). Then we claim that \(V(G)\setminus(\mbox{CV}(G)\cup\{v\})\) is a mixed resolving set for \(G\). Indeed, we already know that \(V(G)\setminus\mbox{CV}(G)\) is a mixed resolving set, so the only problem could be that a neighbor \(u\) of \(v\) would not be distinguished from the edge \(uv\), because \(G-v\) is a connected graph and by (i), \(V(G)\setminus(\mbox{CV}(G)\cup\{v\})\) is a mixed resolving set for it. However, since \(v\) has no maximal neighbor, there exists \(x\in N_{G}(v)\) such that \(ux\notin E(G)\). But then \(d_{G}(x,uv)=1\) and \(d_{G}(x,u)=2\). Hence \(V(G)\setminus(\mbox{CV}(G)\cup\{v\})\) is a mixed resolving set, a contradiction to the assumption that \(\mbox{mdim}(G)=n-\zeta(G)\). \(\Box\)

Clearly, no cut vertex can have a maximal neighbor. Hence the equality part of Theorem 3.1(ii) can be rephrased by saying that \(\mbox{mdim}(G)=n(G)\) if and only if every vertex of the graph \(G\) has a maximal neighbor, which is of course Theorem 1.1. Theorem 3.1 also implies the following.

**Corollary 3.2**: _If \(G\) is a block graph, then \(\mbox{mdim}(G)=n-\zeta(G)\)._

**Proof.** Just observe that if \(v\in V(G)\setminus\mbox{CV}(G)\), then \(v\) is a simplicial vertex and hence clearly has a maximal neighbor in \(G\). The result then follows from Theorem 3.1(ii). \(\Box\)

Corollary 3.2 in turn implies that if \(T\) is a tree, then \(\mathrm{mdim}(T)\) is the number of leaves of \(T\), a result first proved in [5, Theorem 4.3].

## Acknowledgments

Sandi Klavžar was supported by the Slovenian Research Agency (ARRS) under the grants P1-0297, J1-2452, N1-0285.

## Declaration of interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## Data availability

Our manuscript has no associated data.
2310.20217
Quantum Integrability and Chaos in periodic Toda Lattice with Balanced Loss-Gain
We consider equal-mass quantum Toda lattice with balanced loss-gain for two and three particles. The two-particle Toda lattice is integrable and two integrals of motion which are in involution have been found. The bound-state energy and the corresponding eigenfunctions have been obtained numerically for a few low-lying states. The three-particle quantum Toda lattice with balanced loss-gain and velocity mediated coupling admits mixed phases of integrability and chaos depending on the value of the loss-gain parameter. We have obtained analytic expressions for two integrals of motion which are in involution. Although an analytic expression for the third integral has not been found, the numerical investigation suggests integrability below a critical value of the loss-gain strength and chaos above this critical value. The level spacing distribution changes from the Wigner-Dyson to the Poisson distribution as the loss-gain parameter passes through this critical value and approaches zero. An identical behaviour is seen in terms of the gap-ratio distribution of the energy levels. The existence of mixed phases of quantum integrability and chaos in the specified ranges of the loss-gain parameter has also been confirmed independently via the study of level repulsion and complexity in higher order excited states.
Supriyo Ghosh, Pijush K. Ghosh
2023-10-31T06:30:36Z
http://arxiv.org/abs/2310.20217v1
# Quantum Integrability and Chaos in periodic Toda Lattice with Balanced Loss-Gain

###### Abstract

We consider equal-mass quantum Toda lattice with balanced loss-gain for two and three particles. The two-particle Toda lattice is integrable and two integrals of motion which are in involution have been found. The bound-state energy and the corresponding eigenfunctions have been obtained numerically for a few low-lying states. The three-particle quantum Toda lattice with balanced loss-gain and velocity mediated coupling admits mixed phases of integrability and chaos depending on the value of the loss-gain parameter. We have obtained analytic expressions for two integrals of motion which are in involution. Although an analytic expression for the third integral has not been found, the numerical investigation suggests integrability below a critical value of the loss-gain strength and chaos above this critical value. The level spacing distribution changes from the Wigner-Dyson to the Poisson distribution as the loss-gain parameter passes through this critical value and approaches zero. An identical behaviour is seen in terms of the gap-ratio distribution of the energy levels. The existence of mixed phases of quantum integrability and chaos in the specified ranges of the loss-gain parameter has also been confirmed independently via the study of level repulsion and complexity in higher order excited states.

###### Contents

* I Introduction
* II Two Particle Toda lattice
* III Periodic Toda lattice with three particles
  * III A Limiting Case
  * III.1 General Hamiltonian
    * III.1.1 Avoided level crossing and probability distribution
    * III.1.2 Level statistics & Gap-ratio distribution
* IV Conclusions & Discussions
* V Acknowledgements

## I Introduction

The Toda lattice, a system of particles interacting via exponential potentials, has long been the subject of extensive research[1; 2; 3]. Its remarkable properties, such as solitonic solutions[4] and the integrability of its equations of motion[5; 6; 7], have made it a pivotal topic in the study of integrable systems and mathematical physics. The Toda lattice is important in the context of mathematical modelling of many physical phenomena like heat propagation in lattice systems[8; 9], dynamics of DNA[10], peptide bonds in the \(\alpha\)-helix[11], and laser dynamics[12; 13; 14]. Various generalizations and modifications of the Toda lattice having realistic physical applications and mathematical importance have been considered[15; 16; 17; 18; 19]. A truncated Toda potential perturbed by weak friction and noise is important in galactic dynamics[15]. Chaotic behaviour is seen for the Toda lattice with unequal masses[16]. The Balanced Loss-Gain(BLG) system is defined as the one in which the flow preserves the volume in the position-velocity state space, although the individual degrees of freedom may be subjected to gain or loss[20]. The system is non-dissipative and may admit a Hamiltonian. A novel feature of such systems is the existence of (quasi-)periodic solutions within some regions of the parameter-space, and they have been studied extensively in the context of \(\mathcal{PT}\)-symmetry[21; 22; 23; 24; 19; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. The Hamiltonian formulation of generic BLG systems has been discussed in Ref.
[30; 31; 32; 35] and a few such examples include systems with nonlinear interaction[27; 28; 29; 31; 32; 33], many particle systems[30; 31; 32], systems with space dependent loss-gain term[31], systems with Lorentz interaction[35], Hamiltonian chaos[33; 34; 19], oligomers[20] etc. The formalism has been extended beyond mechanical systems --the nonlinear Schrodinger equation[24; 25; 26] and the nonlinear Dirac equation[20] with BLG have been studied from the viewpoint of exact solvability and existence of solutions bounded in time. A recent addition to the growing list of generalized Toda systems is a Toda lattice with BLG[19]. It has been shown that a two-particle Toda system with BLG is integrable and analytic expressions for two integrals of motion which are in involution have been found. Periodic solutions of the equations of motion have been found numerically. A three particle Hamiltonian Toda system with BLG and Velocity Mediated Coupling(VMC) has been shown to admit mixed phases of integrability and chaos. Two of the integrals of motion which are in involution have been found analytically. Although the third integral of motion, which is required for showing integrability, has not been found analytically, the numerical studies reveal integrability below a critical value of the loss-gain parameter and chaos above this critical value. The existence of mixed phases of integrability and chaos in the system has been established through the study of sensitivity to the initial conditions, Poincare sections, Lyapunov exponents, power-spectra and autocorrelation functions[19]. A non-Hamiltonian Toda system with BLG has also been shown to admit chaotic behaviour, and chaos in that system is solely induced by the BLG. The purpose of this article is to study the Hamiltonian Toda system with BLG and VMC of Ref. [19] in the quantum regime. The two-particle quantum Toda system is shown to be integrable via the construction of two integrals of motion which are in involution. The translation invariance of the system allows us to separate the center of mass motion, and the Schrodinger equation in the relative coordinate may be interpreted as that of a particle moving in an inverted harmonic oscillator plus a hyperbolic cosine potential. The effective potential is a single well or a symmetric/asymmetric double well depending on the parameters of the system. The quantum bound states have been obtained numerically. The three particle Hamiltonian Toda lattice is translation invariant. The center of mass motion is separated out in the Jacobi coordinates and the effective Hamiltonian may be interpreted as that of a particle moving in a two dimensional potential, consisting of anisotropic harmonic oscillators and the Toda potential, and subjected to an external uniform magnetic field with its magnitude being proportional to the loss-gain parameter. The system is exactly solvable in the limit in which the Toda potential reduces to that of coupled oscillators, such that the starting Hamiltonian describes coupled oscillators with BLG and VMC. The eigenvalues and eigenfunctions are obtained analytically in this particular limit and are given by two decoupled anisotropic oscillators. In general, the quantum problem is not amenable to analytic solution. The conserved quantity corresponding to the translation invariance may be interpreted as a generalized momentum which commutes with the Hamiltonian, and thereby two of the quantum integrals of motion are found analytically.
The third integral of motion that is required to establish integrability has not been found analytically so far. However, the numerical investigations suggest integrability below a critical value of the loss-gain parameter and chaos above this critical value, as is the case for the corresponding classical system. It may be recalled in this context that a link between classical and quantum chaos may be explored by studying statistical properties of energy levels based on Random Matrix Theory(RMT)[36; 37]. In particular, the level statistics of a quantum Hamiltonian with an integrable classical counterpart follows the Poisson distribution [38; 39; 40; 41; 42; 43; 44; 45; 46], while in the chaotic region it follows the Wigner distribution of the Gaussian Orthogonal Ensemble(GOE). The level statistics is described in terms of the spacing of nearest-neighbour eigen-energies, and unfolded data is used in the process instead of the raw data. There is an alternative method that circumvents the problems associated with the unfolding procedure in some cases, in which the statistics of the ratios of the energy-gaps for consecutive levels is studied. The statistics of the gap-ratio of a quantum Hamiltonian with an integrable classical counterpart follows the Poisson distribution, while it follows the Wigner distribution in the chaotic region[47]. In this article, we establish the mixed phases of integrability and chaos by studying the level spacing distribution as well as the gap-ratio distribution. The quantum transition from the chaotic to the integrable region is observed when the loss-gain strength crosses a critical value and goes to zero --the level spacing as well as the gap-ratio distributions smoothly change from the Wigner-Dyson distribution and tend to follow the Poisson distribution. We also show the level repulsion phenomena in the energy spectra in both the two and three particle Toda lattice. It is shown through the graphical presentations that the degree of level repulsion is large in the case of the three particle system. The quantum transition from the chaotic to the integrable region is also confirmed independently by studying the complex behaviour of higher order excited state wave functions. The plan of this article is the following. The two-particle Toda system is studied in the next section. The three particle Toda system is presented in Sec. III and the exactly solvable limiting case is described in Sec. III.A. The numerical results for the general quantum problem are described in Sec. III.B. Finally, the results are summarized with discussions in Sec. IV.
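Before turning to the models, the gap-ratio diagnostic described above can be summarized in a minimal numerical sketch. The spectra below are synthetic stand-ins (a Poisson sequence and a random real symmetric matrix), not the Toda spectra studied in this work; the reference values \(\langle\tilde{r}\rangle\approx 0.386\) for Poisson statistics and \(\approx 0.53\) for the GOE are the standard ones, while all other choices here are illustrative.

```python
import numpy as np

def mean_gap_ratio(levels):
    """Mean of r~_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}) over consecutive spacings."""
    E = np.sort(np.asarray(levels))
    s = np.diff(E)
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(0)

# Integrable-like reference: uncorrelated (Poisson) levels, <r~> ~ 2 ln 2 - 1 ~ 0.386
poisson_levels = np.cumsum(rng.exponential(size=20000))

# Chaotic-like reference: eigenvalues of a random real symmetric matrix, <r~> ~ 0.53
A = rng.normal(size=(2000, 2000))
goe_levels = np.linalg.eigvalsh((A + A.T) / 2)

print("Poisson-like spectrum:", round(mean_gap_ratio(poisson_levels), 3))
print("GOE-like spectrum:    ", round(mean_gap_ratio(goe_levels), 3))
```

The practical advantage of the gap ratio is visible in the sketch: no unfolding of the raw spectrum is needed before the statistic is computed.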
## II Two particle Toda lattice

The Hamiltonian of the periodic Toda lattice is given as[19]

\[H = 2\left(p_{1}+\frac{\gamma}{2}x_{2}\right)\left(p_{2}-\frac{\gamma}{2}x_{1}\right)+\frac{a}{b}\left\{e^{b(x_{1}-x_{2})}+e^{b(x_{2}-x_{1})}-2\right\}, \tag{1}\]

where \(\gamma\) is the strength of the BLG term and \(a\), \(b\) are the strengths of nonlinear interaction. The canonical conjugate momenta to the coordinates \((x_{1},x_{2})\) are defined as \((p_{1},p_{2})\). The Hamiltonian (1) is \({\cal P}{\cal T}\)-symmetric where the parity transformation \({\cal P}:\ x_{1}\leftrightarrow x_{2},p_{1}\leftrightarrow p_{2}\) and the time-reversal operation \({\cal T}:(x_{1},x_{2})\rightarrow(x_{1},x_{2})\), \((p_{1},p_{2})\rightarrow(-p_{1},-p_{2})\). In the limit of \(a\rightarrow\infty,b\to 0\) such that \(ab\equiv\omega^{2}\), the above Hamiltonian reduces to that of coupled oscillators with BLG which is a variant of the model considered in Ref. [21]. The generic system is translation invariant, and in order to separate out the center of mass motion, we consider the following coordinate transformation:

\[x_{1}=\frac{x+y}{\sqrt{2}},\ x_{2}=\frac{x-y}{\sqrt{2}},\ p_{1}=\frac{p_{x}+p_{y}}{\sqrt{2}},\ p_{2}=\frac{p_{x}-p_{y}}{\sqrt{2}}. \tag{2}\]

The Hamiltonian in terms of the new coordinates is expressed as,

\[H = p_{x}^{2}-p_{y}^{2}-\gamma(xp_{y}+yp_{x})-\frac{\gamma^{2}}{4}(x^{2}-y^{2})+\frac{a}{b}\left\{e^{b\sqrt{2}y}+e^{-b\sqrt{2}y}-2\right\}. \tag{3}\]

The \(\mathcal{P}\) and \(\mathcal{T}\) have the standard definitions in these coordinates, namely, \(\mathcal{P}:(x,y)\rightarrow(x,-y),(p_{x},p_{y})\rightarrow(p_{x},-p_{y})\) and \(\mathcal{T}:(x,y)\rightarrow(x,y),(p_{x},p_{y})\rightarrow(-p_{x},-p_{y})\). The Hamiltonian is invariant under the operation of \(\mathcal{PT}\). The Hamiltonian is not positive-definite. The eigen value equation corresponding to the Hamiltonian (3) is given as,

\[\left[-\left(\partial_{x}^{2}-\partial_{y}^{2}\right)+i\gamma\left(x\partial_{y}+y\partial_{x}\right)-\frac{\gamma^{2}}{4}(x^{2}-y^{2})+\frac{a}{b}\left\{e^{b\sqrt{2}y}+e^{-b\sqrt{2}y}-2\right\}\right]\psi=\tilde{E}\psi \tag{4}\]

where we have used the standard coordinate representation of the Heisenberg equation and denoted \(p_{x}:=-i\partial_{x},\ p_{y}:=-i\partial_{y}\). The system is translation invariant and the associated conserved operator,

\[\widehat{\Pi}=\left(-2i\frac{\partial}{\partial x}+\gamma y\right), \tag{5}\]

commutes with the Hamiltonian (3). Thus, the quantum system is integrable as is the case for the corresponding classical system. The simultaneous wave function of \(\widehat{\Pi}\) and \(H\) is considered as,

\[\psi(x,y)=\exp\left[\frac{i}{2}x(k-\gamma y)\right]\phi(y), \tag{6}\]

which when substituted into Eq. (4) gives the following eigen value equation,

\[\phi^{\prime\prime}(y)+\gamma^{2}y^{2}\phi(y)+\frac{k}{4}(k-2\gamma y)\phi+\frac{2a}{b}\left\{\cosh(\sqrt{2}by)-1\right\}\phi(y)=\tilde{E}\phi(y). \tag{7}\]

We make a transformation \(y\to y-\frac{k}{4\gamma}\) followed by a scale transformation \(y\rightarrow\sqrt{2}by\), and the resulting eigen value equation takes the following form,

\[-\phi^{\prime\prime}+\left[-\lambda y^{2}-\beta\left\{\cosh\left(y+y_{0}\right)-1\right\}\right]\phi=E\phi \tag{8}\]

where \(\lambda=\frac{\gamma^{2}}{4b^{4}}\), \(\beta=\frac{a}{b^{3}}\), \(E=-\frac{1}{2b^{2}}(\tilde{E}-\frac{3k^{2}}{16})\), \(y_{0}=\frac{bk}{2\sqrt{2}\gamma}\). It may be noted that \(\lambda\) is always positive and \(\beta\) can be either positive or negative. The eigen value equation in Eq. (8) may be interpreted as that of a particle moving in the one dimensional potential,

\[V(y)=\beta\left(1-\cosh(y+y_{0})\right)-\lambda y^{2} \tag{9}\]

The potential \(V(y)\) has two localized minima for \(-2\lambda<\beta<0\), representing a symmetric double-well potential for \(y_{0}=0\) and an asymmetric one for \(y_{0}\neq 0\). The potential has only one minimum at \(y=0\), representing a single well, in the remaining region of the parameter-space. The quantum bound states are allowed only for \(\beta<0\). The potential and wave functions for the ground and 1st excited states of the system for \(\lambda=1,k=0\) and different values of \(\beta\) are given in Fig. 1.
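The bound states of Eq. (8) can be obtained, for instance, by diagonalizing a finite-difference discretization of the operator on a truncated interval. The following sketch is one such illustrative implementation; the grid size, box length and the displayed \(\beta\) values (taken from the parameter choices of Fig. 1) are choices made here and not necessarily those used to produce the figures.

```python
import numpy as np

def toda_bound_states(lam=1.0, beta=-0.9, y0=0.0, L=10.0, N=2000, n_states=4):
    """Lowest eigenpairs of  -phi'' + [-lam*y^2 - beta*(cosh(y+y0) - 1)] phi = E phi
    via a second-order finite-difference discretization on [-L, L]."""
    y, h = np.linspace(-L, L, N, retstep=True)
    V = -lam * y**2 - beta * (np.cosh(y + y0) - 1.0)
    # Tridiagonal kinetic term: -d^2/dy^2 -> (2, -1, -1) / h^2
    H = (np.diag(2.0 / h**2 + V)
         + np.diag(np.full(N - 1, -1.0 / h**2), 1)
         + np.diag(np.full(N - 1, -1.0 / h**2), -1))
    E, phi = np.linalg.eigh(H)
    return E[:n_states], phi[:, :n_states]

for beta in (-0.5, -0.9, -1.7, -2.1):
    E, _ = toda_bound_states(beta=beta)
    print(f"beta = {beta:5.1f}: lowest levels {np.round(E, 3)}")
```

For \(-2\lambda<\beta<0\) the sketch reproduces the qualitative double-well behaviour discussed around Fig. 1, with the two lowest levels forming a near-degenerate doublet when the central barrier is high.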
The barrier between the two wells increases with the decreasing value of \(|\beta|\), and the system decomposes into two independent components separated from each other --the wave function is the set of two split-up wave functions. The tunnelling probability of the particle from one well to the other increases with the decrease of the barrier height, and the wave functions of the left and right regions tend to overlap. The standard behaviour of the wave function in a double-well represented by a quartic potential is seen in the present context.

Figure 1: (Color online) Plot of wave functions and potential for different values of \(\beta\) and \(\lambda=1\). Green, Red and Orange solid lines represent potential, ground state and 1st excited state wave functions, respectively. Fig.(a), Fig.(c) \(\beta=-0.5\); Fig.(b), Fig.(d) \(\beta=-0.9\); Fig.(e), Fig.(g) \(\beta=-1.7\); Fig.(f), Fig.(h) \(\beta=-2.1\)

The eigen values corresponding to the quantum bound states of the system are shown in Fig. 2 as a function of \(\beta\) for a few low-lying states. We observe that the eigen values come close to each other as \(\beta\) increases, although the levels do not cross each other. The degree of level repulsion is low, which is expected for an integrable system. The ground state energy for different values of \(\beta\) and \(y_{0}\) is shown in Fig. 3.

Figure 2: (Color online) Plot of the lowest six energy levels of the 2D Toda lattice. Parametric value: \(\lambda=1\)

Figure 3: (Color online) Plot of the ground state energy of the 2D Toda lattice for different values of \(\beta\) and \(y_{0}\). Parametric value: \(\lambda=1\)

## III Periodic Toda lattice with three particles

We consider the following Hamiltonian of a Toda lattice with BLG and VMC for three particles,

\[H = \Pi_{1}\Pi_{2}+\Pi_{1}\Pi_{3}+\Pi_{2}\Pi_{3}+\frac{a}{b}\left[1-e^{b(q_{2}-q_{1})}-e^{b(q_{3}-q_{2})}-e^{b(q_{1}-q_{3})}\right] \tag{10}\]

The generalized momenta appearing in \(H\) have the following expressions,

\[\Pi_{1}=p_{1}+\frac{\gamma}{4}(q_{2}+q_{3}),\ \ \Pi_{2}=p_{2}-\frac{\gamma}{4}(q_{1}-q_{3}),\ \ \Pi_{3}=p_{3}-\frac{\gamma}{4}(q_{1}+q_{2}). \tag{11}\]

The \(\gamma\) dependent terms in the \(\Pi_{i}\)'s produce BLG and VMC in the respective Hamilton's equations of motion. In particular, the system is governed by the equations of motion,

\[\ddot{q}_{1}+\gamma\dot{q}_{1}+\frac{\gamma}{2}\left(\dot{q}_{2}-\dot{q}_{3}\right)-a\left[e^{b(q_{2}-q_{1})}-e^{b(q_{1}-q_{3})}\right]=0\]
\[\ddot{q}_{2}+\frac{\gamma}{2}\left(\dot{q}_{1}-\dot{q}_{3}\right)-a\left[e^{b(q_{3}-q_{2})}-e^{b(q_{2}-q_{1})}\right]=0\]
\[\ddot{q}_{3}-\gamma\dot{q}_{3}+\frac{\gamma}{2}\left(\dot{q}_{1}-\dot{q}_{2}\right)-a\left[e^{b(q_{1}-q_{3})}-e^{b(q_{3}-q_{2})}\right]=0,\]

where particle-1 and particle-3 are subjected to BLG. Each particle interacts with the remaining two particles via the Toda potential and the VMC. The above classical system has been studied in detail in Ref. [19] and shown to admit mixed phases of integrability and chaos --two of the integrals of motion have been obtained analytically. In this article, we study the quantum Hamiltonian \(H\) from the viewpoint of its integrability and chaos. The system is translation invariant. We introduce the Jacobi coordinates \(Q_{1},Q_{2},Q_{3}\) in order to separate out the center of mass motion and to work with only two coordinates,

\[Q_{j}=\frac{1}{\sqrt{j(j+1)}}\left(\sum_{k=1}^{j}q_{k}-jq_{j+1}\right),\quad Q_{3}=\frac{1}{\sqrt{3}}\sum_{k=1}^{3}q_{k},\]

where \(j=1,2\). The momenta \(P_{j}\) in the Jacobi coordinates are related to the \(p_{j}\)'s through the same relations --replace \((Q_{k},q_{k})\rightarrow(P_{k},p_{k})\).
The Hamiltonian can be rewritten in terms of \((Q_{k},P_{k})\) as, \[H = \frac{\tilde{\Pi}_{3}^{2}}{4}-\frac{\tilde{\Pi}_{1}^{2}}{2}- \frac{\tilde{\Pi}_{2}^{2}}{2}-\frac{a}{b}\left[e^{-b\sqrt{2}Q_{1}}+e^{\frac{b} {\sqrt{2}}(Q_{1}+\sqrt{3}Q_{2})}\right. \tag{12}\] \[+ \left.e^{\frac{b}{\sqrt{2}}(Q_{1}-\sqrt{3}Q_{2})}-1\right]\] where \(\tilde{\Pi}_{1}\), \(\tilde{\Pi}_{2}\) and \(\tilde{\Pi}_{3}\) are the modified generalized momenta: \[\tilde{\Pi}_{1} = -P_{1}-\frac{\gamma}{4\sqrt{3}}\left(Q_{2}+\sqrt{2}Q_{3}\right)\] \[\tilde{\Pi}_{2} = -P_{2}+\frac{\gamma}{4\sqrt{3}}\left(Q_{1}-\sqrt{6}Q_{3}\right)\] \[\tilde{\Pi}_{3} = 2P_{3}-\frac{\gamma}{\sqrt{6}}\left(Q_{1}+\sqrt{3}Q_{2}\right) \tag{13}\] The Hamiltonian is not positive-definite even for \(\frac{a}{b}<0\). We quantize the Hamiltonian (12) by replacing \((Q_{j},P_{j})\) with the corresponding operators \((Q_{j},-i\partial_{Q_{j}})\) satisfying the commutation relations \([Q_{j},P_{l}]=i\delta_{jl}\), \([Q_{j},Q_{l}]=0\), \([P_{j},P_{l}]=0\), and denote the resulting Hamiltonian as \(\widehat{H}\). We work in units with \(\hbar=1\). The translation invariance leads to a conserved quantity, and the corresponding operator \[\widehat{\Pi}=-2\sqrt{3}i\frac{\partial}{\partial Q_{3}}+\frac{\gamma}{\sqrt{ 2}}(Q_{1}+\sqrt{3}Q_{2}) \tag{14}\] commutes with the Hamiltonian \(\widehat{H}\). Complete integrability requires three integrals of motion; however, we have found analytic expressions for only two conserved quantities, namely \(\widehat{H}\) and \(\widehat{\Pi}\). It will be seen later that numerical investigations indicate integrability of the system in some range of the parameter \(\gamma\). The commutation relation \([\widehat{H},\widehat{\Pi}]=0\) allows us to find simultaneous eigenfunctions of these two operators, \[\psi(Q_{1},Q_{2},Q_{3}) = \exp\left[\frac{i}{2\sqrt{3}}Q_{3}\left\{k-\frac{\gamma}{\sqrt{2 }}\left(Q_{1}\right.\right.\right. \tag{15}\] \[+ \left.\left.\left.\sqrt{3}Q_{2}\right)\right\}\right]\phi(Q_{1},Q _{2}),\] where \(k\) is the eigenvalue of the operator \(\widehat{\Pi}\). Inserting the expression for \(\psi(Q_{1},Q_{2},Q_{3})\) into the time-independent Schrödinger equation \(\widehat{H}\psi=\epsilon\psi\), we get an eigenvalue equation in terms of an effective Hamiltonian \(H_{\rm eff}\) and energy \(E\) as \(H_{\rm eff}\phi=E\phi\), where \(E=-(\epsilon-\frac{a}{b})\). The effective Hamiltonian reads, \[H_{\rm eff} = \frac{1}{2}\left(P_{1}+\frac{\gamma}{4\sqrt{3}}Q_{2}\right)^{2}+ \frac{1}{2}\left(P_{2}-\frac{\gamma}{4\sqrt{3}}Q_{1}\right)^{2}+V_{\rm eff}\] \[V_{\rm eff} = -\frac{\gamma^{2}}{6}\left(Q_{1}+\sqrt{3}Q_{2}+\frac{k}{\sqrt{2} \gamma}\right)^{2}+\frac{a}{b}\left[e^{-b\sqrt{2}Q_{1}}\right. \tag{16}\] \[+ \left.e^{\frac{b}{\sqrt{2}}(Q_{1}+\sqrt{3}Q_{2})}+e^{\frac{b}{ \sqrt{2}}(Q_{1}-\sqrt{3}Q_{2})}\right],\] where \(P_{1},P_{2},Q_{1},Q_{2}\) are to be treated as quantum mechanical operators, as mentioned earlier. The effective Hamiltonian may be interpreted as describing a particle moving in a two-dimensional potential \(V_{\rm eff}\) and subjected to a uniform "fictitious magnetic field" [30; 31; 35] perpendicular to the \(Q_{1}\)-\(Q_{2}\) plane with absolute magnitude \(\frac{|\gamma|}{2\sqrt{3}}\). 
In the terminology of the quantization of a system in a uniform magnetic field, the "fictitious gauge potential" in \(H_{\rm eff}\) is written in the symmetric gauge [31]. One may choose the Landau gauge instead, and the corresponding Hamiltonian \(H_{L}\) is obtained through a gauge transformation as, \[H_{L} = \exp\left(-\frac{i\gamma}{4\sqrt{3}}Q_{1}Q_{2}\right)\ H_{\rm eff}\ \exp\left( \frac{i\gamma}{4\sqrt{3}}Q_{1}Q_{2}\right) \tag{17}\] \[= \frac{1}{2}\left(P_{1}+\frac{\gamma}{2\sqrt{3}}Q_{2}\right)^{2}+\frac{P_{2} ^{2}}{2}+V_{\rm eff}\] A quadratic potential arises in \(V_{\rm eff}\) as an effect of the BLG and VMC, and the standard Toda potential gets modified. The potential is bounded for \(\frac{a}{b}>0\), and contour plots are shown in Fig. 4 for a few choices of the parameters. We study bound states of the quantum Hamiltonian \(H_{\rm eff}\). The analytical treatment of the general eigenvalue problem appears to be non-trivial, and it will be studied numerically. However, exact solutions may be obtained in the limiting case in which the Toda potential is approximated by that of coupled oscillators. ### A Limiting Case The Toda potential reduces to that of coupled oscillators in the limit \(b\to 0,a\rightarrow\infty\) such that \(ab\equiv\omega^{2}\). The Hamiltonian \(H_{\rm eff}\) in this limit reduces to a two-dimensional anisotropic oscillator in an external uniform magnetic field, \[\tilde{H} = \frac{1}{2}\left(P_{1}+\frac{\gamma}{4\sqrt{3}}Q_{2}\right)^{2}+ \frac{1}{2}\left(P_{2}-\frac{\gamma}{4\sqrt{3}}Q_{1}\right)^{2} \tag{18}\] \[+ \frac{9\omega^{2}-\gamma^{2}}{6}Q_{1}^{2}+\frac{3\omega^{2}-\gamma ^{2}}{2}Q_{2}^{2}-\frac{\gamma^{2}}{\sqrt{3}}Q_{1}Q_{2},\] where we have taken \(k=0\) and dropped the constant term \(\frac{3a}{b}\). The term \(Q_{1}Q_{2}\) can be removed through a rotation by thirty degrees in the anti-clockwise direction. In particular, the transformation \[Q_{1}=\frac{\sqrt{3}}{2}X+\frac{1}{2}Y,\ Q_{2}=-\frac{1}{2}X+\frac{\sqrt{3}}{2}Y \tag{19}\] transforms \(\tilde{H}\) to the following form: \[\tilde{H}=\frac{1}{2}\left(\Pi_{X}^{2}+\Pi_{Y}^{2}\right)+\frac{1 }{2}\omega_{1}^{2}X^{2}+\frac{1}{2}\omega_{2}^{2}Y^{2}\] \[\Pi_{X}=P_{X}+\frac{\gamma}{4\sqrt{3}}Y,\ \Pi_{Y}=P_{Y}-\frac{ \gamma}{4\sqrt{3}}X, \tag{20}\] where \(P_{X}=-i\partial_{X},\ P_{Y}=-i\partial_{Y}\) and \(\omega_{1}^{2}=3\omega^{2},\ \omega_{2}^{2}=\omega_{1}^{2}-\frac{4\gamma^{2 }}{3}\). The Hamiltonian \(\tilde{H}\) is \(\mathcal{PT}\)-symmetric, where the parity transformation is \(\mathcal{P}:(X,Y)\rightarrow(X,-Y),(P_{X},P_{Y})\rightarrow(P_{X},-P_{Y})\) and the time reversal operation is \(\mathcal{T}:(X,Y)\rightarrow(X,Y),(P_{X},P_{Y})\rightarrow(-P_{X},-P_{Y})\). A phase transition occurs in this system at \(9\omega^{2}=4\gamma^{2}\). The eigenvalues of the system are real for \(\omega^{2}>\frac{4}{9}\gamma^{2}\), and the system is in the unbroken \(\mathcal{PT}\)-symmetric phase. The \(\mathcal{PT}\)-symmetry is broken for \(\omega^{2}<\frac{4}{9}\gamma^{2}\), and the eigenvalues become complex. The Hamiltonian \(\tilde{H}\) describes an anisotropic two-dimensional harmonic oscillator in an external uniform magnetic field, which has been studied earlier [48; 49]. We follow the method outlined in Ref. [50] to obtain the eigenspectra. Define a four-dimensional vector \(U=(\omega_{1}X,\Pi_{X},\omega_{2}Y,\Pi_{Y})\) in phase space such that \(\tilde{H}=\frac{1}{2}UU^{T}\), where \(U^{T}\) denotes the transpose of \(U\). 
It may be noted that \([U_{i},U_{j}]=iM_{ij}\) where the \(4\times 4\) matrix \(M\) is given by, \[M=\begin{pmatrix}0&\omega_{1}&0&0\\ -\omega_{1}&0&0&\frac{\gamma}{2\sqrt{3}}\\ 0&0&0&\omega_{2}\\ 0&-\frac{\gamma}{2\sqrt{3}}&-\omega_{2}&0\end{pmatrix}. \tag{21}\] The eigenvalues of \(iM\) are \((\Omega_{+},-\Omega_{+},\Omega_{-},-\Omega_{-})\) where \[\Omega_{\pm}=\sqrt{3\left[\omega^{2}\pm\frac{5\gamma}{24}\sqrt{\gamma^{2}+ \frac{16}{25}\omega^{2}}-\frac{5}{24}\gamma^{2}\right]}, \tag{22}\] and reality of \(\Omega_{\pm}\) is ensured for \(\omega^{2}>\frac{4}{9}\gamma^{2}\). The determinant of the matrix \(iM\) is zero for \(\omega^{2}=\frac{4}{9}\gamma^{2}\) and \(\omega=\pm\frac{2}{3}\gamma\) characterizes a critical phase for which \(\Omega_{-}=0\). There exists an orthogonal transformation \(V=O^{T}U\) such that the matrix \(M\) can be block-diagonalized as[51], \[M_{B}=O^{T}MO=\begin{pmatrix}0&\Omega_{+}&0&0\\ -\Omega_{+}&0&0&0\\ 0&0&0&\Omega_{-}\\ 0&0&-\Omega_{-}&0\end{pmatrix}. \tag{23}\] where \(O\) is an \(O(4)\) rotation matrix and \(O^{T}\) is the transpose of \(O\). The expression of \(O\) is given as, \[O=\begin{pmatrix}\frac{\sqrt{2}\omega_{1}a_{\pm}}{\Omega_{+}}({\omega_{2}}^{2 }-{\Omega_{+}}^{2})&0&\frac{\sqrt{2}\omega_{1}a_{-}}{\Omega_{-}}({\omega_{2}} ^{2}-{\Omega_{-}}^{2})&0\\ 0&\sqrt{2}a_{+}({\omega_{2}}^{2}-{\Omega_{+}}^{2})&0&\sqrt{2}a_{-}({\omega_{2} }^{2}-{\Omega_{-}}^{2})\\ 0&-\frac{\omega_{2}\gamma a_{+}}{\sqrt{6}}&0&-\frac{\omega_{2}\gamma a_{-}}{ \sqrt{6}}\\ \frac{\gamma a_{+}\Omega_{+}}{\sqrt{6}}&0&\frac{\gamma a_{-}\Omega_{-}}{ \sqrt{6}}&0\end{pmatrix}, \tag{24}\] where \[a_{\pm}=\frac{\Omega_{\pm}}{\sqrt{(\omega_{2}^{2}-{\Omega_{\pm}}^{2})^{2}({ \omega_{1}}^{2}+{\Omega_{\pm}}^{2})+\frac{\gamma^{2}{\Omega_{\pm}}^{2}}{12}( {\omega_{2}}^{2}+{\Omega_{\pm}}^{2})}}.\] The matrix \(O\) is unique modulo \(O(4)\) rotations. We denote \(M_{D}=\text{diag}(\Omega_{+},-\Omega_{+},\Omega_{-},-\Omega_{-})\) and let \(S\) diagonalizes \(iM\), i. e. \(M_{D}=S^{\dagger}(iM)S\). The similarity transformation can not change eigenvalues and let \(T\) diagonalizes \(iM_{B}\), i.e. \(M_{D}=T^{\dagger}(iM_{B})T\). The matrix \(O\) is constructed as \(O=ST^{\dagger}\). The transformed variables in the phase space \((x,p_{x},y,p_{y})\equiv(v_{1}/\sqrt{\Omega_{+}},v_{2}/\sqrt{\Omega_{+}},v_{3}/ \sqrt{\Omega_{-}},v_{4}/\sqrt{\Omega_{-}})\) where \(\Omega_{-}\neq 0\), satisfy the standard commutation relations, \[[x,p_{x}]=i,\ [y,p_{y}]=i,\ [x,y]=0=[p_{x},p_{y}]. \tag{25}\] The Hamiltonian is expressed in terms of new variables as two decoupled anisotropic harmonic oscillators, \[\tilde{H}=\frac{\Omega_{+}}{2}\left(p_{x}^{2}+x^{2}\right)+\frac{\Omega_{-}}{2 }\left(p_{y}^{2}+y^{2}\right). \tag{26}\] The Hamiltonian can be expressed in terms of two sets of annihilation and creation operators as, \[\tilde{H}=\Omega_{+}\left(a^{\dagger}a+\frac{1}{2}\right)+\Omega_{-} \left(b^{\dagger}b+\frac{1}{2}\right),\] \[a=\frac{1}{\sqrt{2}}\left(p_{x}-ix\right),\ \ b=\frac{1}{\sqrt{2}} \left(p_{y}-iy\right) \tag{27}\] The ground state is determined from the conditions \(a\Psi_{0}=0,b\Psi_{0}=0\), \[\Psi_{0}(x,y)=\frac{2}{\sqrt{\pi}}\ e^{-\frac{1}{2}\left(x^{2}+y^{2}\right)}, \ \ E_{0,0}=\frac{\Omega_{+}+\Omega_{-}}{2}. \tag{28}\] The energy eigenvalues and the eigenfunctions are, \[E_{n,m}=\left(n+\frac{1}{2}\right)\Omega_{+}+\left(m+\frac{1}{2} \right)\Omega_{-},\ n,m\in\mathbb{Z}^{\geq 0}\] \[\Psi_{n,m}(x,y)=\frac{(a^{\dagger})^{n}}{\sqrt{n!}}\frac{(b^{ \dagger})^{m}}{\sqrt{m!}}\Psi_{0}(x,y). 
\tag{29}\] The wave function \(\Psi_{n,m}(x,y)\) may be expressed in terms of the original variables through a series of inverse transformations, which we do not pursue here. It may be noted that there is a reduction in the phase space for \(\omega^{2}=\frac{4}{9}\gamma^{2}\), for which \(\Omega_{-}=0\) and the eigenvalues become infinitely degenerate. The energy eigenvalues are complex for \(\omega^{2}<\frac{4}{9}\gamma^{2}\) and entirely real for \(\omega^{2}>\frac{4}{9}\gamma^{2}\). ### General Hamiltonian The eigenvalue problem for the Hamiltonian \(H_{\rm eff}\) in its full generality does not seem to be amenable to an analytical treatment. We study the quantum bound states of \(H_{\rm eff}\) numerically, and obtain the energy spectra and wave functions by using the finite difference method. The scattering states of \(H_{\rm eff}\), if any, are excluded from our study. We have considered \(\gamma>0,k\geq 0\) and \(a=b=1\) in the numerical calculations unless stated otherwise. The eigenvalues and eigenfunctions thus obtained are analyzed to investigate quantum integrability and chaos in the system. In particular, we study avoided level crossings, the probability distribution of wave functions in the semi-classical region, the level statistics and the gap-ratio distribution. It is worth recalling that the system in Ref. [19] is integrable for \(|\gamma|\lessapprox 1.5\) and chaotic above this critical value. We present numerical results in the following for \(\gamma\) varying from zero to values above this critical value, in order to identify the quantum integrable and chaotic regions. #### iii.2.1 Avoided level crossing and probability distribution The lowest ten energy eigenvalues of \(H_{\text{eff}}\) for different values of \(\gamma\), with \(k=0,a=b=1\), are shown in Fig. 5. The energies corresponding to each state decrease with increasing \(\gamma\). The energy levels are highly correlated and repel each other. Avoided level crossing is a signature of quantum chaos [36], since level crossing gives rise to degeneracy in the spectrum, implying a symmetry and associated conserved quantities in the system. The avoided level crossing is seen for larger values of \(\gamma\) in Fig. 5. The level repulsion is not obvious from the figure for smaller values of \(\gamma\), and is shown separately in the inset of Fig. 5 for \(1\leq\gamma\leq 1.5\). The dependence of the ground state energy of \(H_{\text{eff}}+\frac{k^{2}}{12}\) on \(\gamma\) and \(k\) is shown in Fig. 6. The ground state energy is shifted for \(k\neq 0\) by an amount of the order of \(-\gamma^{2}k\), and the same qualitative behaviour is seen. Bohr's correspondence principle links classical physics to the predictions of quantum mechanics for very highly excited states. Thus, a chaotic solution in classical physics should leave some signature in the quantized version of the same system. One of the objects in which to look for such signatures is the probability distribution of highly excited states. The probability distribution of quantum states of a classically integrable system is expected to be localized in space for low as well as very high quantum numbers. On the other hand, the probability distribution of quantum states corresponding to a classically chaotic system is expected to spread out in space for large quantum numbers. The spreading out of the probability distribution of these states can be thought of as the quantum analogue of the chaotic path taken in the classical region. Figure 5: (Color online) Plot of the lowest ten energy eigenvalues of the 3-particle system as a function of the BLG parameter \(\gamma\) for \(k=0\) and \(a=b=1\). Figure 6: (Color online) Plot of the ground state energy for different values of \(\gamma\) and \(k\). Here \(a=b=1\). 
We plot the probability distribution of the ground state and a few excited states in Fig. 7 and Fig. 8 for \(\gamma=0.2\) and \(\gamma=1.6\), respectively. It is seen that the probability distribution for \(\gamma=0.2\) is localized, with a uniform distribution for low as well as highly excited states. On the other hand, complex behaviour arises in the highly excited wave functions for \(\gamma=1.6\), and is manifested in the nonuniform probability distribution. Figure 7: (Color online) Plot of \(|\Phi|^{2}\) for \(k=0\), \(a=b=1\) and \(\gamma=0.2\). (a) Ground state; (b) 1st excited state; (c) 2nd excited state; (d) 200th excited state; (e) 300th excited state; (f) 400th excited state. Figure 8: (Color online) Plot of \(|\Phi|^{2}\) for \(k=0\), \(a=b=1\) and \(\gamma=1.6\). (a) Ground state; (b) 1st excited state; (c) 2nd excited state; (d) 200th excited state; (e) 300th excited state; (f) 400th excited state. #### iii.2.2 Level statistics & Gap-ratio distribution The chaotic dynamics in classical physics is characterized by studying phase space trajectories, Lyapunov exponents, Poincaré sections, etc. The Schrödinger equation being a linear system, the notion of quantum chaos is not unique, and its quantification is still an active area of research. One of the standard approaches to explore signatures of quantum chaos in a classically chaotic system is to study the level statistics. According to the BGS conjecture, a quantum Hamiltonian with chaotic classical dynamics must fall into one of the three classical ensembles of RMT [37] --the Gaussian orthogonal ensemble (GOE), the Gaussian unitary ensemble (GUE) and the Gaussian symplectic ensemble (GSE). The level statistics of a quantum Hamiltonian with an integrable classical counterpart follow the Poisson law \(P_{P}(s)=\exp(-s)\), while they follow the Wigner distribution \(P_{W}(s)=\frac{\pi s}{2}\exp\left(-\frac{\pi s^{2}}{4}\right)\) in the classically chaotic (GOE) case, where \(s\) is the spacing of nearest-neighbour eigenenergies [38]. We study the level statistics of the eigenspectrum of the Hamiltonian \(H_{\rm eff}\). The level spacing distribution is shown by the probability density function \(\rho(s)\) of the nearest-neighbour spacings of the unfolded eigenvalues. The procedure of unfolding the raw eigenvalues is a way of locally rescaling the energy spectrum such that the mean level density is one. The cumulative spectral function, or staircase function, is defined as the number of levels with energy less than or equal to a certain value \(E\). In particular, the staircase function is \(N(E)=\sum_{n}\Theta(E-E_{n})\), where \(\Theta\) is the unit step function. The function \(N(E)\) consists of a smooth part \(N_{sm}(E)\) and a fluctuating part \(N_{fl}(E)\), i.e. \(N(E)=N_{sm}+N_{fl}\). We obtain \(N_{sm}\) numerically, by fitting the staircase function with a polynomial of degree 15. The unfolded eigenvalues \(x_{i}\) are obtained from the raw eigenvalues \(E_{i}\) as \(x_{i}=N_{sm}(E_{i})\). Finally, the level-spacing distributions are shown by the probability density function \(\rho(s)\), where \(s=x_{i+1}-x_{i}\). The level spacing distributions of the set of eigenvalues of the Hamiltonian \(H_{\rm eff}\) for different values of \(\gamma\) are shown in Fig. 9 and Fig. 10. 
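The unfolding procedure just described can be condensed into a few lines of code. The following minimal sketch (Python/numpy; the array of raw eigenvalues is assumed to have been computed beforehand, e.g. by a finite-difference diagonalization of \(H_{\rm eff}\)) returns the unfolded spacings whose histogram is compared with the Poisson and Wigner curves in Figs. 9 and 10.

```python
# Minimal sketch of the unfolding and nearest-neighbour spacing analysis
# described above. The polynomial degree follows the choice quoted in the
# text (degree 15); `E` is a 1D array of raw eigenvalues.
import numpy as np

def spacing_distribution(E, deg=15):
    E = np.sort(np.asarray(E))
    n = np.arange(1, E.size + 1)            # staircase function N(E_i) = i
    coeff = np.polyfit(E, n, deg)           # smooth part N_sm as a polynomial fit
    x = np.polyval(coeff, E)                # unfolded eigenvalues x_i = N_sm(E_i)
    s = np.diff(x)                          # nearest-neighbour spacings, mean ~ 1
    return s

# reference curves for comparison with a histogram of s:
poisson = lambda s: np.exp(-s)
wigner  = lambda s: 0.5 * np.pi * s * np.exp(-0.25 * np.pi * s**2)
```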
The blue and red solid lines represent the theoretical graphs for the Poisson and Wigner distributions, respectively. It is seen by comparing Figs. 9 and 10 that the nature of the level spacing distributions is similar for \(k=0\) and \(k=1\). Further, \(\rho(s)\) changes smoothly from the Poisson to the Wigner distribution via some intermediate distributions as \(\gamma\) is varied from \(\gamma=0\) to \(\gamma=2\). We define a quantity \(\eta\) as follows, \[\eta=\left|\frac{\int_{0}^{s_{0}}\left[P(s)-P_{W}(s)\right]ds}{\int_{0}^{s_{0}} \left[P_{P}(s)-P_{W}(s)\right]ds}\right|, \tag{30}\] where \(s_{0}\approx 0.4729\) is the intersection point of \(P_{P}(s)\) and \(P_{W}(s)\). It may be noted that \(\eta=1\) for \(P(s)=P_{P}(s)\), while \(\eta=0\) for \(P(s)=P_{W}(s)\). Thus, \(\eta\) is an indicator of the nature of the distribution, and the plot of \(\eta\) as a function of the parameter \(\gamma\) is shown in Fig. 11a. This diagram in a sense acts as a bifurcation diagram with respect to \(\gamma\). The value of \(\eta\) approaches one near \(\gamma=0\) and goes to zero as \(\gamma\) passes through the value \(\gamma\approx 1.5\) --the system passes from the integrable to the chaotic region. The unfolding procedure involved in the calculation of the level statistics is a bit cumbersome and sometimes system dependent. The statistics of the ratio of two consecutive energy gaps [47] has been proposed as an alternative measure for the same purpose. The spacing between adjacent energy levels is \(\delta_{n}=E_{n+1}-E_{n}\), and the ratio between adjacent gaps is defined as, \[0\leq\tilde{r}_{n}=\min\{\delta_{n},\delta_{n-1}\}/\max\{\delta_{n},\delta_{n-1 }\}\leq 1.\] For an integrable Hamiltonian, the probability distribution of this ratio is \(P_{P}(\tilde{r})=\frac{2}{(1+\tilde{r})^{2}}\), with mean value \(\langle\tilde{r}\rangle_{P}=2\ln 2-1\cong 0.386\). The theoretical probability distribution of \(\tilde{r}\) for the GOE ensemble is \(P_{GOE}(\tilde{r})=\frac{27}{4}\frac{\tilde{r}+\tilde{r}^{2}}{(1+\tilde{r}+ \tilde{r}^{2})^{2}}\), with mean value \(\langle\tilde{r}\rangle_{GOE}=0.5295\pm 0.0006\). We study the gap-ratio distribution for \(k=0\) only, because we have already shown that a change in \(k\) does not alter the nature of the level spacing distribution. Fig. 12 shows that the probability density of the gap ratio \(\tilde{r}\) is close to the Poisson distribution up to \(\gamma\approx 1.0\), signifying an integrable system. Around \(\gamma\approx 1.4\) the distribution approaches the theoretical GOE distribution, signifying the non-integrable region. Fig. 11b is the bifurcation diagram for the gap-ratio distribution. It is seen from the bifurcation diagram that for \(\gamma\gtrapprox 1.4\) the average value of the gap ratio is close to \(0.52\), i.e. the system is chaotic. We thus establish the quantum transition from the integrable to the chaotic region by studying the nearest-neighbour level spacing and gap-ratio distributions. ## IV Conclusions & Discussions We have studied the periodic quantum Toda lattice with BLG for two and three particles. The two-particle Toda lattice is integrable and we have constructed two integrals of motion which are in involution. 
The translation invariance of the system has been used to separate out the center of mass motion, and the effective Hamiltonian in the relative coordinate describes a particle moving in a potential consisting of harmonic and hyperbolic-cosine terms. The effective potential describes single or double wells depending on the strength of the loss-gain and the exponential terms. The eigenvalue equation has been solved numerically, and the eigenenergies as well as the eigenfunctions corresponding to quantum bound states have been presented for a few low-lying states. The qualitative behaviour is similar to that of a single or double well arising from a quartic potential. The quantum Toda lattice with BLG and VMC for three particles is translation invariant. The effective Hamiltonian after the separation of the center of mass may be interpreted as describing a particle moving in a two-dimensional potential, consisting of anisotropic harmonic oscillators plus the Toda potential, and subjected to an external uniform magnetic field whose strength is linearly proportional to the strength of the BLG. The angular frequencies of the harmonic oscillators depend quadratically on the strength of the BLG. It has been shown that the effective Hamiltonian is exactly solvable in the limit in which the Toda potential reduces to coupled oscillators. The eigenspectra and eigenfunctions have been obtained analytically by mapping the effective Hamiltonian to that of decoupled anisotropic oscillators in two dimensions via a similarity transformation. There is a limit in which a reduction of one degree of freedom occurs in phase space and the spectrum becomes infinitely degenerate. We have obtained two integrals of motion which are in involution. Although we have not found the third integral of motion, which is required to establish complete integrability, the numerical investigations indicate that the three-particle quantum Toda system is integrable below a critical value of the BLG strength and chaotic above this critical value. The quantum integrability and chaos have been investigated via the level statistics as well as the gap-ratio distribution of nearest-neighbour eigenvalue spacings. In particular, we have observed the quantum transition from the chaotic to the integrable region when the loss-gain strength crosses the critical value and goes to zero --the level spacing as well as the gap-ratio distributions smoothly change from the Wigner-Dyson distribution and tend to follow the Poisson distribution. We have also studied the level repulsion phenomenon in the energy spectra of both the two- and three-particle Toda lattices. The graphical presentations show that the degree of level repulsion is larger in the case of the three-particle system. The quantum transition from the chaotic to the integrable region has also been independently confirmed by studying the complexity of the higher excited-state wave functions. There are some immediate questions which may be pursued in future investigations. For example, the analytic expression for the third integral of motion is as yet unknown. The standard techniques of the Lax pair formalism or Painlevé analysis may be employed to find an analytic expression for it. Further, the generic problem for \(N>3\) particles is worth pursuing to see whether or not the mixed phases of integrability and chaos persist for an arbitrary number of particles. It is possible that the system may not be integrable for any range of the strength of the BLG for large \(N\). 
Finally, a semi-classical approach to understanding the integrability and chaos is desirable. Some of these issues will be addressed in future publications. ## V Acknowledgements The work of SG is supported by an Inspire fellowship (Inspire Code: IF190276) of the Govt. of India
2309.13966
Hierarchies for Semidefinite Optimization in $\mathcal{C}^\star$-Algebras
Semidefinite Optimization has become a standard technique in the landscape of Mathematical Programming with many applications in finite-dimensional Quantum Information Theory. This paper presents a way to construct finite-dimensional relaxations of general cone programs on $\mathcal{C}^\star$-algebras, which have structurally similar properties to ordinary cone programs and put the notion of positivity at the core of the optimization. We show that well-known hierarchies for generalized problems such as NPA, but also Lasserre's hierarchy and, to some extent, the symmetry reductions of generic SDPs by de Klerk et al., can be considered from this general $\mathcal{C}^\star$-algebraic point of view on optimization problems.
Gereon Koßmann, René Schwonnek, Jonathan Steinberg
2023-09-25T09:01:30Z
http://arxiv.org/abs/2309.13966v1
# Hierarchies for Semidefinite Optimization in \(\mathcal{C}^{\star}\)-Algebras ###### Abstract The class of Semidefinite Programs on matrices has a natural extension to conic optimization problems on \(\mathcal{C}^{\star}\)-algebras. Especially in the field of Quantum Information Theory many relevant optimization problems can be cast in this way. Finding exact solutions to these problems is however a notoriously hard task. In this paper we investigate a construction for finite-dimensional relaxations of general cone programs on \(\mathcal{C}^{\star}\)-algebras. This construction gives outer bounds and can be grasped in terms of positive maps and basic linear algebra. We show that well-known hierarchies like NPA [1], but also Lasserre's hierarchy [2] and, to some extent, the symmetry reductions of generic SDPs by de Klerk et al. [3], can be formulated from this perspective. ## I Introduction In recent years, it has become increasingly apparent that many optimization problems can be represented as cone programs. In particular, it has become clear that there is a remarkably strong interplay between cone programs and convex optimization problems in which the convex set of feasible points is not directly accessible [4; 5; 2]. In quantum information the most prominent representative of this type of problem is the optimization task over separable states. Optimizing over separable states is a convex optimization problem, but the description of the set of separable states is computationally hard [5]. However, the problem can be approached by a convergent hierarchy of efficient SDPs, so that its complexity is translated into a hierarchy of convex optimization problems. From another point of view, one can ask whether two parties in a quantum experiment can achieve certain probability distributions; this task was solved by the famous NPA hierarchy [1; 6]. What all these hierarchies have in common is that they map the structure of states to a finite-dimensional space while mapping positive elements to positive elements; in quantum information theory such a generic cone structure of positivity arises from the state space of quantum systems. However, due to the existence of mixed states, more general forms of positivity have to be considered. In this work, we pursue this idea of positivity in the most general structure of \(\mathcal{C}^{\star}\)-algebras and assign a \(\mathcal{C}^{\star}\)-algebra to the description of a quantum experiment. In the simplest and yet also most general form, this is done by rules such as how two or more parties are spatially related [7]. This top-down approach to the description of an experiment in nature inevitably leads to questions about the state space and about properties of observables with respect to states. For example, finding the optimal state for a given observable over all states is directly a generalized form of an SDP: optimize over all normalized positive functionals. Usually this problem is hard to tackle because the \(\mathcal{C}^{\star}\)-algebra to be considered is infinite dimensional. In this paper, we present a generalized formalism that is both mathematically sound with respect to the rules of the system (i.e., the spatial partitioning of the parties) and capable of solving general optimization problems. The success and popularity of semidefinite programming can mainly be ascribed to two points. First, a huge variety of problems can be formulated (at least approximately) in the framework of semidefinite programming. 
Second, there exist algorithms, such as interior-point methods, which allow for an efficient computation of the solution. More formally, an SDP describes the task of optimizing a given linear functional over the cone of positive (semidefinite) matrices of a fixed dimension \(n\), under linear constraints. In addition to our motivation in quantum information, we note in passing that many well-known problems can be formulated as a \(\mathcal{C}^{\star}\)-SDP. In particular, many typical benchmark examples for quantum computing, such as Boolean optimization, can be understood in a general way as \(\mathcal{C}^{\star}\)-SDPs. In this work, we investigate a generalization of semidefinite programming from the algebra of matrices to an arbitrary \(\mathcal{C}^{\star}\)-algebra, which is obtained by replacing the cone of positive semidefinite matrices with the cone of positive semidefinite functionals. Further, we show that the finite-dimensional approximations can be rephrased as conventional matrix SDPs, turning them into efficiently solvable problems. Subsequently, we discuss its relation to well-known hierarchies such as those of Lasserre [2] and Navascues-Pironio-Acin (NPA) [1]. ## II Preliminaries and Introductory Example We start our considerations with a simple example coming from an operational point of view. Consider the following generic problem in quantum theory: \(H\in\mathcal{B}(\mathcal{H}_{1})\) is a (selfadjoint) bounded operator on a Hilbert space \(\mathcal{H}_{1}\) and \(\mathcal{B}_{1}(\mathcal{H}_{1})^{+}\) is the set of all positive trace-class operators. We aim to solve \[\min\operatorname{tr}[\rho H] \tag{1}\] \[\operatorname{s.th.}\ \operatorname{tr}[\rho]=1 \tag{2}\] \[\rho\in \mathcal{B}_{1}(\mathcal{H}_{1})^{+}. \tag{3}\] This problem is inherently hard, because \(H\) is in general an operator on an infinite-dimensional Hilbert space, and there is a priori no efficient way to handle the set of states. If we now assume access to a quantum channel1 Footnote 1: i.e. a completely positive and trace-preserving linear map. \[T:\mathcal{B}_{1}(\mathcal{H}_{1})\to\mathcal{B}_{1}(\mathcal{H}_{2})\] and we know an operator \(M\in\mathcal{B}(\mathcal{H}_{2})\) such that \(T^{\star}(M)=H\), then it follows that the optimization problem (1) is equivalent to2 Footnote 2: Recall that if the channel \(T\) is trace-preserving the dual mapping is unital. \[\min\operatorname{tr}[\sigma M]\] \[\operatorname{s.th.}\ \operatorname{tr}[\sigma]=1\] \[\sigma\in T(\mathcal{B}_{1}(\mathcal{H}_{1})^{+})\subset \mathcal{B}_{1}(\mathcal{H}_{2})^{+}.\] Now it can happen that the Hilbert space \(\mathcal{H}_{2}\) is even finite dimensional. We have shifted the entire difficulty of the problem to the description of \(T(\mathcal{B}_{1}(\mathcal{H}_{1})^{+})\), i.e. the image of the quantum channel \(T\). Nevertheless, this set contains global information about the cone \(\mathcal{B}_{1}(\mathcal{H}_{1})^{+}\) and is therefore in general hard to describe. However, it could be contained in a finite-dimensional space. We can give an illustrative example: consider an entanglement-breaking channel, which can be written as [8; 5] \[\rho\mapsto\sum_{k}\sigma_{k}\operatorname{tr}[F_{k}\rho]\] for density operators \((\sigma_{k})_{k}\in\mathcal{B}_{1}(\mathcal{H}_{2})^{+}\) and a POVM \(\{F_{k}\}_{k}\subset\mathcal{B}(\mathcal{H}_{1})\). 
It is clear that the dual mapping is given by \[M\mapsto\sum_{k}\operatorname{tr}[\sigma_{k}M]F_{k}.\] Therefore, if we assume that \(\{F_{k}\}_{k}\) corresponds to the spectral decomposition of \(H\) and \(M\) is chosen in such a way that \(\operatorname{tr}[\sigma_{k}M]\) is the \(k\)th eigenvalue, then we can conclude that this is in general an infinite series, and therefore the cone of interesting density operators \[\operatorname{cone}(\sigma_{k}\ |\ k\in\mathbb{N})\] is hard to describe, although it is contained in a possibly finite-dimensional space. This example provides us with some nice theoretical insights. First of all, it tells us how infinite-dimensional optimization problems can, at least in principle, be transferred to finite-dimensional spaces. Moreover, if we can approximate \(T(\mathcal{B}_{1}(\mathcal{H}_{1})^{+})\) appropriately with linear constraints, then we can relax the problem to a standard semidefinite program. As it turns out, in the example above we have chosen the state space as the set of trace-class operators, which corresponds to the normal states. In order to avoid this discussion about duality, we abstract to general \(\mathcal{C}^{*}\)-algebras and consider the dual as the generic state space. In addition, as we will conclude, our considerations are not restricted to the special class of quantum channels. Positivity of linear maps, rather than complete positivity and trace preservation, will be the central property in this paper. The structure of the document is as follows: in section III we introduce our notion of a \(\mathcal{C}^{*}\)-SDP. In section IV we show how to relax general \(\mathcal{C}^{*}\)-SDPs, and in section V we show a convergence result under particular assumptions. In the last section, section VI, we give a small overview of the connection to symmetry reductions in this context. ## III Semidefinite optimization on \(\mathcal{C}^{*}\)-algebras We now generalize our toy example from section II to a general language for so-called \(\mathcal{C}^{*}\)_-SDPs_ and define what we mean by optimality in this context. ### Conventional SDPs Formally, an instance of a conventional SDP is given by a set of \(m+1\) selfadjoint matrices \(\{F_{0},F_{1},\ldots,F_{m}\}\), all coming from a common space \(\mathbb{C}^{n\times n}\), and a set of real numbers \(\{f_{1},\ldots,f_{m}\}\) that together define the optimization problem \[\min\ \operatorname{tr}\left(\rho F_{0}\right) \tag{4}\] \[\text{s.th. }\operatorname{tr}\left(\rho F_{i}\right)\leq f_{i}\quad \forall i\in\{1,\ldots,m\}\] \[\rho\geq 0,\] which is conventionally referred to as the primal form of a semidefinite program. The optimization above runs over the cone of all positive selfadjoint matrices, i.e. those matrices \(\rho\) for which, equivalently [10], (i) all eigenvalues are positive and \(\rho\) is self-adjoint, (ii) all expectations \(\operatorname{tr}\left(\rho XX^{*}\right)\) are positive, (iii) a decomposition \(\rho=XX^{*}\) exists. It should be noted that the cone of positive semidefinite (PSD) matrices plays two roles in the case of conventional SDPs. On one hand, PSD matrices are the positive elements of \(\mathbb{C}^{n\times n}\) from a purely algebraic perspective. This notion of positivity only depends on the form of the element itself. This corresponds to definition (iii) with \(\rho=XX^{*}\), or to definition (i) by using the spectrum of \(\rho\). On the other hand, PSD matrices are also used to _label_ the set of positive linear functionals, see definition (ii). Indeed, by the Riesz representation it can be shown that any positive linear functional \(\omega:\mathbb{C}^{n\times n}\to\mathbb{C}\) is of the form \(\omega(X)=\operatorname{tr}\left(\rho X\right)\) for a positive \(\rho\). 
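For orientation, the finite-dimensional analogue of the introductory problem (1)-(3), which is an instance of the primal SDP (4), can be solved directly for small matrix dimensions. The following minimal sketch uses cvxpy (an assumption about available tooling on our side, not part of the paper) with random, purely illustrative data.

```python
# Minimal sketch: the finite-dimensional analogue of problem (1)-(3) / the
# primal SDP (4) for a random selfadjoint H (illustrative data only).
# The optimal value equals the smallest eigenvalue of H.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))
H = 0.5 * (A + A.T)                         # selfadjoint "Hamiltonian" F_0

rho = cp.Variable((n, n), PSD=True)         # the positive (semidefinite) variable
prob = cp.Problem(cp.Minimize(cp.trace(rho @ H)), [cp.trace(rho) == 1])
print(prob.solve(), np.linalg.eigvalsh(H)[0])   # the two numbers agree
```

As expected, the optimum is attained on the projector onto the eigenvector of the smallest eigenvalue of \(H\).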
Consequently, each of those properties (i)-(iii) could in principle give rise to a variety of different generalizations. The class of optimization problems that is considered in this work can be understood as a generalization of property (ii): we replace the matrices \(\{F_{0},F_{1},\ldots,F_{m}\}\) by self-adjoint elements of a \(\mathcal{C}^{*}\)-algebra \(\mathcal{A}\) and consider optimizations over the cone of functionals that are positive semidefinite on all elements of the form \(XX^{*}\). ### Generalized SDPs With respect to the usual matrix product, the vector space \(\mathbb{C}^{n\times n}\) naturally admits the structure of an algebra, on which it is natural and well motivated by many applications to consider the Hermitian conjugation \[X^{*}:=X^{\dagger}=\overline{X}^{T} \tag{5}\] as an involution and \[\|X\|:=\sup\{\operatorname{tr}\left(\rho X\right)\mid\rho\geq 0\text{ and } \operatorname{tr}\left(\rho\right)=1\} \tag{6}\] as a norm. In the following we will write \(M_{n}(\mathbb{C}):=(\mathbb{C}^{n\times n},\cdot,\ \ ^{*},\|\cdot\|)\) in order to denote \(\mathbb{C}^{n\times n}\) equipped with this extra structure. Further, this norm is submultiplicative, that is, for any \(X,Y\in M_{n}(\mathbb{C})\) one has \(\|XY\|\leq\|X\|\,\|Y\|\), and it turns \(M_{n}(\mathbb{C})\) into a complete space. The abstraction of this structure is nowadays known as a \(\mathcal{C}^{*}\)-algebra \(\mathcal{A}\): a Banach (i.e. norm complete) algebra equipped with a \(*\)-involution that fulfills the so-called \(\mathcal{C}^{*}\)-property \(\|XX^{*}\|=\|X\|^{2}\). The deceptively simple axioms turn out to be extremely powerful and force a rigid structure on a \(\mathcal{C}^{*}\)-algebra. In particular, it is possible to retrieve the initial example of \(M_{n}(\mathbb{C})\) from the abstract definition in case of finite dimensionality: any finite dimensional \(\mathcal{C}^{*}\)-algebra is isomorphic to a direct sum of matrix algebras (Artin-Wedderburn Theorem). However, in a general algebra \(\mathcal{A}\), the three definitions (i)-(iii) are no longer equivalent, and we have to make a decision when defining what a generalized SDP is. Given a general algebra \(\mathcal{A}\), it makes sense to call an element \(F\in\mathcal{A}\) selfadjoint if \(F=F^{*}\) holds with respect to the \(*\)-involution. Furthermore, we get a generalization of positive semidefiniteness by generalizing property (iii), i.e. we say that an element \(P\in\mathcal{A}\) is positive semidefinite, written \(P\geq 0\), if it admits a decomposition \(P=XX^{*}\). Alternatively, one could think about generalizing property (i) by demanding \(P\) to have a non-negative spectrum. However, it turns out that this definition would coincide with \(P=XX^{*}\), which is in fact a distinct property of \(\mathcal{C}^{*}\)-algebras11. Footnote 11: Indeed, for the more general concept of a \(*\)-algebra, where one drops the requirement of a complete norm, there exist cases where \(\rho=XX^{*}\) can have negative eigenvalues. The set of all those positive elements forms a closed convex cone in \(\mathcal{A}\), which will be denoted by \(\mathcal{P}_{+}(\mathcal{A})\). 
The dual of this cone will be denoted by \(\mathcal{P}_{+}(\mathcal{A})^{*}\) and can be defined algebraically by \[\mathcal{P}_{+}(\mathcal{A})^{*}:=\{\rho:\mathcal{A}\mapsto\mathbb{C}\mid\ \text{linear},\quad\rho(XX^{*})\geq 0\quad\forall X\in \mathcal{A}\}. \tag{7}\] Now we are able to state the generalized version of a conventional SDP with operations on an abstract \(\mathcal{C}^{*}\)-algebra. Motivated by applications in which \(\rho\) takes the role of a quantum state, we take the path of replacing \(\mathcal{P}_{+}(\mathbb{C}^{n\times n})^{*}\) by \(\mathcal{P}_{+}(\mathcal{A})^{*}\) and therefore consider optimization problems of the following type: **Definition 1**.: For \(m\in\mathbb{N}\), let \(\{F_{0},F_{1},\ldots,F_{m}\}\) be a list of selfadjoint elements from a \(\mathcal{C}^{\star}\)-algebra \(\mathcal{A}\) and let \(\{f_{1},\ldots,f_{m}\}\) be a list of real numbers. The conic optimization problem \[\inf\ \omega(F_{0}) \tag{8}\] \[\text{s.th. }\omega(F_{i})\leq f_{i}\quad\forall i\in\{1,\ldots,m\} \tag{9}\] \[\omega\in\mathcal{P}_{+}(\mathcal{A})^{\star}\] is called the primal form of a semidefinite program on \(\mathcal{A}\), short \(\mathcal{C}^{\star}\)-SDP. As in the case of an ordinary SDP, we can define feasible points. **Definition 2**.: A \(\mathcal{C}^{\star}\)-SDP is called feasible if there is an element \(\omega\in\mathcal{P}_{+}(\mathcal{A})^{\star}\) with \[\omega(F_{i})\leq f_{i}\quad\forall i\in\{1,\ldots,m\}. \tag{10}\] Accordingly, an optimal solution is an element \(\tilde{\omega}\) such that for all feasible solutions \(\omega\in\mathcal{P}_{+}(\mathcal{A})^{\star}\) \[\omega(F_{0})\geq\tilde{\omega}(F_{0}) \tag{11}\] holds true. Because well-known finite-dimensional SDPs are included in this generalization, we must be appropriately careful when assigning a value to the program. This is not important for the first statements, since we initially only derive bounds for the value; it only becomes relevant when we define and use the dual program. Therefore, the _value_ (possibly infinity) of a \(\mathcal{C}^{\star}\)-SDP is given by \[\tilde{c}:=\inf\{\omega(F_{0})\ |\ \omega(F_{i})\leq f_{i}\quad\forall i\in\{1,\ldots,m\}\},\quad\omega\in\mathcal{P}_{+}(\mathcal{A})^{\star}. \tag{12}\] Our original interest in this class of problems stems from the huge variety of well-known problem types that can be obtained as particular instances of (8). In the next section we present examples of how those problems can be rephrased. However, the next section is not required for understanding the theoretical results and may be skipped. ### Examples of \(\mathcal{C}^{\star}\)-SDPs In the following we will rephrase two important optimization problems, namely the problem of polynomial optimization and the problem of optimizing over the set of quantum correlations, as \(\mathcal{C}^{\star}\)-SDPs. **Example 3** (The Generalized Moment Problem).: We consider the generalized moment problem (GMP), which can be solved by Lasserre's famous polynomial optimization method [2]. Without aiming at completeness, we introduce those concepts from Lasserre's hierarchy which are important for us later in this work. Consider a polynomial \(f\) in \(d\) variables, a compact set \(K\subset\mathbb{R}^{d}\), and the optimization task \[\inf\{f(x)\ |\ x\in K\}. \tag{13}\] 
One can show [2] that (13) is equivalent to the following optimization task: \[\inf\ \mu(f)\] \[\text{s.th. }\mu(1)=1\] \[\mu\in\mathcal{M}(K)_{+}\] whereby \(\mathcal{M}(K)_{+}\) is the set of positive Borel measures, i.e. (the positive part of) the topological dual space of \(C(K)\). In general, a GMP is given by \[\rho_{\text{mom}}=\sup_{\mu\in\mathcal{M}(K)_{+}}\int_{K}fd\mu \tag{14}\] \[\text{s.th. }\int_{K}h_{j}d\mu\leq\gamma_{j}\quad\forall 1\leq j\leq m\] for multivariate polynomials \(f,h_{1},\ldots,h_{m}\) and real numbers \(\gamma_{1},\ldots,\gamma_{m}\). One possible point of view is the following. Consider the set of finite positive Borel measures and in particular the _moment sequence_ \(y=(y_{\alpha})_{\alpha\in\mathbb{N}^{n}}\) for each of them, given by \[y_{\alpha}=\int_{K}x^{\alpha}d\mu.\] If we introduce the functional \[L_{y}:\mathbb{R}[x] \to\mathbb{R}\] \[f \mapsto L_{y}(f):=\sum_{\alpha\in\mathbb{N}^{n}}f_{\alpha}y_{\alpha}\] for \(f=\sum_{\alpha}f_{\alpha}x^{\alpha}\), then we can reformulate the optimization problem (14) as \[\sup_{y\text{ moment sequence}}L_{y}(f)\] \[\text{s.th. }L_{y}(h_{j})\leq\gamma_{j}\quad\forall 1\leq j\leq m.\] The question of whether \(y\) is a moment sequence can be answered with linear constraints of the following type. Define, for a multivariate polynomial \(u\) in \(n\) variables, the _moment matrices_ \[M_{r}(uy)_{\alpha,\beta}:=(L_{y}(ux^{\alpha}x^{\beta}))_{\alpha,\beta},\] whereby the degree is cut off at \(r\). Then Theorem 3.8 states that \(y\) has a finite Borel representing measure with support contained in a semi-algebraic set \(K=\{x\in\mathbb{R}^{n}\mid g_{j}(x)\geq 0\quad\forall 1\leq j\leq m\}\) if and only if1 Footnote 1: There is in addition a compactness criterion, for technical reasons. \[L_{y}(f^{2}g_{J})\geq 0\quad\forall J\subset\{1,\ldots,m\}\quad\forall f\in \mathbb{R}[x]\] or equivalently \[M_{r}(g_{J}y)\geq 0\quad\forall J\subset\{1,\ldots,m\}\quad\forall r\in \mathbb{N}.\] One can think of these constraints as semidefinite constraints for \(y\) being a moment sequence. In particular, we can introduce the dual program \[\rho_{\text{pop}}=\inf_{\lambda}\sum_{j=1}^{m}\gamma_{j}\lambda_{j} \tag{15}\] \[\text{s.th. }\sum_{j=1}^{m}\lambda_{j}h_{j}(x)-f(x)\geq 0 \quad\forall x\in K \tag{16}\] \[\lambda_{j}\geq 0\quad\forall 1\leq j\leq m. \tag{17}\] That is, the problem is equivalent to checking whether a polynomial is positive over \(K\). For basic semi-algebraic sets there is Putinar's Positivstellensatz, which tells us what positive polynomials look like on this type of set, and therefore one can relax the dual problem (15) to a finite-level version and solve the corresponding semidefinite program. The interesting observation is that, from a global point of view, (14) is an instance of what we will later call a \(\mathcal{C}^{\star}\)-SDP, because it is an optimization problem over the dual space of the \(\mathcal{C}^{\star}\)-algebra \(C(K)\) for the compact set \(K\). Of course, this observation alone does not provide a description of the dual space of \(C(K)\), which by Riesz' famous representation theorem is \(\mathcal{M}(K)\), but it draws attention to the question of whether one can decide positivity in a \(\mathcal{C}^{\star}\)-algebra. And this is theoretically easily done: in a \(\mathcal{C}^{\star}\)-algebra \(\mathcal{A}\) an element \(x\in\mathcal{A}\) is positive if and only if there is a \(\star\)-root, i.e. there is \(y\in\mathcal{A}\) such that \(x=y^{\star}y\). 
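To make the moment-matrix constraints of Example 3 concrete, consider the toy problem \(\min_{x\in\mathbb{R}}x^{4}-3x^{2}+x\) (our own illustrative choice). The following minimal sketch models the order-2 moment matrix as a PSD variable with Hankel structure; it assumes cvxpy (with an SDP-capable solver such as the bundled SCS) is available.

```python
# Minimal sketch of a moment relaxation for  min_x  x^4 - 3x^2 + x  over R.
# M[i, j] plays the role of L_y(x^{i+j}) = y_{i+j}; the optimal value is a
# lower bound on the minimum (and is tight here, since nonnegative univariate
# polynomials are sums of squares).
import cvxpy as cp

M = cp.Variable((3, 3), PSD=True)          # entries represent y_0, ..., y_4
constraints = [M[0, 0] == 1,               # normalization y_0 = mu(1) = 1
               M[1, 1] == M[0, 2]]         # Hankel consistency: both equal y_2

objective = cp.Minimize(M[2, 2] - 3 * M[1, 1] + M[0, 1])   # L_y(x^4 - 3x^2 + x)
value = cp.Problem(objective, constraints).solve()
print(value)   # lower bound on min_x f(x)
```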
Returning to the notion of positivity, there is a certain tension here: in terms of polynomials, deciding positivity is a hard question, solved by Putinar's Positivstellensatz, while on the level of the \(\mathcal{C}^{\star}\)-algebra it seems to be quite easy, because we have roots at our disposal. **Example 4** (NPA).: Essentially, the NPA hierarchy [1] examines the question of whether there is a quantum system for a given joint probability distribution. To be concrete, consider a bipartite experiment and a bipartite probability distribution. The question is whether the bipartite probability distribution can be generated by local measurements of two parties. In a mathematical sense this ansatz works with free \(\star\)-algebras: it chooses operators and then works with representations of these free \(\star\)-algebras on Hilbert spaces. Compared to this ansatz, in the paper presented here we always assume the existence of a \(\mathcal{C}^{\star}\)-algebra from the beginning. This clarifies the mathematics considerably, because we have a direct notion of positivity, which is a priori not available in NPA (one has to construct positivity from representations). Moreover, we have a simple notion of a 'quadratic module': it will be a set of squares. However, the resulting relaxed semidefinite programs can coincide in particular cases. In conclusion, the selected examples above (among many others) share the common structure of a \(\mathcal{C}^{\star}\)-algebra and an optimization over a feasible region in the dual cone. Therefore it is worthwhile to construct a theory of how to solve such problems in general, assuming only the global structure. ## IV Finite dimensional relaxations of generalized SDPs ### The joint numerical range In contrast to conventional SDPs, which are considered to be efficiently solvable, finding the solution to a generalized SDP may turn out to be a challenging task. In the following we will introduce methods for approximating a generalized \(\mathcal{C}^{\star}\)-SDP by a finite-dimensional one. The leading intuition for the existence of such a finite approximation to a \(\mathcal{C}^{\star}\)-SDP (8) stems from the observation that, even though we may consider a wild and potentially infinite-dimensional algebra \(\mathcal{A}\), all the relevant information for an optimization like (8) is in principle encoded in the finite (indeed not more than \(N\) dimensional) subspace \(\mathcal{F}\subset\mathcal{A}\) that is spanned by \(\vec{F}:=\{F_{0},\ldots,F_{N}\}\), and in an appropriate subset of its dual \(\mathcal{F}^{\star}\). It is easy to see that we can, at least formally, replace the optimization in (8) by only considering functionals from the convex cone \[\mathcal{F}^{\star}_{+}:=\mathcal{P}_{+}(\mathcal{A})^{\star}|_{\mathcal{F}} \subset\mathcal{F}^{\star}. \tag{18}\] Indeed, suppose we have a functional \(\omega\in\mathcal{P}_{+}(\mathcal{A})^{\star}\) that is a solution of (8). By definition, \(\omega\) fulfills all constraints, and thus also \(\omega|_{\mathcal{F}}\) does. Clearly, we also have \(\omega(F_{0})=\omega|_{\mathcal{F}}(F_{0})\), hence the optimal value is independent of whether the restricted version \(\omega|_{\mathcal{F}}\) or the fully supported functional \(\omega\) is considered. There is also a more direct way to see the finite-dimensional object standing behind a generalized SDP. 
Consider the set \[\mathrm{JNC}_{\vec{F}}:=\{\vec{y}_{\omega}=(\omega(F_{0}),\ldots,\omega(F_{N} ))|\omega\in\mathcal{P}_{+}(\mathcal{A})^{\star}\}\subset\mathbb{R}^{N} \tag{19}\] to which we will refer as the joint numerical cone (JNC). This name is chosen in correspondence to the well-known concept of a (convex) joint numerical range, which is defined similarly to (19)5. Footnote 5: The only difference is that in the definition of the (convex) joint numerical range \(\omega\) runs over all normalized states of a unital \(\mathcal{C}^{\star}\)-algebra. Moreover, the joint numerical range can be seen as a cone base for the JNC. JNC\({}_{\vec{F}}\) can, indeed, be understood as an embedding of \(\mathcal{F}_{+}^{\star}\) into \(\mathbb{R}^{N}\), in the sense that for any \(f\in\mathcal{F}\) parameterized by coefficients \(\vec{\lambda}\) with \(f=\sum\lambda_{\mu}F_{\mu}\), we have \[\omega(f)=\sum\lambda_{\mu}\omega(F_{\mu})=\vec{\lambda}.\vec{y}_{\omega} \tag{20}\] Accordingly, we can state any generalized SDP as an explicit conic program \[\inf\ \vec{y}.\mathbf{e}_{0} \tag{21}\] \[\text{s.th. }\vec{y}\leq\vec{f},\qquad\vec{y}\in\mathrm{JNC}_{\vec{F}} \tag{22}\] over the finite-dimensional cone JNC\({}_{\vec{F}}\). Although this view gives a nice perspective on the problem, it does not yet do any work. Here, the caveat is that the exact shape of \(\mathcal{F}_{+}^{\star}\) or JNC\({}_{\vec{F}}\) is generally inaccessible, since it still contains global information about the cone \(\mathcal{P}_{+}(\mathcal{A})^{\star}\). It is worth stressing that the shape of the cone \(\mathcal{F}_{+}^{\star}\) a priori has nothing in common with the usual positive cone in a finite-dimensional vector space; it inherits its notion of positivity from the global structure. Roughly speaking, our main task is to track positivity through the whole approximation. The mechanism of the following approximation technique can therefore be understood as a construction that provides us with numerically accessible outer approximations to \(\mathcal{F}_{+}^{\star}\), or equivalently JNC\({}_{\vec{F}}\). ### Construction of a Relaxation In this section we discuss how to approximate the value of a \(\mathcal{C}^{\star}\)-SDP with theoretical methods and a finite-dimensional relaxation. Recall from (7) that the cone \(\mathcal{P}_{+}(\mathcal{A})^{\star}\) is the dual cone of \(\mathcal{P}_{+}(\mathcal{A})\). This means that inner approximations of the cone \(\mathcal{P}_{+}(\mathcal{A})\) lead to outer approximations of the cone \(\mathcal{P}_{+}(\mathcal{A})^{\star}\), and outer approximations of the dual cone give lower bounds for a \(\mathcal{C}^{\star}\)-SDP. In order to connect the two points of view from section II and section IV.1, we have to relax the ideas from section II and abstract the ideas from section IV.1. In section II we required the map \(T\) to be a quantum channel, i.e. completely positive and trace-preserving. As it turns out, it is, firstly, very difficult to find the right channel whose dual maps a known family of operators \(\{M_{0},M_{1},\ldots,M_{m}\}\) to the operators \(\{F_{0},F_{1},\ldots,F_{m}\}\) of a \(\mathcal{C}^{\star}\)-SDP. Secondly, it is not clear how to appropriately describe the set of states after applying the channel. To circumvent the first problem, we use the dual point of view and construct a linear mapping \(\phi\) from a finite-dimensional space into the algebra \(\mathcal{A}\) of interest which is no longer required to be completely positive and trace-preserving. 
Since positivity of a map implies positivity of the dual, we can make similar arguments even without complete positivity. In a first step, we can choose positive elements from the \(\mathcal{C}^{\star}\)-algebra to approximate the positive cone \(\mathcal{P}_{+}(\mathcal{A})\). But then we need an assignment between these abstract elements and a cone in \(\mathbb{C}^{n\times n}\) for an appropriate size \(n\). The important point is that we need a way to translate positivity to a finite, computationally tractable system. The following fundamental observation, which is in fact a generalization of the idea of 'sums of squares' [2] in polynomial optimization and, in the same manner, of the \(\star\)-isomorphism in symmetry reductions [3] of generic SDPs, gives a way of translating positivity to a controllable cone. **Lemma 5** (Fundamental Lemma).: Let \(\mathcal{A}\) be a \(\mathcal{C}^{\star}\)-algebra and consider a set \[\gamma=\{\gamma_{1},\ldots,\gamma_{n}\}\subset\mathcal{A}. \tag{23}\] Then the mapping \[\phi:\mathbb{C}^{n\times n} \to\mathcal{A} \tag{24}\] \[M \mapsto\sum_{i,j=1}^{n}m_{ij}\gamma_{i}\gamma_{j}^{\star} \tag{25}\] is linear and positive. In particular, let \(x=yy^{\star}\in\mathcal{P}_{+}(\mathcal{A})\) with \(y\in\operatorname{span}(\gamma)\). Then there exists \(M\geq 0\) such that \(\phi(M)=x\). Proof.: Linearity is clear. We have to show positivity. Positive elements of \(\mathbb{C}^{n\times n}\) are Hermitian, and therefore each positive element has a spectral decomposition. Therefore let \(M\in\mathbb{C}^{n\times n}\) be positive. Then \[M=\sum_{k}p_{k}P_{k}\] with \(p_{k}\in\mathbb{R}_{>0}\) and \(P_{k}=c^{(k)}(c^{(k)})^{\star}\) one-dimensional projections. Therefore \[\phi(M) =\sum_{k}p_{k}\phi(c^{(k)}(c^{(k)})^{\star}) \tag{26}\] \[=\sum_{k}p_{k}\sum_{i,j}c_{i}^{(k)}\overline{c_{j}^{(k)}}\gamma_{i} \gamma_{j}^{\star}\] (27) \[=\sum_{k}p_{k}\Big{(}\sum_{i}c_{i}^{(k)}\gamma_{i}\Big{)}\Big{(}\sum_{j}c_{j}^{(k)} \gamma_{j}\Big{)}^{\star}. \tag{28}\] But then we see that each summand over \(k\) is a square in \(\mathcal{A}\) and therefore positive. Positive sums of positive elements are positive, which concludes the proof. Lemma 5 shows us that for a given sequence of elements \(\gamma\subset\mathcal{A}\) there is a way of translating structure from a \(\mathcal{C}^{\star}\)-algebra to a finite-dimensional space. But in general, using the preimage of \(\phi\) for an approximation requires the structure of the kernel of \(\phi\), because the kernel encodes the important inner structure of the algebra \(\mathcal{A}\). In other words, the kernel decides whether two parameterizing matrices \(M,M^{\prime}\in\mathbb{C}^{n\times n}\) describe the same element of the algebra. To get a working approximation, we have to keep in mind that this is precisely the point where knowledge about the algebra, for example the relations among its generators, has to enter. We are now in a position to present a working approximation. Recall that, in fact, we want to approximate \(\mathcal{F}_{+}^{\star}\) from section IV.1. 
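Before doing so, a small numerical sanity check of Lemma 5 may be helpful. The following sketch takes the concrete choice \(\mathcal{A}=M_{k}(\mathbb{C})\) with random elements \(\gamma_{i}\) (all data purely illustrative) and verifies that \(\phi\) maps PSD matrices to positive algebra elements.

```python
# Minimal numerical illustration of Lemma 5 for A = M_k(C): the map
# phi(M) = sum_ij M_ij * gamma_i @ gamma_j^dagger sends PSD matrices
# to positive elements of A. All matrices below are random illustrative data.
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 3
gamma = [rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)) for _ in range(n)]

def phi(M):
    return sum(M[i, j] * gamma[i] @ gamma[j].conj().T
               for i in range(n) for j in range(n))

# build a random PSD matrix M = C C^dagger and check that phi(M) >= 0
C = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
M = C @ C.conj().T
eigs = np.linalg.eigvalsh(phi(M))
print(eigs)            # all non-negative (up to numerical precision)
```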
To approximate \(\mathcal{F}_{+}^{\star}\), we would ideally build a hierarchy6 of numerically accessible cones Footnote 6: strictly speaking, always intersected with \(\mathcal{F}\) \[\Sigma_{2}^{(1)}\subset\ldots\subset\Sigma_{2}^{(n)}\subset\Sigma_{2}^{(n+1)}\subset\ldots\subset\mathcal{P}_{+}(\mathcal{A}).\] These cones are the images of the positive cones in \(\mathbb{C}^{n\times n}\) under the map of Lemma 5, for different sizes \(n\). The corresponding sequence of dual cones, as defined in (7), then reads \[(\Sigma_{2}^{(1)})^{\star}\supset\ldots\supset(\Sigma_{2}^{(n)})^{\star}\supset(\Sigma_{2}^{(n+1)})^{\star}\supset\ldots\supset\mathcal{P}_{+}(\mathcal{A})^{\star}. \tag{29}\] These are outer approximations of \(\mathcal{P}_{+}(\mathcal{A})^{\star}\). To construct such sequences of cones we start with a sequence of elements in the \(\mathcal{C}^{\star}\)-algebra \(\mathcal{A}\) \[\gamma=\{\gamma_{1},\ldots,\gamma_{n}\}. \tag{30}\] These elements \(\gamma\) span a subspace \(V_{n}\subset\mathcal{A}\), i.e. \[V_{n}:=\text{span}(\gamma)=\left\{v\in\mathcal{A}\middle|v=\sum_{i=1}^{n}v_{i}\gamma_{i},\text{ for some }v_{1},\ldots,v_{n}\in\mathbb{C}\right\}, \tag{31}\] from which we can construct a space7 Footnote 7: the notation \(\text{conv}(\Omega)\) denotes the convex hull of a set \(\Omega\) \[Q_{n}:=\text{conv}\{vw^{\star}\ |\ v,w\in V\}=\text{span}\{\gamma_{i}\gamma_{j}^{\star}\}=\text{Im}(\phi) \tag{32}\] that contains all sums of products that can be built from elements of \(V_{n}\). To ensure that we obtain meaningful statements about \(\mathcal{F}\), we have to consider sequences \(\gamma\) constructed such that the subspace \(\mathcal{F}\) is contained in \(Q_{n}\), because then the cone \(\mathcal{F}_{+}^{\star}=\mathcal{P}_{+}(\mathcal{A})^{\star}\cap\mathcal{F}^{\star}\) is contained in (29). Recall that this is exactly the difficulty with our channel in section II: starting from the functionals, we can in general not control which element is the right preimage under the dual mapping \(T^{\star}\). In unital algebras this requirement can always be fulfilled by appending \(\{F_{0},\ldots,F_{N},\mathbb{I}\}\) to an initial sequence \(\gamma_{0}\). An alternative, which also works for non-unital algebras, is to append the square roots \(\sqrt{F_{i}}\). Then we consider the cone \[\Sigma_{2}^{(n)}=\text{conv}\{vv^{\star}\ |\ v\in V\} \tag{33}\] of 'sums of squares' in \(Q_{n}\). Indeed, as \(\text{dim}(Q)\leq n^{2}\) and \(\Sigma_{2}\subset Q\), we also have \(\text{dim}(\Sigma_{2})\leq n^{2}<\infty\). Since in a \(\mathcal{C}^{\star}\)-algebra all squares, as well as their convex hulls, are positive, we obtain \[\Sigma_{2}^{(n)}\subseteq\mathcal{P}^{+}(\mathcal{A}), \tag{34}\] which implies (29) for the corresponding dual cones. This, in turn, implies that the finite dimensional cone \((\Sigma_{2}^{(n)})^{\star}|_{\mathcal{F}}\) (which no longer contains inaccessible global information) is a superset of \(\mathcal{F}_{+}^{\star}\): \[(\Sigma_{2}^{(n)})^{\star}|_{\mathcal{F}}\supseteq\mathcal{F}_{+}^{\star}. \tag{35}\] Hence we can conclude

**Proposition 1**.: The value of a \(\mathcal{C}^{\star}\)-SDP (8), defined by the sets \(\{F_{0},\ldots,F_{m}\}\) and \(\{f_{1},\ldots,f_{m}\}\), is bounded from below by the optimization \[\inf \omega(F_{0}) \tag{36}\] \[\text{s.th.} \omega(F_{i})\leq f_{i} \forall i\in\{1,\ldots,m\}\] \[\omega\in(\Sigma_{2}^{(n)})^{\star}|_{\mathcal{F}}.\]

We want to mention that some questions remain open from a practical point of view; we tackle them in the next section.
In particular, we have not yet discussed which properties the sequence \(\gamma\) needs in order to approximate the positive cone of the \(\mathcal{C}^{\star}\)-algebra well.

### The approximation as a conventional SDP

Throughout this section we fix a set \(\gamma\) and the corresponding spaces \(V,Q\) and cone \(\Sigma_{2}\). The optimization (36) is finite dimensional not only in a formal sense but also in practice, since it can be formulated explicitly as a matrix-valued SDP over \(n\times n\) matrices, as we show next. Recall the mapping \(\phi\) from Lemma 5, defined by \[\phi:\mathbb{C}^{n\times n}\to Q\quad\phi(|i\rangle\langle j|)=\gamma_{i}\gamma_{j}^{\star}, \tag{37}\] i.e. the map that sends a matrix \(M=\sum m_{ij}|i\rangle\langle j|\) to the algebra element \[\phi(M)=\sum m_{ij}\gamma_{i}\gamma_{j}^{\star}. \tag{38}\] This map is surjective, since (38) merely describes the span of the \(\gamma_{i}\gamma_{j}^{\star}\), i.e., \(Q\). Let \(\ker(\phi)\subset\mathbb{C}^{n\times n}\) be the kernel of \(\phi\) and denote by \(\pi\) the projection map \(\mathbb{C}^{n\times n}\to\mathbb{C}^{n\times n}/\ker(\phi)\), which assigns to each element its coset. We write \([x]:=\{x+k|k\in\ker(\phi)\}=\phi^{-1}(\phi(x))\) for the equivalence class induced by \(\ker(\phi)\). As \(\phi\) is surjective, the isomorphism theorem yields that the induced map \(\hat{\phi}\) on \(\hat{Q}:=\mathbb{C}^{n\times n}/\ker(\phi)\) (see Figure 1) is an isomorphism. Now we can state the main proposition of this section.

**Proposition 2** (Identification of \(Q^{\star}\)).: An element \(g\in(\mathbb{C}^{n\times n})^{\star}\) can be identified, up to equivalence, with a linear functional in \(Q^{\star}\) if and only if \(g\) annihilates \(\ker(\phi)\), or in other words \[g(M)=0\quad\forall M\in\ker(\phi).\]

Proof.: Figure 1 shows that the map \(\hat{\phi}:\hat{Q}\to Q\), which acts on \(\hat{Q}\) via \[\hat{\phi}([x]):=\phi(x), \tag{39}\] is an isomorphism. To describe \(\hat{\Sigma}_{2}^{\star}\) we need to include this isomorphism in an appropriate manner in our discussion. For this purpose we consider the dual mapping \[\hat{\phi}^{\star}:Q^{\star} \to\hat{Q}^{\star} \tag{40}\] \[q^{\prime} \mapsto q^{\prime}\circ\hat{\phi}. \tag{41}\] Basic linear algebra states that \(\hat{\phi}^{\star}\) is an isomorphism. Furthermore, a well-known result from functional analysis states that \[\pi^{\star}:(\mathbb{C}^{n\times n}/\ker(\phi))^{\star} \to\ker(\phi)^{\perp} \tag{42}\] \[\hat{q}^{\prime} \mapsto\hat{q}^{\prime}\circ\pi \tag{43}\] is an isomorphism as well. Therefore, we have the chain of isomorphisms \[Q^{\star}\stackrel{{\hat{\phi}^{\star}}}{{\longrightarrow}}\hat{Q}^{\star}\stackrel{{\pi^{\star}}}{{\longrightarrow}}\ker(\phi)^{\perp}, \tag{44}\] where the annihilator is defined via \[\ker(\phi)^{\perp}:=\{g\in(\mathbb{C}^{n\times n})^{\star}\ |\ g(M)=0\quad\forall M\in\ker(\phi)\}. \tag{45}\]

Figure 1: We define \(\hat{Q}:=\mathbb{C}^{n\times n}/\ker(\phi)\). The figure tells us that we get an isomorphism between \(Q\) and \(\hat{Q}\) from the isomorphism theorem. In other words, this means that we have constructed an image of \(Q\), with the right relations between its elements, in an accessible space \(\mathbb{C}^{n\times n}\).

The proof of Proposition 2 shows that quotient spaces become subspaces in duality. This is an important observation throughout the whole discussion.
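In practice, once the \(\gamma_{i}\) are represented as concrete matrices, a spanning set of \(\ker(\phi)\), and hence of the annihilator constraints used below, can be obtained numerically. A minimal sketch along the lines of the toy example above (in an actual application one would additionally split each generator into its Hermitian and anti-Hermitian parts):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
d, n = 3, 4
gamma = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(n)]

# Represent phi as an ordinary matrix acting on vec(M):
# column (i, j) of Phi is vec(gamma_i gamma_j^*).
Phi = np.column_stack(
    [(gamma[i] @ gamma[j].conj().T).reshape(-1) for i in range(n) for j in range(n)]
)

# A basis of ker(phi); each basis vector, reshaped to n x n, is one constraint
# matrix K_j with tr(rho K_j) = 0 in the explicit SDP formulated below.
K = [v.reshape(n, n) for v in null_space(Phi).T]
print(len(K), "generators of ker(phi) found")
```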
This simple but powerful Proposition 2 leads to an abstract yet convenient description of the elements of \(\hat{Q}^{\star}\), and therefore of the moment matrices that generalize those of polynomial optimization in Example 3:

**Corollary 6**.: The relation between linear functionals from \(Q^{\star}\) and \(\hat{Q}^{\star}\) is given by \[\operatorname{tr}(\Gamma^{\sigma}M)=\sigma(\phi(M))\] for \(\Gamma^{\sigma}:=\phi^{\star}(\sigma)\). These matrices are also called _moment matrices_ in other hierarchies.

Proof.: This is clear from the definition of the dual mapping.

Including subspaces in an SDP is therefore simple, because constraints of the form \[\operatorname{tr}(\rho K)=0\] correspond to functionals \(\operatorname{tr}(\rho\cdot)\) which contain the subspace spanned by \(K\) in their kernel. Therefore, adding such constraints for all generators of the kernel of \(\phi\) realizes exactly the annihilator. For our purpose, the only remaining open point is the kernel of \(\phi\) and how to include it. As we will see, it is in some sense the task of the physicist to know the experiment well enough to decide whether two experimental setups, i.e. two mathematical models, are equal to each other. Our examples will demonstrate this. To conclude the discussion we can state the following main theorem of our paper:

**Theorem 7** (Relaxation).: The relaxation (36) of a \(\mathcal{C}^{\star}\)-SDP that is built from a finite sequence \(\{\gamma_{1},\dots,\gamma_{n}\}\) can be formulated as an \(n\times n\) matrix-valued SDP, and can therefore be computed efficiently. A concrete form of this SDP is obtained from any set of matrices \(M_{F_{i}}\) with \[\hat{\phi}([M_{F_{i}}])=F_{i} \tag{46}\] and any set of \(m_{k}\) matrices \(\{K_{j}\}\) with \[\operatorname{span}\{K_{j}\}=\ker(\phi) \tag{47}\] and is given by \[\inf \operatorname{tr}(\rho M_{F_{0}})\] (48) s.th. \[\operatorname{tr}(\rho M_{F_{i}})\leq f_{i} \forall i\in\{1,\dots,m\}\] \[\operatorname{tr}(\rho K_{j})=0 \forall j\in\{1,\dots,m_{k}\}\] \[\rho\geq 0\]

### CHSH worked out

We close this section with a familiar example.

**Example 8** (The algebra generated by two projections).: One of the most famous examples of abstract language in quantum theory is the algebra generated by two projections. Consider a set of generators \[\mathcal{G}:=\{1,A_{0},A_{1},B_{0},B_{1}\}\] which should fulfill the following relations \[\mathcal{R}:=\{A_{i}^{2}=1,B_{i}^{2}=1,B_{i}^{\star}=B_{i},A_{i}^{\star}=A_{i},[A_{i},B_{j}]=0\ |\ 0\leq i,j\leq 1\}.\] It is well known that the algebra spanned by the generators and relations \(C(\mathcal{G}\mid\mathcal{R})\) becomes a \(\mathcal{C}^{*}\)-algebra [13]. Considering now a Bell inequality [14] \[F_{0}:=A_{0}B_{0}+A_{1}B_{1}+A_{0}B_{1}-A_{1}B_{0}\] leads directly to a \(\mathcal{C}^{*}\)-SDP in the following sense: \[\inf \omega(F_{0})\] (49) s.th. \[\omega(1)=1\] \[\omega\in\mathcal{P}_{+}(\mathcal{A})^{*}\]. (50) The interpretation of this problem is an optimization over all states to find the maximal violation of the above Bell inequality (of CHSH type). Solving this example with our method requires a sequence \(\gamma\); in a first step one can choose, for example, \[\gamma=(1,A_{0},A_{1},B_{0},B_{1},A_{0}B_{1},A_{0}A_{1},B_{0}B_{1}).\] Furthermore, the relations \(\mathcal{R}\) directly yield elements of the kernel of \(\phi\) in the discussion above. Working out the kernel is therefore not difficult in this example, at least for short sequences \(\gamma\).
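To see what the resulting finite SDP looks like in practice, the level-one moment relaxation for this example can be written in a few lines of Python with cvxpy. The sketch below uses only the shorter sequence \(\gamma=(1,A_{0},A_{1},B_{0},B_{1})\); the relations \(A_{i}^{2}=B_{i}^{2}=1\) enter through the unit diagonal of the moment matrix, and the CHSH functional is maximized directly (rather than writing the infimum of (49)):

```python
import cvxpy as cp

# Moment matrix indexed by the sequence gamma = (1, A0, A1, B0, B1).
labels = ["1", "A0", "A1", "B0", "B1"]
idx = {name: i for i, name in enumerate(labels)}
G = cp.Variable((5, 5), symmetric=True)

constraints = [G >> 0]                              # positivity of the moment matrix
constraints += [G[i, i] == 1 for i in range(5)]     # A_i^2 = B_i^2 = 1 and omega(1) = 1

# CHSH functional F0 = A0B0 + A1B1 + A0B1 - A1B0 evaluated on the moments.
chsh = (G[idx["A0"], idx["B0"]] + G[idx["A1"], idx["B1"]]
        + G[idx["A0"], idx["B1"]] - G[idx["A1"], idx["B0"]])

prob = cp.Problem(cp.Maximize(chsh), constraints)
prob.solve()
print(prob.value)   # ~2.828, i.e. the Tsirelson bound 2*sqrt(2)
```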
Following the rules of Theorem 7, one can then verify numerically that the optimal value \(2\sqrt{2}\) is indeed attained.

## V Convergence of the hierarchy

In this section we aim to clarify under which circumstances we can prove convergence of the hierarchy obtained by iterating Theorem 7. The question is how the optimal value of the relaxation of the \(\mathcal{C}^{*}\)-SDP relates to its true value. For this we need the notion of the base property.

**Definition 9** (Base Property).: Let \((\gamma^{(n)})_{n\geq 1}\) be a sequence in \(\mathcal{A}\). Now define \[V_{n}:=\operatorname{span}(\{\gamma^{(1)},\dots,\gamma^{(n)}\}).\] We say that \((\gamma^{(n)})_{n\geq 1}\) has the base property if for any \(\epsilon>0\) and any \(a\in\mathcal{A}\) there exists \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\) there is \(b\in V_{n}\) with \(||a-b||<\epsilon\).

We emphasize at this point that there is an obvious way to relate the base property to separability of the \(\mathcal{C}^{*}\)-algebra: choose a sequence with the base property and consider the vector spaces \(V_{n}\) spanned over the subfield \(\mathbb{Q}+i\mathbb{Q}\) of \(\mathbb{C}\) (which is in fact the field extension \(\mathbb{Q}(i)\)). For example, all \(\mathcal{C}^{*}\)-algebras with a finite set of generators are automatically separable and therefore fulfill the base property.

**Theorem 10**.: We consider a \(\mathcal{C}^{*}\)-SDP \[c^{\star}= \inf\rho(F_{0})\] s.th. \[\rho(F_{i})\leq f_{i}\] and assume the base property and strong duality at all levels, i.e. in particular \[c^{\star}= \sup_{\lambda}f^{T}\lambda\] s.th. \[\sum_{i}\lambda_{i}F_{i}-F_{0}\in\mathcal{P}_{+}(\mathcal{A}).\] Then we have \(|c^{\star}-c^{n}|\xrightarrow{n\to\infty}0\), where \(c^{n}\) denotes the value of the relaxation at level \(n\).

Proof.: Suppose that \(x\in\mathcal{P}^{+}(\mathcal{A})\). By the \(\mathcal{C}^{\star}\)-property from section III.1, there exists \(z\in\mathcal{A}\) such that \(x=z^{*}z\). By the base property, for any \(\varepsilon>0\) and sufficiently large \(n\in\mathbb{N}\) there exists \(y\in V_{n}\) such that \(||y-z||<\varepsilon\). In particular, there is \(d\in\mathcal{A}\) with \(||d||<\varepsilon\) such that \(y+d=z\). Further, \(y^{*}y\in\Sigma_{2}^{(n)}\). Therefore we have \[||y^{*}y-z^{*}z||=||(z-d)^{\star}(z-d)-z^{*}z||=||-d^{\star}z-z^{*}d+d^{\star}d||\leq||d^{\star}z||+||z^{\star}d||+||d^{\star}d||. \tag{51}\] By the submultiplicativity of the norm it follows that the left hand side can be made arbitrarily small by choosing \(\varepsilon>0\) small enough (in dependence of \(z\)). This means that we have \[\Sigma_{2}^{(n)}\subset\Sigma_{2}^{(n+1)}\subset\ldots\subset\mathcal{P}_{+}(\mathcal{A})\quad n\to\infty\] as a limit of sets. Consider for \[L:\mathbb{R}^{m} \to\mathcal{A}\] \[\lambda \mapsto\sum_{i}\lambda_{i}F_{i}-F_{0}\] the sets \[\Omega^{(n)}:=L^{-1}(\Sigma_{2}^{(n)}\cap\operatorname{im}(L))\] and in particular \[\Omega:=L^{-1}(\mathcal{P}(\mathcal{A})\cap\operatorname{im}(L)),\] the set of all \(\lambda\in\mathbb{R}^{m}\) such that \(L(\lambda)\) is positive in \(\mathcal{A}\). Then we have in particular \[\Sigma_{2}^{(n)}\cap\operatorname{im}(L)\subset\ldots\subset\mathcal{P}(\mathcal{A})\cap\operatorname{im}(L)\quad n\to\infty.\] By strong duality at all levels we may write \[c^{n}:=\sup_{\lambda\in\Omega^{(n)}}f^{T}\lambda\] and \[c^{\star}:=\sup_{\lambda\in\Omega}f^{T}\lambda. \tag{52}\] The sequence \((c^{n})\) is monotone and bounded and therefore converges towards \(\sup_{n}c^{n}\).
Assume for contradiction that there exists \(\delta>0\) such that \[|\sup_{n}c^{n}-c^{\star}|\geq\delta.\] Then we have in particular that for all \(n\in\mathbb{N}\) \[|\sup_{\lambda\in\Omega^{(n)}}f^{T}\lambda-c^{\star}|\geq\delta\] and therefore \[|\sup_{\lambda\in\bigcup_{n}\Omega^{(n)}}f^{T}\lambda-c^{\star}|\geq\delta,\] which contradicts (52).

This result also admits a more conceptual reading. What we have done so far builds on the assumption of having a \(\mathcal{C}^{\star}\)-algebra at hand. This means that there is a notion of convergence inherently provided by the property of being a complete normed space. If we compare this with, e.g., the NPA hierarchy or Lasserre's polynomial optimization hierarchy, it is at first sight not clear what 'convergence' means there in a topological sense.

## VI Symmetry reductions of SDPs as a special case of \(\mathcal{C}^{\star}\)-SDPs

Let us recall the framework of symmetry reduction in SDPs from a theoretical perspective for finite groups. For this purpose consider a general SDP over a finite dimensional Hilbert space \(\mathcal{H}\) of the following form \[\max \operatorname{tr}[\rho F_{0}] \tag{53}\] \[\operatorname{s.th.} \operatorname{tr}[\rho F_{j}]\leq f_{j}\quad 1\leq j\leq m\] (54) \[\rho \succeq 0. \tag{55}\] A (finite) symmetry of this SDP is a finite group \(G\) with a representation8 Footnote 8: i.e. a group homomorphism \[\Phi:G \to\operatorname{GL}(\mathcal{H})\] such that \[\Phi(g)F_{j}\Phi(g)^{\dagger} =F_{j}\quad 1\leq j\leq m\] \[\Phi(g)F_{0}\Phi(g)^{\dagger} =F_{0}.\] We assume further that the representation is unitary, which means that \(\Phi(g)^{-1}=\Phi(g)^{\dagger}\) for all \(g\in G\). For finite groups this assumption is without loss of generality. The cyclicity of the trace now yields that an SDP which possesses a finite symmetry has a solution which admits this symmetry. If we now assume access to a Hilbert-Schmidt orthonormal basis of the invariant subspace \(\operatorname{End}_{\mathbb{C}G}(\mathcal{H})\subset\operatorname{End}_{\mathbb{C}}(\mathcal{H})\) \[\gamma:=(B_{1},\dots,B_{m})\subset\mathcal{B}(\mathcal{H}) \tag{56}\] we can build up our theory in the same manner as, e.g., in de-Klerk et al.3 Writing \[B_{i}B_{j}=\sum_{k}\lambda_{ijk}B_{k},\] we define the matrices \(L_{k}\in\mathbb{C}^{m\times m}\) by the rule \[(L_{k})_{i,j}:=\lambda_{ijk}.\] Applying Lemma 5 then yields the mapping \[\phi:\mathbb{C}^{m\times m} \to\operatorname{End}_{\mathbb{C}G}(\mathcal{H})\subset\operatorname{End}_{\mathbb{C}}(\mathcal{H})\] \[L_{k} \mapsto\sum_{i,j}\lambda_{ijk}B_{i}B_{j}^{\dagger}.\] It is then easy to show that the optimal solution \(\rho_{\mathrm{opt}}\) is contained in \(\operatorname{End}_{\mathbb{C}G}(\mathcal{H})\), the set of all linear operators which commute with the operators \(\Phi(g)\), i.e. the set of intertwiners, and that Slater's condition remains fulfilled if the original SDP fulfilled Slater's condition. This observation states that the effective dimension of the SDP is in fact the \(\mathbb{C}\)-vector space dimension \[m:=\dim_{\mathbb{C}}\operatorname{End}_{\mathbb{C}G}(\mathcal{H}).\] Therefore, in this case our hierarchy essentially terminates after one step, because the optimal solution is accessible in the dual space of \(\operatorname{End}_{\mathbb{C}G}(\mathcal{H})\). Furthermore, there is no kernel to work out. The difficulty really lies in finding the basis \((B_{1},\dots,B_{m})\). For this task, however, there are methods as presented in de-Klerk et al.3
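To make the bookkeeping behind the matrices \(L_{k}\) concrete, the following minimal sketch computes the structure constants \(\lambda_{ijk}\) for a toy example, the group \(S_{2}\) acting on \(\mathbb{C}^{2}\) by permuting the basis vectors (the example is only illustrative and not taken from the references):

```python
import numpy as np

# Toy symmetry: S_2 acting on C^2 by permuting the two basis vectors.
swap = np.array([[0.0, 1.0], [1.0, 0.0]])
group = [np.eye(2), swap]

# Hilbert-Schmidt-orthonormal basis of the commutant End_{CG}(H).
B = [np.eye(2) / np.sqrt(2), swap / np.sqrt(2)]
m = len(B)
assert all(np.allclose(g @ b, b @ g) for g in group for b in B)

hs = lambda X, Y: np.trace(X.conj().T @ Y)   # Hilbert-Schmidt inner product

# Structure constants  B_i B_j = sum_k lambda_{ijk} B_k.
lam = np.zeros((m, m, m))
for i in range(m):
    for j in range(m):
        for k in range(m):
            lam[i, j, k] = hs(B[k], B[i] @ B[j]).real

# The matrices L_k with (L_k)_{ij} = lambda_{ijk}; the reduced SDP lives on m x m matrices.
L = [lam[:, :, k] for k in range(m)]
print(L[0], L[1], sep="\n")
```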
## VII Conclusion

In this paper we have taken a new look at known optimization problems in quantum information theory. Our relaxation for \(\mathcal{C}^{\star}\)-SDPs is based on the fundamental observation that we are in fact only interested in the values of all positive functionals of a \(\mathcal{C}^{\star}\)-algebra on a finite dimensional vector space. Thus, all functionals of interest are characterized by a convex set in the dual space of \(\mathbb{C}^{n\times n}\), and this set can then be relaxed. We have chosen the route via sums of squares, in the style of Lasserre's hierarchy. Remarkably, it follows directly from our discussion how additional knowledge about, say, positive elements can be incorporated: one can either add them to \(\Sigma_{2}^{(n)}\) directly or encode them as SDP constraints. In contrast to Lasserre's hierarchy, we cannot directly formulate local criteria for, e.g., the dual space of a particular \(\mathcal{C}^{\star}\)-algebra \(C(K)\) for a compact set \(K\). But our general formulation gives hope that such local criteria for \(\mathcal{C}^{\star}\)-algebras may be found in the future. In general one can apparently incorporate Lasserre's ideas, but then one has to choose the sequence \(\gamma\), i.e. the basis of the vector space \(Q\) in this case, to consist of polynomials; elements other than polynomials can then no longer be represented exactly. As this work has pointed out, the idea of moment matrices can be generalized and is extremely powerful. We hope that this general point of view will stimulate a new discussion about possible relaxations for general \(\mathcal{C}^{\star}\)-SDPs, in particular in quantum information theory. There is recent work by Klep et al. [15] which is able to tackle nonlinear constraints in a framework similar to \(\mathcal{C}^{\star}\)-SDPs. This is of particular importance for causal structure problems, as e.g. in Ligthart et al. [16; 17].

## VIII Acknowledgements

We thank Mario Berta, Tobias J. Osborne and Julius Zeiss for helpful discussions. We thank David Gross and Laurens Ligthart for organizing the workshop on SDP-hierarchies in May 2023 in Cologne. RS acknowledges financial support by the BMBF project ATIQ.
2309.13972
Audio classification with Dilated Convolution with Learnable Spacings
Dilated convolution with learnable spacings (DCLS) is a recent convolution method in which the positions of the kernel elements are learned throughout training by backpropagation. Its interest has recently been demonstrated in computer vision (ImageNet classification and downstream tasks). Here we show that DCLS is also useful for audio tagging using the AudioSet classification benchmark. We took two state-of-the-art convolutional architectures using depthwise separable convolutions (DSC), ConvNeXt and ConvFormer, and a hybrid one using attention in addition, FastViT, and drop-in replaced all the DSC layers by DCLS ones. This significantly improved the mean average precision (mAP) with the three architectures without increasing the number of parameters and with only a low cost on the throughput. The method code is based on PyTorch and is available at https://github.com/K-H-Ismail/DCLS-Audio
Ismail Khalfaoui-Hassani, Timothée Masquelier, Thomas Pellegrini
2023-09-25T09:09:54Z
http://arxiv.org/abs/2309.13972v2
# Audio classification with Dilated Convolution with Learnable Spacings

###### Abstract

Dilated convolution with learnable spacings (DCLS) is a recent convolution method in which the positions of the kernel elements are learned throughout training by backpropagation. Its interest has recently been demonstrated in computer vision (ImageNet classification and downstream tasks). Here, we show that DCLS is also useful for audio tagging using the AudioSet classification benchmark. We took two state-of-the-art convolutional architectures using depthwise separable convolutions (DSC), ConvNeXt and ConvFormer, and a hybrid one using attention in addition, FastViT, and drop-in replaced all the DSC layers by DCLS ones. This significantly improved the mean average precision (mAP) with the three architectures without increasing the number of parameters and with only a low cost on the throughput. The method code is based on PyTorch and is available at [https://github.com/K-H-Ismail/DCLS-Audio](https://github.com/K-H-Ismail/DCLS-Audio).

## 1 Introduction

The very popular ConvNeXt model [16], a fully convolutional model designed for vision tasks, has been successfully adapted to audio classification on AudioSet [21] by transforming audio samples to log-mel spectrograms and adapting the stem of the ConvNeXt model to fit the input audio extracts. This has improved the state of the art of audio classification using convolutional neural networks by achieving better accuracy than PANN-type models [13], while having fewer learnable parameters. Furthermore, when used as a backbone for downstream tasks, the ConvNeXt-audio model has achieved positive, if not state-of-the-art, results for the audio captioning and audio retrieval tasks. Separately, the Dilated Convolution with Learnable Spacings (DCLS) method has already proven itself in several computer vision tasks [10]. Through a simple drop-in replacement of the model's DSC with DCLS (which can be done automatically for all layers of a model via this script7), the DCLS convolution method has empirically proven its effectiveness for several computer vision tasks using ImageNet1k [3] trained models as backbones. Performing the replacement in the ConvNeXt and ConvFormer models resulted in the ConvNeXt-dcls [10] and ConvFormer-dcls [11] models, respectively. Our aim in the present article is to show empirically that a drop-in replacement with the DCLS method in the same fully convolutional models can improve their accuracy for audio classification on the AudioSet dataset without much effort, demonstrating once again the interest of the method not only on the reference benchmark for image classification but also on the reference benchmark for audio classification (AudioSet [5]). Furthermore, we add a third test model that differs slightly from the other two in that it is a hybrid model (having both DSC layers and multi-head self-attention layers, depending on the stage to which the layer belongs): FastViT [26]. Again, replacing the DSC layers by DCLS improves the results. This article does not claim to be the absolute state of the art on the task of classification on AudioSet, but rather tries to provide an objective comparison between known and proven convolutional models and those equipped with a DCLS convolution that would make them more efficient. 
## 2 Related work Audio tagging systems were mainly based on convolutional neural networks until recently, with the adaptation of vision transformers to audio processing. The PANN-based models (_e.g._, CNN14), in particular, comprise blocks of plain \(3\times 3\) kernel convolution layers [13]. In [28], PANN-like models were enhanced, in terms of accuracy, model size and inference speed, by adding residual connections, and by modifying the kernel sizes, the stride and padding, using a "decreasing temporal size parameter". Other efficient CNN architectures, such as EfficientNet [7], were also tested in audio tagging. In [23], efficient PANNs (E-PANN) were obtained by using filter pruning. In [4], DSC layers were used, which resulted in large reductions in model complexity, together with performance gains. In [21], doing so in PANN's CNN14 also yielded significant model size reduction (about 60% relative), whilst observing a gain in performance. In this last study, ConvNeXt was adapted to perform the audio tagging task in AudioSet. It performed better or on par with the transformer-based architectures AST [6] and PaSST-S [14]. ## 3 Methods ### Dataset and configuration **Dataset.** In all the experiments in this article, we used AudioSet [5], the reference dataset in audio classification. It contains about 2 million video clips downloaded from the YouTube platform. We are only interested in the audio portion of these clips and are not using the video dataset. The audio clips available in AudioSet can vary in size, but most are 10 seconds long. If a sample is longer than that, we truncate it; if shorter, we pad it with zeros. The classification task in AudioSet consists of assigning each sample to the class or classes to which it belongs among the 527 available labels. It is thus a multi-label classification task. The majority of the excerpts correspond to one of the two classes "speech" and "music" (often both), due to their predominance on the aforementioned video hosting site. This latter fact leads to an imbalance in the dataset, with several classes being poorly represented while a few classes account for most of the dataset. We downloaded the data in 2018, and some of the YouTube links have been broken since then. Our AudioSet data contains 1,921,982 clips (unbalanced train), 21,022 clips (balanced train), and 19,393 clips (evaluation). **Metrics.** We report the usual evaluation metric for AudioSet tagging: mean average precision (mAP) which is typically the metric of interest in audio tagging. All DCLS-equipped models studied here outperform their respective baselines using this metric. **No weighted sampler.** Given the unbalanced nature of the dataset, many state-of-the-art models make good use of a weighted random sampler [13; 8; 9], where each class in the dataset is weighted by its frequency of occurrence in the dataset. This is a classic machine learning approach to mitigate data imbalance. However, as pointed out by [19], these approaches based on a weighted sampler whose oversampling rate is adjusted as a training hyperparameter seem to overfit the dataset more than anything else and do not favor the rarest classes. Since in this article, we are only interested in the comparative study between baseline models and the same models augmented with the DCLS method, we have chosen not to include weighted samplers in our training phases, even if this means losing a few points in mAP, thereby allowing a comparison that is less noisy due to the effects of sampling. 
Furthermore, the naive use of Mixup augmentation [31] in conjunction with a weighted sampler may turn out to be a source of undesirable behavior, since proceeding in this way could destroy the weighted sampling that was originally intended, as the Mixup acts randomly by drawing two samples without taking balancing into account. Weighting-aware approaches to Mixup such as [22] should be better investigated and implemented in order to better take advantage of both methods. **Spectrogram resolution.** Many audio classification models use raw audio signals [20; 2; 1], while a growing number of state-of-the-art models use spectrograms, taking advantage of the signal's periodic aspect by using the Short-time Fourier transform [13; 14; 9]. We prefer this second choice in order to use computer vision baselines. Additionally, the obtained spectrograms are often filtered using the mel psychoacoustic scale [24]. We use the latter filtering to obtain mel-frequency spectrograms, which we transform from the power/amplitude scale to the decibel scale. A comprehensive enumeration of the hyperparameters used to perform these transformations is given in Table 2. The final spectrogram size obtained is (\(F=128\), \(T=1001\)). In the course of our experiments, we noticed that the larger the size of the spectrograms (in both frequency and time), the greater the mAP of the models, but to the detriment of their throughput. This is a well-known phenomenon in computer vision, where higher input image resolution often leads to better vision model accuracy but also to higher computational time costs. We believe that this resolution provides a good trade-off in mAP-throughput and argue that there is a sweet spot between resolution and stem size that offers optimal performance regarding mAP-throughput. **Adapting the stem.** The three neural networks used in this study all come from the world of computer vision. It is therefore essential to adapt the stem of these models in order to process no longer natural images made up of three channels corresponding to the RGB colors, but instead a spectrogram with a single channel and a size different from the crop images initially designed for vision tasks. To this end, we used a basic stem, common to all three studied models, namely a convolution layer with a kernel size = (2, 16) and a stride = (2, 16). This stem produces maps of size (64, 62) from input spectrograms of size (128, 1001). This type of stem is similar to that originally found in the ConvNeXt model, while the ConvFormer model featured a slightly more sophisticated stem with a kernel size larger than the stride size. The FastVit model, on the other hand, came with a much more complex stem consisting of several layers based on the MobileOne block [27]. Imposing a common stem on all the models in the study means that, on the one hand, we can compare the models more accurately, knowing that the input resolution will be the same for all of them. On the other hand, in the absence of an optimal stem adapted to audio spectrograms, we use a coarse stem with which we can conduct our study. Note that the search for an ideal stem is a study in itself and that the stem presented here can always be refined and improved. **Pretraining on ImageNet1k.** Using pre-trained models on ImageNet [3] as a better initialization to solve the tagging task on AudioSet is common practice [6; 14; 21]. In the first few epochs, models initialized in this way have a clear advantage over those initialized randomly. 
However, this advantage quickly shrinks over the course of training, and randomly initialized models often end up performing similarly to, or only slightly worse than, pre-trained ones. We only use models pre-trained on ImageNet1k when they are already available and cost us nothing to train. Therefore, we use the symbol \(\ddagger\) to designate models that have not been pre-trained on ImageNet. **Configuration.** Table 2 provides a comprehensive list of the hyperparameters and augmentations used in this study. These are largely similar to the hyperparameters used in [16]. Note that a high drop-path rate (0.4) is used in this work to overcome the overfitting problem encountered with the tagging task on AudioSet and that large effective batch sizes (4096) were used to speed up training. However, some instabilities during training were noted, particularly for the ConvFormer model. These instabilities are known from [30] and were resolved by using the LAMB optimizer [29], while we used AdamW [18] for the other two models. ### Models We used three different models from computer vision that we adapted to audio inputs to corroborate our results. The first one is a fully convolutional model: ConvNeXt-tiny [16]. The second one is also a purely convolutional model that outperforms the ConvNeXt model on the ImageNet classification task: ConvFormer-S18 [30]. The DCLS method as a replacement for DSC has already been successfully used in these two models for various vision tasks, including image classification on ImageNet1k [10; 11]. The third model we used is more recent and achieves the current state-of-the-art throughput-accuracy trade-off in image classification on the ImageNet dataset: FastViT-SA24 [26]. The latter is a so-called hybrid model, i.e., it contains DSC layers as well as multi-head self-attention layers. ### DCLS substitution Considering the baseline models discussed in the previous section, we carried out the following study: we trained the baselines on the concatenation of the unbalanced train and the balanced train sets of AudioSet, then evaluated them on the evaluation subset. We repeated the same process with the same models, except that this time we replaced all DSC layers having a kernel size equal to 7 with a DCLS convolution layer. In all test cases, we used exactly the same training configuration to avoid attributing performance gains to any reason other than the replacement of the DSC layers by DCLS ones. Also, to learn the positions (and standard deviations for DCLS-Gauss) of each kernel element, we followed the same training techniques as those listed in [11]. This gave us 6 test cases to examine in total, for which we measure the mAP metric mentioned in Section 3.1 averaged over three different seeds (seeds 0, 1, and 2). ## 4 Results The results presented in Table 1 demonstrate the performance of the three models mentioned in Section 3.2 on the \(128\times 1001\) spectrograms, where the convolution method used varies. Notably, we observe that when comparing each baseline model with its DCLS-equipped counterpart, the use of DCLS-Gauss with a kernel size of \(23^{2}\) and a kernel count of \(26\) stands out, achieving a higher mAP (+0.6 on average) with an equal or lower number of parameters. This result highlights the effectiveness of DCLS-Gauss in enhancing classification performance. DCLS does, however, introduce a reduction in throughput (\(13\%\) for ConvNeXt-T and FastViT-SA24 and \(23\%\) for ConvFormer-S18) due to the use of larger kernels. 
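As a concrete illustration of the common stem of Section 3.1 and of the substitution of Section 3.3, a minimal PyTorch sketch is given below. The number of stem output channels (96, matching the first ConvNeXt-T stage) and the exact constructor signature of the DCLS layer are assumptions made for illustration only; the actual `Dcls2d` module is provided by the DCLS code base linked in the abstract:

```python
import torch
import torch.nn as nn

# Common audio stem: one strided convolution turning a (1, 128, 1001) log-mel
# spectrogram into 64 x 62 feature maps (96 output channels assumed here).
stem = nn.Conv2d(1, 96, kernel_size=(2, 16), stride=(2, 16))
x = torch.randn(4, 1, 128, 1001)          # (batch, channel, mel bins, time frames)
print(stem(x).shape)                      # torch.Size([4, 96, 64, 62])

def replace_dsc_with_dcls(model, Dcls2d, dilated_kernel_size=23, kernel_count=26):
    """Drop-in replacement of 7x7 depthwise convolutions by DCLS layers.

    `Dcls2d` is assumed to expose a Conv2d-like constructor; see the DCLS
    repository for the actual interface.
    """
    for name, module in model.named_children():
        if (isinstance(module, nn.Conv2d)
                and module.groups == module.in_channels
                and module.kernel_size == (7, 7)):
            setattr(model, name, Dcls2d(
                module.in_channels, module.out_channels,
                kernel_count=kernel_count,
                dilated_kernel_size=dilated_kernel_size,
                padding=dilated_kernel_size // 2,
                groups=module.in_channels))
        else:
            replace_dsc_with_dcls(module, Dcls2d, dilated_kernel_size, kernel_count)
    return model
```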
The results of a previous study on ConvNeXt [21] show that an mAP of \(47.1\) can be achieved, but here we only reach \(44.8\) for the baseline; this is due to the fact that in that previous study, a higher spectrogram resolution was used (\(224\times 1001\) versus \(128\times 1001\) in this work) and that a stem size of \(4\times 4\) instead of \(2\times 16\) here was used to produce larger feature maps, which is reflected both in the large memory required to run this model and in the model's throughput. ## 5 Conclusion In conclusion, this article has demonstrated the efficacy of Dilated Convolution with Learnable Spacings (DCLS) as a method with promising applications beyond the computer vision field. By exploiting DCLS in the audio tagging task on AudioSet, we have demonstrated tangible improvements in accuracy when compared to models employing traditional DSC methods. While this work does not claim to establish an absolute state-of-the-art benchmark, it does contribute valuable insights into the potential of DCLS convolution in audio classification. This research underscores the \begin{table} \begin{tabular}{l l c c c c} \hline \hline model & ker. size & method & \# param. & mAP & \begin{tabular}{c} throughput \\ (sample / s) \\ \end{tabular} \\ \hline CNN14 [13] & & Conv. & \(80.7\mathrm{M}\) & \(43.1\) & \(378.2\) \\ PaSST-S [14] & & MHS. Attention. & \(87\mathrm{M}\) & \(47.1\) & \(88.7\) \\ ConvNeXt-T [21] & \(7^{2}\) / \(49\) & Depth. Conv. & \(28.2\mathrm{M}\) & \(47.1\) & \(153.6\) \\ \hline ConvFormer-S18\({}^{\dagger}\) & \(7^{2}\) / \(49\) & Depth. Conv. & \(26.8\mathrm{M}\) & \(43.14\pm 0.03\) & \(513.3\) \\ ConvFormer-S18\({}^{\dagger}\) & \(23^{2}\) / \(26\) & DCLS-Gauss & \(26.8\mathrm{M}\) & \(43.68\pm 0.02\) & \(396.8\) \\ FastVIT-SA24\({}^{\ddagger}\) & \(7^{2}\) / \(49\) & Depth. Conv. & \(\mathbf{21.5}\mathrm{M}\) & \(43.82\pm 0.05\) & \(\mathbf{633.6}\) \\ FastVIT-SA24\({}^{\ddagger}\) & \(23^{2}\) / \(26\) & DCLS-Gauss & \(\mathbf{21.5}\mathrm{M}\) & \(44.4\pm 0.07\) & \(551.7\) \\ ConvNeXt-T & \(7^{2}\) / \(49\) & Depth. Conv. & \(28.6\mathrm{M}\) & \(44.83\pm 0.14\) & \(591.4\) \\ ConvNeXt-T & \(23^{2}\) / \(26\) & DCLS-Gauss & \(28.6\mathrm{M}\) & \(\mathbf{45.52\pm 0.05}\) & \(509.4\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Classification mean average precision (mAP) on the evaluation set of AudioSet.** For the baselines using DSC and the DCLS-Gaussian cases, the results have been averaged over 3 distinct seeds and presented in the format mean \(\pm\) standard deviation. \(\dagger\) : trained using LAMB, \(\ddagger\) : no ImageNet pretraining. The throughputs were calculated with a single NVIDIA V100 32-GB gpu. significance of exploring novel convolutional techniques, like DCLS, and adapting them to various domains beyond their initial design. As the field of deep learning continues to evolve, such methods pave the way for broader and more efficient applications, thereby advancing the state of the art in deep learning. ## Acknowledgments This work was performed using HPC resources from GENCI-IDRIS (Grant 2023- [AD011013219R1]). Support from the ANR-3IA Artificial and Natural Intelligence Toulouse Institute is gratefully acknowledged. We would also like to thank the region of Toulouse Occitanie.
2301.13836
A New Definition of Exoplanet Habitability: Introducing the Photosynthetic Habitable Zone
It may be possible to detect biosignatures of photosynthesis in an exoplanet's atmosphere. However, such a detection would likely require a dedicated study, occupying a large amount of telescope time. It is therefore prudent, while searching for signs of life that we may recognise, to pick the best target possible. In this work, we present a new region, the ``photosynthetic habitable zone'' \textemdash the distance from a star where both liquid water and oxygenic photosynthesis can occur. It is therefore the region where detectable biosignatures of oxygenic photosynthesis are most likely to occur. Our analysis indicates that in the most ideal conditions for life and no atmospheric effects, the photosynthetic habitable zone is almost as broad as the habitable zone. On the other hand, if conditions for life are anything less than excellent and atmospheric effects are even moderate, the photosynthetic habitable zone is concentrated at larger separations around more massive stars. Such cases are also not tidally locked to their host star, which could result in planetary rotation periods similar to the Earth's. We identify five planets, Kepler-452 b, Kepler-1638 b, Kepler-1544 b and Kepler-62 e and Kepler-62 f, that are consistently in the photosynthetic habitable zone for a variety of conditions, and we predict their day lengths to be between 9 and 11 hours. We conclude that the parameter space in which we should search for signs of life is much narrower than the standard habitable zone.
C. Hall, P. C. Stancil, J. P. Terry, C. K. Ellison
2023-01-31T18:27:00Z
http://arxiv.org/abs/2301.13836v2
# A New Definition of Exoplanet Habitability: Introducing the Photosynthetic Habitable Zone ###### Abstract It may be possible to detect biosignatures of photosynthesis in an exoplanet's atmosphere. However, such a detection would likely require a dedicated study, occupying a large amount of telescope time. It is therefore prudent, while searching for signs of life that we may recognise, to pick the best target possible. In this work, we present a new region, the "photosynthetic habitable zone" --the distance from a star where both liquid water and oxygenic photosynthesis can occur. It is therefore the region where detectable biosignatures of oxygenic photosynthesis are most likely to occur. Our analysis indicates that in the most ideal conditions for life and no atmospheric and greenhouse effects, the photosynthetic habitable zone is almost as broad as the habitable zone. On the other hand, if conditions for life are anything less than excellent and atmospheric attenuation and greenhouse effects are even moderate, the photosynthetic habitable zone is concentrated at larger separations around more massive stars. Such cases are also not tidally locked to their host star, which could result in planetary rotation periods similar to the Earth's. We identify five planets, Kepler-452 b, Kepler-1638 b, Kepler-1544 b and Kepler-62 f, that are consistently in the photosynthetic habitable zone for a variety of conditions, and we predict their day lengths to be between 9 and 11 hours. We conclude that the parameter space in which we should search for signs of life is much narrower than the standard habitable zone. + Footnote †: journal: ApJL 0000-0002-8071-8084]C. Hall 0000-0002-8071-8084]P. C. Stancil 0000-0002-8071-8084]J. P. Terry 0000-0002-8071-8084]C. K. Ellison ## 1 Introduction Since the first exoplanet atmospheric detection around a 1.35 R\({}_{\rm J}\) planet (Charbonneau et al., 2002), astronomers have been pushing the limits to smaller and smaller planets, such as the detection of water vapour around the 8M\({}_{\rm E}\) planet K12-18b (Tsiaras et al., 2019). The continued discovery of exoplanet atmospheres, and increasing technological capability, has raised the prospect of finding a planet that may be inhabited by life. Subsequently, it has been determined that the characterization and detection of biosignatures --atmospheric spectral features that could indicate signs of life on a planet --should be an area of focus for astrobiology (see, e.g. Seager et al., 2012; Kaltenegger, 2017; Lammer et al., 2019). To date, over 50001 exoplanets have been discovered using a mix of ground-based and space-based methods. With the successful launch of JWST and future observatories such as the European Extremely Large Telescope (E-ELT) and the Thirty Meter Telescope (TMT), we are moving from the era of exoplanet discovery to exoplanet atmospheric characterization. However, characterizing these worlds remains an enormous challenge. For example, the most promising O\({}_{2}\) feature for JWST appears to be the O\({}_{2}\to X\) collisional induced adsorption band at 6.4 \(\mu\)m (Fauchez et al., 2020), but even for a target such as TRAPPIST 1-e (Gillon et al., 2016, 2017) this would require more than 700 transits, longer than the anticipated lifetime of JWST given that TRAPPIST 1-e has a 6 day orbital period and is only visible to JWST for less than a third of the year (Gillon et al., 2020). 
Even the more favourable strong O\({}_{3}\) band at 10\(\mu\)m would require more than 100 transit observations on a planet such as TRAPPIST 1-e (Lin et al., 2021) to detect it at just 3\(\sigma\). Fortunately, O\({}_{2}\) and O\({}_{3}\) are not the only biosignatures. More generally, atmospheric chemical disequilibrium, characterised by the coexistence of two or more long-term incompatible gases (Lovelock, 1965; Sagan et al., 1993; Cockell et al., 2009), can be considered a sign of ongoing life. The Archean Earth had a biogenic disequilibrium caused by the coexistence of N\({}_{2}\), CH\({}_{4}\), CO\({}_{2}\), and liquid water, which could be possible to remotely detect on an Earth-sized planet (Krissansen-Totton et al., 2018). Simultaneous detection of abundant CH\({}_{4}\) and CO\({}_{2}\) is therefore considered a biosignature. Happily, detecting this CH\({}_{4}\)-CO\({}_{2}\) pair is feasible, requiring \(\sim 5-30\) co-added JWST transits depending on if the stratosphere is dry or has a cloud or haze layer (Mikal-Evans, 2022). Observing resources are valuable and finite, so choosing the best targets to search for biosignatures of any kind is imperative. The main criterium is whether the planet can sustain liquid water on its surface by residing an appropriate semi-major axis from its host star, referred to as the habitable zone (Huang, 1959; Kasting et al., 1993). However, liquid water alone is not enough for life. Life requires energy to remain out of equilibrium with its environment. For almost all the biomass on Earth, this energy source is oxygenic photosynthesis (Bar-On et al., 2018). We therefore suggest that a new criterium be used to determine where biosignatures may be found. In this work, we demonstrate that, like the habitable zone, the _photosynthetic habitable zone_ is a bounded strip on a plot of stellar mass against semi-major axis. It occurs where both liquid water and photosynthesis is simultaneously possible. It is where the search for life in the Universe should be concentrated under the assumption of biosignatures similar to those generated by past or present Earth. We detail our calculations of the habitable zone in Section 2.1, our calculations of photosynthesis rate curves in Section 2.2, and the photosynthetic habitable zone in Section 2.3. We present our results in Section 3 and discuss assumptions and limitations in Section 4. We summarise and present our conclusion in Section 5. ## 2 Methods ### The Habitable Zone We used pre-main sequence (PMS) evolutionary models (Baraffe et al., 2015)2 to obtain stellar effective temperature, \(T_{\rm eff}\), as a function of stellar mass, \(M_{*}\). We assumed an age of 1 Gyr, corresponding to the approximate time primitive life first appeared on Earth (e.g. Dodd et al., 2017; Cavalazzi et al., 2021). The habitable zone (HZ) for each stellar mass was calculated using the method described in Kopparapu et al. (2013). 
We use their derived relationships between HZ effective temperature, \(T_{\rm eff}\) and stellar fluxes, \(S_{\rm eff}\), in the range 2600 K \(\leqslant T_{\rm eff}\leqslant 7200\)K: Footnote 2: [http://perso.ens-lyon.fr/isabelle.baraffe/BHAC15dir/BHAC15_iso.2mass](http://perso.ens-lyon.fr/isabelle.baraffe/BHAC15dir/BHAC15_iso.2mass) \[S_{\rm eff}\,=S_{\rm eff\,\odot}+aT_{\star}+bT_{\star}^{2}+cT_{\star}^{3}+dT_ {\star}^{4}, \tag{1}\] where \(T_{\star}=T_{\rm eff}-5780\) K, and \[S_{\rm eff\,\odot}=\frac{L_{\odot}}{4\pi R_{\odot}^{2}\sigma T_{\odot}^{4}}, \tag{2}\] where \(L\) is stellar luminosity and \(R\) is stellar radius. The coefficients \(a,b,c\) and \(d\) are determined by scenario, for example runaway greenhouse (Inner HZ) and maximum greenhouse (outer HZ). We used updated coefficient values published online 3 as per Kopparapu et al. (2013), and detail them in Table 1. The corresponding HZ distance for a given star is then Footnote 3: [http://depts.washington.edu/naivpl/sites/default/files/hz_0.shtml#overlay-context=content/hz-calculator](http://depts.washington.edu/naivpl/sites/default/files/hz_0.shtml#overlay-context=content/hz-calculator) \[d=\left(\frac{L/L_{\odot}}{S_{\rm eff}}\right)^{0.5}\mathrm{AU}, \tag{3}\] where \(L/L_{\odot}\) is the luminosity of the star compared to the Sun. ### The Photosynthesis zone Photosynthesis is the chemical reaction by which organisms use energy from sunlight to synthesis sugar from carbon dioxide and water: \[6\mathrm{CO}_{2}+6\mathrm{H}_{2}\mathrm{O}\xrightarrow{h\nu}\mathrm{C}_{6} \mathrm{H}_{12}\mathrm{O}_{6}+6\mathrm{O}_{2}. \tag{4}\] Three variables directly impact the rate of photosynthesis (Gaastra, 1959): light intensity (\(I\)), temperature (\(T\)), and carbon dioxide (CO\({}_{2}\)). Water availability indirectly impacts the rate of photosynthesis, as water stress causes plant structures to wilt reducing CO\({}_{2}\) availability (Muller et al., 2011). In this work, we make two assumptions: 1) that the photosynthetic life we consider \begin{table} \begin{tabular}{c c c} \hline \hline Constant & Inner HZ & Outer HZ \\ & (Runaway Greenhouse) & (Max. Grenhouse ) \\ \hline \(S_{\rm eff\,\odot}\) & \(1.107\) & \(0.356\) \\ \(a\) & \(1.332\times 10^{-4}\) & \(6.171\times 10^{-5}\) \\ \(b\) & \(1.580\times 10^{-8}\) & \(1.698\times 10^{-9}\) \\ \(c\) & \(-8.308\times 10^{-12}\) & \(-3.198\times 10^{-12}\) \\ \(d\) & \(-1.931\times 10^{-15}\) & \(-5.575\times 10^{-16}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Coefficients used in Eq. 1 to calculate habitable stellar fluxes, and corresponding habitable zones. is ocean based, and so has an unlimited reservoir of water and 2) the global average of CO\({}_{2}\) concentration is sufficiently high that photosynthesis is not rate-limited by CO\({}_{2}\) concentration. The chlorophyll \(a\)-normalised net photosynthetic rate, \(P\), as a function of irradiance intensity, \(I\), is given by (Eilers and Peeters, 1988): \[P(I)=\frac{I}{\alpha I^{2}+\beta I+\gamma}-R_{\rm rate}, \tag{5}\] where \(\alpha\) and \(\beta\) are dimensionless parameters, \(\gamma\) is defined as the reciprocal of the light-limited initial slope of the _P-I_ curve, and \(R_{\rm rate}\) is the dark respiration rate, the minimum rate at which glucose is combined enzymatically with oxygen to release energy and CO\({}_{2}\). Net photosynthetic rate is the total output of molecular oxygen per unit biomass per unit time [\(\mu\)mol O\({}_{2}\) mg\({}^{-1}\) h\({}^{-1}\) ]. 
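As a numerical illustration of the relations introduced so far, the following minimal sketch evaluates the habitable-zone edges (Equations 1-3 with the Table 1 coefficients) and the \(P\)-\(I\) curve of Equation 5; the respiration rate of 20 \(\mu\)mol O\({}_{2}\) mg\({}^{-1}\) h\({}^{-1}\) follows Figure 1:

```python
import numpy as np

# Runaway-greenhouse (inner) and maximum-greenhouse (outer) coefficients from Table 1.
HZ_COEFFS = {
    "inner": (1.107, 1.332e-4, 1.580e-8, -8.308e-12, -1.931e-15),
    "outer": (0.356, 6.171e-5, 1.698e-9, -3.198e-12, -5.575e-16),
}

def hz_distance(L_star, T_eff, edge):
    """Habitable-zone edge distance in AU (Equations 1-3); L_star in solar units."""
    s0, a, b, c, d = HZ_COEFFS[edge]
    T = T_eff - 5780.0
    S_eff = s0 + a * T + b * T**2 + c * T**3 + d * T**4
    return np.sqrt(L_star / S_eff)

def net_photosynthesis(I, alpha=1.0e-5, beta=1.0e-3, gamma=2.0, R_rate=20.0):
    """Chlorophyll a-normalised net photosynthetic rate P(I) (Equation 5)."""
    return I / (alpha * I**2 + beta * I + gamma) - R_rate

# Sun-like star (1 L_sun, 5780 K): inner and outer HZ edges in AU (~0.95 and ~1.68).
print(hz_distance(1.0, 5780.0, "inner"), hz_distance(1.0, 5780.0, "outer"))
```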
The maximum photosynthetic rate is given by \[P_{\rm rate}^{\rm max}=\frac{1}{\beta+2\sqrt{\alpha\gamma}}-R_{\rm rate}. \tag{6}\] The parameters \(\alpha,\beta\) and \(\gamma\) in Equations 5 and 6 are determined by performing best-fit analysis to empirically determined photosynthesis rate curves of phytoplankton in the laboratory setting, the values used in this work are \(\alpha=1.0\times 10^{-5}\), \(\beta=1.0\times 10^{-3}\), and \(\gamma=2.0\)(Yang et al., 2020). Three values of the dark respiration rate are explored, which determine the quality of conditions for life. As conditions for life become less favourable, \(R_{\rm rate}\) becomes a greater fraction of \(P_{\rm rate}^{\rm max}\)(Geider and Osborne, 1989). We consider excellent conditions to be Earth-like (\(R_{\rm rate}=0.3~{}P_{\rm rate}^{\rm max}\) ), and optimistic and pessimistic conditions to be \(R_{\rm rate}=0.6~{}P_{\rm rate}^{\rm max}\) and \(R_{\rm rate}=0.8~{}P_{\rm rate}^{\rm max}\) respectively. The _P-I_ curve from Equation 5 is shown in Figure 1. The green dotted line in Figure 1 shows the line where the net photosynthesis rate, \(P(I)\) is equal to \(R_{\rm rate}\). Any irradiance intensity that causes a total net negative photosynthesis rate will result in no net oxygen being produced. Therefore, for photosynthesis to increase atmospheric content of O\({}_{2}\), \(P>0\) is the absolute lowest limit required. Next, we express \(P\) as a function of both intensity and temperature by including a temperature moderation factor (Yan and Hunt, 1999; Van der Heide et al., 2006; Kim et al., 2007; Adams et al., 2017; Collier et al., 2017): \[f_{\rm temp}=\left(\frac{T_{\rm max}-T}{T_{\rm max}-T_{\rm opt}}\right)\left( \frac{T}{T_{\rm opt}}\right)^{T_{\rm opt}/(T_{\rm max}-T_{\rm opt})} \tag{7}\] such that \[P(I,T)=f_{\rm temp}\cdot P(I). \tag{8}\] The temperature, \(T\), is the sum of the planetary equilibrium temperature and the planetary greenhouse effect \(T=T_{\rm equilibrium}+\Delta T_{\rm greenhouse}\). Planetary equilibrium temperature, \(T_{\rm equilibrium}\) is given by \[T_{\rm equilibrium}=\left(\frac{L_{*}\left(1-A_{\rm Bond}\right)}{16\sigma\pi a ^{2}}\right)^{1/4} \tag{9}\] where \(L_{*}\) is stellar luminosity, \(A_{\rm Bond}\) is the Bond albedo Figure 1: **Net photosynthetic rate versus irradiance intensity.** Dotted green line indicates where rate of photosynthetic production of O\({}_{2}\) is equal to consumption of O\({}_{2}\) during respiration, such that net photosynthetic production of O\({}_{2}\) is zero. Rate of respiration is \(R_{\rm rate}=20\)\(\mu\)mol O\({}_{2}\) mg\({}^{-1}\) h\({}^{-1}\). Orange vertical line is intensity of PAR at Earth. Figure 2: **Net photosynthetic rate versus greenhouse temperature.** Equilibrium temperatures were calculated at the inner and outer edges of the Habitable Zone, and location of Earth. Maximum \(P(T)\) occurs at higher greenhouse temperatures at larger planetary separations from host star. A fixed \(P(I)=80~{}\mu\)mol O\({}_{2}\) mg\({}^{-1}\) h\({}^{-1}\) was assumed. and \(a\) is planet semi-major axis. \(T_{\rm max}\) is the maximum temperature at which photosynthesis occurs and \(T_{\rm opt}\) is the optimum temperature at which photosynthesis occurs. 
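A corresponding sketch of Equations 7-9 is given below, with \(T_{\rm opt}\) and \(T_{\rm max}\) left as parameters (their adopted values are discussed next); the 33 K greenhouse increment used in the example is purely illustrative, and the Celsius convention for Equation 7 is an assumption:

```python
import numpy as np

SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant [W m^-2 K^-4]
L_SUN = 3.828e26          # solar luminosity [W]
AU = 1.495978707e11       # astronomical unit [m]

def equilibrium_temperature(L_star, a_au, A_bond=0.306):
    """Planetary equilibrium temperature in K (Equation 9); L_star in solar units, a in AU."""
    return (L_star * L_SUN * (1.0 - A_bond) / (16.0 * np.pi * SIGMA * (a_au * AU) ** 2)) ** 0.25

def f_temp(T, T_opt, T_max):
    """Temperature moderation factor of Equation 7 (temperatures here in deg C)."""
    return ((T_max - T) / (T_max - T_opt)) * (T / T_opt) ** (T_opt / (T_max - T_opt))

def P_of_I_and_T(P_I, T, T_opt, T_max):
    """P(I, T) = f_temp * P(I)   (Equation 8)."""
    return f_temp(T, T_opt, T_max) * P_I

# Earth-like case: 1 AU around a Sun-like star, plus an illustrative 33 K greenhouse warming.
T_eq = equilibrium_temperature(1.0, 1.0)        # ~255 K
T_surface_C = (T_eq - 273.15) + 33.0            # warmed surface temperature in deg C
print(T_eq, P_of_I_and_T(80.0, T_surface_C, T_opt=35.0, T_max=73.0))
```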
Cyanobacteria, Earth's first photosynthesisers, have higher \(T_{\rm opt}\) than many photosynthesisers, typically \(T_{\rm opt}=35^{\circ}\)C (Gorham, 1964; Zhang et al., 2015; Giannuzzi, 2018). The upper limit for photosynthetic processes is uncertain, especially since thermophilic life is a possibility. However, it is generally agreed that the upper limit for photosynthesizing cyanobacteria is \(73^{\circ}\)C (Ward et al., 2012). We therefore set \(T_{\rm opt}=35^{\circ}\)C and \(T_{\rm max}=73^{\circ}\)C in this work. We assume \(A_{\rm Bond}=0.306\), equal to Earth's Bond albedo. The resulting \(P-T\) curve from Equation 8 is shown in Fig. 2. We term the region of parameter space where \(P(I,T)>0\) (Equation 8) the "photosynthesis zone" (PZ) for convenience, and it is shown in pink in Figure 3. Photosynthesis cannot proceed where \(P\leq 0\), since the organism's respiration rate then exceeds that of its primary production.

### The Photosynthetic Habitable Zone

We suggest the existence of a _photosynthetic habitable zone_ (PHZ), a region of parameter space where the habitable zone overlaps the photosynthesis zone. It is in the photosynthetic habitable zone, rather than the habitable zone, that humanity should concentrate its search for spectral signs of life, since they can only be present where both liquid water and \(P(I,T)>0\) occur. To obtain the region where \(P(I,T)>0\), we calculate the irradiance intensity as a function of stellar mass and planet semi-major axis. We begin with the Planck function: \[B(\lambda,T)=\frac{2hc^{2}}{\lambda^{5}}\bigg{[}e^{\frac{hc}{\lambda k_{\rm B}T}}-1\bigg{]}^{-1}, \tag{10}\] and obtain the intensity of photosynthetically active radiation, \(I_{\rm PAR}\), by integrating between \(\lambda_{\rm min}=\)400 nm and \(\lambda_{\rm max}=\)700 nm \[I_{\rm PAR}=\int_{\lambda_{\rm min}}^{\lambda_{\rm max}}\frac{2hc^{2}/\lambda^{5}}{e^{hc/\lambda k_{\rm B}T}-1}d\lambda. \tag{11}\] The number of photons emitted by the star per unit time, \(\dot{N}_{\star}\), in this wavelength range is then obtained by multiplying Equation 11 by the surface area of the star, and dividing by the energy of each photon, so that we obtain \[\dot{N}_{\star}=4\pi R_{\star}^{2}\int_{\lambda_{\rm min}}^{\lambda_{\rm max}}\frac{2c}{\lambda^{4}}\left[\exp\left(\frac{hc}{\lambda k_{B}T_{\star}}\right)-1\right]^{-1}d\lambda. \tag{12}\] Assuming a circular orbit, the photon flux at the top of a planetary atmosphere, \(\Phi\), at a distance \(a\) from the star is \[\Phi=\frac{\dot{N}_{\star}}{4\pi a^{2}}. \tag{13}\] We account for atmospheric attenuation such that the light intensity at the bottom of the atmosphere is \[I=f_{\rm a}\cdot\frac{\dot{N}_{\star}}{4\pi a^{2}} \tag{14}\] where \(f_{\rm a}\leq 1.0\) is the fractional attenuation due to atmospheric effects. We consider three attenuation efficiencies: no attenuation, \(f_{\rm a}=1.0\), moderate attenuation, \(f_{\rm a}=0.6\), and Earth-like attenuation, \(f_{\rm a}=0.2\) (Sarmiento and Gruber, 2006; Lingam and Loeb, 2021). The intensity in Equation 14 is used in Equation 8 to obtain the \(P(I,T)\) that would be achieved by a phytoplankton-like species as a function of stellar mass and planet semi-major axis.

## 3 Results

We find the existence of the photosynthetic habitable zone (PHZ), shown in green in Figure 3, where photosynthesizing life could exist and therefore leave behind atmospheric biosignatures. 
It occurs where the \(P(I,T)>0\) region (pink in Figure 3) overlaps with the region where liquid water can exist (the Habitable Zone, blue in Figure 3). Figure 3 shows nine scenarios. On the \(y\) axis of the whole plot, atmospheric effects (attenuation and greenhouse) are increasing. On the \(x\)-axis of the whole plot, the quality of the conditions for life are increasing (i.e., the maximum photosynthetic rate is much higher than the baseline respiration rate). The "Excellent" column corresponds to conditions for photosynthesizing lifeforms on Earth with respiration rates at 30% of maximum photosynthesis rates, typical for marine phytoplankton such as _Isochrysis galbana_, _Platformonas scordiformis_, etc (Ippoliti et al., 2016; Yang et al., 2020). As the quality of conditions for life increases, the rate of respiration as a fraction of the maximum photosynthetic rate attainable decreases, resulting in a larger PZ. As atmospheric effects (attenuation and \(\Delta T_{\rm greenhouse}\)) increase to Earth-like levels, interestingly, this decreases the size of the PZ, reducing the parameter space over which biosignatures could be found. This suggests that it may be easier for large-scale photosynthesizing organisms, such as cyanobacterial mats, to occur on planets with more tenuous atmospheres. Overplotted as yellow symbols are planets that spend at least 10% of their orbit in the HZ and have radii \(R_{\rm p}<1.8\)\(\rm R_{E}\), so could have a solid surface, since planets with \(R_{\rm p}\gtrsim 1.8\)\(\rm R_{E}\) are gas-dominated (Lehmer and Catling, 2017; Fulton et al., 2017). Additionally, we also plot 3 recently identified water world candidates, Kepler-138 c and d (Piaulet et al., 2022) and TOI-1452 b (Cadieux et al., 2022). While these fall outside the HZ, Kepler-138 d and TOI-1452 b fall inside the PZ. Although liquid water has not been directly detected on these planets, measurements imply that these planets may be similar to the water-rich icy moons of the solar system, such as Europa or Enceladus. These criteria reduce the \(\sim\)5000 known exoplanets down to 29 planets of interest. Overplotted on Figure 3 is the tidal lock radius (Peale, 1977; Kasting et al., 1993): \[r_{\rm lock}=0.027\left(\frac{P_{0}t}{Q}\right)^{1/6}M_{*}^{1/3}, \tag{15}\] where \(P_{0}\) is the original rotation period of the planet, \(t\) is the age of the system (1 Gyr), \(M_{*}\) is the stellar mass and \(Q^{-1}\) is the solid body plus ocean specific dissipation function. We use \(Q=100\), for the solid line, and \(Q=100\) and \(Q=1000\) for the upper and lower limits of the tidal lock radius. We assume \(P_{0}=13.5\) hours, i.e. the day length of Earth when it was 1 Gyr old. There are several planets that are in or near the PHZ largely regardless of the quality of conditions for life Figure 3: The Photosynthetic Habitable Zone. Region where positive net photosynthesis is possible shown in pink, and habitable zone shown in blue. The overlapping region is where photosynthesis can actually occur, and is named “The photosynthetic habitable zone’, since it is possible that oxygenic photosynthesis, and therefore biosignatures, could exist on planets in this location. Overplotted in yellow markers are planets of interest, i.e. planets in the habitable zone expected to have a solid surface (\(R\lesssim 1.8\) R\({}_{\rm E}\)). Blue markers indicate water world candidates. Earth is shown by white Earth symbol. The tidal lock radius with upper and lower limits is also shown. 
There are several planets that are in or near the PHZ largely regardless of the quality of conditions for life or atmospheric effects: Kepler-452 b, Kepler-1638 b, Kepler-1544 b, and Kepler-62 e and f. These planets should therefore be the most promising for detecting biosignatures. It is also possible that some of these planets with \(R\gtrsim 1.5\)\(\mathrm{R_{E}}\) are water worlds (Luque & Palle, 2022). The least promising candidates are those around the lowest mass host stars, \(M_{*}\lesssim 0.4\)\(\mathrm{M_{\odot}}\) and below, since their position in Figure 3 coincides with the PHZ for less than half of our considered scenarios. As a final result, we posit that meaningful discussions of the habitable zone should account for where photosynthesis is also possible, since almost all life on Earth depends on photosynthesis either directly or indirectly. We therefore suggest replacing the use of the phrase "Habitable Zone" with "Photosynthetic Habitable Zone" when searching for biosignatures. ## 4 Discussion Conventional photosynthesis, as experienced on Earth, takes place during the day via photosynthetically active radiation (PAR) received directly from the Sun. We do not consider here any other scenario, such as starlight PAR, moonlight PAR, planetlight PAR, or speculative biological adaptations. We work under the limiting assumption that life would exist approximately "as we know it". While other scenarios are possible in principle, they require increasingly complex caveats, such as an older universe or photosynthesis only occurring at full moon (see, e.g. Raven & Cockell, 2006). Furthermore, we focus our consideration on atmospheric biosignatures, rather than biological surface features, since we expect atmospheric signatures to be detectable even when the disk-averaged spectrum features, such as the vegetation red edge (Seager et al., 2005), are not (Cockell et al., 2009). Our analysis here intends to show a general trend rather than demarcate absolute boundaries, since the fit parameters (\(\alpha,\beta\) and \(\gamma\)) in the \(P-I\) relation can take a variety of values as long as the curve retains the same functional form: an initial slope, a peak, and a steady decline due to photoinhibition of photosynthesis at high intensity (see, e.g., Platt & Jassby, 1976; Platt et al., 1981; Eilers & Peeters, 1988; Ye et al., 2013). Similarly, \(P-T\) curves may be described in multiple ways as long as the same functional form, shared between species, is obeyed. In light of this, our results should be considered a useful framework rather than hard boundaries. We have also made the assumption that photon attenuation is proportional to \(\Delta T_{\mathrm{greenhouse}}\), motivated by them both being correlated with more dense atmospheres but neglecting any difference in atmospheric composition. Furthermore, atmospheric attenuation is a function of column density, which is a function of both planet mass and planet size. While a super Earth may have a more massive atmosphere than the Earth, a super Earth also has a larger surface area, which results in atmospheric mass scaling more slowly than the increase in planet mass (Elkins-Tanton & Seager, 2008). Additionally, super Earth outgassing rates may be lower than the Earth's (Stamenkovic et al., 2012), so a more rigorous consideration of atmospheric attenuation must determine the role that planet mass plays. It is unclear what effect tidal locking would have on the development of photosynthesizing life on another planet, since much of life on Earth depends on 24-hour circadian cycles to regulate physiological function (Dvornyk et al., 2003).
An absence of periodicity on tidally locked planets would likely have considerable consequences for the evolution of biological regulation in those systems. The night side of the planet could not support photosynthesis since it does not receive PAR. This immediately discounts half of the surface area of the planet. On the other hand, always receiving PAR could potentially increase the rate of net \(\mathrm{O_{2}}\) production, since intensity does not wax and wane during the course of the day. In either case, it seems clear that there is a link between Earth's rotation rate and oxygenation, with longer days associated with higher oxygenation rates (Klatt et al., 2021). Interestingly, our analysis indicates that the PHZ predominantly exists outside the tidal locking radius for all cases, suggesting that the search for life elsewhere in the Universe should be focused around non-tidally locked planets. Photosynthesis takes place on Earth across a range of temperatures. In plants, it is generally constrained to lower temperature ranges (\(10^{\circ}\)C - \(40^{\circ}\)C) before suffering irreversible damage (Berry & Bjorkman, 1980), while in cyanobacteria the preferred range is somewhat higher, with the upper limit for non-thermophiles \(\sim 73^{\circ}\)C (Ward et al., 2012). A few points are worth noting. First, at low temperatures, photosynthesis is both enzyme-limited and phosphate-limited due to a reduction in the availability of phosphate in chloroplasts, while at higher temperatures proteins become denatured. High or low temperatures could also affect the stability and fluidity of the cellular membranes in which photosynthesis machinery components localize, resulting in more or less efficient biochemical reactions at the membrane interface. At moderate temperatures (\(\sim\)10-35\({}^{\circ}\)C), photosynthesis is mostly limited by the rate of \(\mathrm{CO_{2}}\) diffusion. This is another limitation of our work: cyanobacteria exist in mat-like colonies that have a \(z\)-depth, and we have assumed instead that any colony is essentially infinitesimally thin, so we do not need to consider diffusion equations. Another limitation of not considering the \(z\)-depth is that attenuation of photon flux by water occurs (Lingam and Loeb, 2021), affecting light availability to organisms found at different water column depths. Chloroplast-based photosynthesis in terrestrial plants largely depends on chlorophyll a for optimal absorbance of violet and orange light, while cyanobacteria possess additional light-harvesting phytochromes which allow light capture at wavelengths outside of optimal chlorophyll a absorbance (Kehoe, 2010). If these phytoplankton exist solely underwater, the PHZ could therefore move closer to the central star, which may increase or decrease the size of the PHZ. Advanced modelling of microbial benthic ecology would be best suited to this problem, and we leave this to future work. We have also assumed that any extant life in the Universe shares a biochemistry similar enough to photosynthesizing lifeforms on Earth that we would recognise its signatures. Even on Earth, so-called "exotic photosynthesis" exists, such as infrared photosynthesis in anoxygenic photosynthetic organisms (Heath et al., 1999).
This anoxygenic photosynthesis uses hydrogen sulfide instead of water as the reductant, and produces sulphur instead of oxygen as a byproduct, e.g.: \[6\mathrm{CO}_{2}+12\mathrm{H}_{2}\mathrm{S}\xrightarrow{h\nu}\mathrm{C}_{6}\mathrm{H}_{12}\mathrm{O}_{6}+12\,\mathrm{S}+6\mathrm{H}_{2}\mathrm{O}. \tag{16}\] The pigments used to carry out anaerobic photosynthesis are similar to chlorophyll, but have peak absorption in the near-IR due to molecular differences. While significant atmospheric sulphuric acid in this scenario could be considered a biosignature, there is a large risk of false-positive results due to its occurrence in many nonbiological processes (Domagal-Goldman et al., 2011), which is why it is not targeted as a biosignature. In addition to this, anoxygenic photosynthesis is likely an evolutionary precursor to oxygenic photosynthesis, with biogeochemical changes on a terrestrial planet forcing a switch to an oxygen-producing version (Raven, 2007, 2009). One thing that we do not consider here is the effect of planet mass on atmospheric composition and density. For example, a super-Earth outside the HZ that retains its primordial H-He dominated atmosphere could have surface temperatures that are warm enough to host liquid water (Mol Lous et al., 2022). The same could therefore also be true of regions that we have determined to be too cold for net positive photosynthesis. On the other hand, we also determine in this work that less atmospheric attenuation results in a broader PHZ, which counteracts the positive effect of the retained H-He atmosphere. Another limitation of our work regards the interplay between Bond albedo and stellar type (Kopparapu et al., 2013) and planet variables, such as surface temperature, atmospheric composition, and planetary surface type (see, e.g., Joshi and Haberle, 2012; Shields et al., 2013; von Paris et al., 2013; Rushby et al., 2019). This is a complex problem since the Bond albedo is a function of many variables, requiring 1D energy-balance climate models, 3D global circulation models, atmospheric chemistry and composition models and knowledge of land/ocean distribution to calculate the net planetary albedo. This interplay, along with atmospheric composition and density, is investigated in a future work (Hall et al., in prep.). A further interesting avenue of exploration concerns the impact of CO\({}_{2}\) on planetary temperature through the greenhouse effect, along with the impact on photosynthetic rates. An inherent assumption of our work is that photosynthesis is not limited by reduced CO\({}_{2}\) availability, because the fluctuations on Earth are small. The global average concentration of CO\({}_{2}\) today is 400 ppm, and was significantly higher when life first emerged due to Earth's secondary atmosphere. Space-based observatories show the CO\({}_{2}\) concentration varies only at \(\sim\)few ppm levels (Hakkarainen et al., 2016, 2019), whereas rate limitation requires a decrease of \(\sim\)tens of ppm below this 400 ppm level (e.g. Moss, 1962; Gabrielsen, 1948). However, the flip side of this is that we have not explored what this means for atmospheres rich in CO\({}_{2}\), which would be likely to increase the expected surface temperature as a function of instellation. A self-consistent CO\({}_{2}\)-instellation habitable zone should be calculated for this.
Finally, it could be possible that most habitable worlds in the Universe simply have no detectable signs of life (Cockell, 2014), either because they are uninhabited, are too young to have evolved life yet (\(<\) 1 Gyr based on the Earth's fossil record), have biotic chemistry at concentrations too low to detect, or the biotic atmospheric chemistry is indistinguishable from the abiotic. ### Planet rotation periods Cyanobacteria appeared in the fossil record \(\sim\)3.5 Gyr ago, just \(\sim\)1 Gyr after the Earth's formation. The concentration of O\({}_{2}\) remained at primordial values of \(\lesssim\)10\({}^{-3}\) of present atmospheric levels (PALs) until \(\sim\)2.1 Gyr, when a dramatic increase in atmospheric O\({}_{2}\) occurred, the so-called Great Oxidation Event (GOE). Earth's rotation period is currently 24 hours, having been slowed by tidal interaction with the moon, but is likely to have been as low as 6 hours 4 Gyr ago (Lambeck, 1980; Cuk and Stewart, 2012; Bartlett and Stevenson, 2016). Earth's rotation period may therefore have increased by more than a factor of two since the evolution of photosynthesis. Recently, it has been postulated that the GOE occurred when Earth's daylength increased to \(\sim\)16 hours (Klatt et al., 2021). To test this hypothesis, Klatt et al. (2021) performed numerical simulations of the movement of O\({}_{2}\) in cyanobacterial mats for daylengths between 12 and 52 hours, using simulated diel light cycle illumination. They found that longer days resulted in higher net O\({}_{2}\) flux through the cyanobacteria mat, and verified these results by taking measurements from real cyanobacterial colonies in controlled conditions. This led them to conclude that increases in daylength could plausibly have influenced Earth's oxygenation, particularly around key oxidation events, and thus helped to pave the way for the evolution of plants and animals as we know them. At 1 Gyr, Earth's daylength was \(\sim\)13 hours, with gravitational modelling (Bartlett and Stevenson, 2016) predicting a spin-down to 16-hour days by \(\sim\)1.9 Gyr (late Archean). Within the next 0.3 Gyr the atmospheric O\({}_{2}\) concentration increased to \(\sim\)0.1 PAL. Assuming Earth-like biology, the work of Klatt et al. (2021) suggests that a planetary period of \(\gtrsim\)16 hours could be an important factor in producing an oxygen-rich atmosphere to support life as we know it. Unfortunately, the rotation period, or daylength, is measured only for a handful of exoplanets (2M1207 b, PSO J318.5, GQ Lup b and \(\beta\) Pic b), all of which are massive, fast rotators (for a summary see Scholz et al., 2018). An empirical spin-mass relationship, where equatorial velocity is given by \(v_{\rm eq}\propto\sqrt{M}\), is known to fit the solar system planets. However, both Mercury and Venus do not fit this trend due to tidal interactions with the Sun, and the Earth's tidal interactions with the moon also result in deviation from this relationship. The solar system trend is: \[v_{\rm eq}=A(M/\rm M_{J})^{\frac{1}{2}}, \tag{17}\] with \(v_{\rm eq}\) in units of km s\({}^{-1}\) and \(A=13.1\). The rotation period of the planet in seconds is then \[T=\frac{2\pi R}{A}\bigg{(}\frac{M}{\rm M_{J}}\bigg{)}^{-\frac{1}{2}}, \tag{18}\] where \(R\) is the planet radius. If the exoplanet radius is known (or estimated), then the period of the planet in 24-hour days is \[T=0.632\left(\frac{M_{\rm E}}{M}\right)^{\frac{1}{2}}\left(\frac{R}{R_{\rm E}}\right), \tag{19}\] and can therefore be predicted directly.
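As a quick check of Equation 19, the sketch below reproduces the rotation period quoted later for Kepler-452 b; the mass and radius values used are the ones listed in Table 2.

```python
def rotation_period_days(mass_earth, radius_earth):
    """Rotation period in 24-hour days from the empirical trend of Equation 19."""
    return 0.632 * (1.0 / mass_earth) ** 0.5 * radius_earth

# Kepler-452 b with the observed mass (5 M_E) and radius (1.63 R_E) from Table 2
period = rotation_period_days(mass_earth=5.0, radius_earth=1.63)
print(f"{period:.3f} days = {24.0 * period:.1f} hours")  # about 11 hours
```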
However, small-radius planets (as we consider here) generally do not have associated mass measurements, but estimates can be made using a mass-radius relationship. We take the mass and radius values for low-mass planets with an Earth-like composition from Table 1 of Fortney et al. (2007), which gives us a best-fit mass-radius relation of \[R_{\rm p}=M_{\rm p}^{0.27} \tag{20}\] with a high coefficient of determination \(R^{2}=0.98\). We use this to estimate planet mass, given in the fifth column of Table 2. For Kepler-452 b, we calculate the rotation rate using the observed mass and retrieve 11 hours. For all other planets in our sample, we use the mass estimate from Eq. 20. The relationship is shown in Figure 4. Figure 4: Mass-period relationship. Day lengths of planets either measured directly (blue points) or estimated through the empirically determined relationship in Eq. 19 (green and yellow points). Yellow points indicate that mass was also estimated using Eq. 20. Grey solid lines are plots of Equation 21 for a plausible range of planet densities. Horizontal dashed lines mark the range of Earth's daylength predicted for the GOE (Klatt et al., 2021). If longer daylengths are required for atmospheric oxygenation, then our sample of planets may not yet be rotating slowly enough. If, however, their rotation has been slowed due to moons, then these planets may have the right conditions for their own GOE. It is useful to note that, alternatively, if the radius is unknown and the mass is known, then a density can be assumed and we can instead write: \[T=0.632\left(\frac{M_{\rm E}}{M}\right)^{\frac{1}{6}}\left(\frac{\rho_{\rm E}}{\rho}\right)^{\frac{1}{3}}. \tag{21}\] A tidally-locked planet will experience constant daylight on one half of its surface, and constant darkness (or reflected moonlight; Lingam and Loeb, 2020) on the other, and will therefore not experience the diurnal variation in light intensity of non-tidally-locked planets. It is unclear whether this could be helpful or harmful to oxygenic photosynthesis. On the one hand, constant illumination means photosynthesis is always possible as long as all other conditions allow, and on the other hand, dark respiration is not. A key question is therefore: is there an advantage to the light-dark cycle for life? Tang & Vincent (2000) explored this in an experiment probing the effects of daylight length and temperature on arctic cyanobacteria. The total daylength was held constant at 24 hours, and they varied the length of daytime, \(L\), between 8 and 24 hours for three fixed temperatures of 5, 15, and 25\({}^{\circ}\)C. The cyanobacteria growth rates increased with increasing \(L\) at 5\({}^{\circ}\)C, but plateaued with \(L\) at 15\({}^{\circ}\)C and 25\({}^{\circ}\)C, resulting in a reduction in net photosynthesis that was largest for the 24-hour daylight case. Unfortunately, the experimental errors on the measured respiration rates were too large to discriminate between the different conditions. However, other work has shown that dark respiration rates are at their peak shortly after the transition from light to darkness, and steeply decline as the period of darkness increases (Markager et al., 1992). In a similar vein, peak O\({}_{2}\) production (rather than respiration) was found to occur at different times in the 24 hour and 52 hour daylengths of Klatt et al. (2021). Peak O\({}_{2}\) production occurred before noon in the 52 hour daylength and after noon in the 24 hour daylength.
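For planets without a measured mass, the estimated values in Table 2 follow from inverting Equation 20 and inserting the result into Equation 19; a short sketch using the Kepler-1229 b radius from Table 2 is given below.

```python
def mass_from_radius(radius_earth, exponent=0.27):
    """Invert the best-fit mass-radius relation of Equation 20 (R = M^0.27)."""
    return radius_earth ** (1.0 / exponent)

def estimated_day_hours(radius_earth):
    """Estimated day length in hours, chaining Equation 20 into Equation 19."""
    mass = mass_from_radius(radius_earth)
    return 24.0 * 0.632 * (1.0 / mass) ** 0.5 * radius_earth

# Kepler-1229 b (R = 1.40 R_E, no measured mass available)
print(f"M_est = {mass_from_radius(1.40):.2f} M_E, day = {estimated_day_hours(1.40):.2f} hours")
```

This reproduces the estimated mass (3.48 M\({}_{\rm E}\)) and estimated day length (11.39 hours) listed for Kepler-1229 b in Table 2.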
More work is therefore required to determine how tidal locking may impact the photosynthesis-respiration cycle for the tidally locked planets in this work. \begin{table} \begin{tabular}{l c c c c c c c} \hline Planet & a [au] & M\({}_{*}\) [M\({}_{\odot}\)] & M [M\({}_{\rm E}\)] & M\({}_{\rm est.}\) [M\({}_{\rm E}\)] & R [R\({}_{\rm E}\)] & P[days] & day [hrs] & day [hrs] \\ & & & & (\(R=M^{0.27}\)) & & Calculated & Estimated \\ \hline \hline GJ-1061 d & 0.054 & 0.125 & 1.64 & 1.73 & 1.16 & 13.00 & TL & TL \\ GJ-667 Cc & 0.125 & 0.330 & 3.81 & 4.95 & 1.54 & 28.10 & TL & TL \\ K2-72e & 0.106 & 0.270 & 2.21 & 2.57 & 1.29 & 24.20 & TL & TL \\ Kepler-1229 b & 0.290 & 0.430 & - & 3.48 & 1.40 & 86.80 & - & 11.39 \\ Kepler-138 c & 0.091 & 0.535 & 2.3 & 4.60 & 1.51 & 13.78 & TL & TL \\ Kepler-138 d & 0.129 & 0.525 & 2.1 & 2.03 & 1.21 & 23.09 & TL & TL \\ Kepler-1544 b & 0.542 & 0.810 & - & 8.46 & 1.78 & 168.80 & - & 9.28 \\ Kepler-1638 b & 0.745 & 0.970 & - & 10.16 & 1.87 & 259.30 & - & 8.90 \\ Kepler-1649 c & 0.083 & 0.198 & - & 1.24 & 1.06 & 19.50 & - & 14.43 \\ Kepler-1652 b & 0.165 & 0.404 & - & 5.70 & 1.60 & 38.10 & - & 10.16 \\ Kepler-186 f & 0.432 & 0.544 & - & 1.79 & 1.17 & 129.90 & - & 13.27 \\ Kepler-283 c & 0.341 & 0.664 & - & 9.19 & 1.82 & 92.70 & - & 9.11 \\ Kepler-296 f & 0.255 & 0.498 & - & 8.82 & 1.80 & 63.30 & - & 9.19 \\ Kepler-442 b & 0.409 & 0.610 & - & 3.04 & 1.35 & 112.30 & - & 11.75 \\ Kepler-452 b & 1.048 & 1.037 & 5 & 6.11 & 1.63 & 384.80 & 11.057 & 10.00 \\ Kepler-62 e & 0.427 & 0.690 & - & 5.83 & 1.61 & 122.40 & - & 10.11 \\ Kepler-62 f & 0.718 & 0.690 & - & 3.57 & 1.41 & 267.30 & - & 11.32 \\ LHS-1140 b & 0.096 & 0.179 & 6.38 & 6.25 & 1.64 & 24.70 & TL & TL \\ LP 890-9 c & 0.040 & 0.118 & - & 3.21 & 1.37 & 8.46 & - & 11.60 \\ Luyten b & 0.091 & 0.260 & 2.89 & 8.82 & 1.80 & 18.65 & TL & TL \\ Proxima Centauri b & 0.049 & 0.122 & 1.27 & 2.64 & 1.30 & 11.19 & TL & TL \\ Teegarden’s b & 0.026 & 0.093 & 1.05 & 1.08 & 1.02 & 4.91 & TL & TL \\ Teegarden’s c & 0.044 & 0.093 & 1.11 & 1.16 & 1.04 & 11.40 & TL & TL \\ TOI-1452 b & 0.061 & 0.249 & 4.82 & 6.71 & 1.67 & 11.06 & TL & TL \\ TOI-700 d & 0.163 & 0.416 & 1.72 & 1.62 & 1.14 & 37.40 & TL & TL \\ TRAPPIST-1 d & 0.022 & 0.090 & 0.39 & 0.40 & 0.78 & 4.05 & TL & TL \\ TRAPPIST-1 e & 0.029 & 0.090 & 0.69 & 0.73 & 0.92 & 6.10 & TL & TL \\ TRAPPIST-1 f & 0.038 & 0.090 & 1.04 & 1.16 & 1.04 & 9.21 & TL & TL \\ TRAPPIST-1 g & 0.047 & 0.090 & 1.32 & 1.57 & 1.13 & 12.40 & TL & TL \\ \hline \end{tabular} \end{table} Table 2: Properties of candidate exoplanets in the photosynthetic habitable zone assuming excellent conditions. Daylengths are estimated using the empirical relation of Equation 19, which is not valid if the planet is tidally-locked (TL). If the mass is not known, it is estimated using Eq. 20. Values from the NASA exoplanet archive. ## 5 Conclusion We have demonstrated the existence of a photosynthetic habitable zone (PHZ). It is the distance from the host star where the habitable zone overlaps with where photosynthesis is possible. We argue that the search for biosignatures of oxygenic photosynthesizing life forms should be concentrated in the PHZ if we expect photosynthesis in the Universe to proceed in a similar manner to photosynthesis on Earth. The PHZ becomes smaller with increasing atmospheric attenuation and \(\Delta T_{\rm greenhouse}\) (i.e., more dense atmospheres), and so may make life less likely on super-Earths, since their larger gravitational field can hold onto more atmosphere. 
The PHZ also becomes smaller as the conditions for life become less favourable, which we describe as the respiration rate increasing relative to the maximum possible photosynthetic rate. We therefore conclude that the parameter space for signs of life is far narrower than the standard HZ. Out of the nine scenarios we considered, we found TRAPPIST-1 e to be in the photosynthetic habitable zone for one scenario: minimal atmospheric effects and excellent conditions for life. However, it is almost certainly tidally locked, and it is not clear how or if photosynthetic life can proceed on tidally locked planets. Furthermore, the global circulation models of tidally-locked planets by Lobo et al. (2022) find that the HZ is limited to a narrow strip along the terminator for water-limited rocky planets. This reduces both the fraction of planet surface area for liquid water and cyanobacteria mats, and potentially the amount of water for photosynthesis. It may therefore not be the best place to focus the search for signs of life. We identify five planets, Kepler-452 b, Kepler-1638 b, Kepler-1544 b, Kepler-62 e and Kepler-62 f, that are consistently in the PHZ in a variety of environments. For Kepler-452 b, we calculate that it should have a rotation period of 11 hours. The other four planets are estimated to have rotation periods between 9 and 11 hours. We suggest the search for signs of life elsewhere in the Universe should begin in earnest on the candidate planets we have identified. ## 6 Acknowledgements With special thanks from CH to Duncan H. Forgan (Forgan, 2019). CH also thanks Ken Rice for insightful discussion. This study was supported in part by resources and technical expertise from the Georgia Advanced Computing Resource Center, a partnership between the University of Georgia's Office of the Vice President for Research and Office of the Vice President for Information Technology. This work has made use of the NASA Exoplanet Catalogue [https://exoplanets.nasa.gov/discovery/exoplanet-catalog/](https://exoplanets.nasa.gov/discovery/exoplanet-catalog/) and the exoplanet archive [https://exoplanetarchive.ipac.caltech.edu/](https://exoplanetarchive.ipac.caltech.edu/).
2307.16418
DRAW: Defending Camera-shooted RAW against Image Manipulation
RAW files are the initial measurement of scene radiance widely used in most cameras, and the ubiquitously-used RGB images are converted from RAW data through Image Signal Processing (ISP) pipelines. Nowadays, digital images are at risk of being nefariously manipulated. Inspired by the fact that innate immunity is the first line of body defense, we propose DRAW, a novel scheme of defending images against manipulation by protecting their sources, i.e., camera-shooted RAWs. Specifically, we design a lightweight Multi-frequency Partial Fusion Network (MPF-Net) friendly to devices with limited computing resources by frequency learning and partial feature fusion. It introduces invisible watermarks as protective signal into the RAW data. The protection capability can not only be transferred into the rendered RGB images regardless of the applied ISP pipeline, but is also resilient to post-processing operations such as blurring or compression. Once the image is manipulated, we can accurately identify the forged areas with a localization network. Extensive experiments on several famous RAW datasets, e.g., RAISE, FiveK and SIDD, indicate the effectiveness of our method. We hope that this technique can be used in future cameras as an option for image protection, which could effectively restrict image manipulation at the source.
Xiaoxiao Hu, Qichao Ying, Zhenxing Qian, Sheng Li, Xinpeng Zhang
2023-07-31T05:57:41Z
http://arxiv.org/abs/2307.16418v1
# DRAW: Defending Camera-shooted RAW against Image Manipulation ###### Abstract RAW files are the initial measurement of scene radiance widely used in most cameras, and the ubiquitously-used RGB images are converted from RAW data through Image Signal Processing (ISP) pipelines. Nowadays, digital images are at risk of being nefariously manipulated. Inspired by the fact that innate immunity is the first line of body defense, we propose DRAW, a novel scheme of defending images against manipulation by protecting their sources, i.e., camera-shooted RAWs. Specifically, we design a lightweight Multi-frequency Partial Fusion Network (MPF-Net) friendly to devices with limited computing resources by frequency learning and partial feature fusion. It introduces invisible watermarks as protective signal into the RAW data. The protection capability can not only be transferred into the rendered RGB images regardless of the applied ISP pipeline, but is also resilient to post-processing operations such as blurring or compression. Once the image is manipulated, we can accurately identify the forged areas with a localization network. Extensive experiments on several famous RAW datasets, e.g., RAISE, FiveK and SIDD, indicate the effectiveness of our method. We hope that this technique can be used in future cameras as an option for image protection, which could effectively restrict image manipulation at the source. + Footnote †: Xiaoxiao Hu and Qichao Ying contribute equally to this work. +Corresponding author: Zhenxing Qian ([email protected]) ## 1 Introduction In the digital world, the credibility of the famous saying "seeing is believing" is largely at risk since nowadays people can easily manipulate critical content within an image and redistribute the fabricated version via the Internet. Owing to the fact that readers are more susceptible to well-crafted misleading material, fabricated images can be a means for some politicians to sway public opinion. In more severe cases, such fraudulent images can be used to bolster fake news or mislead criminal investigations. Image manipulation detection [11, 55] and localization [12, 75] has been a critical area of research for decades, with the goal of distinguishing manipulated images from authentic ones and locating the manipulated areas. While early methods mainly check the integrity of the images from statistical aspects, e.g., the Photo-Response Non-Uniformity (PRNU) noise [11] and the fixed pattern noise (FPN) [39], the rise of deep networks has greatly strengthened the capability to find traces left by a variety of manipulations [12, 77, 32]. However, the adversary is also continuously evolving both in strength and diversity. For example, recent deep-network-based image editing algorithms [67, 19] are reported to produce highly realistic images with almost no visible artifacts near the edges. Therefore, it remains an open question whether the learned subtle forensic traces will always be present in newly forged images. Also, though some works [75, 76] explicitly handle lossy online transmission scenarios, they still face limited performance against well-crafted forgeries, e.g., inpainting, or lossy image operations, e.g., Gaussian blurring. Inspired by the fact that innate immunity is the first line of body defense and the best weapon to mitigate diseases, safeguarding images against manipulations is an alternative and promising way of deterring malicious attackers.
Indeed, the ubiquitous 8-bit RGB images are not the pristine format for reflecting how we perceive the world. They are converted from RAW files via ISP pipelines. Therefore, we propose DRAW, a proactive image protection scheme that defends camera-shooted RAW data against malicious manipulation in the RGB domain. Figure 1: DRAW improves the performance of image manipulation localization against lossy image operations via imperceptible protective signal injection into RAW files. Specifically, we propose to introduce imperceptible protective signal into the RAW data, which can be transferred into the rendered RGB images, even though various types of ISP pipelines are applied. Once these images are manipulated, the localization networks can accurately localize the forged areas regardless of image post-processing operations such as blurring, compression or color jittering. Besides, a novel Multi-frequency Partial Fusion Network (MPF-Net) is proposed to implement RAW protection, which adopts frequency learning and cross-frequency partial feature fusion to significantly decrease the computational complexity. We illustrate the functionality of DRAW in Fig. 1, which promotes accurate manipulation localization without affecting the visual quality. Extensive experiments on several famous RAW datasets, e.g., RAISE, FiveK and SIDD, prove the imperceptibility, robustness and generalizability of our method. Besides, to compare RAW-domain protection with previous works, we borrow the success of RGB-domain protection [4, 96, 83] as the baseline method for proactive manipulation localization. The results show that DRAW offers a noticeable performance gain and a nontrivial benefit of content-related adaptive embedding. In addition, MPF-Net provides superior performance compared to the classical U-Net [58] architecture with only 20.9% of its memory cost and 0.95% of its parameters. The novel lightweight architecture makes it possible to be integrated into cameras in the future, thereby changing the current situation where digital images can be freely manipulated. The contributions of this paper are threefold, namely: 1. DRAW is the first to propose RAW protection against image manipulation. The corresponding RGB images will carry imperceptible protective signal even though various types of imaging pipelines or lossy image operations are applied. 2. With RAW protection, image manipulation localization networks can better resist lossy image operations such as JPEG compression, blurring and rescaling. 3. A novel lightweight MPF-Net is proposed for integrating RAW protection into cameras in the future, thereby potentially changing the current situation where digital images can be freely manipulated. ## 2 Related Works **Passive Image Manipulation Localization.** Many existing image forensics schemes are designed to detect specific kinds of attacks, e.g., splicing detection [75, 59], copy-moving detection [33, 45] and inpainting detection [99, 42]. In addition, some universal tampering detection schemes [12, 77, 32] exploit universal noise artifacts left by manipulation. Mantra-Net [77] uses fully convolutional networks, Z-Pooling and long short-term memory cells for pixel-wise anomaly detection. MVSS-Net [12] jointly exploits the noise view and the boundary artifact using multi-view feature learning and multi-scale supervision. RGB-N [97] additionally utilizes auto-generated data augmentation for training.
RIML [75] includes adversarial training, where the lossy Online Social Network (OSN) transmission is simulated by modeling noise from different sources. However, these works are still limited in generalization to well-crafted manipulations or heavy lossy operations. **Watermarking for Image Protection.** Many image protection schemes based on watermarking [51, 35, 24, 82] have been proposed. Asnani et al. [4] propose to embed templates into images for more accurate manipulation detection. Zhao et al. [96] embed watermarks as anti-Deepfake labels into the facial identity features. FakeTagger [71] embeds the identity information into the whole facial image, which can be recovered after illegal face swapping. Khachaturov et al. [37] and Yin et al [81] respectively propose to attack inpainting or Super-Resolution (SR) models by forcing them to work abnormally on the targeted images. However, these approaches do not tackle the issue of forgery localization, and many of them cannot combat lossy image operations. We alternatively introduce imperceptible protective signal into RAW data and transfer it into RGB images to aid robust manipulation localization. **Models for Limited Computing Resources.** Classical network architectures for segmentation-based tasks, e.g., U-Net [58] or FPN [47], usually require non-affordable computing resources for many small devices. MobileNet [30] and ShuffleNet [52] are early works on addressing this issue respectively via Depth-wise Separable Convolution (DSConv) and channel split & shuffle. ENet [54] proposes an asymmetric encoder-decoder architecture with early downsampling. SegNet [6] only stores the max-pooling indices of the feature maps and uses them in its decoder network to achieve good performance. Despite substantial efforts made, these networks are either still computationally demanding or sacrifice performance for model size shrinkage. We propose MPF-Net that contains only 20.9% of memory cost and 0.95% of parameters of U-Net yet provides surpassing performance in our task. Figure 2: Typical camera imaging pipeline for RAW data acquisition and subsequent RGB image signal processing. ## 3 Proposed Method ### Approach Fig. 3 depicts the pipeline design of DRAW. We denote the captured RAW data as \(\mathbf{R}\), and use a protection network \(\mathcal{P}\) to transform \(\mathbf{R}\) into the protected RAW, i.e., \(\hat{\mathbf{R}}\). The functionality of \(\mathcal{P}\) is to adaptively embed a transferrable protective signal into \(\hat{\mathbf{R}}\) for robust and accurate image manipulation localization in the RGB domain. Considering the computational limitation of imaging equipment, we use a novel lightweight MPF-Net specified in Section 3.2 to implement \(\mathcal{P}\). Next, we use the ISP layer \(\mathcal{S}\) to render \(\hat{\mathbf{R}}\) into the protected RGB image \(\hat{\mathbf{I}}\). Provided with a number of off-the-shelf deep-network-based ISP algorithms and non-differentiable conventional ISP algorithms, during training, we include a popular conventional method, i.e., LibRaw [1] and two deep-learning methods, i.e., CycleISP [87] and InvISP [78], and leave other ISP algorithms [86, 2] for evaluation. 
To improve generalizability, interpolation is conducted on one network-rendered RGB \(\hat{\mathbf{I}}_{net}\) and one conventional-algorithm-generated RGB \(\hat{\mathbf{I}}_{conv}\) to produce \(\hat{\mathbf{I}}\), i.e., \(\hat{\mathbf{I}}=\omega\cdot\hat{\mathbf{I}}_{conv}+(1-\omega)\cdot\hat{\mathbf{I}}_{net}\), where \(\omega\) is drawn uniformly from \([0,1]\). Afterward, to simulate image redistribution of \(\hat{\mathbf{I}}\), we include the hybrid attack layer \(\mathcal{A}\) to perform manipulation and lossy operations on \(\hat{\mathbf{I}}\). It comprises modules for tampering, color adjustments, distortions (lossy operations) and cropping. In line with typical forgery detection works [12, 75], we consider inpainting, splicing and copy-moving as the three most common types of tampering, which often alter the underlying meaning of an image. In contrast, color adjustment and distortion are often considered benign, yet they can potentially erase traces for manipulation localization. During training, these modules are conditionally performed according to empirical _activation probabilities_ (85%) and in arbitrary order to encourage diversity, e.g., tampering then distorting, cropping then tampering, etc. We denote the attacked images as \(\hat{\mathbf{I}}_{t}\) if the tampering module is activated, or \(\hat{\mathbf{I}}_{nt}\) otherwise. The latter are regarded as authentic images, introduced to explicitly minimize the false alarm rate of DRAW. Detailed implementations of the modules are specified in the supplement. Besides, to close the gap between real and simulated lossy operations and color jittering operations, we add the difference between \(\hat{\mathbf{I}}_{syn}\) and \(\hat{\mathbf{I}}_{rw}\) onto \(\hat{\mathbf{I}}_{syn}\), where \(\hat{\mathbf{I}}_{syn}\) and \(\hat{\mathbf{I}}_{rw}\) respectively denote the synthetically processed and the real-world processed image under the same setting: \(x=\hat{\mathbf{I}}_{syn}+sg(\hat{\mathbf{I}}_{rw}-\hat{\mathbf{I}}_{syn}),\;x\in\{\hat{\mathbf{I}}_{t},\hat{\mathbf{I}}_{nt}\}\), where \(sg\) stands for the stop-gradient operator [7].
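A minimal PyTorch-style sketch of this stop-gradient (straight-through) trick is given below; `simulate_jpeg` and `real_jpeg` in the usage comment are hypothetical stand-ins for one differentiable simulated operation and its real, non-differentiable counterpart, not names from the released implementation.

```python
import torch

def straight_through_attack(I_syn: torch.Tensor, I_rw: torch.Tensor) -> torch.Tensor:
    """Forward pass returns the real-world processed image I_rw, while gradients
    flow back through the differentiable simulation I_syn (x = I_syn + sg(I_rw - I_syn))."""
    return I_syn + (I_rw - I_syn).detach()

# Hypothetical usage, where I_hat is the protected RGB image rendered by the ISP layer:
# x = straight_through_attack(simulate_jpeg(I_hat), real_jpeg(I_hat))
```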
On the recipient's side, we use the localization network \(\mathcal{D}\) to estimate the manipulated region given a suspect image that could be one of \(\hat{\mathbf{I}}_{t}\) or \(\hat{\mathbf{I}}_{nt}\). If it is a manipulated image \(\hat{\mathbf{I}}_{t}\), the predicted mask \(\hat{\mathbf{M}}_{t}\) should be close to the ground-truth \(\mathbf{M}\). Otherwise, it should be close to a zero matrix. DRAW is flexible in the selection of \(\mathcal{D}\), where many off-the-shelf networks can be applied, e.g., DRAW-HRNet [65], DRAW-MVSS [12] or DRAW-RIML [75]. **Objective Loss Functions.** We need to include fidelity terms \(\mathcal{L}_{\mathcal{P}}^{RAW}\) and \(\mathcal{L}_{\mathcal{P}}^{RGB}\) to ensure imperceptible protection. We find that the \(\ell_{1}\) distance is the best in practice to minimize modification compared to many advanced deep-network-based terms, e.g., LPIPS loss [89] and contextual loss [91]. \[\begin{split}\mathcal{L}_{\mathcal{P}}^{RAW}=\mathbb{E}_{\mathbf{R}}\left[\left\|\mathbf{R}-\mathcal{P}\left(\mathbf{R}\right)\right\|_{1}\right],\\ \mathcal{L}_{\mathcal{P}}^{RGB}=\mathbb{E}_{\mathbf{R}}\left[\left\|\mathcal{S}\left(\mathbf{R}\right)-\mathcal{S}\left(\mathcal{P}\left(\mathbf{R}\right)\right)\right\|_{1}\right].\end{split} \tag{1}\] Next, we include localization terms to minimize the Binary Cross Entropy (BCE) losses that respectively compare \(\hat{\mathbf{M}}_{t}\) with \(\mathbf{M}\), and \(\hat{\mathbf{M}}_{nt}\) with a zero matrix. \[\begin{split}L_{\mathcal{D}}^{T}=-\mathbb{E}_{\hat{\mathbf{I}}_{t}}\left[\mathbf{M}\log\left(\mathcal{D}(\hat{\mathbf{I}}_{t})\right)+(1-\mathbf{M})\log\left(1-\mathcal{D}(\hat{\mathbf{I}}_{t})\right)\right],\\ L_{\mathcal{D}}^{NT}=-\mathbb{E}_{\hat{\mathbf{I}}_{nt}}\left[\log\left(1-\mathcal{D}(\hat{\mathbf{I}}_{nt})\right)\right].\end{split} \tag{2}\] The total loss for DRAW is shown in Eq. (3), where \(\alpha,\beta,\gamma,\epsilon\) are empirically-set hyper-parameters. \[\begin{split}\mathcal{L}=\alpha\cdot\mathcal{L}_{\mathcal{P}}^{RAW}+\beta\cdot\mathcal{L}_{\mathcal{P}}^{RGB}+\gamma\cdot\mathcal{L}_{\mathcal{D}}^{T}+\epsilon\cdot\mathcal{L}_{\mathcal{D}}^{NT},\\ \alpha=10,\beta=1,\gamma=0.02,\epsilon=0.01.\end{split} \tag{3}\] Figure 3: **Pipeline design of DRAW. We design a lightweight protection network that embeds imperceptible protective signal in the RAW domain and transfers it into the rendered RGB images. On the recipient's side, the localization network identifies the forged areas.** ### Multi-frequency Partial Fusion Network In order to combat sophisticated image manipulation within resource-limited environments such as cellphones and cameras, it is essential to deploy a lightweight architecture that nonetheless has rich feature extraction capabilities. Fig. 4 illustrates the network design, where we first use a three-level DT-CWT transform to decompose the input into a low-frequency main component and three levels of higher-frequency subbands. Each level consists of six subbands in complex form, representing different degrees of wavelet information. The real and imaginary parts of the subbands are then concatenated. In Fig. 5, we compare the feature pyramid of U-Net to that of DT-CWT. Vanilla convolutions can be less efficient due to the restriction of the receptive field, feature redundancy, and repetition during training. In contrast, DT-CWT provides a strong prior for mitigating these issues, requiring only one layer of separable convolution and yielding richer patterns within representations. Following the initial feature extraction, we apply a "DSConv-LN-GELU" layer to further refine the extracted features, which is short for depth-wise separable convolution [30], Layer Normalization [5] and GELU activation [28]. Next, we cascade sixteen multi-frequency partial fusion blocks in each level for feature refinement and fusion. Each block contains a Half Fourier Convolution (HFC) layer and a Partial Feature Fusion (PFF) layer. Notably, these blocks do not alter either the resolution or the channel number of the features. Then we project the features back into the main component and three levels of subbands using another "DSConv-LN-GELU" layer, which are then transformed back into the RGB domain via iDT-CWT. **Half Fourier Convolution Layer (HFC).** We observe that the features provided by DT-CWT exhibit rich local patterns, whereas the global information representation is lacking.
Considering that Fast Fourier Transform (FFT) is efficient in giving global information about the frequency components of an image [98, 41], we include both vanilla _Conv_ layer and Fast Fourier Transform (FFT) in each HFC to enable simultaneous global and local feature mining. For the HFC layer at level \(i\): \[\begin{split}\textit{HFC}_{i}:\textit{output}&=[ \textit{GB}(\textit{input}_{1}),\textit{LB}(\textit{input}_{2})],\\ \textit{input}&=[\textit{input}_{1},\textit{input} _{2}],\end{split} \tag{4}\] where we evenly split the input tensor by half, send them respectively into the Global Branch (GB) and Local Branch (LB) of the HFC layer, and concatenate the resultant features. GB contains FFT, _Conv_ layer and inverse FFT. LB is composed of a cascade of two vanilla _Conv_ layers. **Partial Feature Fusion Layer (PFF).** On fusing different groups of features, two most commonly-accepted ways are "concatenate-and-reduce" [14, 65] or "attend-to-aggregate" [25, 88]. We propose a novel paradigm of "reserve-attend-and-assemble". Specifically, we split the Figure 4: **Network design of Multi-frequency Partial fusion Network (MPF-Net). It decomposes the input into multi-level subbands and during cross-frequency feature fusion, we preserve a proportion of features learned in the current layer. \(C_{in}=C_{out}=3\) and \(C_{f}=32\).** Figure 5: **Illustration of feature mining respectively using DT-CWT transform and U-Net. DT-CWT requires fewer _Conv_ layers yet the generated features show less redundancy or repetition.** input features into two halves based on a predetermined ratio \(s\) (default 0.25), i.e., \(\mathit{input}_{i}=[\mathit{input}_{i,1},\mathit{input}_{i,2}]\) for PFF at level i. The first half of the multi-level features (\(C_{f}\cdot s\)) are resized into the size of the current level, and then separately reweighed using channel attention (CA). Next, "assemble" is done by pixel-wisely aggregating all groups of reweighed features and concatenating them with the reserved second half (\(C_{f}\cdot~{}(1-s)\)). Our paradigm can potentially mitigate the issue of over-attention on certain frequencies or covariance drift of the preserved representation, especially from shallow layers, caused by residual learning. Furthermore, we only pass higher-frequency subbands into lower levels, which also encourages each level to process unique combinations of frequencies which reduces redundancy. The operations in PFF at level \(i\) can be mathematically defined as follows. \[\mathit{PFF}_{i}:\mathit{output}=[\mathit{input}_{i,2},\sum_{j\leq i}\mathit{ CA}(\mathit{Resize}(\mathit{input}_{j,1}))] \tag{5}\] where _CA_ is composed of a global average pooling layer and a \(1\times 1\) bottleneck convolution. ## 4 Experiments ### Experimental Setups We use RAISE [15] dataset (8156 image pairs) and Canon subset (2997 image pairs) from the FiveK [9] dataset as the training set. Meanwhile, RAISE, Canon subset and Nikon subset (1600 image pairs) from FiveK as well as SIDD dataset [3] are used to evaluate DRAW. We divide them into training sets and test sets at a ratio of 85: 15. We crop each RAW image into non-overlapping sub-images sized \(512\times~{}512\). For quantitative analysis, inspired by [97, 75], we opt to arbitrarily select regions for copy-moving and inpainting and borrow segmentation masks and the sources from MS-COCO [48] dataset for splicing. For qualitative analysis, we also manually manipulate over one hundred protected images and show some of the representative examples in the figures. 
We train our benchmark model by jointly training \(\mathcal{P}\) with HRNet [65] as \(\mathcal{D}\). We then fix \(\mathcal{P}\) and respectively training MVSS [12] and RIML [75] as \(\mathcal{D}\) on top of the protected RGB images. All models are trained with batch size 16 on four distributed NVIDIA RTX 3090 GPUs, and we train the networks for 10 epochs in roughly one day. For gradient descent, we use Adam optimizer with the default hyper-parameters. The learning rate is \(1\times 10^{-4}\). ### Performances **Image Quality Assessment.** Fig. 6 and Table 1 respectively show the qualitative and quantitative results on the imperceptibility of the protection. Besides, we test the overall image quality of protected images using untrained ISP network, namely, Restormer [86], and another conventional ISP, namely, OpenISP [2]. Restormer is originally proposed \begin{table} \begin{tabular}{c|c c|c c|c c} \hline \multirow{2}{*}{Process} & \multicolumn{2}{c|}{\(512\times 512\)} & \multicolumn{2}{c|}{\(256\times 256\)} & \multicolumn{2}{c}{\(1024\times 1024\)} \\ & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\ \hline \([\mathbf{R},\mathbf{R}]\) & 58.43 & - & 61.67 & - & 56.41 & - \\ \([\mathbf{I},\mathbf{\hat{I}}]\) (InvISP) & 45.13 & 0.977 & 46.20 & 0.985 & 45.60 & 0.983 \\ \([\mathbf{I},\mathbf{\hat{I}}]\) (LibRaw) & 41.25 & 0.960 & 41.97 & 0.967 & 41.07 & 0.957 \\ \([\mathbf{\hat{I}},\mathbf{\hat{I}}]\) (Restormer) & 45.75 & 0.980 & 46.24 & 0.984 & 45.03 & 0.977 \\ \([\mathbf{\hat{I}},\mathbf{\hat{I}}]\) (OpenISP) & 40.52 & 0.960 & 41.95 & 0.966 & 40.34 & 0.955 \\ \hline \end{tabular} \end{table} Table 1: **Quantitative analysis on the imperceptibility of RAW protection. \([\mathbf{R},\mathbf{\hat{R}}]\):** RAW file before and after protection. \([\mathbf{I},\mathbf{\hat{I}}]\): RGB file rendered respectively from \(\mathbf{R}\) and \(\mathbf{\hat{R}}\) using different ISP pipelines. Dataset: RAISE and Canon. Figure 6: **Examples of protected images under different ISPs.** Dataset: RAISE. In each test, we apply two ISPs for rendering (upper: LibRAW / OpenISP, middle: InvISP / CycleISP, lower: OpenISP / InvISP). The RAW images are visualized through bilinear demosaicing. for image restoration, but we find that the transformer-based architecture also shows excellent performance on RGB image rendering. OpenISP is another popular open-source ISP pipeline apart from LibRaw, and we customize the pipeline by applying the most essential modules. We can observe little artifact from the protected version of RAW data and RGB. From the augmented difference, DRAW imperceptibly introduce content-related local patterns, which function like digital _locks_ onto the pixels and forgery localization is conducted by observing the integrity of these _locks_. **Robustness and Accuracy of Manipulation Localization.** We conduct comprehensive experiments on RAISE and Canon datasets under different lossy operations. The qualitative and quantitative comparisons in terms of the Recall, F1 and IoU in the pixel domain are reported in Fig. 7, Fig. 8, Table 2 and Table 3. The results under image color adjustment operations and combined attacks are included in the supplement. We find that for DRAW-HRNet, although the images are manipulated by diverse lossy operations, we succeed in localizing the tampered areas. If there are no lossy operations, the F1 scores are in most cases above 0.8. Fig. 7 further provides exampled image manipulation localization results of DRAW-HRNet under different lossy operations. 
Next, for fair comparison with previous arts, we fine-tune MVSS and RIML on RAISE and Canon dataset using the mechanisms proposed in the corresponding papers yet additionally considering _splicing_, _copy-moving_ and _inpainting_. When heavy image lossy operations are present, MVSS fails to detect the tampered content. While RIML exhibits better robustness due to OSN transmission simulation, its performances under blurring or inpainting attacks are still restricted. However, training these detectors based on the protected images significantly improves their robustness. **Generalizability.** We conduct additional experiments where \(\mathcal{P}\) trained on RAISE dataset is applied on different RAW datasets, i.e., Canon and SIDD, and untrained ISP pipelines, i.e., OpenISP and Restormer. Table 4 shows that raw protection can generalize to untrained cameras and ISP \begin{table} \begin{tabular}{|c|c|c c|c c|c c|c c|c c|c c|c c|c c|c c|} \hline \multirow{2}{*}{} & \multirow{2}{*}{Models} & \multicolumn{3}{c|}{No attack} & \multicolumn{3}{c|}{Rescaling} & \multicolumn{3}{c|}{AWGN} & \multicolumn{3}{c|}{JPEG90} & \multicolumn{3}{c|}{JPEG70} & \multicolumn{3}{c|}{Med. Bur} & \multicolumn{3}{c|}{GBlur} \\ \cline{3-19} & & \multicolumn{3}{c|}{Rec.} & \multicolumn{1}{c|}{F1} & IoU & Rec. & F1 & IoU & Rec. & F1 & IoU & Rec. & F1 & IoU & Rec. & F1 & IoU & Rec. & F1 & IoU \\ \hline \multirow{9}{*}{} & MVSS* &.908 &.725 &.597 &.715 &.609 &.470 & **.954** &.688 &.547 & **.944** &.627 &.481 & **.915** &.565 &.415 &.869 &.695 &.561 &.181 &.211 &.138 \\ & RIML* & **.941** & **.949** & **.908** &.732 &.795 &.702 & 900 &.918 &.863 &.869 &.892 &.821 &.777 &.818 &.721 &.900 &.918 &.857 &.096 &.142 &.094 \\ & DRAW-WINS &.867 &.874 &.793 &.553 &.636 &.514 &.886 &.854 &.764 &.878 &.856 &.767 &.820 &.789 &.680 &.732 &.770 &.658 &.320 &.419 &.301 \\ & DRAW-HRML &.897 &.926 &.876 &.877 &.910 &.856 &.928 & **.946** & **.905** &.913 &.932 &.884 &.889 & **.909** & **.849** & **.917** &.939 & **.893** & **.556** & **.839** & **.544** \\ & DRAW-HRNet &.936 &.947 &.903 & **.922** & **.934** & **.884** &.929 &.934 &.883 &.933 &.938 & **.885** &.902 &.861 &.776 & **.927** & **.940** &.891 &.552 &.638 &.523 &.632 \\ \hline \multirow{9}{*}{} & MVSS* &.833 &.781 &.703 &.677 &.636 &.544 &.861 &.755 &.668 &.771 &.627 &.527 &.653 &.471 &.366 &.795 &.731 &.640 &.339 &.336 &.258 \\ & RIML* &.888 &.889 &.856 &.774 &.793 &.737 &.896 &.895 &.861 &.829 &.835 &.788 &.694 &.719 &.657 &.850 &.856 &.811 &.557 &.572 &.493 \\ & DRAW-MVSS &.901 &.893 &.857 &.839 &.836 &.780 &.915 &.890 &.850 &.862 &.842 &.793 &.804 &.767 &.706 &.781 &.851 &.803 &.631 &.657 &.582 \\ & DRAW-HML &.915 &.925 &.910 &.875 &.895 &.868 &.906 &.918 &.899 &.884 &.899 &.874 &.845 &.866 &.829 &.897 &.910 &.888 &.774 &.811 &.768 \\ & DRAW-HRNet & **.969** & **.970** & **.959** & **.960** & **.956** & **.937** & **.962** & **.957** & **.943** & **.955** & **.951** & **.932** & **.916** & **.884** & **.839** & **.958** & **.955** & **.939** & **.915** & **.920** & **.885** \\ \hline \multirow{9}{*}{} & MVSS* &.259 &.229 &.172 &.101 &.062 &.039 &.404 &.360 &.263 &.180 &.090 &.054 &.212 &.097 &.058 &.088 &.050 &.030 &.085 &.043 &.026 \\ & RIML* &.126 &.140 &.097 &.035 &.047 &.030 &.132 &.155 &.113 &.014 &.020 &.013 &.001 &.001 &.001 &.037 &.043 &.026 &.068 &.077 &.048 \\ \cline{1-1} & DRAW-MVSS &.737 &.752 &.672 &.657 &.682 &.588 &.771 &.756 &.667 &.617 &.645 &.546 & **.515** & **.536** & **.434** &.567 &.595 &.497 &.514 &.561 &.463 \\ \cline{1-1} & DRAW-HMM &.663 &.716 &.656 
&.457 &.518 &.452 &.667 &.718 &.654 &.348 &.411 &.342 &.091 &.121 &.089 &.366 &.423 &.360 &.284 &.338 &.281 \\ \cline{1-1} & DRAW-HRNet & **.776** & **.791** & **.735** & **.754** & **.760** & **.685** & **.788** & **.771** & **.697** & **.719** & **.714** & **.625** &.468 &.454 &.346 & **.732** & **.735** & **.467** & **.686** & **.704** & **.618** \\ \hline \end{tabular} \end{table} Table 2: **Average performance of different methods on forgery localization.** Dataset: RAISE. The best performances are highlighted in bold type. *: open-source pretrained models finetuned on original RAISE images with _copy-moving_, _splicing_ and _inpainting_. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{} & \multirow{2}{*}{Models} & \multicolumn{3}{c|}{No attack} & \multicolumn{3}{c|}{Rescaling} & \multicolumn{3}{c|}{JPEG70} & \multicolumn{3}{c|}{GBlur} \\ \cline{3-19} pipelines while preserving promising detection capacity. To justify the generalizability to lossy transmission, we randomly handcraft 150 manipulated images, upload them onto several famous OSNs and download them for detection. Also, we test the performance against dual JPEG and salt & pepper attack (\(p=5\%\)) which are untrained types for DRAW. As shown in Table 5, DRAW can effectively resist lossy OSN transmission, and its protection remains valuable against unknown lossy operations. **Computational Complexity.** We compare the computational requirements of MPF-Net in Table 6 with SegNet [6], ShuffleNet [52], U-Net [58] and ENet [54], which are famous lightweight models for image segmentation. MPF-Net requires lower computing resources, e.g, only 20.9% in memory cost and 0.95% in parameters compared to the classical U-Net. ### Baseline Comparisons Previous techniques in proactive image forgery detection, e.g., tag retrieval [71] or template matching [4], are not suitable for image manipulation localization. Moreover, Ying et al. [83]'s method additionally considers image self-recovery, which inevitably includes much heavier protective signal. Therefore, we alternatively build two baseline methods that respectively apply pure robust training using our proposed attack layer and apply RGB-domain protection. In the tests, MVSS is employed as localization network. The quantitative comparison results are reported in Table 7. Further details regarding the experimental settings for the two baseline methods are included in the supplement. **RAW Protection vs Pure Robust Training.** Our proposed robust training mechanism reflected in the attack layer is different from that proposed in RIML. Specifically, we render the unprotected RAW files \(\mathbf{R}\) using \(\mathcal{S}\), which are then attacked by \(\mathcal{A}\). We see that the introduction of robust training can help boost the performance of MVSS. However, the overall performance is still worse than further applying RAW protection to aid localization. In severe degrading cases such as blurring, the performance gap between RAW protection and robust training without protection regarding F1 score is more than ten percent. **RAW protection vs RGB protection.** For fair comparison, we regulate that the overall PSNR on RGB images before and after RGB protection should be above 40 dB, in line with the criterion in Table 1. We conduct qualitative experiment in Fig 8 to evaluate the effectiveness of image protection. According to the experimental results, RGB protection cannot aid robust manipulation localization if the magnitude of RGB modification is restricted. 
We also grayscale the augmented injected signal for better visualization and found that signal injected by RAW protection is more adaptive in magnitude to the image contents. One pos \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Forgery} & \multicolumn{2}{c|}{SegNet [6]} & ShuffleNet [52] & U-Net [58] & ENet [54] & MPF-Net \\ \hline Params & 29.5M & 0.94M & 26.35M & 0.36M & 0.25M \\ FLOPS & 0.56T & 22.9G & 0.22T & 2.34G & 7.39G \\ Memory & 465MB & 390MB & 767MB & 46MB & 160MB \\ \hline \hline \end{tabular} \end{table} Table 6: **Comparison of computational cost among lightweight image-to-image-translation or segmentation networks.** \begin{table} \begin{tabular}{c|c|c|c c c c c} \hline \hline \multicolumn{2}{c|}{Test Item} & \multicolumn{1}{c|}{Forgery} & NoAtk & Rescale & JPEG70 & M-Blur & G-Blur \\ \hline \multirow{4}{*}{Compiler} & \multirow{2}{*}{OpenISP} & Spli. &.929 &.910 &.837 &.933 &.620 \\ & & Copy. &.941 &.919 &.843 &.941 &.880 \\ & & Inpa. &.850 &.820 &.451 &.765 &.756 \\ \cline{2-8} & \multirow{4}{*}{Restormer} & Spli. &.946 &.936 &.863 &.941 &.648 \\ & & Copy. &.961 &.947 &.871 &.948 &.904 \\ & & Inpa. &.906 &.833 &.487 &.789 &.759 \\ \hline \multirow{4}{*}{Compiler} & \multirow{4}{*}{Canon} & Spli. &.936 &.925 &.845 &.931 &.596 \\ & & Copy. &.957 &.930 &.859 &.946 &.881 \\ & & Inpa. &.805 &.732 &.486 &.710 &.706 \\ \cline{1-1} \cline{2-8} & \multirow{4}{*}{SIDD} & Spli. &.928 &.909 &.832 &.911 &.574 \\ \cline{1-1} & & Copy. &.967 &.965 &.891 &.954 &.880 \\ \cline{1-1} & & Inpa. &.686 &.628 &.400 &.574 &.554 \\ \hline \hline \end{tabular} \end{table} Table 4: **Generalizability to untrained ISP pipelines or datasets.**\(\mathcal{P}\) and \(\mathcal{D}\) are trained on RAISE. Figure 8: **Qualitative analysis on performance between passive localization without image protection, with RGB protection and with RAW protection.** Dataset: RAISE. \(\mathcal{D}\): MVSS\({}^{*}\) (upper), RIML\({}^{*}\) (lower). Type: copy-moving (upper), inpainting (lower). \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Forgery} & \multicolumn{2}{c|}{S\&P} & Dual JPEG & Facebook & Weibo & \multicolumn{1}{c}{WeChat} \\ \cline{2-9} & F1 & IoU & F1 & IoU & F1 & IoU & F1 & IoU & F1 & IoU \\ \hline splicing & 839 &.885 &.657 &.683 &.917 &.920 &.902 &.897 &.763 &.728 \\ copymove & 854 &.850 &.692 &.729 &.905 &.910 &.859 &.870 &.637 &.688 \\ impainting &.687 &.711 &.377 &.423 &.665 &.598 &.623 &.577 &.410 &.355 \\ \hline \hline \end{tabular} \end{table} Table 5: **Generalizability to lossy transmission and untrained perturbations.** Dataset: RAISE. sible reason is that the densely-predicting task requires hiding more information than binary image forgery classification task, making it struggle to maintain high fidelity of the original image. In comparison, RAW protection can adaptively introduce protection with the help of content-related procedures, e.g., demosaicing and noise reduction, within the subsequent ISP algorithms that suppress unwanted artifacts and biases. Theoretically, RAW data modification enjoys a much larger search space that allows transformations from the original image into another image with high density upon sampling. ### Ablation Studies Table 8 and Fig. 9 respectively show the quantitative and qualitative results of ablation studies. 
In each test, we regulate that the averaged PSNR between \(\mathbf{I}\) and \(\hat{\mathbf{I}}\), with the ISP pipelines evenly applied, should be within the range of 41-43 dB, to ensure imperceptible image protection.

**Substituting the architecture of \(\mathcal{P}\).** We first test whether using U-Net with a similar amount of parameters, or ENet [54], as \(\mathcal{P}\) can achieve similar performance on splicing detection. First, although ENet contains a similar amount of parameters to MPF-Net, the image manipulation localization performance with ENet as \(\mathcal{P}\) is not satisfactory. Second, although U-Net with _DSConv_ provides a much better result, the performance is still worse than our benchmark because the channel numbers within each layer are restricted to 48 to save computational complexity.

**Impact of components in MPF-Net.** The most noticeable difference between MPF-Net and previous U-shaped networks is that feature disentanglement can be better ensured even with fewer parameters. To verify this, we respectively replace the HFC layer and the PFF layer with typical alternatives, i.e., vanilla convolution and channel-wise concatenation. The resulting performances are nearly 5-10 points weaker than the full MPF-Net setup. First, DT-CWT is a shift-invariant wavelet transform that comes with limited redundancy. Second, partial feature fusion and partial connection are more flexible: the design explicitly keeps some of the features extracted from the current level and directly feeds them into the subsequent block. Therefore, the input features differ across levels, which encourages feature disentanglement.

**Impact of pipeline design.** We also test the settings where the image distortion module or the color adjustment module is removed from the pipeline during training. As expected, the scheme then lacks generalizability in overall robustness, because there are not enough random processes to simulate real-world conditions. Besides, not introducing the difference between real-world and simulated attacks, or using only one ISP surrogate model, also impairs the overall performance.

## 5 Conclusions

We present DRAW, which adds an imperceptible protective signal to RAW data against image manipulation. The protection can be transferred into RGB images and resists lossy operations. Extensive experiments on typical RAW datasets prove the effectiveness of DRAW.

**Acknowledgment.** This work was supported by the National Natural Science Foundation of China under Grants U20B2051, U1936214, U22B2047, 62072114 and U20A20178.
Table 7: **Comparison with baseline methods on RAISE.** We verify the importance of RAW protection by comparing the results with those of pure robust training using \(\mathcal{A}\) and direct RGB protection. \(\mathcal{P}^{-}\): using \(\mathcal{P}\) for RGB protection. \(\mathcal{D}\): MVSS\({}^{*}\). F1 and IoU are reported under Rescaling, JPEG70, Median Blur and Gaussian Blur.

| Test | F1 (NoAtk) | F1 (JPEG70) | F1 (M-Blur) |
|---|---|---|---|
| \(\mathcal{P}\) using U-Net\({}^{1}\) | .877 | .769 | .535 |
| \(\mathcal{P}\) using ENet | .324 | .137 | .092 |
| MPF-Net w/o HFC | .844 | .710 | .602 |
| MPF-Net w/o DT-CWT | .852 | .751 | .626 |
| MPF-Net w/o PFF | .827 | .712 | .667 |
| w/o diff from real attack | .842 | .566 | .502 |
| using only one ISP surrogate | .648 | .455 | .267 |
| w/o Image Distortion Module | .929 | .245 | .116 |
| w/o Color Adjustment Module | .814 | .759 | .641 |
| Full implementation of **DRAW** | .929 | **.838** | **.696** |

Table 8: **Ablation study on DRAW on Nikon using splicing attack.** 1: replacing _Conv_ layers with _DSConv_.

Figure 9: **Examples of ablation studies of DRAW.** We observe that either replacing MPF-Net with U-Net using _DSConv_ or removing the HFC module results in decreased performance. Upper: inpainting + JPEG80. Lower: copy-moving + median-blur.
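For reference, the F1 and IoU values reported throughout Tables 2-8 are standard mask-level localization metrics. The snippet below is a minimal sketch of how such per-image scores are commonly computed from a predicted and a ground-truth manipulation mask; it is an illustration using the usual pixel-level definitions, not code released with DRAW, and the paper may use a slightly different averaging convention.

```python
import numpy as np

def localization_scores(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-8):
    """Per-image F1 and IoU between binary manipulation masks (True = manipulated pixel)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)

    tp = np.logical_and(pred, gt).sum()    # correctly localized forged pixels
    fp = np.logical_and(pred, ~gt).sum()   # authentic pixels flagged as forged
    fn = np.logical_and(~pred, gt).sum()   # forged pixels that were missed

    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)
    return f1, iou

# Toy usage: a 4x4 image where the prediction covers 3 of the 4 forged pixels.
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:2] = True; pred[1, 2] = True
print(localization_scores(pred, gt))  # (~0.857, 0.75)
```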
2309.13782
On the Computational Benefit of Multimodal Learning
Human perception inherently operates in a multimodal manner. Similarly, as machines interpret the empirical world, their learning processes ought to be multimodal. The recent, remarkable successes in empirical multimodal learning underscore the significance of understanding this paradigm. Yet, a solid theoretical foundation for multimodal learning has eluded the field for some time. While a recent study by Lu (2023) has shown the superior sample complexity of multimodal learning compared to its unimodal counterpart, another basic question remains: does multimodal learning also offer computational advantages over unimodal learning? This work initiates a study on the computational benefit of multimodal learning. We demonstrate that, under certain conditions, multimodal learning can outpace unimodal learning exponentially in terms of computation. Specifically, we present a learning task that is NP-hard for unimodal learning but is solvable in polynomial time by a multimodal algorithm. Our construction is based on a novel modification to the intersection of two half-spaces problem.
Zhou Lu
2023-09-25T00:20:50Z
http://arxiv.org/abs/2309.13782v2
# On the Computational Benef of Multimodal Learning ###### Abstract Human perception inherently operates in a multimodal manner. Similarly, as machines interpret the empirical world, their learning processes ought to be multimodal. The recent, remarkable successes in empirical multimodal learning underscore the significance of understanding this paradigm. Yet, a solid theoretical foundation for multimodal learning has eluded the field for some time. While a recent study by [11] has shown the superior sample complexity of multimodal learning compared to its unimodal counterpart, another basic question remains: does multimodal learning also offer computational advantages over unimodal learning? This work initiates a study on the computational benefit of multimodal learning. We demonstrate that, under certain conditions, multimodal learning can outpace unimodal learning exponentially in terms of computation. Specifically, we present a learning task that is NP-hard for unimodal learning but is solvable in polynomial time by a multimodal algorithm. Our construction is based on a novel modification to the intersection of two half-spaces problem. ## 1 Introduction At the heart of human perception lies multimodality. This capability enables us to perceive and interrelate different facets of the same empirical object. It's particularly important during the infantile stage of human development, where it helps unify disparate symbols, fostering comprehensive cognition as a foundation for adulthood. The analogy of raising a child in a "room of text" alone highlights the limitations of a unimodal approach; it's bound to be counterproductive. In the realm of machine learning, multimodality plays a role analogous to its significance in human cognition. Here, we view machine learning as the machine's process of perception. Multimodal learning entails accumulating vast amounts of training data across various modalities and subsequently deploying the trained model to handle new unimodal tasks. This learning progression mirrors the transition from infancy to adulthood in humans. Empirical studies have consistently shown that models trained using multiple modalities often surpass finely-tuned unimodal models, even when evaluated on new unimodal data. In spite of notable empirical successes, like Gato [15] and GPT-4 [13], the theoretical explanations of multimodal learning remain relatively underexplored. Thus, establishing a solid theoretical foundation becomes imperative. A recent study by [11] set the stage for a broader understanding of the statistical advantages of multimodal learning. The research showed that multimodal learning achieves superior generalization bounds compared to unimodal learning, especially when the data exhibits both connection and heterogeneity. However, the question arose: does multimodal learning also present computational advantages? Our work provides an affirmative answer. We show a computational separation between multimodal and unimodal learning. Specifically, we introduce a learning task, rooted in the intersection of two half-spaces problem, which poses an NP-hard challenge for any unimodal learning algorithm. Yet, this very task yields to a polynomial solution under a multimodal learning paradigm. This dichotomy demonstrates the potential exponential computational advantage of multimodal learning over its unimodal counterpart. Coupled with the statistical insights from [11], our findings further illuminate the vast potential of multimodal learning. 
### Related Works **Theoretical Multimodal Learning**: despite the empirical success of multimodal learning, a cohesive theoretical foundation was long missing in this area. Most existing theoretical findings are bound by specific assumptions and contexts. For instance, studies such as [21, 1, 5, 17] navigate multimodal learning within a multi-view framework, operating under the assumption that individual modalities are, in isolation, adequate for predictions. [18, 9] delve into algorithms pivoting on information-theoretical relationships across modalities. [16] consider the specific problem of the benefit of contrastive loss in multimodal learning with a linear data-generating model. [7] studies the generalization ability of multimodal learning in estimating the latent space representation. A recent work [11] proposes a broad-based theory on the statistical guarantee of multimodal learning. They prove that multimodal learning admits an \(O(\sqrt{m})\) improvement in generalization error over unimodal learning. This is achieved by dissecting the learning of the composition of two hypotheses, where the sum of complexities of the hypotheses is markedly smaller than that of their composition. Additionally, they pinpoint connection and heterogeneity amidst modalities as the two pivotal elements propelling these statistical advantages of multimodal learning. **Empirical Multimodal Learning**: applications of multimodal learning can be traced back to the last century, aiming at combining vision and audio data to improve the performance of speech recognition [20, 12]. As the field evolved, multimodal learning carved a niche in multimedia, enhancing capabilities in indexing and search functionalities [4, 10]. Recently, there is a trend in applying multimodal learning in deep learning practices, including modality generation [3, 6, 14] and large-scale generalist models [15, 13]. A consistently observed empirical phenomenon is that a multimodal model is able to outperform a finely-tuned unimodal model, even on unimodal population data. ## 2 Setting In this section, we delineate the setup of multimodal learning and essential background on the intersection of two half-spaces problem. ### Multimodal Learning Setup In this paper, we restrict our focus to the fundamental, yet non-trivial, scenario of two modalities for a clear exposition, adopting the setup of [11]. Formally, the multimodal learning classification framework encompasses two modalities, denoted as \(\mathcal{X},\mathcal{Y}\subset\mathbb{R}^{n}\), and a label space \(\mathcal{Z}=\{\pm\}\). Consequently, every data point can be represented as a tuple \((x,y,z)\). Given a hypothesis class \(\mathcal{H}\) and a training dataset \((X,Y,Z)\) with \(m\) data points \((x_{i},y_{i},z_{i})\), our aim in (proper) learning from \((X,Y,Z)\) is to output a hypothesis \(h\in\mathcal{H}\), that minimizes the empirical risk: \[\ell_{emp}=\frac{\sum_{i=1}^{m}\mathbf{1}_{h(x_{i},y_{i})\neq z_{i}}}{m}.\] When each data point \((x,y,z)\) adheres to a specific data distribution \(D\) over \((\mathcal{X},\mathcal{Y},\mathcal{Z})\), the goal of (properly) PAC-learning \((X,Y,Z)\) is to output a hypothesis \(h\in\mathcal{H}\), such that the population risk \[\ell_{pop}=\mathbb{E}_{(x,y,z)\sim D}[\mathbf{1}_{h(x,y)\neq z}]\] is small with high probability. In addition, we mandate a bijective mapping between \(x,y\) for any potential data point \((x,y,z)\). 
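As a quick illustration of these risk definitions, the empirical risk is simply an averaged 0-1 loss over the training tuples. The sketch below uses hypothetical data and a hypothetical hypothesis and is not tied to any construction in this paper.

```python
import numpy as np

def empirical_risk(h, X, Y, Z):
    """Averaged 0-1 loss of a multimodal hypothesis h(x, y) over m training tuples."""
    predictions = np.array([h(x, y) for x, y in zip(X, Y)])
    return float(np.mean(predictions != Z))

# Toy usage: 5 data points in two 3-dimensional modalities, labels in {+1, -1}.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
Z = np.sign(X[:, 0] + Y[:, 0])
h = lambda x, y: np.sign(x[0] + y[0])   # a hypothesis that uses both modalities
print(empirical_risk(h, X, Y, Z))        # 0.0 by construction
```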
For brevity, we occasionally write \((\mathcal{X},\mathcal{Y},\mathcal{Z})\) to denote the learning problem when it is clear from the context. The unimodal learning problems \((\mathcal{X},\mathcal{Z})\) and \((\mathcal{Y},\mathcal{Z})\) can be defined in a similar way, in which the modality \(y\) or \(x\) is masked, respectively. In learning \((\mathcal{X},\mathcal{Z})\), we are given a hypothesis class \(\mathcal{H}\) and a set \((X,Z)\) of training data with \(m\) data points \((x_{i},z_{i})\). The empirical risk and the population risk are then defined respectively as

\[\ell_{emp}=\frac{\sum_{i=1}^{m}\mathbf{1}_{h(x_{i})\neq z_{i}}}{m},\ \ \ell_{pop}=\mathbb{E}_{(x,y,z)\sim D}[\mathbf{1}_{h(x)\neq z}].\]

### Intersection of Two Half-spaces

In our quest to demonstrate a computational separation between multimodal and unimodal learning, we sought to architect a specific learning challenge that is NP-hard for unimodal learning, but for which an efficient multimodal solution exists. A candidate for such a problem is the 'intersection of two half-spaces', formally defined below:

**Definition 1** (Intersection of two half-spaces).: An instance of **IntHS** is a set of points in \(\mathbb{R}^{n}\) each labeled either '+' or '-' and the goal is to find an intersection of two half-spaces which correctly classifies the maximum number of points, where a '+' point is classified correctly if it lies inside the intersection and a '-' point is classified correctly if it lies outside of it.

Previous work has shown that PAC-learning this intersection is inherently NP-hard, even in the realizable setting, as encapsulated in the following result:

**Theorem 2** ([8]).: _Let \(\ell\) be any fixed integer and \(\epsilon>0\) be an arbitrarily small constant. Then, given a set of labeled points in \(\mathbb{R}^{n}\) with a guarantee that there is an intersection of two half-spaces that classifies all the points correctly, there is no polynomial time algorithm to find a function \(f\) of up to \(\ell\) linear threshold functions that classifies a \(\frac{1}{2}+\epsilon\) fraction of points correctly, unless NP = RP._

A slightly weaker version of the above result, which will be of use, is the following:

**Proposition 3**.: _Let \(\epsilon>0\) be an arbitrarily small constant. Then, given a set of labeled points in \(\mathbb{R}^{n}\) with a guarantee that there is an intersection of two half-spaces that classifies all the points correctly, there is no polynomial time algorithm to find an intersection of two half-spaces that classifies a \(\frac{1}{2}+\epsilon\) fraction of points correctly, unless NP = RP._

It is clear that Proposition 3 is a direct consequence of Theorem 2, given that an intersection of two half-spaces naturally translates to \(\ell\) linear threshold functions with \(\ell=2\). Throughout this paper we only consider the case of proper learning, with our hypothesis class including only intersections of two half-spaces.

## 3 A Computational Separation between Multimodal and Unimodal Learning

To demonstrate the computational benefit of multimodal learning, we present an instance in which both unimodal learning problems \((\mathcal{X},\mathcal{Z})\) and \((\mathcal{Y},\mathcal{Z})\) are NP-hard, while the multimodal learning problem \((\mathcal{X},\mathcal{Y},\mathcal{Z})\) can be solved efficiently.
In particular, we require the existence of a bijective mapping \(f:\mathcal{X}\rightarrow\mathcal{Y}\) satisfying \(y=f(x)\) for any data point \((x,y,z)\in(\mathcal{X},\mathcal{Y},\mathcal{Z})\), so that the hardness result is purely computational. The task of constructing such an instance can be decomposed into three steps 1. We start by setting \((\mathcal{X},\mathcal{Z})\) as a NP-hard problem, in this case, an instance of **IntHS**. 2. Based on \((\mathcal{X},\mathcal{Z})\), we construct a bijective mapping between \(x,y\), to obtain a new NP-hard problem \((\mathcal{Y},\mathcal{Z})\) by preserving the **IntHS** structure. 3. The bijective mapping should be designed carefully, such that the multimodal problem \((\mathcal{X},\mathcal{Y},\mathcal{Z})\) can be solved efficiently. Below we describe the construction of the instance and the main idea behind. A detailed proof is provided in the next section. **Step 1:** We set one of the unimodal learning problem, say \((\mathcal{X},\mathcal{Z})\), as an instance of **IntHS**. We denote any problem of **IntHS** by \(H_{1}\cap H_{2}\) with halfspaces \(H_{1},H_{2}\) in \(\mathbb{R}^{n}\), where each \(H_{i}=(x|r_{i}^{\top}x\leq c_{i})\) is determined by the unit vector \(r_{i}\) and \(c_{i}\in\mathbb{R}\). **Step 2:** A critical observation is that, any **IntHS** problem \(H_{1}\cap H_{2}\) can be transformed into a new **IntHS** problem by applying a coordinate change, under which each \(x\) is mapped to a new point with the corresponding \(z\) remaining the same. Denote \(Q\in\mathbb{R}^{n\times n}\) as any orthogonal matrix, we obtain \(\hat{H}_{1}\cap\hat{H}_{2}\) where \(\hat{H}_{i}=(x|\hat{r}_{i}^{\top}x\leq c_{i})\) by setting \(\hat{r}_{i}=Qr_{i}\). Let \(y=Qx\), we create a new NP-hard unimodal problem \((\mathcal{Y},\mathcal{Z})\), as \(Q\) defines a bijective mapping from the set of all **IntHS** problems to itself. **Step 3:** It remains unclear how the multimodal problem \((\mathcal{X},\mathcal{Y},\mathcal{Z})\) can be easy to learn. Our strategy is to design a special \(Q\) for each \(H_{1}\cap H_{2}\), by encoding the information of \(H_{1}\cap H_{2}\) into the transformation \(Q\). Ideally, with \(n\) linearly-independent \(x_{i}\), we can recover the matrix \(Q\) by basic linear algebra. With the exact values of \(r_{1},r_{2}\) in hand, we get \(c_{1},c_{2}\) by listing the distances from all \(x\) to the hyperplane \(r_{i}^{\top}x=0\) in \(O(mn^{2})\) time. The obtained classifier achieves zero loss on the training data. However, it's challenging to directly encode the vectors \(r_{1},r_{2}\) into the \(n\times n\) matrix \(Q\). There are two main obstacles. First, how to encode the information of \(r_{1},r_{2}\) is unclear: \(Q\) is under the constraint of an orthogonal matrix, which might be violated by simply filling \(r_{1},r_{2}\) into \(Q\). Using more complicated techniques of encoding may bring other concerns such as the existence of a closed-form representation or whether decoding can be done efficiently. Second, the quality of such encoding is questionable: even if we find a way to encode \(r_{1},r_{2}\) into \(Q\), we still need to make sure \((\mathcal{Y},\mathcal{Z})\) exhausts the set of all possible **IntHS** instances. Otherwise although each \((\mathcal{Y},\mathcal{Z})\) problem is an **IntHS** instance, the set of all possible \((\mathcal{Y},\mathcal{Z})\) problems is a merely a subset of **IntHS**, preventing us from directly applying the NP-hardness result. 
Fortunately, we have a very simple remedy: enlarging the dimension from \(n\) to \(3n\), using the first \(n\) coordinates for **IntHS** while using the latter \(2n\) coordinates to encode the information of **IntHS**. Roughly speaking, we create \(2n\) null coordinates with no effect on the **IntHS** problem, while they carry the information of **IntHS** which can only be retrieved by knowing both \(x,y\). In particular, for any **IntHS** problem \(H_{1}\cap H_{2}\), we set \(Q\) as

\[Q=\begin{pmatrix}I_{n}&0&0\\ 0&\frac{r_{1}}{\sqrt{2}}&\cdots\\ 0&\frac{r_{2}}{\sqrt{2}}&\cdots\end{pmatrix}.\]

The vectors \(r_{1},r_{2}\) are simply flattened and set as the first column of the second block. Since the norm of this column is \(1\), \(Q\) can easily be completed to a feasible (orthogonal) matrix. The identity matrix \(I_{n}\) ensures that \((\mathcal{Y},\mathcal{Z})\) exhausts the set of all possible **IntHS** instances. The main result of this paper is given by the following theorem.

**Theorem 4** (Computational separation).: _There exists a multimodal learning problem \((\mathcal{X},\mathcal{Y},\mathcal{Z})\) which is PAC-learnable in polynomial time, while both unimodal learning problems \((\mathcal{X},\mathcal{Z})\), \((\mathcal{Y},\mathcal{Z})\) are NP-hard, even if there is a bijective mapping \(f:\mathcal{X}\rightarrow\mathcal{Y}\) such that \(y=f(x),\forall(x,y,z)\sim(\mathcal{X},\mathcal{Y},\mathcal{Z})\)._

Theorem 4 demonstrates that multimodal learning solves some learning tasks exponentially faster than unimodal learning. Such an exponential separation explains the empirical superiority of multimodal learning from the perspective of computation, supplementing the statistical guarantees in [11]. Notably, the two pivotal factors leading to the statistical benefit of multimodal learning in [11], namely connection and heterogeneity, are also evident in our construction. In particular, the mapping \(Q\) between \(\mathcal{X},\mathcal{Y}\) is bijective, meaning there exists a perfect connection between both modalities. On the other hand, \(\mathcal{X},\mathcal{Y}\) carry different information about the problem, which is useless alone but effective when put together, indicating a strong heterogeneity.

## 4 Proof of Theorem 4

We first introduce the necessary ingredients for the construction of the learning problem. For each pair of unit vectors \(v_{1},v_{2}\in\mathbb{R}^{n}\), there exist orthogonal matrices in \(\mathbb{R}^{2n\times 2n}\) whose first column is \((\frac{v_{1}}{\sqrt{2}},\frac{v_{2}}{\sqrt{2}})\), since \(\|(\frac{v_{1}}{\sqrt{2}},\frac{v_{2}}{\sqrt{2}})\|_{2}=1\). In particular, for each pair \(v_{1},v_{2}\) we fix one such orthogonal matrix \(F\), defining a function \(F(v_{1},v_{2}):\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2n\times 2n}\) as below:

\[F(v_{1},v_{2})=\begin{pmatrix}\frac{v_{1}}{\sqrt{2}}&\cdots\\ \frac{v_{2}}{\sqrt{2}}&\cdots\end{pmatrix}.\]

In addition, we define an orthogonal transformation matrix \(Q(v_{1},v_{2})\in\mathbb{R}^{3n\times 3n}\) as

\[Q(v_{1},v_{2})=\begin{pmatrix}I_{n}&0\\ 0&F(v_{1},v_{2})\end{pmatrix}.\]

The matrix \(Q(r_{1},r_{2})\) will serve as a fingerprint of an **IntHS** problem \(H_{1}\cap H_{2}\). We also define a variant of the intersection of two half-spaces problem.

**Definition 5** (Low-dimensional intersection of two half-spaces).: An instance of **IntHS\({}_{\lambda}\)** is a set of points in \(\mathbb{R}^{n}\) each labeled either '+' or '-', in which the labels only depend on the first \(\lambda n\) coordinates, where \(\lambda\in(0,1)\) is a constant.
The goal is to find an intersection of two half-spaces which correctly classifies the maximum number of points, where a '+' point is classified correctly if it lies inside the intersection and a '-' point is classified correctly if it lies outside of it.

**Lemma 6**.: _For every constant \(\lambda>0\), learning \(\textbf{IntHS}_{\lambda}\) is NP-hard._

Proof.: We prove by reduction. Suppose for contradiction that \(\mathbf{IntHS}_{\lambda}\) can be learnt in polynomial time. Then for each instance of \(\mathbf{IntHS}\), we can create a new instance of \(\mathbf{IntHS}_{\lambda}\) with dimension \(\frac{n}{\lambda}\) by extension. In particular, each point \(x\in\mathbb{R}^{\frac{n}{\lambda}}\) shares the same label as \(x_{[1:n]}\) in the original \(\mathbf{IntHS}\) instance. As a result, any classifier of \(\mathbf{IntHS}_{\lambda}\) applies to the \(\mathbf{IntHS}\) problem with the same accuracy, contradicting Proposition 3.

Now we are ready to state the learning problem \((\mathcal{X},\mathcal{Y},\mathcal{Z})\): \(m\) data points \((x_{i},y_{i},z_{i})\) are given, where \(x_{i},y_{i}\in\mathbb{R}^{3n}\) represent the two modalities and \(z_{i}=\pm\) is the label. It is guaranteed that there is an intersection of two half-spaces that classifies all the points correctly, with the supports of the defining unit vectors being the first \(n\) coordinates. In other words, it is a realizable instance of \(\mathbf{IntHS}_{\frac{1}{3}}\). In particular, there are unit vectors \(r_{1},r_{2}\in\mathbb{R}^{n}\) and constants \(c_{1},c_{2}\in\mathbb{R}\) (unknown to the learner), such that all pairs \((x_{i},z_{i})\) can be perfectly classified by \(\hat{H}_{1}\cap\hat{H}_{2}\), where \(\hat{H}_{i}=(x|\hat{r}_{i}^{\top}x\leq c_{i})\) and \(\hat{r}_{i}=(r_{i},\mathbf{0}_{2n})\). Meanwhile, \(y_{i}=Q(r_{1},r_{2})x_{i}\) holds for all data points, and all pairs \((y_{i},z_{i})\) can be perfectly classified by \(\tilde{H}_{1}\cap\tilde{H}_{2}\), where \(\tilde{H}_{i}=(x|\tilde{r}_{i}^{\top}x\leq c_{i})\) and \(\tilde{r}_{i}=Q(r_{1},r_{2})(r_{i},\mathbf{0}_{2n})\). Define the hypothesis set \(\mathcal{S}\) as

\[\mathcal{S}=\{h|h(x)=\mathbf{sgn}(\min(c_{1}-r_{1}^{\top}x,c_{2}-r_{2}^{\top}x)),c_{i}\in\mathbb{R},\|r_{i}\|_{2}=1\},\]

which is exactly the set of all intersections of two half-spaces. We have the following results.

**Lemma 7**.: _Properly learning \((\mathcal{X},\mathcal{Z})\) with \(\mathcal{S}\) is NP-hard._

Proof.: It is a direct consequence of Lemma 6, noticing that \((\mathcal{X},\mathcal{Z})\) is an \(\mathbf{IntHS}_{\frac{1}{3}}\) instance.

**Lemma 8**.: _Properly learning \((\mathcal{Y},\mathcal{Z})\) with \(\mathcal{S}\) is NP-hard._

Proof.: Although \((\mathcal{Y},\mathcal{Z})\) is also an \(\mathbf{IntHS}_{\frac{1}{3}}\) instance, we still need to verify that \((\mathcal{Y},\mathcal{Z})\) exhausts all possible \(\mathbf{IntHS}_{\frac{1}{3}}\) instances (otherwise we cannot apply Lemma 6, for example when all \((\mathcal{Y},\mathcal{Z})\) obey the same \(\mathbf{IntHS}_{\frac{1}{3}}\) instance). Notice that \(Q\) induces a mapping from the set of \(\mathbf{IntHS}_{\frac{1}{3}}\) instances to itself, and it is equivalent to proving that this mapping is surjective. For any \(\mathbf{IntHS}_{\frac{1}{3}}\) instance \(\hat{H}_{1}\cap\hat{H}_{2}\) where \(\hat{H}_{i}=(x|\hat{r}_{i}^{\top}x\leq c_{i})\) and \(\hat{r}_{i}=(r_{i},\mathbf{0}_{2n})\), because \(\hat{r}_{i}\) also has support in the first \(n\) coordinates, we have that \(\hat{r}_{i}=Q(r_{1},r_{2})r_{i}\) with \(r_{i}=\hat{r}_{i}\), proving the mapping is surjective.
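Before turning to the efficient multimodal learner, the construction above can be made concrete in a few lines of numpy. This is a minimal sketch under the paper's definitions: the function names, the QR-based completion of \(F(r_1,r_2)\) to an orthogonal matrix, the Gaussian sampling and the thresholds \(c_1=c_2=0.5\) are all arbitrary illustrative choices, not prescribed by the construction.

```python
import numpy as np

def make_F(r1, r2, seed=1):
    """Orthogonal (2n x 2n) matrix whose first column is (r1, r2) / sqrt(2)."""
    first = np.concatenate([r1, r2]) / np.sqrt(2.0)   # unit vector in R^{2n}
    dim = first.size
    rng = np.random.default_rng(seed)
    # Complete `first` to an orthonormal basis via QR of [first | random columns].
    M = np.column_stack([first, rng.normal(size=(dim, dim - 1))])
    F, _ = np.linalg.qr(M)
    if F[:, 0] @ first < 0:        # QR may flip the sign of the first column;
        F[:, 0] = -F[:, 0]         # flipping one column back preserves orthogonality
    return F

def make_Q(r1, r2):
    """Q(r1, r2) = diag(I_n, F(r1, r2)), an orthogonal (3n x 3n) matrix."""
    n = r1.size
    Q = np.zeros((3 * n, 3 * n))
    Q[:n, :n] = np.eye(n)
    Q[n:, n:] = make_F(r1, r2)
    return Q

def make_instance(m=200, n=5, seed=0):
    """A realizable instance: labels depend only on the first n coordinates, y = Q x."""
    rng = np.random.default_rng(seed)
    r1, r2 = rng.normal(size=n), rng.normal(size=n)
    r1, r2 = r1 / np.linalg.norm(r1), r2 / np.linalg.norm(r2)
    c1, c2 = 0.5, 0.5                                 # illustrative thresholds
    X = rng.normal(size=(m, 3 * n))
    Z = np.where((X[:, :n] @ r1 <= c1) & (X[:, :n] @ r2 <= c2), 1, -1)
    Y = X @ make_Q(r1, r2).T                          # second modality: y_i = Q x_i
    return X, Y, Z, (r1, r2, c1, c2)
```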
**Lemma 9**.: _Assume \(m\geq 3n\), \((\mathcal{X},\mathcal{Y},\mathcal{Z})\) is properly learnable with \(\mathcal{S}\) (applied to \(x\) only) in \(O(mn^{2})\) time, when there exist \(3n\) data points with linearly-independent \(x_{i}\)._ Proof.: Consider the simple algorithm 1 which consists three steps: 1. find a set \(S\) of linearly-independent \(x_{i}\) (line 2-6). 2. find \(Q\) by solving a linear system of \(S\) (line 7-8). 3. rank \(x_{i}\) along the directions of \(r_{1},r_{2}\) to get \(c_{1},c_{2}\) (line 9-10). Step 1 runs in \(O(mn^{2})\) time, since testing orthogonality between two points runs in \(O(n)\) time and \(|S|=O(n)\). Step 2 runs in \(O(n^{3})\) time which is the complexity of solving a system of linear equations. Step 3 runs in \(O(mn)\) time. Under our assumption \(m\geq 3n\), the total running time is \(O(mn^{2}+n^{3}+mn)=O(mn^{2})\). We still need to verify the found classifier \(h(x)\): \[h(x)=\mathbf{sgn}(\min(c_{1}-r_{1}^{\top}x,c_{2}-r_{2}^{\top}x))\] does classify all data points correctly. By the construction of \(Q\), we know there is a classifier \(h^{*}(x)\) which classifies all data points correctly, which shares the same \(r_{i}\) with \(h(x)\): \[h^{*}(x)=\mathbf{sgn}(\min(c_{1}^{*}-r_{1}^{\top}x,c_{2}^{*}-r_{2}^{\top}x)).\] By the choice of \(c_{1},c_{2}\), we have that \(c_{1}\leq c_{1}^{*},c_{2}\leq c_{2}^{*}\). Denote \(h_{+}=\{x\in\mathbb{R}^{3n},h(x)=+\}\), we have that \[(h_{+}\cap X)\subset(h_{+}^{*}\cap X)=X_{+},\] by the fact \(h_{+}\subset h_{+}^{*}\). Meanwhile, by the construction of \(h(x)\), we have that \(X_{+}\subset h_{+}\), and further \[X_{+}=(X_{+}\cap X)\subset(h_{+}\cap X).\] As a result, \(X_{+}=h_{+}\cap X\) which means \(h(x)\) does classify all data points correctly. ``` 1:Input: \(m\) data points \((x_{i},y_{i},z_{i})\). 2:Set \(S=\{x_{1}\}\), \(t=2\). 3:while\(|S|<3n\)do 4: If \(x_{t}\) is orthogonal to each member of \(S\), add \(x_{t}\) to \(S\). 5:\(t=t+1\). 6:endwhile 7:Solving the linear system \(Qx_{i}=y_{i}\), \(\forall x_{i}\in S\). 8:Recover \(r_{1},r_{2}\) from \(Q\). 9:Let \(X_{+}\) be the set of all \(x_{i}\) with \(z_{i}=+\). 10:Set \(c_{i}=\max_{x\in X_{+}}r_{i}^{\top}x\). ``` **Algorithm 1** Learning by decoding Lemma 9 concerns only the learnability on the training data, to extend this result to PAC-learnability we introduce the following definition. **Definition 10**.: A data distribution \(D\) on \((\mathcal{X},\mathcal{Y},\mathcal{Z})\) is called non-degenerate, if \[\mathbb{P}_{(x_{i},y_{i},z_{i})\sim D,i\in[3n]}(\exists\lambda\neq\mathbf{0},s.t.\sum_{i=1}^{3n}\lambda_{i}x_{i}=0)=0.\] Most distributions whose support has non-zero measure are non-degenerate, including common uniform and Gaussian distributions. We have the following result for PAC-learnability. **Lemma 11**.: _Assume \(m\) data points are sampled from a non-degenerate distribution \(D\) and \(m\geq 3n\), \((\mathcal{X},\mathcal{Y},\mathcal{Z})\) is properly PAC-learnable with \(\mathcal{S}\) (applied to \(x\) only) in \(O(mn^{2})\) time. In particular, with probability at least \(1-\delta\), the generalization error \(\epsilon\) of algorithm 1 is upper bounded by_ \[\epsilon=O\left(\sqrt{\frac{n\log m+\log\frac{1}{\delta}}{m}}\right).\] Proof.: By the assumption that \(D\) is non-degenerate, we have that with probability 1, there exist \(3n\) data points with linearly-independent \(x_{i}\). By the conclusion of Lemma 9, the learnt classifier achieves zero loss on training data. 
From classic statistical learning theory, the generalization error of such classifier can be characterized by the VC-dimension of the hypothesis class. **Theorem 12** ([19]).: _With probability at least \(1-\delta\), for every \(h\) in the hypothesis class \(\mathcal{H}\), if \(h\) is consistent with \(m\) training samples, the generalization error \(\epsilon\) of \(h\) is upper bounded by_ \[\epsilon=O\left(\sqrt{\frac{d\log m+\log\frac{1}{\delta}}{m}}\right),\] _where \(d\) denotes the VC-dimension of \(\mathcal{H}\)._ We only need to determine the VC-dimension of the class of intersection of two half-spaces in \(\mathbb{R}^{3n}\). It's well known the VC-dimension of a single half-space is \(O(n)\). [2] shows that the \(k\)-fold intersection of any VC-class has VC-dimension bounded by \(O(dk\log k)\). Putting \(d=n\) and \(k=2\) concludes the proof. ## 5 Conclusion In this paper, we take a preliminary step towards unraveling the computational benefit of multimodal learning. We demonstrate an exponential separation in computation between multimodal and unimodal learning by constructing a variant of the intersection of two half-spaces problem, which is NP-hard for any unimodal algorithm but can be efficiently solved by a multimodal algorithm. Complementing the statistical merits of multimodal learning as shown in [11], our result provides a more comprehensive theoretical understanding of the power of multimodal learning. However, our result isn't without constraints. The exhibited separation fundamentally hinges on a special construction of a hardness instance. For a more general study of the computational benefit of multimodal learning, two promising research avenues emerge: 1. Can we obtain a general sufficient condition for the computational benefit of multimodal learning? Even a polynomial improvement is interesting. 2. Can we show such separation in computation for more conventional learning problems?
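To make the separation concrete, Algorithm 1 is also easy to prototype. The sketch below is a simplified illustration rather than a faithful reimplementation: it recovers \(Q\) with a single least-squares solve over all pairs \((x_i,y_i)\) instead of the explicit search for \(3n\) independent points, and it assumes the data was generated as in the construction sketched after Lemma 8.

```python
import numpy as np

def learn_by_decoding(X, Y, Z, n):
    """Decode r_1, r_2, c_1, c_2 from a multimodal sample with y_i = Q(r1, r2) x_i.

    X, Y: (m, 3n) arrays of the two modalities; Z: labels in {+1, -1}.
    Assumes the x_i span R^{3n} and at least one point is labeled '+'.
    """
    # Least-squares solve of Y = X Q^T; with 3n linearly independent x_i this
    # coincides with the exact linear system solved in Algorithm 1.
    Q = np.linalg.lstsq(X, Y, rcond=None)[0].T

    # In the construction, column n of Q (the first column of the F block)
    # holds the flattened (r1, r2) / sqrt(2).
    col = np.sqrt(2.0) * Q[n:, n]
    r1, r2 = col[:n], col[n:]

    # c_i: largest projection of a '+' point onto r_i (line 10 of Algorithm 1).
    pos = X[Z == 1][:, :n]
    c1, c2 = (pos @ r1).max(), (pos @ r2).max()

    def h(x):
        """Intersection-of-two-half-spaces classifier on the first n coordinates."""
        return 1 if min(c1 - r1 @ x[:n], c2 - r2 @ x[:n]) >= 0 else -1

    return h, (r1, r2, c1, c2)

# Example usage on data from the earlier make_instance sketch:
# X, Y, Z, _ = make_instance(m=200, n=5)
# h, params = learn_by_decoding(X, Y, Z, n=5)
# train_error = np.mean([h(x) != z for x, z in zip(X, Z)])   # 0.0 in the realizable case
```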
2309.03591
Single-electron occupation in quantum dot arrays at selectable plunger gate voltage
The small footprint of semiconductor qubits is favourable for scalable quantum computing. However, their size also makes them sensitive to their local environment and variations in gate structure. Currently, each device requires tailored gate voltages to confine a single charge per quantum dot, clearly challenging scalability. Here, we tune these gate voltages and equalize them solely through the temporary application of stress voltages. In a double quantum dot, we reach a stable (1,1) charge state at identical and predetermined plunger gate voltage and for various interdot couplings. Applying our findings, we tune a 2$\times$2 quadruple quantum dot such that the (1,1,1,1) charge state is reached when all plunger gates are set to 1 V. The ability to define required gate voltages may relax requirements on control electronics and operations for spin qubit devices, providing means to advance quantum hardware.
Marcel Meyer, Corentin Déprez, Ilja N. Meijer, Florian K. Unseld, Saurabh Karwal, Amir Sammak, Giordano Scappucci, Lieven M. K. Vandersypen, Menno Veldhorst
2023-09-07T09:34:04Z
http://arxiv.org/abs/2309.03591v1
# Single-electron occupation in quantum dot arrays at selectable plunger gate voltage ###### Abstract The small footprint of semiconductor qubits is favorable for scalable quantum computing. However, their size also makes them sensitive to their local environment and variations in gate structure. Currently, each device requires tailored gate voltages to confine a single charge per quantum dot, clearly challenging scalability. Here, we tune these gate voltages and equalize them solely through the temporary application of stress voltages. In a double quantum dot, we reach a stable (1,1) charge state at identical and predetermined plunger gate voltage and for various interdot couplings. Applying our findings, we tune a 2\(\times\)2 quadruple quantum dot such that the (1,1,1) charge state is reached when all plunger gates are set to 1 V. The ability to define required gate voltages may relax requirements on control electronics and operations for spin qubit devices, providing means to advance quantum hardware. Quantum Dot, Single-electron Occupation, Uniformity, Stress Voltage, Spin Qubit pacs: Valid PACS appear here ## I Introduction Semiconductor spin qubits have become a compelling platform for quantum computation. Single qubit gate fidelities of 99.99% [1] and two-qubit gate fidelities exceeding 99% [2; 3; 4; 5] have been demonstrated. A moderate sensitivity to thermal effects allowed for the implementation of quantum operations above one Kelvin [6; 7; 8]. Furthermore, the small size of semiconductor spin qubits and their compatibility with advanced semiconductor manufacturing [9; 10; 11] may facilitate devices with large numbers of qubits as required for practical applications. Recent advances in the material platforms supported the realization of a \(2\times 2\) qubit array in germanium [12], a linear six qubit system in silicon [13], and the operation of a 16 quantum dot crossbar array [14]. However, scaling up the number of qubits is challenging, especially when considering the numbers needed for fault-tolerant quantum computation [15; 16; 17]. A particular challenge lies in the sensitivity of qubits to their environment leading to considerable variations of their properties, a notion that was already highlighted in the seminal work on quantum computation by Loss and DiVincenzo [18]. Substantial reductions in variability have been achieved through progress in heterostructure growth and device fabrication. For instance, these efforts focus on reducing material disorder [19; 20; 21; 22; 23; 24; 25; 26], advancing device fabrication [27; 28; 29] and addressing fluctuations in mechanical stress induced by the deposition of metallic gate electrodes [30; 31; 32]. However, significant variations remain observable in current devices [14; 33; 34] and it is an open question whether sufficient uniformity can be reached through material development alone. Alternatively, fluctuations in the potential landscape can be compensated by temporarily applying stress voltages [35; 36; 37; 38]. An alternating sequence of stress voltages and pinch-off measurements has already enabled on-demand reshaping of pinch-off voltage characteristics and their homogenization without signs of reduced device stability afterwards. Furthermore, such sequences allowed to alter the potential offset of a single electron transistor (SET) at a temperature of \(\approx 4.2\) K [38]. Yet, this methodology has not been applied to individual electrons in a quantum dot. 
Also, overcoming qubit variations in quantum processors will require the tuning of multiple quantum dots. Here, we demonstrate the use of stress voltages to tune the potential landscape in a quantum dot array. We show that this approach allows to change and equalize the plunger gate voltages required to reach single-electron occupation in a double quantum dot without changing any other gate voltages. Importantly, we find that the resulting confining potential remains stable for hours afterwards. To illustrate its robustness and versatility, we demonstrate that the method employed can be applied at various barrier voltages and thus interdot tunnel couplings. Furthermore, we show that the procedure can be extended to homogenize the plunger gate voltages defining the single occupation charge state in a \(2\times 2\) quantum dot system. ## II Results Fig. 1.a shows a scanning electron micrograph of a device nominally identical to the one under study in this work, which is fabricated on a \({}^{28}\)Si/SiGe heterostructure [39] (see methods). The gate design allows for the formation of a \(2\times 2\) quantum dot array (white circles) and two adjacent single electron transistors (SETs) on the left and right side [40]. We form the quantum dots Q3 and Q4 underneath the plunger gates P3 and P4 and also tune up the SET below the sensor gate S1. The left side of the device is operated as an electron reservoir. Fig. 1.b depicts a charge stability diagram recorded after the initial tuning. It shows the typical honeycomb pattern of a double quantum dot and depletion down to the \((N_{3},N_{4})=(1,1)\) charge state with \(N_{i}\) the charge occupation of Q\(i\). The charge stability diagram reveals a large asymmetry in the plunger gate voltages required to reach the single-electron regime. The voltage ranges \([V^{\rm{c}}_{\rm{Pi}},V^{\rm{c}}_{\rm{Pi}}]\) from the first to the second charge transition line of the two quantum dots are indicated by a horizontal and a vertical bar (see methods for the definition). As illustrated in Fig. 1.c those ranges do not overlap for the two quantum dots and in particular we find a separation of more than 2(4) times the Q3(Q4) charging voltage \(V^{\rm{C}}_{\rm{Pi}}=V^{+}_{\rm{Pi}}-V^{-}_{\rm{Pi}}\). While this is a rather extreme case, significant asymmetries of the plunger gate voltage ranges loading a single electron are commonly observed in quantum dot devices [41; 42; 43; 14; 44]. Therefore, if single-electron occupation can be achieved at equal plunger gate voltages in the device of Fig. 1 this would provide good prospects for the homogenization of the required plunger gate voltages in other devices that already are intrinsically more uniform. ### (1,1) charge occupation at predetermined plunger gate voltage To increase the potential uniformity, we follow our previous work [38] and apply stress voltages \(V_{\rm{stress}}\) on gate electrodes to reshape the background potential landscape. We aim to tune the system such that the (1,1) charge state is reached at predetermined plunger gate voltage. Specifically we target to load a single electron per quantum dot for \(V_{\rm{P3}}=V_{\rm{P4}}=V^{\rm{T}}\) with \(V^{\rm{T}}=1\) V, 1.1 V and 1.2 V by sequentially tuning the potential below the two plunger gates following the path shown in Fig. 2.b. Fig. 2.a illustrates the employed procedure for a single plunger gate P\(i\). We apply a stress voltage \(V_{\rm{stress}}\) for \(t_{\rm{stress}}=1\) min. 
Afterwards, we measure charge stability diagrams around \(V_{\rm{Pi}}=V^{\rm{T}}\) and if necessary the sensor gate voltage \(V_{\rm{S1}}\) is compensated to restore maximum sensitivity of the SET. From the charge stability diagrams we then extract the voltage range \([V^{\rm{c}}_{\rm{Pi}},V^{\rm{c}}_{\rm{Pi}}]\) required to reach single charge occupation. If setting the target voltage does not yield the targeted electron occupation in Q\(i\) (\(V^{\rm{T}}\) not in \([V^{\rm{-}}_{\rm{Pi}},V^{\rm{+}}_{\rm{Pi}}]\)) the sequence is repeated with an increased (decreased) stress voltage to shift the voltage range further upward (downward). If a single electron is loaded at the target voltage configuration we stop applying stress voltages to P\(i\) and analogously tune the potential of the other quantum dot. After the initial tune up (Fig. 1), we first follow the stressing procedure to lower the required plunger gate voltage ranges \([V^{\rm{-}}_{\rm{Pi}},V^{\rm{+}}_{\rm{Pi}}]\) to reach single-electron occupancy at 1 V. During this process we adjust the barrier gate B2 voltage in order to maintain a significant tunnel rate. Then, we perform the stressing experiment and advance from point A to E in Fig. 2.b. Here, we only change the sensor gate S1 voltage and keep all other gate voltages constant (see supplementary section S8 for the voltage settings). Fig. 2.f shows charge stability diagrams recorded after tuning toward the predefined targets \(V^{\rm{T}}\). A clear shift of the (1,1) charge region to higher plunger gate voltages and then back down is observable. Furthermore, after the completion of each tuning, setting the plunger gate voltages \((V_{\rm{P3}},V_{\rm{P4}})\) to \(\mathbf{V}^{\rm{T}}=(V^{\rm{T}},V^{\rm{T}})\) (white square marker) loads a single electron per quantum dot as also high Figure 1: **Device and tuning of a double quantum dot.****(a)** Scanning electron micrograph of a device nominally identical to the one under study. Confinement (C\(i\)) and barrier (Bi and Bi\(j\)) gates are designed to define four quantum dots indicated by the white circles. Their charge occupation is controlled by four plunger (P\(i\)) gates. Confinement gates are outlined by dashed lines for clarity. A sensor quantum dot is formed under S1 and measured in transport. **(b)** Charge stability diagram showing the single-electron occupation of the Q3-Q4 double quantum dot formed underneath P3 and P4. The plotted signal is locally contrast normalized (LCN) to increase the visibility of the charge transition lines as described in the methods section. Dashed lines connect charge triple degeneracy points and thereby indicate transitions of the charge ground state which cannot be observed directly due to latching effects. The plunger gate voltage ranges \([V^{\rm{-}}_{\rm{Pi}},V^{\rm{+}}_{\rm{Pi}}]\) that set a \((1,1)\) charge state are indicated by vertical and horizontal bars. The ranges are extracted around the center point of the (1,1) charge region (see methods). Unprocessed data shown in supplementary section S6. **(c)** Plunger gate voltage ranges \([V^{\rm{-}}_{\rm{Pi}},V^{\rm{+}}_{\rm{Pi}}]\) as extracted in **(b)**. lighted in Fig. 2.e showing the extracted voltage ranges \([V_{\text{Pi}}^{-},V_{\text{Pi}}^{+}]\). This demonstrates tunability of the chemical potentials and control over the electron occupation in a double quantum dot through the temporary application of stress voltage. 
Note that charge latching is reduced (increased) when tuning the voltage ranges \([V_{\text{Pi}}^{-},V_{\text{Pi}}^{+}]\) upwards (downwards). This suggests a crosstalk effect of the applied stress voltages on the surrounding tunnel barrier potentials. Fig. 2.c shows the reconstructed evolution of the center point of the (1,1) charge region \(\textbf{\emph{V}}^{(1,1)}=(V_{\text{P3}}^{(1,1)},V_{\text{P4}}^{(1,1)})\) during the tuning procedure (see methods section). Overall, the experimental trajectory reproduces qualitatively the intended one shown in Fig. 2.b. The predominantly horizontal and vertical progressions in the \((V_{\text{P3}}^{(1,1)},V_{\text{P4}}^{(1,1)})\) plane suggest limited crosstalk, i.e. applying stress voltages to one gate P\(i\) only has a small effect on the charge transition voltages of the quantum dot below the other plunger gate. Quantitatively, we find slopes \(dV_{\text{Pi}}^{(1,1)}/dV_{\text{P3}}^{(1,1)}\) between \(-0.31\) V/V and \(-0.04\) V/V. The sign of these slopes is consistent with the sign of the capacitive shift of the transition line voltage of Q\(j\) when the plunger gate voltage \(V_{\text{Pi}}\) is changed (see supplementary section S1). Correcting for this effect, we obtain the change of the charge transition voltages of Q\(j\) induced exclusively by the application of stress voltages set to P\(i\). We find crosstalks of \((+0.37\pm 0.03)\) V/V and \((+0.19\pm 0.03)\) V/V for P3 on Q4 and P4 on Q3 Figure 2: **Single-electron occupation at predetermined plunger gate voltages through voltage stressing.****(a)** Schematic of the stress-measure sequence applied to shift the voltages required to obtain the \((1,1)\) charge state. Increasing stress voltages \(V_{\text{stress}}\) are applied for \(t_{\text{stress}}=1\) min interleaved by charge stability diagram measurements. **(b)** Expected trajectory for the center of the (1,1) charge region \(\textbf{\emph{V}}^{(1,1)}\) in the (\(V_{\text{P3}}\),\(V_{\text{P4}}\)) plane during the tuning procedure as defined prior to conducting the experiment. The color of the path refers to the plunger gate being stressed. **(c)** Actual trajectory of \(\textbf{\emph{V}}^{(1,1)}\) followed during the tuning procedure. The triangle, circles and diamond mark the starting point, (intermediate) targets and the endpoint of the path, respectively. Black arrows indicate the time flow. **(d)**\(V_{\text{P3}}^{(1,1)}\) (bottom) and \(V_{\text{P4}}^{(1,1)}\) (top) as a function of the applied stress voltage \(V_{\text{stress}}\). The triangle, circles and diamond mark the same points as in **(c)** and black arrows indicate the time flow. **(e)** Plunger gate voltage ranges \([V_{\text{Pi}}^{-},V_{\text{Pi}}^{+}]\) that keep the double quantum dot in the \((1,1)\) charge state after tuning (see methods). Targets are indicated by the dotted lines. **(f)** Corresponding charge stability diagrams recorded after the application of the respective stress voltage sequences. The white square markers show the target voltages \(\textbf{\emph{V}}^{\text{T}}=(V^{\text{T}},V^{\text{T}})\). Plunger gate voltage ranges \([V_{\text{Pi}}^{-},V_{\text{Pi}}^{+}]\) that keep the system in the \((1,1)\) charge state are indicated by vertical and horizontal bars. Dashed lines indicate transitions of the charge ground state which cannot be observed directly due to latching effects. Unprocessed data shown in supplementary section S6. respectively. 
Overall, while these crosstalk effects could be compensated for, the simple approach presented here allowed to tune the potentials of the quantum dots to the predetermined targets. In Fig. 2.d the center voltages \(V_{3}^{(1,1)}\) and \(V_{4}^{(1,1)}\) are plotted as a function of the applied stress voltage \(V_{\text{stress}}\). We recover the typical hysteresis cycle observed when tuning pinch-off voltages using an analogous method in similar devices [38]. Noticeably, for steadily decreasing stress voltages there is an initial increase in \(V_{p_{i}}^{(1,1)}\) before it rapidly drops to lower voltages at \(V_{\text{stress}}\approx-4\) V. In Fig. 2.c this manifests as non-monotonic progressions of \(\mathbf{V}^{(1,1)}\) between the target points C and D. \(V_{\text{P}4}^{(1,1)}\) and \(V_{\text{P}3}^{(1,1)}\) initially increase by 40 mV and 180 mV, respectively, before they decrease and approach \(V^{\text{T}}=1.1\) V. Summarizing, Fig. 2 demonstrates that the background potential in the quantum well can be reshaped such that each quantum dot can be occupied with one electron using uniform plunger gate voltages. ### Time stability To understand the impact of stress voltages on device stability, we record multiple charge stability diagrams as a function of time after the initial stress tuning towards \(V^{\text{T}}=1\) V (A in Fig.2.d). Fig. 3.a shows the extracted evolution of the plunger gate voltage range that keeps the quantum dots Q3 and Q4 in the single-electron occupation. Here, the time \(t\) refers to the time since the last application of a stress voltage and voltages are plotted relative to \(V^{\text{T}}\). We find that the double quantum dot system remains in the \((1,1)\) charge state for more than 15 h showing only a weak drift. This is confirmed by standard deviations of 3 mV, 3 mV, 2 mV, and 1 mV for \(V_{\text{P}3}^{-}\), \(V_{\text{P}3}^{+}\), \(V_{\text{P}4}^{-}\), and \(V_{\text{P}4}^{+}\), respectively, which remain negligible compared to the charging voltages of 148 mV and 87 mV for Q3 and Q4, respectively. Overlaying the charge stability diagrams recorded at \(t=0\) h and at \(t=17\) h, as depicted in Fig. 3.b, provides further confirmation of the device stability. Additional time traces demonstrating stability up to 40 h after the application of the last stress voltages are presented in supplementary section S2. Moreover, we find no increase in the charge noise sensed by the right SET when comparing to typical values for such devices (see supplementary section S3). Note that the charge noise amplitude measured by the SET might differ from the charge noise level that would affect the coherence of qubits in the array. Nevertheless, we conclude that there are no signs of decreased device stability caused by the application of stress voltages. ### Predetermined plunger gate voltage for tunnel coupled quantum dots We now address the question whether single-electron occupation can still be achieved by a predetermined gate voltage, when changing the coupling between the quantum dots. In our double quantum dot system, we can control the interdot coupling by adjusting the barrier gate B34 voltage to tune the system from strong to weak coupling quantum dots. We achieve this by varying the barrier gate voltages between 0 V and \(-0.5\) V. After setting a barrier gate voltage, we apply stress voltages to the plunger gates to obtain the \((1,1)\) charge state at \(\mathbf{V}^{\text{T}}=(1\text{ V},1\text{ V})\). Fig. 4.a-e shows the resulting charge stability diagrams. 
The charge transition line pattern changes from exhibiting nearly diagonal lines at \(V_{\text{B34}}=0\) mV towards a rectangular grid-like pattern at \(V_{\text{B34}}=-500\) mV, revealing the transition from high to low coupling. In all cases the application of stress volt Figure 3: **Stability of the (1,1) charge state after stress tuning.****(a)** Time traces of the plunger gate voltage ranges that keep the system in the \((1,1)\) charge state (see methods for the definition) after the application of a sequence of increasing stress voltages. \(t\) is the time after the application of the last stress voltage. Note that the underlying charge stability diagram measurements were interleaved with charge noise measurements on the sensor (see supplementary section S3). Additional traces are presented in supplementary section S2. **(b)** Overlay of charge stability diagrams taken at the beginning (star, olive green) and end (hexagon, light green) of the time trace shown in **(a)**. Horizontal and vertical bars indicate the respective plunger gate voltage ranges that keep the system in the (1,1) charge state. Dashed lines indicate transitions of the charge ground state which cannot be observed directly due to latching effects. Unprocessed data shown in supplementary section S6. age sequences allows to obtain the \((1,1)\) charge state at \(\mathbf{V}^{\rm T}=(1~{}\rm V,1~{}\rm V)\). This is confirmed by the extracted voltage ranges \([V_{\rm P_{i}}^{-},V_{\rm P_{i}}^{+}]\) plotted in Fig. 4.f. Crucially, this is achieved without defining virtual gates. We conclude that for a wide range of interdot couplings single-electron occupation can be achieved at predetermined plunger gate voltage independently of the applied barrier voltage. ### (1,1,1,1) charge state at (1,1,1,1) V Finally, we utilize our findings to tune a \(2\times 2\) quantum dot array such that the \((N_{1},N_{2},N_{3},N_{4})=(1,1,1,1)\) charge state is the ground state when all plunger gate voltages are set to 1 V. Starting from the Q3-Q4 double quantum dot, we form the quantum dots Q1 and Q2 which are predominantly controlled by the plunger gates P1 and P2. Then, the system is tuned solely through tailored stress voltage sequences applied to the plunger gates. Fig. 5 shows two charge stability diagrams recorded after this tuning process unveiling four sets of charge transition lines. These can be associated with the four quantum dots by analysing further charge stability diagrams recorded by sweeping additional plunger gate combinations (see supplementary section S4). Yellow, orange, red and purple dashed lines mark the first two charge addition voltages of quantum dot Q1, Q2, Q3 and Q4, respectively. The target voltage configuration \(\mathbf{V}^{\rm T}=(V_{\rm P1}^{\rm T},V_{\rm P2}^{\rm T},V_{\rm P3}^{\rm T},V_{ \rm P4}^{\rm T})=(1~{}\rm V,1~{}\rm V,1~{}\rm V)\) is shown by a white square marker and the voltage ranges that keep the system in the (1,1,1,1) charge state are indicated by horizontal and vertical bars. \(\mathbf{V}^{\rm T}\) clearly falls between the first two charge transition lines for all four quantum dots confirming that we reached the targeted configuration. Note that all quantum dots are strongly affected by plunger gate P2 and P4 as observable in Fig. 5.b. However, in Fig. 
5.a the voltages on P1 and P3 only seem to Figure 4: **Single-electron occupation at predetermined plunger gate voltage for high and low interdot coupling.****(a)-(e)** Charge stability diagrams measured after tuning the system through applying stress voltages such that the (1,1) charge state is the ground state when applying the plunger gate voltages \(\mathbf{V}^{\rm T}=(1~{}\rm V,1~{}\rm V)\) (white square marker). In each case a different barrier gate voltage \(V_{\rm B34}\) is set before the tuning (labelled in the plot titles). The range of plunger gate voltages \([V_{\rm P_{i}}^{-},V_{\rm P_{i}}^{+}]\) that keep the system in the \((1,1)\) charge state is indicated by horizontal and vertical bars (see methods). Dashed lines indicate transitions of the charge ground state which cannot be observed directly due to latching effects. The unprocessed data is shown in supplementary section S6. **(f)** Plunger gate voltage ranges \([V_{\rm P_{i}}^{-},V_{\rm P_{i}}^{+}]\) extracted from **(a)-(e)**. The dotted line indicates the target voltage \(V^{\rm T}=1~{}\rm V\). affect the charge occupation of Q1 and Q3. We speculate this behavior to originate from asymmetries in the gate layout and device imperfections [40]. Crucially, we find that the stressing procedure is effective for the tuning of a nonlinear quadruple quantum dot array. ## Discussion In summary, we have shown that single-electron occupation in quantum dots can be achieved at equal predetermined plunger gate voltage, by making use of a stress-voltage based procedure. Importantly, we find that after such a tuning the systems remains stable for hours only exhibiting small progressive drifts which do not affect the charge configuration. We envision that the stressing methodology may find several applications in semiconductor quantum technology. For instance, it may facilitate the operation of crossbar arrays which crucially rely on shared gate voltages [14; 44]. While our experiments suggest tunability of the entire potential landscape, more research is needed to understand the level of control over the barrier potentials. A predetermined gate voltage to set a given charge state can also relax the requirements on control electronics and facilitate their integration. Furthermore, we envision that stressing voltages can provide tunability of other parameters. For example, the \(g\)-tensor of germanium qubits is strongly dependent on the electric field [28; 45], such that stressing voltages may provide tunability over the qubit resonance frequency. We therefore envision that stressing procedures may become a standard and essential routine in the tuning of large quantum circuits. ## Material and Methods ### Heterostructure and device fabrication The device under study in this work is fabricated on a \({}^{28}\)Si/SiGe heterostructure [39] which is based on a Si wafer. First, a linearly graded Si\({}_{1-x}\)Ge\({}_{x}\) buffer with \(x\) varying from 0 to 0.3 is grown followed by a 300 nm relaxed Si\({}_{0.7}\)Ge\({}_{0.3}\) layer. A 7 nm purified (800 ppm) \({}^{28}\)Si layer defines the quantum well and is separated from the gate stack by another 30 nm thick relaxed Si\({}_{0.7}\)Ge\({}_{0.3}\) buffer that is passivated in dichlorosilane at 500 \({}^{\circ}\)C. Phosphorus ion implantation is utilized to contact the two dimensional electron gas and a 10 nm aluminum oxide layer precedes the deposition of gate electrodes. The latter are spread across three layers and made of Ti/Pd deposited via electron beam evaporation. 
They are separated by 5 nm thick layers of aluminium oxide. In all cases aluminium oxide is deposited via atomic layer deposition [28]. Figure 5: **(1,1,1,1) charge state at 1 V on all plunger gates (a), (b) Charge stability diagrams recorded after applying stress voltage sequences to tune the (1,1,1,1) charge state to be the ground state when all plunger gate voltages are set to 1 V. The first two transition lines of each quantum dot are indicated by dashed lines. The voltage ranges to keep the system in the (1,1,1,1) charge state are indicated by horizontal and vertical bars (see methods). A white square marks the point when all plunger gates are at 1 V. The plotted signal is the summation of several charge stability diagrams with identical voltage ranges recorded for slightly varied voltages on the SET plunger S1 (see supplementary section S7). Contrast is enhanced by a local contrast normalization (LCN). (a) shows charge transitions of Q1 and Q3 and (b) exhibits charge transition lines of all four dots.** ### Setup and voltage pulses All measurements are performed in a dilution refrigerator at a base temperature of \(\approx 20\) mK. The gate voltages are supplied by digital analog converters (DACs) with a resolution of 18 bit and a voltage range of \(\pm 4\) V which was amplified to \(\pm 20\) V for the plunger gates. The current through the SET is measured via a current-to-voltage converter connected to a digitizer module. Confinement and stress voltages are applied via the DACs while charge stability diagrams are recorded by sending fast voltage pulses. The latter are generated by an arbitrary waveform generator (AWG). DAC and AWG voltage signals are merged with a bias tee located on the sample PCB at the mixing chamber stage. AWG pulses are modified to correct for voltage drifts caused by (dis)charging of the bias tees. Furthermore, cross-capacitive shifts from P3 and P4 on the sensing dot potential are compensated for by proportionally adjusting \(V_{\mathrm{S1}}\) when sweeping the plunger gate voltages \(V_{\mathrm{P}i}\) (\(\Delta V_{\mathrm{S1}}/\Delta V_{\mathrm{P}i}<0.01\)). ### Local contrast normalization In voltage scans spanning a large range, cross-capacitive coupling of the plunger gates to the SET can cause significant variations in sensor sensitivity. This leads to contrast fluctuations across the charge stability diagram and hampers identification of charge transition lines. We compensated for this effect by applying a local contrast normalization (LCN). In essence, a smoothed charge stability map is subtracted to compensate for a slowly varying offset after which a smoothed local variance is utilized to locally normalize the signal: \[\mathrm{LCN}(I)=\frac{I-I*f_{\mathrm{Gaussian}}}{\sqrt{(I-I*f_{\mathrm{ Gaussian}})^{2}*f_{\mathrm{Gaussian}}}}\] Here, the asterisk denominates a convolution, \(I\) is the sensor signal and \(f_{\mathrm{Gaussian}}\) refers to a normal distribution with a mean and variance chosen between 4 and 50 pixels. ### Extraction of characteristic voltages from charge stability diagrams For each charge stability diagram we identify the coordinates of the charge triple degeneracy points (triple points) that constitute the corners of the (1,1) charge region. From these we calculate the voltage ranges \([V^{-}_{\mathrm{P}i},V^{+}_{\mathrm{P}i}]\) that keep the system in the (1,1) charge state around the center point \(\textbf{\emph{V}}^{(1,1)}\) (in Fig. 1) or the target voltages \(\textbf{\emph{V}}^{\mathrm{T}}\) (in all other figures). 
The center point \(\textbf{\emph{V}}^{(1,1)}\) of the (1,1) charge region is determined as the centroid of the triple points at the \((2,0)-(1,1)\) and \((1,1)-(2,0)\) charge transitions. Note that the voltage ranges \([V^{-}_{\mathrm{P}i},V^{+}_{\mathrm{P}i}]\) are a measure of the maximum voltage variation on a single plunger gate for which the charge state remains constant. When taking into account more than a single gate voltage a polytope describes the applicable gate voltages that keep the charge state at single electron occupation. For instance, when considering two plunger gates the polytope would be the hexagon typically found in a double quantum dot honeycomb pattern. While we utilize one-dimensional voltage ranges \([V^{-}_{\mathrm{P}i},V^{+}_{\mathrm{P}i}]\) to ease visualizations, after all stressing experiments the target voltage point \(\textbf{\emph{V}}^{\mathrm{T}}\) lies inside the single charge occupation region (inside the respective gate voltage polytope). We have used the triple points for the analysis because of their robustness against latching effects. For instance, in Fig. 1.b the dashed lines show reconstructed charge transition lines of quantum dot Q3 which has a weak coupling to the nearby charge reservoir. Consequentially, \([V^{-}_{\mathrm{P}i},V^{+}_{\mathrm{P}i}]\) can include regions of meta-stable charge state (in between the observed and the reconstructed charge transition). This does not impact our conclusions because, at the end of all stressing experiments, the target voltage point \(\textbf{\emph{V}}^{\mathrm{T}}\) lies in a region of stable charge state. ## Data availability The data and analysis supporting this work are openly available in a public Zenodo repository at [https://doi.org/10.5281/zenodo.8322422](https://doi.org/10.5281/zenodo.8322422) [46]. ## Acknowledgements We gratefully acknowledge D. Degli-Esposti, D. Michalak and M. Mehmandoost for sharing their expertise on the underlying physics and for their valuable advice. Furthermore, we thank S. L. de Snoo for software support and all the members of the Veldhorst, Vandersypen and Scappucci group for many stimulating discussions. We acknowledge funding by Intel Corporation. This work is part of the 'Quantum Inspire - the Dutch Quantum Computer in the Cloud' project (with project number [NWA.1292.19.194]) of the NWA research program 'Research on Routes by Consortia (ORC)', which is funded by the Netherlands Organization for Scientific Research (NWO). ## Competing interest M. Veldhorst is inventor on a patent application related to this work (PCT/N L2022/050377), filling date 30 June 2022. The other authors declare no competing financial interest. Supplementary Information ## S1 Stress voltage induced crosstalk A stress voltage applied to a plunger gate P\(j\) not only alters the potential of the quantum dot Q\(j\) located directly underneath it but also affects neighbouring quantum dots Q\(i\). We investigate this crosstalk by further analyzing the tuning of the Q3-Q4 double quantum dot presented in Fig. 2 of the main text. Fig. S1.a shows the trajectory of the center \(\mathbf{V}^{(1,1)}\) of the (1,1) charge state region in the \((V_{\text{P3}},V_{\text{P4}})\) plane (same as Fig. 2.c of the main text). The crosstalk manifests as a deviation from perfectly horizontal or vertical progressions of \(\mathbf{V}^{(1,1)}\). We quantify it by applying a linear regression as exemplary shown in Fig. S1.b for the section from A to AB. 
The extracted slope \(s_{34}^{\gamma}\) is a measure for the crosstalk of plunger gate P4 onto quantum dot Q3. Two mechanisms can explain the observed crosstalk as illustrated in Fig. S1.c: (1) Tuning the potential landscape of Q4 through the application of stress voltages also affects the potential of Q3 even if all gate voltages are reset to their initial value afterwards. For instance, this effect could be caused by the (de)charging of traps at the interface that capacitively couple to Q3 (\(C_{34}^{\tau}\)) [35; 36; 37; 47; 48]. (2) \(V_{\rm P3}^{(1,1)}\) is defined as the middle point between the (1,0)-(1,1) and (1,1)-(1,2) charge transition at \(V_{\rm P4}=V_{\rm P4}^{(1,1)}\) (and vice versa). Due to the capacitive coupling of P4 onto Q3 (\(C_{34}^{\alpha}\)) a shift in \(V_{\rm P4}^{(1,1)}\) is therefore also reflected in \(V_{\rm P3}^{(1,1)}\). Fig. S1.d portrays the mechanism. It shows a schematic charge stability diagram before (grey charge transition lines) and after (black charge transition lines) tuning the potential below P4 through the application of stress voltages. As the Q3 charge transition lines are tilted by the cross-capacitance \(C_{34}^{\alpha}\), a change in \(V_{\rm P4}^{(1,1)}\) also results in a change of \(V_{\rm P3}^{(1,1)}\) (center point of the light and dark pink vertical bar). To quantify the latter effect we determine the slope \(s_{34}^{\alpha}\) of the Q3 charge transition lines at the (1,1) charge region. Fig. S1.e depicts an exemplary charge stability diagram during the tuning process with the respective Q3 charge transition lines indicated by dashed lines. All extracted \(s_{34}^{\alpha}\) between the points A and AB in Fig. S1.a are plotted in Fig.S1.f. We find that \(s_{34}^{\alpha}\) remains constant throughout the entire stress voltage sequence from A to AB. The same analysis steps are repeated for all sub parts between A and D of the trajectory in Fig. S1.a. Fig. S1.g summarizes all \(s_{ij}^{\bar{\gamma}}\) (diamonds) and \(s_{ij}^{\alpha}\) (downward pointing triangles). The magnitude of the cross-capacitance effect \(s_{ij}^{\alpha}\) is consistently larger than the magnitude of the measured crosstalk \(s_{ij}^{\gamma}\). To estimate the stress voltage crosstalk \(s_{ij}^{\gamma}\) solely caused by shifts of the intrinsic potential we subtract \(s_{ij}^{\alpha}\) from \(s_{ij}^{\gamma}\) and plot the difference in Fig. S1.h. We find a positive voltage stress related crosstalk, which has a similar magnitude as the capacitive effect \(s_{ij}^{\alpha}\). As \(s_{ij}^{\tau}\) and \(s_{ij}^{\alpha}\) have a different sign they partially cancel each other and lead to a reduced effective crosstalk \(s_{ij}^{\gamma}\) when applying stress voltage sequences. ## S2 Additional time traces recorded after applying stress voltages Fig. S2 shows two additional time traces not shown in Fig. 3 in the main text. Note that in Fig. S2.b and c the recording of the time traces was started 20 h and 4 h after the application of the last stress voltage, respectively. The additional curves confirm that after the application of a stress voltage tuning the system remains in a (1,1) charge state for 40 h at least only exhibiting small progressive drifts. ## S3 Charge noise after applying stress voltages As the presented tuning procedure might alter the configuration of charge traps in the heterostructure (see supplementary section S5) we investigate the system charge noise after applying stress voltages. 
Specifically, we measure time traces of the current through the sensing quantum dot (underneath S1) and compute the power spectral density (PSD). To obtain maximum sensitivity of the sensor current to potential fluctuations we tune the sensor plunger gate voltage \(V_{\rm S1}\) to the flank of a Coulomb peak. Fig. S3.a, b and c depict PSD spectra obtained after tuning to the target point A, C and E in Fig. 2.b, respectively. Note that target points A and C are reached by applying positively signed stress voltages and target point E is reached by applying negatively signed stress voltages. The charge noise curves follow the typical \(1/f\) frequency dependence. Therefore we fit them between 0.1 Hz and 5 Hz with \(S_{\epsilon}^{\rm fit}=A\times f^{-\kappa}\) (black line). We find noise amplitudes of \(\sqrt{A}=0.71~{}\mu\)eV/Hz\({}^{1/2}\), \(\sqrt{A}=0.60~{}\mu\)eV/Hz\({}^{1/2}\) and \(\sqrt{A}=0.78~{}\mu\)eV/Hz\({}^{1/2}\) as well as exponents \(\kappa=0.96\). \(\kappa=1.38\) and \(\kappa=1.07\) for target point A, C and E, respectively. These values are comparable to charge noise amplitudes in Si/SiGe reported in the literature [49; 50; 51] and charge noise values measured in the same device during an earlier cooldown [39]. Thus, we find no indication that a spin qubit implemented in a stressed quantum dot would be impaired by a degraded noise environment. However, further research is required as the charge noise sensed by the sensor might not be representative for the charge noise affecting qubits that are tuned in the quantum dots. ## S4 Identification of the four quantum dots In order to identify the quantum dots visible in Fig. 5 of the main text we measure multiple charge stability diagrams by sweeping all pairwise combinations of the device plunger gate voltages. The obtained charge stability diagrams are plotted in Fig. S4. The center left and bottom center panel are identical with the charge stability diagrams shown in Fig. 5 of the main text. All maps are obtained at the same gate voltage configuration and at their center point all plunger gates are set to 1 V. The charge stability diagrams can be analyzed starting from one charge transition line, e.g. the first vertical charge transition line in the center left panel (indicated by a yellow dashed line). Due to its strong coupling to plunger gate P1 we identify it as a charge transition line of quantum dot Q1. We mark the crossing point of this Q1 charge transition line with the \(V_{\rm P1}=1\) V line (vertical white line) by a yellow circle. Then we place another yellow circle marker at identical \(V_{\rm P3}\) on the \(V_{\rm P2}=1\) V line in the center panel of the figure. The vertical white lines inside one row of figure panels are identical line cuts in the gate voltage space. Therefore both marked points identify the same charge transition line of the same quantum dot (Q1). Analogously two charge stability diagrams in one column of figure panels can be compared. By repeating the process for all neighbouring charge stability diagrams one can identify the charge transition lines of four quantum dots Q1-Q4. Note that the charge transition lines of quantum dot Q4 (purple) latch when the sweep direction (black arrow in the upper right of each panel) is nearly perpendicular to the charge transition lines. 
Therefore the crossing point of the first Q4 charge transition line with the \(V_{\rm P1}=1\) V line in the bottom left panel and the crossing point with the \(V_{\rm P3}=1\) V line in the bottom right panel differ from the crossing point with the \(V_{\rm P2}=1\) V line in the bottom center panel. Furthermore, in the left column another nearly vertical charge transition line is visible in the background. However, it shows negligible coupling to the other charge transition lines and likely is a signature of a spurious defect quantum dot outside but close to the active device region. ## S5 Underlying physical mechanisms Applying a stress voltage to a selected gate electrode possibly alters the occupation of charge traps in the gate dielectrics and heterostructure directly underneath [35, 36, 37, 47, 48]. As the electric field bends the conduction band electrons might tunnel into or out of these charge traps. Removing the stress voltage then effectively freezes their occupation which permanently alters the intrinsic potential landscape. Charge traps can be present in the oxide layer [52; 53; 54; 55], originate from unpassivated silicon and germanium dangling bonds [53; 54; 55] or arise from mechanical stress induced by the deposition of metallic gate electrodes [30; 32]. Furthermore, also the relocation of mobile ions might change the intrinsic potential [56]. Note that these processes in general are independent of the quantum well material itself and stress-voltage-controlled shifts of the intrinsic potential also have been observed in Ge/SiGe heterostructures [38; 57]. ## S6 Raw data underlying Fig. 1-4 of the main text Fig. S5, S6, S7, and S8 display the unprocessed charge stability diagram data underlying Fig. 1.b, 2.f, 3.b, and 4.a-e of the main text, respectively. ## S7 Raw data underlying Fig. 5 of the main text Fig. S9 and Fig. S10 show the unprocessed charge stability diagram data underlying Fig. 5.a and b of the main text, respectively. Each map is recorded at a different sensor gate S1 voltage to account for the cross-capacitance effect of the plunger gates on the sensing dot potential which limits the sensing dot sensitivity to small plunger gate voltage ranges. We combine the charge stability diagrams by summing up the sensor current signals as exemplary shown in Fig. S11.a for the data shown in Fig. S10. Afterwards, the signal gradient \(\nabla I\) is calculated as depicted in Fig. S11.b. Finally, a local contrast normalization (see methods section) is applied to allow for an eased identification of charge transition lines across the full map. Fig. S11.c depicts the resulting charge stability diagram which is identical to the charge stability diagram shown in Fig. 5.b of the main text. Figure S9. **Charge stability diagrams underlying Fig. 5.a. (a)-(f)** Multiple charge stability diagrams showing charge transition lines of quantum dot Q1 and Q3. Maps are taken at various sensor gate S1 voltages as indicated above the plots. Figure S11. **Processing of the data underlying Fig. 5.b. (a)** Sum of the sensor response \(I\) of the charge stability diagrams shown in Fig. S10. (b) Gradient \(\nabla I\) of the data shown in (a). (c) Final signal \(\mathrm{LCN}(\nabla I)\) after applying a local contrast normalization to the map shown in (b). 
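The data-processing steps of section S7 (summing the per-\(V_{\rm S1}\) maps, taking the gradient, and applying the local contrast normalization defined in the methods) are straightforward to reproduce numerically. The following Python sketch illustrates the pipeline under stated assumptions: the function and variable names (`lcn`, `combine_maps`, `maps`) are hypothetical, the gradient is taken along the fast sweep axis, and the smoothing width `sigma` (in pixels) and the small constant added before the square root are illustrative choices rather than values from the experiment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lcn(image, sigma):
    """Local contrast normalization from the methods section:
    LCN(I) = (I - I*f) / sqrt(((I - I*f)^2) * f), with * a convolution
    against a Gaussian kernel f of width sigma (in pixels)."""
    residual = image - gaussian_filter(image, sigma)
    local_var = gaussian_filter(residual**2, sigma)
    return residual / np.sqrt(local_var + 1e-12)  # small offset avoids division by zero

def combine_maps(maps, sigma=25):
    """Sketch of the S7 pipeline for charge stability diagrams recorded at
    several sensor plunger (S1) voltages: (a) sum the sensor currents,
    (b) take the gradient along the fast sweep axis, (c) apply LCN."""
    summed = np.sum(np.asarray(maps), axis=0)   # (a) summed sensor response I
    grad = np.gradient(summed, axis=-1)         # (b) gradient of I along the swept plunger
    return lcn(grad, sigma)                     # (c) LCN(grad I)

# Illustration with synthetic placeholder data (six 200 x 200 pixel maps).
maps = [np.random.rand(200, 200) for _ in range(6)]
processed = combine_maps(maps)
```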
## S8 Overview of applied gate voltage configurations Figure S12: **Gate voltage evolution during the presented experiments.** Each panel shows the gate voltage evolution of a single gate during the experiments presented in the figures of the main text as given on the x-axis. Note that \(V_{\rm 2C}=0\) V during all experiments. The inset shows an SEM image of a device nominally identical to the one under study. Confinement gates are outlined by a white dashed line. Labels indicate the gate electrode naming convention utilized throughout the manuscript and in the panels of this figure.
2309.00098
Conformal Hypergraphs: Duality and Implications for the Upper Clique Transversal Problem
Given a hypergraph $\mathcal{H}$, the dual hypergraph of $\mathcal{H}$ is the hypergraph of all minimal transversals of $\mathcal{H}$. The dual hypergraph is always Sperner, that is, no hyperedge contains another. A special case of Sperner hypergraphs are the conformal Sperner hypergraphs, which correspond to the families of maximal cliques of graphs. All these notions play an important role in many fields of mathematics and computer science, including combinatorics, algebra, database theory, etc. In this paper we study conformality of dual hypergraphs and prove several results related to the problem of recognizing this property. In particular, we show that the problem is in co-NP and can be solved in polynomial time for hypergraphs of bounded dimension. In the special case of dimension $3$, we reduce the problem to $2$-Satisfiability. Our approach has an implication in algorithmic graph theory: we obtain a polynomial-time algorithm for recognizing graphs in which all minimal transversals of maximal cliques have size at most $k$, for any fixed $k$.
Endre Boros, Vladimir Gurvich, Martin Milanič, Yushi Uno
2023-08-31T19:37:27Z
http://arxiv.org/abs/2309.00098v4
# Dually conformal hypergraphs ###### Abstract Given a hypergraph \(\mathcal{H}\), the dual hypergraph of \(\mathcal{H}\) is the hypergraph of all minimal transversals of \(\mathcal{H}\). The dual hypergraph is always Sperner, that is, no hyperedge contains another. A special case of Sperner hypergraphs are the conformal Sperner hypergraphs, which correspond to the families of maximal cliques of graphs. All these notions play an important role in many fields of mathematics and computer science, including combinatorics, algebra, database theory, etc. In this paper we study conformality of dual hypergraphs. While we do not settle the computational complexity status of recognizing this property, we show that the problem is in co-NP and can be solved in polynomial time for hypergraphs of bounded dimension. In the special case of dimension 3, we reduce the problem to 2-Satisfiability. Our approach has an implication in algorithmic graph theory: we obtain a polynomial-time algorithm for recognizing graphs in which all minimal transversals of maximal cliques have size at most \(k\), for any fixed \(k\). **Keywords:** hypergraph, conformal hypergraph, dual hypergraph, maximal clique **MSC (2020):** 05C65, 05D15, 05C69 05C85, 68R10, 05-08 ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 Notation and definitions * 2.2 Representation of hypergraphs * 2.3 Subtransverals * 2.4 Conformal hypergraphs * 3 Dually conformal hypergraphs * 3.1 Basic observations * 3.2 Computing the co-occurrence graph of the dual hypergraph * 3.3 The Dual Conformality problem * 3.4 A polynomial case of Dual Conformality * 4 Graphs with small upper clique transversal number * 5 Dually conformal hypegraphs with bounded dimension * 5.1 The general case * 5.2 The case of dimension three * 5.3 The two-uniform case * 6 Discussion Introduction A _hypergraph_ is a finite set of finite sets called _hyperedges_. We consider the following two properties of hypergraphs. A hypergraph is _Sperner_[63] (also called _simple_[10, 11] or a _clutter_[61]) if no hyperedge is contained in another hyperedge. A hypergraph is _conformal_ if for each set \(U\) of vertices, if each pair of vertices in \(U\) is contained in some hyperedge, then \(U\) is contained in some hyperedge (see, e.g., [61]). Both notions play an important role in combinatorics and in many other fields of mathematics and computer science. For example, Sperner hypergraphs and their extensions have numerous applications in algebra, theory of monotone Boolean functions, and databases (see, e.g., Anderson [3] and Engel [29]). Furthermore, conformal hypergraphs are important for databases (see, e.g., Beeri, Fagin, Maier, and Yannakakis [9]) and arise naturally in algebraic topology (see Berge [10, p. 412, Exercise 1]). It is interesting to investigate the above properties in relation with the concepts of blocking and antiblocking hypergraphs. Given a hypergraph \(\mathcal{H}=(V,E)\), the _blocking hypergraph_ (or _blocker_; see, e.g., Schrijver [61]) of \(\mathcal{H}\) is the hypergraph with vertex set \(V\) whose hyperedges are exactly the minimal sets of vertices that contain at least one vertex from each hyperedge. This concept is so natural that it was studied under several other names in the literature, including _transversal hypergraph_ (see Berge [10, 11]), _hitting sets_ (see Karp [43] and also Garey and Johnson [35]), or _Menger dual_ (see Woodall [69]). 
Furthermore, motivated by the equivalent concept of monotone Boolean duality (see, e.g., Crama and Hammer [24]), the blocker of \(\mathcal{H}\) is also called the _dual hypergraph_ of \(\mathcal{H}\) and denoted by \(\mathcal{H}^{d}\). Indeed, in the case of Sperner hypergraphs, the operation of mapping \(\mathcal{H}\) to its dual hypergraph is an involution, that is, \((\mathcal{H}^{d})^{d}=\mathcal{H}\) (see, e.g., Berge [10] and Schrijver [61]). Hypergraph duality has many applications, for example to Nash-solvability of two-person game forms; see Edmonds and Fulkerson [26] for the zero-sum case, and Gurvich and Naumova [41] for the general two-person case. Many other applications and references can be found in the papers by Eiter and Gottlob [28] and Makino and Kameda [54]. The complexity of the _dualization problem_, that is, computing the dual hypergraph \(\mathcal{H}^{d}\) given \(\mathcal{H}\), is a famous open problem (see Fredman and Khachiyan [32]). Similarly to the blocker of a given hypergraph \(\mathcal{H}=(V,E)\), one can define the _antiblocker_ of \(\mathcal{H}\) as the hypergraph \(\mathcal{H}^{a}\) with vertex set \(V\) whose hyperedges are exactly the maximal sets of vertices that contain at most one vertex from each hyperedge (see Fulkerson [34]). The antiblocker was also called _König dual_ by Woodall [69]; see also McKee [56]. Blockers and antiblockers are related to perfect graphs and polyhedral combinatorics and were considered together in several papers [27, 33, 36, 40, 64]. It follows easily from the definitions that for every hypergraph \(\mathcal{H}\), its dual \(\mathcal{H}^{d}\) is always Sperner. Furthermore, as explained above, if \(\mathcal{H}\) is also Sperner, then \((\mathcal{H}^{d})^{d}=\mathcal{H}\). Analogously, for every hypergraph \(\mathcal{H}\), its antiblocker \(\mathcal{H}^{a}\) is always conformal, and if \(\mathcal{H}\) is also conformal, then \((\mathcal{H}^{a})^{a}=\mathcal{H}\), as shown by Woodall [69, 70] (see also Schrijver [61]). However, while the antiblocker \(\mathcal{H}^{a}\) is always Sperner, the dual \(\mathcal{H}^{d}\) need not be conformal. For example, all the 2-element subsets of a 3-element set form a hypergraph such that its dual is not conformal. Moreover, even if a hypergraph is conformal, its dual may fail to be conformal.1 Footnote 1: Consider the 2-uniform hypergraph \(\mathcal{H}\) given by the edges of the 5-cycle, that is, \(\mathcal{H}=(V,E)\) with \(V=\{1,2,3,4,5\}\) and \(E=\{\{1,2\},\{2,3\},\{3,4\},\{4,5\},\{5,1\}\}\). Clearly, \(\mathcal{H}\) is conformal. However, \(E(\mathcal{H}^{d})=\{\{1,2,4\},\{2,3,5\},\{3,4,1\},\{4,5,2\},\{5,1,3\}\}\). In particular, every pair of vertices belongs to a hyperedge and hence \(\mathcal{H}^{d}\) is not conformal. ### Our focus and motivations The above relations summarized in Table 1 motivate our paper studying hypergraphs whose dual is conformal. Conformal hypergraphs were characterized independently by Gilmore [37] (see also [10, 11]) and Zykov [72]; the characterization leads to a polynomial-time recognition algorithm. On the other hand, the complexity of recognizing hypergraphs whose dual is conformal is open. In this paper we focus on this problem and call such hypergraphs _dually conformal_. Further motivations for the study of dually conformal hypergraphs include the following. First, variants of dual conformality are important for the dualization problem (see Khachiyan, Boros, Elbassioni, and Gurvich [44, 45, 46]).
Second, dually conformal hypergraphs have an application in algorithmic graph theory. More precisely, a side result of our approach is a polynomial-time algorithm for the following problem, for any fixed positive integer \(k\): given a graph \(G\), does \(G\) admit a minimal clique transversal (that is, an inclusion-minimal set of vertices that intersects all maximal cliques) of size at least \(k\)? This problem was studied recently by Milanic and Uno [57] and was shown to be NP-hard in general. ### Our results We initiate a study of the recognition problem of dually conformal hypergraphs. As one of our main results, we develop a polynomial-time algorithm for the case of hypergraphs of bounded dimension (maximum size of a hyperedge). For hypergraphs of dimension at most \(3\) we develop an alternative approach based on \(2\)-Satisfiability. We also discuss separately the case of \(2\)-uniform hypergraphs, that is, the case of graphs. Our second main result, obtained using another polynomially solvable special case of the recognition problem of dually conformal hypergraphs, is a polynomial-time algorithm for recognizing graphs in which all minimal clique transversals have size at most \(k\), for any fixed \(k\). ### Structure of the paper In Section 2 we summarize the necessary preliminaries, including some basic properties of conformal hypergraphs, both in the Sperner case and in general. In Section 3 we present some basic results about dually conformal hypergraphs and initiate a study of the corresponding recognition problem by identifying a first polynomially solvable special case. Applications of this algorithm to graphs are presented in Section 4. In Section 5, we discuss hypergraphs of bounded dimension, both in the general case as well as in the special cases of \(3\)-uniform and \(2\)-uniform hypergraphs. We conclude the paper in Section 6 with a discussion and several open questions. ## 2 Preliminaries ### Notation and definitions A _hypergraph_ is a pair \(\mathcal{H}=(V,E)\) where \(V\) is a finite set of _vertices_ and \(E\) is a set of subsets of \(V\) called _hyperedges_ such that every vertex belongs to a hyperedge. For a hypergraph \(\mathcal{H}=(V,E)\) we write \(E(\mathcal{H})=E\) and \(V(\mathcal{H})=V\), and denote by \(\dim(\mathcal{H})=\max_{e\in E}|e|\) its _dimension_. A hypergraph \(\mathcal{H}\) is said to be \(k\)_-uniform_ if \(|e|=k\) for all \(e\in E(\mathcal{H})\). Thus, \(2\)-uniform hypergraphs \begin{table} \begin{tabular}{c|c|c} & Sperner & conformal \\ \hline blocker, \(\mathcal{H}^{d}\) & always & not always \\ \hline antiblocker, \(\mathcal{H}^{a}\) & always & always \\ \end{tabular} \end{table} Table 1: Properties of blockers and antiblockers. are precisely the (finite, simple, and undirected) graphs without isolated vertices. We only consider graphs and hypergraphs with nonempty vertex sets. For a vertex \(v\in V\) its degree \(\deg(v)=\deg_{\mathcal{H}}(v)\) is the number of hyperedges in \(E\) that contain \(v\) and \(\Delta(\mathcal{H})=\max_{v\in V}\deg(v)\) is the maximum degree of \(\mathcal{H}\). The _size_ of a hypergraph \(\mathcal{H}\) is the number of hyperedges in \(\mathcal{H}\). A hyperedge of \(\mathcal{H}\) is said to be _maximal_ if it is not contained in any other hyperedge. A hypergraph is _Sperner_ if no hyperedge contains another, or, equivalently, if every hyperedge is maximal. A _transversal_ of a hypergraph \(\mathcal{H}=(V,E)\) is a set of vertices intersecting all hyperedges. 
A transversal is _minimal_ if it does not contain any other transversal. Recall that the _dual hypergraph_ of a hypergraph \(\mathcal{H}=(V,E)\) is the hypergraph \(\mathcal{H}^{d}\) with vertex set \(V\), whose hyperedges are exactly the minimal transversals of \(\mathcal{H}\). **Fact 2.1** (Folklore, see, e.g., Berge [11]).: _Let \(\mathcal{H}\) be a Sperner hypergraph. Then \((\mathcal{H}^{d})^{d}=\mathcal{H}\)._ ### Representation of hypergraphs In this subsection we describe a useful data structure for representing hypergraphs. Let \(\mathcal{H}=(V,E)\) be a hypergraph. We write \(n=|V|\) and \(m=|E|\). An _incident pair_ of \(\mathcal{H}\) is a pair \((v,e)\) such that \(v\in e\in E\). We assume that \(\mathcal{H}\) is represented by a complete list of its edges, as subsets of \(V\), and equipped with a fixed pair of orderings of its vertices and edges, say \(V=\{v_{1},\ldots,v_{n}\}\) and \(E=\{e_{1},\ldots,e_{m}\}\). We first perform a preprocessing step taking time \(\mathcal{O}(|V||E|)\) in order to compute the _edge-vertex incidence matrix_ of \(\mathcal{H}\), a binary matrix \(I\in\{0,1\}^{E\times V}\) with rows indexed by the hyperedges of \(\mathcal{H}\), columns indexed by the vertices of \(\mathcal{H}\), and \(I_{e,v}=1\) if and only if \(v\in e\). Having constructed the edge-vertex incidence matrix, we can look up in constant time whether, given a vertex \(v\in V\) and hyperedge \(e\in E\), the pair \((v,e)\) is an incident pair of \(\mathcal{H}\). Next we construct a _doubly-linked representation of incident pairs_ of \(\mathcal{H}\), that is, a collection \(L\) of doubly linked lists of incident pairs, one for each vertex and one for each hyperedge. Each incident pair contains a pointer to its vertex, another one to its hyperedge, and having four links - horizontal prev and next and vertical prev and next. The horizontal links form a doubly linked circular list attached to the hyperedge, and the vertical ones form a doubly linked circular list attached to the vertex. See Figure 1 for an example. Due to the doubly linked nature insertions can be done in constant time. We can thus build the structure \(L\) in \(\mathcal{O}(|V||E|)\) time, as follows. 1. First, we initialize the doubly linked lists for each vertex and hyperedge to be the doubly linked lists consisting only of the corresponding vertex, resp. hyperedge. 2. Then, we traverse the edge-vertex incidence matrix \(I\) row by row. As we traverse a row labeled by a hyperedge \(e\), we build the doubly linked list corresponding to this hyperedge (with horizontal prev and next links) along with the pointers to \(e\). At the same time, when a new incident pair \((v,e)\) is added to the list, the doubly linked list corresponding to the vertex \(v\) is augmented with this pair (with vertical prev and next links) and the pointer to vertex \(v\). The usefulness of the above data structures is summarized in the following. **Proposition 2.2**.: _Given a hypergraph \(\mathcal{H}=(V,E)\), with \(V=\{v_{1},\ldots,v_{n}\}\) and \(E=\{e_{1},\ldots,e_{m}\}\), there is an algorithm running in time \(\mathcal{O}(|V||E|)\) that computes its edge-vertex incidence matrix and the doubly-linked representation of its incident pairs._ _Using the incidence matrix we can test in constant time the relation \(v\in e\) for all \(v\in V\) and \(e\in E\). 
Using the doubly-linked representation of the incident pairs we can:_ * _list the vertices of a hyperedge_ \(e\in E\) _in time linear in_ \(|e|\leq\dim(\mathcal{H})\)_;_ * _list the hyperedges containing a vertex_ \(v\in V\) _in time linear in_ \(\deg(v)\leq\Delta(\mathcal{H})\)_;_ * _compute for any two hyperedges_ \(e\) _and_ \(f\) _their union, intersection, and the two set differences_ \(e\setminus f\) _and_ \(f\setminus e\) _in time_ \(\mathcal{O}(|e|+|f|)=\mathcal{O}(\dim(\mathcal{H}))\)_; in particular, we can test in time_ \(\mathcal{O}(\dim(\mathcal{H}))\) _if_ \(e\subseteq f\)_._ Let us also remark that, when discussing the running times of algorithms on graphs (in Section 4), we assume that the adjacency lists are sorted. If they are initially not sorted, we first sort them in time \(\mathcal{O}(|V|+|E|)\) (see [38]). ### Subtransversals Given a hypergraph \(\mathcal{H}=(V,E)\), a set \(S\subseteq V\) is a _subtransversal_ of \(\mathcal{H}\) if \(S\) is subset of a minimal transversal. The following characterization of subtransversals due to Boros, Gurvich, and Hammer [15, Theorem 1] was formulated first in terms of prime implicants of monotone Boolean functions and their duals, and reproved in terms of hypergraphs in [14]. Given a set \(S\subseteq V\) and a vertex \(v\in S\), we denote by \(E_{v}(S)\) the set of hyperedges \(e\in E\) such that \(e\cap S=\{v\}\). **Theorem 2.3** (Boros, Gurvich, Elbassioni, and Khachiyan [14]).: _Let \(\mathcal{H}=(V,E)\) be a hypergraph and let \(S\subseteq V\). Then \(S\) is a subtransversal of \(\mathcal{H}\) if and only if there exists a collection of hyperedges \(\{e_{v}\in E_{v}(S):v\in S\}\) such that the set \((\bigcup_{v\in S}e_{v})\setminus S\) does not contain any hyperedge of \(\mathcal{H}\)._ Note that edges that intersect \(S\) in more than one vertex do not influence the fact whether \(S\) is a subtransversal or not. The problem of determining if a given set \(S\) is a subtransversal is \(\mathsf{NP}\)-complete even for \(2\)-uniform hypergraphs, see [14, 15]. For sets of bounded cardinality, however, Theorem 2.3 leads to a polynomial-time algorithm. **Corollary 2.4**.: _Let \(\mathcal{H}=(V,E)\) be a hypergraph with dimension \(k\) and maximum degree \(\Delta\), given by an edge-vertex incidence matrix and a doubly-linked representation of its incident pairs, and let \(S\subseteq V\). Then, there exists an algorithm running in time_ \[\mathcal{O}\left(k|E|\cdot\min\left\{\Delta^{|S|}\,,\left(\frac{|E|}{|S|} \right)^{|S|}\right\}\right)\] Figure 1: A hypergraph \(\mathcal{H}\), its edge-vertex incidence matrix, and the doubly-linked representation of its incident pairs. that determines if \(S\) is a subtransversal of \(\mathcal{H}\). In particular, if \(|S|=\mathcal{O}(1)\), the complexity is \(\mathcal{O}(k|E|\Delta^{|S|})\)._ Proof.: Note that any minimal transversal has at most as many vertices as the number of hyperedges. Thus, if \(|S|>|E|\), then we can determine in time \(\mathcal{O}(|S|+|E|)=\mathcal{O}(k|E|)\) that \(S\) is not a subtransversal. Note that \(\mathcal{O}(|S|)=\mathcal{O}(|V|)=\mathcal{O}(k|E|)\), since \(V=\bigcup_{e\in E}e\). From now on, we assume that \(|S|\leq|E|\). To a subset \(S\subseteq V\) we associate the following families of edges: \[E_{v}(S) = \{e\in E\mid e\cap S=\{v\}\}\quad\text{ for }\quad v\in S,\text{ and,}\] \[E_{\omega}(S) = \{e\in E\mid e\cap S=\emptyset\}.\] We describe the desired algorithm with the following procedure. 
Procedure SubTransversal: **Input:**: A hypergraph \(\mathcal{H}=(V,E)\) given by an edge-vertex incidence matrix and a doubly-linked representation \(L\) of its incident pairs, a subset \(S\subseteq V\) such that \(|S|\leq|E|\). **Output:**: Yes if \(S\) is a subset of a minimal transversal of \(\mathcal{H}\), and No otherwise. **Step 1:**: Compute the families \(E_{u}(S)\) for \(u\in S\cup\{\omega\}\qquad\text{in }\mathcal{O}(|S|+k|E|)=\mathcal{O}(k|E|)\) time. We can do this in the stated time by first traversing the set \(S\) and marking each vertex that belongs to \(S\). Then for each hyperedge \(e\in E\) we traverse the corresponding list of \(\mathcal{O}(k)\) vertices; if the hyperedge \(e\) contains no vertex from \(S\), we put it in \(E_{\omega}(S)\), and if it contains a unique vertex from \(S\), say \(v\), we put it in \(E_{v}(S)\). **1.1**: If \(E_{v}(S)=\emptyset\) for some \(v\in S\), then STOP and output No, \(\mathcal{O}(|S|)\) time. **1.2**: otherwise if \(E_{\omega}(S)=\emptyset\), then STOP and output Yes (\(S\) is a minimal transversal of \(\mathcal{H}\) in this case) \(\mathcal{O}(1)\) time. **Step 2:**: Initialize an array \(A\in\{0,1\}^{V}\) of length \(n\) by zeros in \(\mathcal{O}(|V|)=\mathcal{O}(k|E|)\) time. (Recall that \(|V|\leq k|E|\), since \(V=\bigcup_{e\in E}e\).) For each selection \(e_{v}\in E_{v}(S)\), \(v\in S\)\(\prod_{v\in S}|E_{v}(S)|\leq\min\left\{\Delta^{|S|},\left(\frac{|E|}{|S|} \right)^{|S|}\right\}\) times: **2.1**: Compute \(U=\bigcup_{v\in S}e_{v}\) in \(\mathcal{O}(k|S|)\) time. To compute the set \(U\) in time \(\mathcal{O}(k|S|)\), we first create an object for \(U\) with a root of a doubly linked list that is initially empty (next and prev point back to itself). As we look up the vertices of the edges \(e_{v}\), \(v\in S\), one by one, in total time \(O(k|S|)\) for each such vertex \(u\in e_{v}\) we first check the value of \(A_{u}\). If \(A_{u}=0\), we set \(A_{u}=1\) and then we add \(u\), with the corresponding prev and next links, to the list of \(U\). At the end of this procedure, the array \(A\) will have \(A_{u}=1\) if and only if \(u\in U\). **2.2**: STOP and output Yes if \(e\not\subseteq U\) for all \(e\in E_{\omega}(S)\quad\text{in }\mathcal{O}(k|E_{\omega}(S)|)=\mathcal{O}(k|E|)\) time. For a given \(e\in E_{\omega}(S)\) the test \(e\not\subseteq U\) can be performed in time \(\mathcal{O}(|e|)=\mathcal{O}(k)\) by scanning the doubly linked list of \(e\) and checking the corresponding entries of the array \(A\). * Restore the array \(A\) to the all-zero array in \(\mathcal{O}(k|S|)\) time. This is achieved by scanning the set \(U\) once, in linear time in the length of this set, which is \(\mathcal{O}(k|S|)\), and switching back the corresponding entries in the array \(A\) to zero. * STOP and output No in \(\mathcal{O}(1)\) time. Thus, we get two upper estimates for the running time of SubTransversal: \[\mathcal{O}\left(k|E|\Delta^{|S|}\right)\quad\text{ and }\quad\mathcal{O}\left(k|E| \left(\frac{|E|}{|S|}\right)^{|S|}\right)\,,\] as claimed. ### Conformal hypergraphs In this section we summarize some basic properties of conformal hypergraphs: a characterization of conformal Sperner hypergraphs, which establishes a close connections with graphs, a characterization of general conformal hypergraphs, and a polynomial-time recognition algorithm of conformal hypergraphs. All the graphs in this paper are finite, simple, and undirected. We use standard graph theory terminology, following West [68]. 
Given a hypergraph \(\mathcal{H}=(V,E)\), its _co-occurrence graph_ is the graph \(G(\mathcal{H})\) with vertex set \(V\) that has an edge between two distinct vertices \(u\) and \(v\) if there is a hyperedge \(e\) of \(\mathcal{H}\) that contains both \(u\) and \(v\). **Observation 2.5**.: _For every hypergraph \(\mathcal{H}\), every hyperedge of \(\mathcal{H}\) is a clique in the co-occurrence graph \(G(\mathcal{H})\)._ Note however that hyperedges of \(\mathcal{H}\) are not necessarily maximal cliques of \(G(\mathcal{H})\). For example, if \(\mathcal{H}\) is the complete graph \(K_{3}\), then \(G(\mathcal{H})=\mathcal{H}\), but \(G(\mathcal{H})\) has a unique maximal clique of size \(3\). Recall that a hypergraph is said to be _conformal_ if for each set \(U\) of vertices, if each pair of vertices in \(U\) is contained in some hyperedge, then \(U\) is contained in some hyperedge. It is not difficult to see that a hypergraph \(\mathcal{H}\) is conformal if and only if every maximal clique of its co-occurrence graph is a hyperedge of \(\mathcal{H}\) (in fact, this was the definition of conformality given by Berge [10, 11]). Furthermore, a Sperner hypergraph \(\mathcal{H}\) is conformal if and only if every maximal clique of its co-occurrence graph is a hyperedge of \(\mathcal{H}\) (see [9]). We now recall a characterization of Sperner conformal hypergraphs due to Beeri, Fagin, Maier, and Yannakakis [9] (see also Berge [10, 11] for the equivalence between properties 1 and 2). The _clique hypergraph_ of a graph \(G=(V,E)\) is the hypergraph with vertex set \(V\) with hyperedges exactly the maximal cliques in \(G\). **Theorem 2.6** ([9], see also [10, 11]).: _For every Sperner hypergraph \(\mathcal{H}\), the following properties are equivalent._ 1. \(\mathcal{H}\) _is conformal._ 2. \(\mathcal{H}\) _is the clique hypergraph of some graph._ 3. \(\mathcal{H}\) _is the clique hypergraph of its co-occurrence graph._ We now generalize Theorem 2.6 by characterizing the conformality property for general (not necessarily Sperner) hypergraphs. **Lemma 2.7**.: _Let \(\mathcal{H}\) be a hypergraph such that there exists a graph \(G=(V,E)\) and a collection \(\mathcal{C}\) of cliques of \(G\) containing all maximal cliques of \(G\) (and possibly some others) such that \(\mathcal{H}=(V,\mathcal{C})\). Then \(G=G(\mathcal{H})\)._ Proof.: We have \(V(G(\mathcal{H}))=V(\mathcal{H})=V=V(G)\). Furthermore, two distinct vertices \(u\) and \(v\) are adjacent in \(G\) if and only if there exists a maximal clique in \(G\) containing both \(u\) and \(v\), and they are adjacent in the co-occurrence graph \(G(\mathcal{H})\) if and only if there exists a hyperedge of \(\mathcal{H}\) containing \(u\) and \(v\). The assumption on \(\mathcal{C}\) implies that there exists a maximal clique in \(G\) containing both vertices if and only if there exists a set in \(\mathcal{C}\) containing both. Thus, since \(E(\mathcal{H})=\mathcal{C}\), we infer that graphs \(G\) and \(G(\mathcal{H})\) have the same edge sets. We conclude that \(G=G(\mathcal{H})\). **Theorem 2.8**.: _For every hypergraph \(\mathcal{H}\), the following properties are equivalent._ 1. \(\mathcal{H}\) _is conformal._ 2. _Every maximal clique in_ \(G(\mathcal{H})\) _is a maximal hyperedge of_ \(\mathcal{H}\)_._ 3. 
_There exists a graph_ \(G=(V,E)\) _and a collection_ \(\mathcal{C}\) _of cliques of_ \(G\) _containing all maximal cliques of_ \(G\) _(and possibly some others) such that_ \(\mathcal{H}=(V,\mathcal{C})\)_._ Proof.: We show first that property 1 implies property 2. Suppose first that \(\mathcal{H}\) is conformal, that is, every maximal clique in \(G(\mathcal{H})\) is a hyperedge of \(\mathcal{H}\). Let \(C\) be a maximal clique in \(G(\mathcal{H})\). Since \(\mathcal{H}\) is conformal, \(C\) is a hyperedge of \(\mathcal{H}\). It is in fact a maximal hyperedge, since if \(C\) is properly contained in another hyperedge \(e\) of \(\mathcal{H}\), then by Observation 2.5 we obtain that \(e\) is a clique in \(G(\mathcal{H})\) properly containing \(C\), contrary to the assumption that \(C\) is a maximal clique. Thus, property 2 holds. Next, we show that property 2 implies property 3. To this end, suppose that every maximal clique in \(G(\mathcal{H})\) is a maximal hyperedge of \(\mathcal{H}\), and let \(G=G(\mathcal{H})\) and \(\mathcal{C}=E(\mathcal{H})\). We then have \(V(\mathcal{H})=V(G)\), by Observation 2.5 every member of \(\mathcal{C}\) is a clique of \(G\), and by property 2, every maximal clique in \(G\) belongs to \(\mathcal{C}\). Thus, property 3 holds for \(G=G(\mathcal{H})\) and \(\mathcal{C}=E(\mathcal{H})\). We show next that property 3 implies property 1. Suppose that there exists a graph \(G=(V,E)\) and a collection \(\mathcal{C}\) of cliques of \(G\) containing all maximal cliques of \(G\) (and possibly some others) such that \(\mathcal{H}=(V,\mathcal{C})\). By Lemma 2.7, we have \(G=G(\mathcal{H})\). This implies that every maximal clique in \(G(\mathcal{H})=G\) is a hyperedge of \(\mathcal{H}\), thus \(\mathcal{H}\) is conformal and property 1 holds. Note that the proof of Theorem 2.8 shows that if \(\mathcal{H}=(V,\mathcal{C})\) for some graph \(G=(V,E)\) and a collection \(\mathcal{C}\) of cliques of \(G\) containing all maximal cliques of \(G\), then not only the collection \(\mathcal{C}\) but also the graph \(G\) is uniquely determined from \(\mathcal{H}\); namely, \(G\) is the co-occurrence graph of \(\mathcal{H}\). Checking conformality of a given hypergraph can be done in polynomial time, due to the following characterization. **Theorem 2.9** (Gilmore [37]; see also [10, 11, 72]).: _A hypergraph \(\mathcal{H}=(V,E)\) is conformal if and only if for every three hyperedges \(e_{1},e_{2},e_{3}\in E\) there exists a hyperedge \(e\in E\) such that_ \[(e_{1}\cap e_{2})\cup(e_{1}\cap e_{3})\cup(e_{2}\cap e_{3})\subseteq e\,.\] **Proposition 2.10**.: _Given a hypergraph \(\mathcal{H}=(V,E)\) with dimension \(k\), it can be tested in time \(\mathcal{O}(|V||E|+k|E|^{4})\) if \(\mathcal{H}\) is conformal._ Proof.: Using Proposition 2.2, we compute in time \(\mathcal{O}(|V||E|)\) the edge-vertex incidence matrix of \(\mathcal{H}\) and the doubly-linked representation of its incident pairs. We then check the conformality of \(\mathcal{H}\) by verifying the condition from Theorem 2.9. This can be done by iterating over all \(\mathcal{O}(|E|^{3})\) triples \(\{e_{1},e_{2},e_{3}\}\) of hyperedges, and for each such triple compute in time \(\mathcal{O}(k)\) the set \(S=(e_{1}\cap e_{2})\cup(e_{1}\cap e_{3})\cup(e_{2}\cap e_{3})\), and iterate over all edges \(e\in E\) to verify the inclusion \(S\subseteq e\). The overall running time of this procedure is \(\mathcal{O}(|E|^{3}\cdot(k+|E|\cdot k))=\mathcal{O}(k|E|^{4})\). 
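For illustration, the triple condition of Theorem 2.9 translates directly into a few lines of Python. The sketch below is a plain, unoptimized check and does not use the incidence-matrix preprocessing behind the \(\mathcal{O}(|V||E|+k|E|^{4})\) bound of Proposition 2.10; the two example hypergraphs are the 5-cycle and its dual from Footnote 1.

```python
from itertools import combinations

def is_conformal(hyperedges):
    """Theorem 2.9: a hypergraph is conformal iff for every three hyperedges
    e1, e2, e3 the set (e1 & e2) | (e1 & e3) | (e2 & e3) is contained in some
    hyperedge. Triples with repeated hyperedges are trivially satisfied, so
    scanning distinct triples suffices."""
    E = [frozenset(e) for e in hyperedges]
    for e1, e2, e3 in combinations(E, 3):
        s = (e1 & e2) | (e1 & e3) | (e2 & e3)
        if not any(s <= e for e in E):
            return False
    return True

# The 5-cycle of Footnote 1 is conformal, but its dual is not.
cycle5 = [{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 1}]
dual5 = [{1, 2, 4}, {2, 3, 5}, {3, 4, 1}, {4, 5, 2}, {5, 1, 3}]
print(is_conformal(cycle5), is_conformal(dual5))  # True False
```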
Dually conformal hypergraphs We say that a hypergraph \(\mathcal{H}\) is _dually conformal_ if its dual hypergraph \(\mathcal{H}^{d}\) is conformal. In this section we present some basic observations about dually conformal hypergraphs and initiate a study of the corresponding recognition problem. While we do not settle the computational complexity status of the problem, we show that the problem is in \(\mathsf{co}\text{-}\mathsf{NP}\) and develop a polynomial-time algorithm for a special case. ### Basic observations Since the dual hypergraph of any hypergraph \(\mathcal{H}\) is the same as the dual hypergraph of the hypergraph obtained from \(\mathcal{H}\) by keeping only the inclusion-minimal hyperedges, in order to test dual conformality of a hypergraph we can assume without loss of generality that the hypergraph is Sperner. In the next proposition, we characterize the dually conformal Sperner hypergraphs using a connection with graphs. Given a graph \(G\), a set of vertices that intersects all maximal cliques of \(G\) is called a _clique transversal_ in \(G\). A clique transversal in \(G\) is _minimal_ if it does not contain any other clique transversal. **Proposition 3.1**.: _Let \(\mathcal{H}\) be a hypergraph. Then the following statements are equivalent._ 1. \(\mathcal{H}\) _is a dually conformal Sperner hypergraph._ 2. _There exists a graph_ \(G\) _such that_ \(\mathcal{H}\) _is the hypergraph of all minimal clique transversals of_ \(G\)_._ Proof.: Let \(\mathcal{H}\) be a dually conformal Sperner hypergraph. Let \(G\) be the co-occurrence graph of \(\mathcal{H}^{d}\). Since \(\mathcal{H}^{d}\) is a conformal Sperner hypergraph, Theorem 2.6 implies that \(\mathcal{H}^{d}\) is the clique hypergraph of \(G\). But then \(\mathcal{H}=(\mathcal{H}^{d})^{d}\) is exactly the hypergraph of all minimal clique transversals of \(G\). Conversely, let \(G\) be a graph and let \(\mathcal{H}\) be the hypergraph of all minimal clique transversals of \(G\). By construction, \(\mathcal{H}\) is a Sperner hypergraph. Then \(\mathcal{H}^{d}\) is the clique hypergraph of \(G\) and thus \(\mathcal{H}^{d}\) is conformal. The following characterization of dually conformal hypergraphs follows immediately from the definition. **Observation 3.2**.: _For every hypergraph \(\mathcal{H}\), the following properties are equivalent._ 1. \(\mathcal{H}\) _is dually conformal._ 2. _Every maximal clique in_ \(G(\mathcal{H}^{d})\) _is a minimal transversal of_ \(\mathcal{H}\)_._ Fix a hypergraph \(\mathcal{H}\) and let \(G=G(\mathcal{H}^{d})\). By Observation 3.2, a necessary and sufficient condition for \(\mathcal{H}\) to be dually conformal is that every maximal clique of \(G\) is a minimal transversal of \(\mathcal{H}\). Thus, in general, there are two possible reasons why \(\mathcal{H}\) could fail to be dually conformal. **Corollary 3.3**.: _Let \(\mathcal{H}\) be a hypergraph and let \(G=G(\mathcal{H}^{d})\). Then \(\mathcal{H}\) is not dually conformal if and only if one of the following two conditions holds._ 1. \(G\) _contains a maximal clique_ \(C\) _that is not a transversal of_ \(\mathcal{H}\)_, or_ 2. \(G\) _contains a maximal clique_ \(C\) _that is a transversal of_ \(\mathcal{H}\) _but not a minimal one._ As shown by the following two examples, the two conditions are independent of each other. **Example 3.4**.: The following hypergraph satisfies property (a) but not property (b). 
Let \(\mathcal{H}\) be the hypergraph with vertex set \(\{1,\ldots,6\}\) and hyperedges \(\{1,2\}\), \(\{1,3\}\), \(\{2,3\}\), \(\{1,4\}\), \(\{2,5\}\), \(\{3,6\}\), and \(\{4,5,6\}\). Then the hyperedges of \(\mathcal{H}^{d}\) are \(\{1,2,6\}\), \(\{1,3,5\}\), and \(\{2,3,4\}\). Its co-occurrence graph \(G=G(\mathcal{H}^{d})\) is shown in Fig. 2. Note that \(C=\{1,2,3\}\) is a maximal clique in \(G\) that is not a transversal of \(\mathcal{H}\), since it misses the hyperedge \(\{4,5,6\}\). Thus, \(\mathcal{H}\) satisfies property (a). On the other hand, all maximal cliques in \(G\) other than \(C\) are minimal transversals of \(\mathcal{H}\), and hence \(\mathcal{H}\) does not satisfy property (b). **Example 3.5**.: The following hypergraph satisfies property (b) but not property (a). Let \(G\) be the complete graph \(K_{3}\) and let \(\mathcal{H}=G\), that is, \(V(\mathcal{H})=\{1,2,3\}\) and \(E(\mathcal{H})=\{\{1,2\},\{1,3\},\{2,3\}\}\). Then \(\mathcal{H}^{d}=\mathcal{H}\) and \(G(\mathcal{H}^{d})=G\). Graph \(G\) is complete and hence contains a unique maximal clique \(C\), namely \(C=\{1,2,3\}\). This clique is a transversal of \(\mathcal{H}\) but not a minimal one. Thus, \(\mathcal{H}\) satisfies property (b) but not property (a). Furthermore, as shown by the following example, the two conditions can occur simultaneously. **Example 3.6**.: The following hypergraph satisfies both properties (a) and (b). Let \(\mathcal{H}\) have vertex set \(\{1,\ldots,6\}\) and hyperedges \(\{1,4,5\}\), \(\{1,4,6\}\), \(\{2,4,5\}\), \(\{2,5,6\}\), \(\{3,4,6\}\), \(\{3,5,6\}\), and \(\{4,5,6\}\). Then the hyperedges of \(\mathcal{H}^{d}\) are \(\{1,2,6\}\), \(\{1,3,5\}\), \(\{2,3,4\}\), \(\{4,5\}\), \(\{4,6\}\), and \(\{5,6\}\). Its co-occurrence graph \(G=G(\mathcal{H}^{d})\) is isomorphic to the complete multipartite graph \(K_{2,2,2}\), with parts \(\{1,4\}\), \(\{2,5\}\), and \(\{3,6\}\); two vertices in \(G\) are adjacent to each other if and only if they belong to different parts. Note that the set \(C=\{1,2,3\}\) is a maximal clique in \(G\) that is not a transversal of \(\mathcal{H}\), since it misses the hyperedge \(\{4,5,6\}\). Thus, \(\mathcal{H}\) satisfies property (a). Furthermore, \(C^{\prime}=\{4,5,6\}\) is a maximal clique in \(G\) that is a transversal of \(\mathcal{H}\) but not a minimal one, since it properly contains the minimal transversal \(\{4,5\}\). Hence, \(\mathcal{H}\) also satisfies property (b). ### Computing the co-occurrence graph of the dual hypergraph Immediately from Observation 2.5 we obtain the following. **Corollary 3.7**.: _Every hyperedge of \(\mathcal{H}^{d}\) is a clique in \(G(\mathcal{H}^{d})\)._ **Proposition 3.8**.: _Given a hypergraph \(\mathcal{H}=(V,E)\) with dimension \(k\) and maximum degree \(\Delta\), the co-occurrence graph of the dual hypergraph \(\mathcal{H}^{d}\) can be computed in time \(\mathcal{O}(k|E|\Delta^{2}|V|^{2})\)._ Proof.: Using Proposition 2.2, we compute in time \(\mathcal{O}(|V||E|)\) the edge-vertex incidence matrix of \(\mathcal{H}\) and the doubly-linked representation of its incident pairs. Two distinct vertices \(u\) and \(v\) in \(V\) are adjacent in the co-occurrence graph of \((\mathcal{H})^{d}\) if and only if the set \(\{u,v\}\) is a subtransversal of \(\mathcal{H}\). Applying Corollary 2.4 we can test in time \(\mathcal{O}(k|E|\Delta^{2})\) if any such set is a subtransversal of \(\mathcal{H}\). As the total number of pairs is \(\mathcal{O}(|V|^{2})\), the claimed time complexity follows. 
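To make Proposition 3.8 concrete, the sketch below computes \(G(\mathcal{H}^{d})\) by testing every pair of vertices with a brute-force version of the subtransversal criterion of Theorem 2.3. It enumerates the choices \(e_{v}\in E_{v}(S)\) directly instead of following the bookkeeping of Corollary 2.4, so it illustrates the logic rather than the stated running time; the function names are illustrative. Running it on the hypergraph of Example 3.4 yields the co-occurrence graph discussed there, in which \(\{1,2,3\}\) is a maximal clique missing the hyperedge \(\{4,5,6\}\).

```python
from itertools import combinations, product

def is_subtransversal(S, E):
    """Theorem 2.3 (brute force): S lies in some minimal transversal iff for
    each v in S one can pick a hyperedge e_v with e_v & S == {v} such that the
    union of the picks, minus S, contains no hyperedge of H."""
    S = frozenset(S)
    E = [frozenset(e) for e in E]
    E_v = {v: [e for e in E if e & S == {v}] for v in S}
    if any(not choices for choices in E_v.values()):
        return False
    for picks in product(*(E_v[v] for v in S)):
        U = frozenset().union(*picks) - S
        if not any(e <= U for e in E):
            return True
    return False

def co_occurrence_of_dual(V, E):
    """Proposition 3.8: distinct vertices u, v are adjacent in G(H^d) iff
    {u, v} is a subtransversal of H; returns the edge set of G(H^d)."""
    return {frozenset((u, v)) for u, v in combinations(V, 2)
            if is_subtransversal({u, v}, E)}

# Hypergraph of Example 3.4.
V = range(1, 7)
E = [{1, 2}, {1, 3}, {2, 3}, {1, 4}, {2, 5}, {3, 6}, {4, 5, 6}]
print(sorted(tuple(sorted(e)) for e in co_occurrence_of_dual(V, E)))
```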
Figure 2: A hypergraph \(\mathcal{H}\), its dual hypergraph \(\mathcal{H}^{d}\), and the co-occurrence graph of \(\mathcal{H}^{d}\). **Corollary 3.9**.: _Given a hypergraph \(\mathcal{H}=(V,E)\), the co-occurrence graph of the dual hypergraph \(\mathcal{H}^{d}\) can be computed in time \(\mathcal{O}(|V|^{3}|E|^{3})\)._ Proof.: Immediate from Proposition 3.8 using the fact that the dimension and the maximum degree of \(\mathcal{H}\) are bounded by \(k\leq|V|\) and \(\Delta\leq|E|\), respectively. ### The Dual Conformality problem We are interested in the complexity of testing conformality for the dual hypergraph of a given hypergraph \(\mathcal{H}\). Formally, we introduce the following problem. \begin{tabular}{|l l|} \hline Dual Conformality & \multicolumn{1}{c|}{} \\ _Input:_ & A hypergraph \(\mathcal{H}\). \\ _Question:_ & Is the dual hypergraph \(\mathcal{H}^{d}\) conformal? \\ \hline \end{tabular} Observation 3.2 has the following algorithmic consequence. **Proposition 3.10**.: _Given a hypergraph \(\mathcal{H}=(V,E)\) with dimension \(k\) and maximum degree \(\Delta\), the Dual Conformality problem is solvable in time \(\mathcal{O}(|V|^{2}(k|E|\Delta^{2}+(|V|+|E|)\cdot|E(\mathcal{H}^{d})|))\)._ Proof.: First, we compute the co-occurrence graph \(G=G(\mathcal{H}^{d})\) of \(\mathcal{H}^{d}\). By Proposition 3.8, this can be done in time \(\mathcal{O}(k|E|\Delta^{2}|V|^{2})\). By Corollary 3.7, \(\mathcal{H}^{d}\) has only hyperedges that are cliques of \(G\). Now, the maximal cliques of \(G\) can be generated with polynomial delay using the algorithm by Tsukiyama et al. [66] on the complement of \(G\). More precisely, after a preprocessing step that takes \(\mathcal{O}(|V|^{2})\) time, the algorithm outputs all the maximal cliques of \(G\) one by one, spending time \(\mathcal{O}(|V|^{3})\) between two consecutive output cliques. We run the algorithm, and every time it outputs a maximal clique of \(G\) check if it belongs to \(\mathcal{H}^{d}\) or not. This is easy to check in time \(\mathcal{O}(|V|^{2}|E|)\): it must be a transversal of \(\mathcal{H}\) and must be minimal. If you get a NO at any time, then stop, and the answer is NO, otherwise, the answer is YES. The total running time of this approach is \(\mathcal{O}(k|E|\Delta^{2}|V|^{2}+|V|^{2})+\mathcal{O}((|V|^{3}+|V|^{2}|E|) \cdot|E(\mathcal{H}^{d})|)\), which simplifies to \(\mathcal{O}(k|E|\Delta^{2}|V|^{2}+|V|^{2}(|V|+|E|)\cdot|E(\mathcal{H}^{d})|)\). **Remark 3.11**.: The approach of the proof of Proposition 3.10 actually shows the following. Assume that there exists an algorithm for generating all maximal cliques of an \(n\)-vertex graph \(G\) with preprocessing time \(\mathcal{O}(T_{1}(n))\) and that spends time \(\mathcal{O}(T_{2}(n))\) between outputting any two consecutive maximal cliques. Then, given a hypergraph \(\mathcal{H}=(V,E)\), the Dual Conformality problem is solvable in time \(\mathcal{O}(|V|^{3}|E|^{3}+T_{1}(|V|)+(|V|^{2}|E|+T_{2}(|V|))\cdot|E(\mathcal{ H}^{d})|)\). In particular, one could apply not only the algorithm by Tsukiyama et al. but also any of the more recent faster algorithms, e.g., those in [20, 21, 55]. Of course, the size of \(\mathcal{H}^{d}\) could easily be exponential in the size of \(\mathcal{H}\), so this algorithm is exponential in the size of \(\mathcal{H}\), in the worst case.2 Accordingly, the question about computing \(\mathcal{H}^{d}\) from \(\mathcal{H}\) was typically addressed from the point of view of output-sensitive algorithms (see, e.g., [28, 49, 60]). 
The currently known best algorithm for computing \(\mathcal{H}^{d}\) for a general hypergraph \(\mathcal{H}\) has a running time which is linear in the output size and quasi-polynomial in the input size [32]. Footnote 2: Not on average, though. On average, the size of the dual hypergraph of a Sperner hypergraph \(\mathcal{H}\) is polynomial in the size of \(\mathcal{H}\). This follows from the proof of the main result in [47]. **Observation 3.12**.: _The Dual Conformality problem is in_ co-NP_._ Proof.: Suppose that for a given hypergraph \(\mathcal{H}\), its dual is not conformal. Then there exists a maximal clique \(C\) of the co-occurrence graph of \(\mathcal{H}^{d}\) that is not a minimal transversal of \(\mathcal{H}\). It can be verified in polynomial time whether a set \(C\subseteq V(\mathcal{H})\) satisfies all this properties. By Corollary 3.9, the co-occurrence graph \(G(\mathcal{H}^{d})\) can be computed in polynomial time. Having computed \(G(\mathcal{H}^{d})\), we can check in polynomial time if every two distinct vertices in \(C\) are adjacent in \(G(\mathcal{H}^{d})\) and whether no vertex in \(V(G(\mathcal{H}^{d}))\setminus C\) is adjacent to all vertices in \(C\). Since the hypergraph \(\mathcal{H}\) is our input, we can also check in polynomial time if \(C\) is not a minimal transversal of \(\mathcal{H}\). However, the complexity of Dual Conformality remains open in general. ### A polynomial case of Dual Conformality We develop a polynomial-time algorithm for Dual Conformality when restricted to the hypergraphs \(\mathcal{H}\) such that every maximal clique of the co-occurrence graph of \(\mathcal{H}^{d}\) is a transversal of \(\mathcal{H}\). This algorithm is then used in Section 4 to develop a polynomial-time algorithm for recognizing graphs in which all minimal clique transversals have size at most \(k\), for every fixed \(k\). Restricted Dual Conformality _Input:_ A hypergraph \(\mathcal{H}\) such that every maximal clique of \(G(\mathcal{H}^{d})\) is a transversal of \(\mathcal{H}\). _Question:_ Is the dual hypergraph \(\mathcal{H}^{d}\) conformal? **Lemma 3.13**.: _Let \(\mathcal{H}\) be a hypergraph and let \(G=G(\mathcal{H}^{d})\). Suppose that every maximal clique of \(G\) is a transversal of \(\mathcal{H}\). Then \(\mathcal{H}\) is not dually conformal if and only if \(G\) contains a vertex \(v\) such that \(N_{G}(v)\) is a transversal of \(\mathcal{H}\)._ Proof.: Assume first that there exists a vertex \(v\) of \(G\) such that \(N_{G}(v)\) is a transversal of \(\mathcal{H}\). Let \(T\) be a minimal transversal of \(\mathcal{H}\) such that \(T\subseteq N_{G}(v)\). By Corollary 3.7, every minimal transversal of \(\mathcal{H}\) is a clique in \(G\). Thus, \(T\) is a clique and since \(T\) is contained in \(N_{G}(v)\), the set \(T\cup\{v\}\) is also a clique. Let \(C\) be a maximal clique in \(G\) such that \(T\cup\{v\}\subseteq C\). Then \(C\) is a maximal clique in \(G\) that properly contains a minimal transversal of \(\mathcal{H}\) (namely \(T\)). Therefore, \(C\) is not a minimal transversal of \(\mathcal{H}\). By Observation 3.2, \(\mathcal{H}\) is not dually conformal. Assume now that \(\mathcal{H}\) is not dually conformal. By Observation 3.2, \(G\) has a maximal clique \(C\) that is not a minimal transversal of \(\mathcal{H}\). Since, by the assumption on \(\mathcal{H}\) every maximal clique of \(G\) is a transversal of \(\mathcal{H}\), there exists a minimal transversal \(T\) of \(\mathcal{H}\) properly contained in \(C\). 
Let \(v\) be a vertex in \(C\setminus T\). Then, since \(C\) is a clique, \(T\) is a subset of \(N_{G}(v)\). This implies that \(N_{G}(v)\) is a transversal of \(\mathcal{H}\). **Proposition 3.14**.: _Given a hypergraph \(\mathcal{H}=(V,E)\) with dimension \(k\) and maximum degree \(\Delta\) such that every maximal clique of \(G(\mathcal{H}^{d})\) is a transversal of \(\mathcal{H}\), the Restricted Dual Conformality problem is solvable in time \(\mathcal{O}(k|E|\Delta^{2}|V|^{2})\)._ Proof.: Using Proposition 2.2, we compute in time \(\mathcal{O}(|V||E|)\) the edge-vertex incidence matrix of \(\mathcal{H}\) and the doubly-linked representation of its incident pairs. Next, we compute the co-occurrence graph \(G=G(\mathcal{H}^{d})\) of \(\mathcal{H}^{d}\). By Proposition 3.8, this can be done in time \(\mathcal{O}(k|E|\Delta^{2}|V|^{2})\). Then we iterate over all vertices \(v\) of \(G\) and verify in time \(\mathcal{O}(k|E|)\) if the neighborhood of \(v\) in \(G\) is a transversal of \(\mathcal{H}\). By Lemma 3.13, if such a vertex exists, then \(G\) is not dually conformal, and otherwise it is. The total running time of this approach is \(\mathcal{O}(k|E|\Delta^{2}|V|^{2}+k|V||E|)=\mathcal{O}(k|E|\Delta^{2}|V|^{2})\) **Remark 3.15**.: The time complexity of the algorithm given by Proposition 3.14 is dominated by the time needed to compute the co-occurrence graph of \(\mathcal{H}^{d}\). The complexity of the remaining steps is only \(\mathcal{O}(k|V||E|)\). ## 4 Graphs with small upper clique transversal number In this section we shift the focus from hypergraphs to graphs and apply the results from Section 3 to a problem about clique transversals in graphs. Recall that a _clique transversal_ in a graph is a set of vertices intersecting all maximal cliques. The problem of determining the minimum size of a clique transversal has received considerable attention in the literature (see, e.g., the works by Payan in 1979 [59], by Andreae, Schughart, and Tuza in 1991 [6], by Erdos, Gallai, and Tuza in 1992 [30], as well as more recent works [5, 8, 13, 16, 22, 25, 39, 50, 51, 52, 53, 62]). Recently, Milanic and Uno initiated in [57] the study of the "upper" variant of this parameter. An _upper clique transversal_ of a graph \(G\) is a minimal clique transversal of maximum size. The _upper clique transversal number_ of a graph \(G\) is denoted by \(\tau_{c}^{+}(G)\) and defined as the maximum size of a minimal clique transversal in \(G\). In hypergraph terminology, the upper clique transversal number of a graph \(G\) is the maximum size of a hyperedge of the dual of the clique hypergraph. The corresponding decision problem is as follows. Upper Clique Transversal Milanic and Uno showed in [57] that Upper Clique Transversal is NP-complete in the classes of chordal graphs, chordal bipartite graphs, and line graphs of bipartite graphs, but solvable in linear time in the classes of split graphs and proper interval graphs. We now show that for fixed \(k\), the problem can be reduced in polynomial to the Restricted Dual Conformality problem, and is thus polynomial-time solvable. We consider the following family of problems parameterized by a positive integer \(k\), where, unlike for the Upper Clique Transversal problem, \(k\) is fixed and not part of the input. The problem is only interesting for \(k\geq 2\), since every graph with at least one vertex is a yes-instance to the 1-Upper Clique Transversal problem. 
Let us first note that the variant of the \(k\)-Upper Clique Transversal problem in which the family of maximal cliques of the input graph \(G\) is also part of the input admits a simple polynomial-time algorithm. It suffices to verify if there exists a set \(X\subseteq V(G)\) of size \(k-1\) that is not a clique transversal of \(G\) but is contained in some minimal clique transversal. The former condition can be checked directly using the family of maximal cliques of \(G\), and the latter condition can be checked in polynomial time since \(k\) is fixed, by Corollary 2.4. An alternative solution would be to verify if there exists a set \(X\subseteq V(G)\) of size \(k\) that is contained in some minimal clique transversal. Solving the problem without knowing the family of maximal cliques (which could be exponential in the size of \(G\)) requires more work, but is still doable in polynomial time. **Theorem 4.1**.: _For every integer \(k\geq 2\), given a graph \(G=(V,E)\), the \(k\)-Upper Clique Transversal problem is solvable in time \(\mathcal{O}(|V|^{3k-3})\)._ We prove Theorem 4.1 in several steps. One key ingredient is a polynomial-time algorithm to test if a given constant-sized set of vertices in a graph is a clique transversal.3 By definition, a set \(X\) of vertices in a graph \(G\) is a clique transversal if and only if \(X\) intersects all maximal cliques. In particular, this means that for every clique \(C\) in \(G-X\) there exists a vertex \(x\in X\) containing \(C\) in its neighborhood. As we show next, it is sufficient to require this condition for all cliques \(C\) in \(G-X\) such that \(|C|\leq|X|\). Footnote 3: Note that the assumption on the bound on the size of the set is essential. In fact, as shown by Zang [71], it is co-NP-complete to check, given a graph \(G\) and an independent set \(I\), whether \(I\) is a clique transversal in \(G\). **Lemma 4.2**.: _For every graph \(G\) and every set \(X\subseteq V(G)\), the following statements are equivalent._ 1. \(X\) _is a clique transversal in_ \(G\)_._ 2. _For every clique_ \(C\) _in_ \(G-X\)_, there exists a vertex_ \(x\in X\) _such that_ \(C\subseteq N_{G}(x)\)_._ 3. _For every clique_ \(C\) _in_ \(G-X\) _such that_ \(|C|\leq|X|\)_, there exists a vertex_ \(x\in X\) _such that_ \(C\subseteq N_{G}(x)\)_._ Proof.: Suppose \(X\) is a clique transversal in \(G\) and let \(C\) be a clique in \(G-X\). Let \(C^{\prime}\) be a maximal clique in \(G\) such that \(C\subseteq C^{\prime}\). Then \(C^{\prime}\) contains a vertex \(x\in X\). Since \(C\cup\{x\}\subseteq C^{\prime}\) and \(C^{\prime}\) is a clique, we must have \(C\subseteq N_{G}(x)\). Clearly, the second statement implies the third one. We prove that the third statement implies the first one by contraposition. Suppose that \(X\) is not a clique transversal in \(G\). Then there exists a maximal clique \(C^{\prime}\) in \(G\) such that \(C^{\prime}\cap X=\emptyset\). Since \(C^{\prime}\) is a maximal clique disjoint from \(X\), every vertex in \(X\) has a non-neighbor in \(C^{\prime}\). Selecting one such non-neighbor for each vertex in \(X\) results in a clique \(C\) in \(G-X\) such that \(|C|\leq|X|\) and every vertex in \(X\) has a non-neighbor in \(C\). Thus, there is no vertex \(x\in X\) such that \(C\subseteq N_{G}(x)\). Lemma 4.2 implies the following characterization of clique transversals of size one. A _universal vertex_ in a graph \(G\) is a vertex adjacent to all other vertices. 
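Before stating that characterization, condition (iii) of Lemma 4.2 is worth writing out as code, since it is the workhorse of the running-time statements that follow. The sketch below is our own brute-force illustration (it enumerates the cliques of \(G-X\) of size at most \(|X|\) by checking all vertex subsets and assumes the networkx library); it is not the optimized routine analyzed in the text.

```python
from itertools import combinations
import networkx as nx

def is_clique_transversal(G, X):
    """Lemma 4.2 (iii): X meets every maximal clique of G iff every clique C of G - X
    with |C| <= |X| is contained in the neighborhood of some vertex of X."""
    X = set(X)
    if not X:
        return G.number_of_nodes() == 0      # the empty set only works for the empty graph
    H = G.subgraph(set(G) - X)               # the graph G - X
    for r in range(1, len(X) + 1):
        for C in combinations(H.nodes, r):
            if all(H.has_edge(u, v) for u, v in combinations(C, 2)):   # C is a clique
                if not any(set(C) <= set(G[x]) for x in X):            # C escapes every N_G(x)
                    return False
    return True

def is_minimal_clique_transversal(G, X):
    X = set(X)
    return is_clique_transversal(G, X) and \
           all(not is_clique_transversal(G, X - {x}) for x in X)

# In the 5-cycle the maximal cliques are the edges; {0, 2, 4} meets all of them and is
# minimal, whereas {0, 1} misses the edge {2, 3}.
G = nx.cycle_graph(5)
print(is_minimal_clique_transversal(G, {0, 2, 4}))   # True
print(is_clique_transversal(G, {0, 1}))              # False
```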
**Corollary 4.3**.: _Given a graph \(G=(V,E)\) and a vertex \(v\in V\), the set \(\{v\}\) is a clique transversal in \(G\) if and only if \(v\) is a universal vertex in \(G\)._ Proof.: By Lemma 4.2, the singleton \(\{v\}\) is a clique transversal in \(G\) if and only if for every clique \(C\) in \(G-v\), it holds that \(C\subseteq N_{G}(v)\). If this latter condition is satisfied, then \(v\) is universal in \(G\), since otherwise for any vertex \(w\) in \(G\) nonadjacent to \(v\), the set \(C=\{w\}\) would be a clique in \(G-v\) violating the condition \(C\subseteq N_{G}(v)\). And conversely, if \(v\) is universal in \(G\), then \(N_{G}(v)=V(G)\setminus\{v\}\) and hence the condition \(C\subseteq N_{G}(v)\) is satisfied trivially for any clique \(C\) in \(G-v\). As another consequence of Lemma 4.2, we obtain that when the size of a set of vertices is bounded by a constant, testing whether the set is a clique transversal can be done in polynomial time. **Proposition 4.4**.: _For every fixed \(k\geq 1\), there is an algorithm running in time \(\mathcal{O}(|V|^{k})\) to check if, given a graph \(G=(V,E)\) and a set \(X\subseteq V(G)\) with \(|X|\leq k\), the set \(X\) is a clique transversal of \(G\)._ Proof.: If \(k=1\), then by Corollary 4.3\(X\) is a clique transversal of \(G\) if and only if \(X=\{v\}\) such that \(v\) is a universal vertex in \(G\). This condition can be tested in time \(\mathcal{O}(|V|)\). Assuming \(k\geq 2\), we first compute in time \(\mathcal{O}(|V|^{2})\) the adjacency matrix of \(G\). This will allow for testing adjacency of a pair of vertices in constant time. By Lemma 4.2, it suffices to verify if every clique \(C\) in \(G\) with size at most \(|X|\) either contains a vertex of \(X\) or is contained in the neighborhood of some vertex in \(X\). Since \(|X|\leq k\), all such cliques can be enumerated in time \(\mathcal{O}(|V|^{k})\). For each such clique \(C\), we can check in time \(\mathcal{O}(|C||X|)=\mathcal{O}(1)\) if \(C\) is disjoint from \(X\). If it is, then we iterate over all \(\mathcal{O}(1)\) vertices \(x\in X\) and for each such vertex \(x\) check the condition \(C\subseteq N_{G}(x)\) in time \(\mathcal{O}(|C|)=\mathcal{O}(1)\). If for some clique \(C\) that is disjoint from \(X\) no such vertex \(x\in X\) exists, we conclude that \(X\) is not a clique transversal in \(G\), and otherwise it is. The total running time is \(\mathcal{O}(|V|^{k})\). Furthermore, note that for every fixed \(k\), if a set \(X\subseteq V(G)\) with \(|X|\leq k\) is a clique transversal of \(G\), then we can check in polynomial time if \(X\) is a minimal clique transversal, simply by checking, for all \(x\in X\), whether the set \(X\setminus\{x\}\) is a clique transversal. This can be done in time \(\mathcal{O}(k|V|^{k-1})=\mathcal{O}(|V|^{k-1})\) by Proposition 4.4. **Corollary 4.5**.: _For every fixed \(k\), there is an algorithm running in time \(\mathcal{O}(|V|^{k})\) to check if, given a graph \(G=(V,E)\) and a set \(X\subseteq V(G)\) with \(|X|\leq k\), the set \(X\) is a minimal clique transversal of \(G\)._ **Lemma 4.6**.: _Let \(k\) be a positive integer and \(G\) be a graph. Let \(\mathcal{H}\) be the hypergraph defined as follows: the vertex set of \(\mathcal{H}\) is \(V(G)\), and the hyperedges of \(\mathcal{H}\) are precisely the minimal clique transversals \(X\) of \(G\) such that \(|X|\leq k\). Then the following statements are equivalent._ 1. \(\tau_{c}^{+}(G)\leq k\)_._ 2. \(\mathcal{H}\) _is the hypergraph of all minimal clique transversals of_ \(G\)_._ 3. 
\(\mathcal{H}\) _is dually conformal and_ \(G=G(\mathcal{H}^{d})\)_._ Proof.: The equivalence between items 1 and 2 follows directly from the definition of \(\tau_{c}^{+}(G)\). We thus focus on establishing the equivalence between items 2 and 3. Assume that \(\mathcal{H}\) is the hypergraph of all minimal clique transversals of \(G\), that is, \(\mathcal{H}\) is the dual hypergraph of the clique hypergraph of \(G\). By Fact 2.1, the dual hypergraph of \(\mathcal{H}\) is the clique hypergraph of \(G\). By Theorem 2.6, \(\mathcal{H}^{d}\) is conformal, that is, \(\mathcal{H}\) is dually conformal. Furthermore, Lemma 2.7 shows that \(G=G(\mathcal{H}^{d})\). Conversely, assume now that \(\mathcal{H}\) is dually conformal and \(G=G(\mathcal{H}^{d})\). Since \(\mathcal{H}^{d}\) is conformal, Theorem 2.6 implies that \(\mathcal{H}^{d}\) is the clique hypergraph of \(G(\mathcal{H}^{d})=G\). Thus, by Fact 2.1, \(\mathcal{H}=(\mathcal{H}^{d})^{d}\) is the hypergraph of all minimal clique transversals of \(G\). We now have everything ready to prove Theorem 4.1. Proof of Theorem 4.1.: We first describe the algorithm and then justify its correctness and running time. Let \(G=(V,E)\) be the input graph. The algorithm performs the following steps: 1. Compute the hypergraph \(\mathcal{H}\) defined as follows: the vertex set of \(\mathcal{H}\) is \(V\), and the hyperedges of \(\mathcal{H}\) are precisely the minimal clique transversals \(X\) of \(G\) such that \(|X|<k\). 2. Compute the co-occurrence graph \(G(\mathcal{H}^{d})\) of the dual hypergraph of \(\mathcal{H}\). 3. Check if \(G\neq G(\mathcal{H}^{d})\). 4. If \(G\neq G(\mathcal{H}^{d})\), then the algorithm determines that \(\tau_{c}^{+}(G)\geq k\) (that is, \(G\) is a yes-instance) and halts. 5. If \(G=G(\mathcal{H}^{d})\), then apply Proposition 3.14 on \(\mathcal{H}\) to test if \(\mathcal{H}^{d}\) is conformal. * If \(\mathcal{H}^{d}\) is conformal, then the algorithm determines that \(\tau_{c}^{+}(G)<k\) (that is, \(G\) is a no-instance) and halts. * If \(\mathcal{H}^{d}\) is not conformal, then the algorithm determines that \(\tau_{c}^{+}(G)\geq k\) (that is, \(G\) is a yes-instance) and halts. To prove correctness, let us first justify that, in the case when \(G=G(\mathcal{H}^{d})\), we can indeed apply Proposition 3.14 on \(\mathcal{H}\) to test if \(\mathcal{H}^{d}\) is conformal. By the definition of the hypergraph \(\mathcal{H}\), every maximal clique of \(G\) intersects every hyperedge of \(\mathcal{H}\). Thus, if \(G=G(\mathcal{H}^{d})\), then every maximal clique of \(G(\mathcal{H}^{d})\) is a transversal of \(\mathcal{H}\). This means that \(\mathcal{H}\) is indeed a valid input to the Restricted Dual Conformality problem, and hence Proposition 3.14 applies, as claimed. Furthermore, by Lemma 4.6, we have \(\tau_{c}^{+}(G)<k\) if and only if \(\mathcal{H}\) is dually conformal and \(G=G(\mathcal{H}^{d})\). Equivalently, \(\tau_{c}^{+}(G)\geq k\) if and only if one of the following conditions holds: either (i) \(G\neq G(\mathcal{H}^{d})\) or (ii) \(G=G(\mathcal{H}^{d})\) and \(\mathcal{H}^{d}\) is not conformal. This implies that each of the three outputs of the algorithm is correct. It remains to analyze the time complexity. 
We compute the hypergraph \(\mathcal{H}\) in time \(\mathcal{O}(|V|^{2k-1})\) by enumerating all the \(\mathcal{O}(|V|^{k-1})\) subsets \(X\) of \(V\) with size less than \(k\) and checking, for each such set \(X\), if \(X\) is a minimal clique transversal of \(G\), in time \(\mathcal{O}(|V|^{|X|})=\mathcal{O}(|V|^{k-1})\) using Corollary 4.5. Note that \(\mathcal{H}\) has \(\mathcal{O}(|V|)\) vertices and \(\mathcal{O}(|V|^{k-1})\) hyperedges. Its dimension is at most \(k-1\) and its maximum degree is \(\Delta=\mathcal{O}(|V|^{k-2})\). By Proposition 3.8, the co-occurrence graph of \(\mathcal{H}^{d}\) can be computed in time \(\mathcal{O}(k|E(\mathcal{H})|\Delta^{2}|V(\mathcal{H})|^{2})=\mathcal{O}(k\cdot|V|^{k-1}\cdot|V|^{2(k-2)}\cdot|V|^{2})=\mathcal{O}(|V|^{3k-3})\). Whether the two graphs \(G\) and \(G(\mathcal{H}^{d})\) are equal can be checked in time \(\mathcal{O}(|V|+|E|)\) by comparing the adjacency lists of the two graphs. Finally, testing conformality of \(\mathcal{H}^{d}\) in the case when the two graphs are the same can be done in time \(\mathcal{O}(k|E(\mathcal{H})|\Delta^{2}|V(\mathcal{H})|^{2})=\mathcal{O}(|V|^{3k-3})\) by Proposition 3.14. As each of the remaining steps takes constant time, we conclude that the algorithm runs in time \(\mathcal{O}(|V|^{3k-3})\). We close the section with a remark about the case \(k=2\). Applying Theorem 4.1 to this case shows that given a graph \(G=(V,E)\), the 2-Upper Clique Transversal problem is solvable in time \(\mathcal{O}(|V|^{3})\). However, the problem can be solved in linear time, as a consequence of the following characterization of graphs in which all minimal clique transversals have size one. **Proposition 4.7**.: _Let \(G\) be a graph. Then \(\tau_{c}^{+}(G)=1\) if and only if \(G\) is complete._ Proof.: If \(G\) is complete, then the only minimal clique transversals are the sets consisting of a single vertex. Thus, \(\tau_{c}^{+}(G)=1\) in this case. Assume now that \(G\) is not complete. Let \(S\) be the set of universal vertices of \(G\). Note that by Corollary 4.3, \(S\) is precisely the set of vertices \(v\) such that \(\{v\}\) is a clique transversal. We claim that \(V\setminus S\) is a clique transversal. Suppose this is not the case. Then \(G\) admits a maximal clique \(C\) contained entirely in \(S\). Since \(S\) is a clique, we have \(C=S\). However, since every maximal clique contains \(C\), it follows that \(S\) is the only maximal clique in \(G\) and hence \(G\) is complete, a contradiction. This shows that \(V\setminus S\) is a clique transversal, as claimed. Thus, \(V\setminus S\) contains a minimal clique transversal, and any such clique transversal is of size at least 2, since otherwise its only vertex would belong to \(S\). Consequently, \(\tau_{c}^{+}(G)\geq 2\). ## 5 Dually conformal hypergraphs with bounded dimension In this section we study dually conformal hypergraphs of bounded dimension. Recall that, given a hypergraph \(\mathcal{H}\), the _dimension_ of \(\mathcal{H}\) is the maximum cardinality of a hyperedge in \(\mathcal{H}\). By Proposition 3.1, a Sperner hypergraph \(\mathcal{H}\) is dually conformal if and only if there exists a graph \(G\) such that \(\mathcal{H}\) is the hypergraph of all minimal clique transversals of \(G\). In the case when the dimension is bounded by a positive integer \(k\), we obtain a similar characterization, which in addition takes into account the upper clique transversal number of graphs.
**Proposition 5.1**.: _For every hypergraph \(\mathcal{H}\) and positive integer \(k\), the following statements are equivalent._ 1. \(\mathcal{H}\) _is a dually conformal Sperner hypergraph with dimension at most_ \(k\)_._ 2. _There exists a graph_ \(G\) _with_ \(\tau_{c}^{+}(G)\leq k\) _such that_ \(\mathcal{H}\) _is the hypergraph of all minimal clique transversals of_ \(G\)_._ The proof of this proposition is very similar to the proof of Proposition 3.1, so we omit it. For a positive integer \(k\), we are interested in the complexity of the following problem. \begin{tabular}{|l|} Dimension-\(k\) Dual Conformality \\ _Input:_ & A hypergraph \(\mathcal{H}\) with dimension at most \(k\). \\ _Question:_ & Is the dual hypergraph \(\mathcal{H}^{d}\) conformal? \\ \end{tabular} In this section we develop a polynomial-time algorithm for Dimension-\(k\) Dual Conformality for any fixed positive integer \(k\). For the cases \(k\in\{2,3\}\), we also develop more direct algorithms. ### The general case We start with a technical lemma. **Lemma 5.2**.: _For every positive integer \(k\), there exists an algorithm running in time \(\mathcal{O}(|E|\Delta^{2}|V|^{2}+|E||V|^{k})\) that takes as input a hypergraph \(\mathcal{H}=(V,E)\) with dimension at most \(k\) and maximum degree \(\Delta\) and tests whether \(G=G(\mathcal{H}^{d})\) contains a maximal clique \(C\) that is not a transversal of \(\mathcal{H}\)._ Proof.: By Proposition 3.8, the graph \(G\) can be computed in time \(\mathcal{O}(k|E|\Delta^{2}|V|^{2})\), which is \(\mathcal{O}(|E|\Delta^{2}|V|^{2})\) since the dimension is constant. We show the existence of an algorithm with the stated running time that tests the negation of the stated condition, namely whether every maximal clique of \(G\) is a transversal of \(\mathcal{H}\). This condition is equivalent to the condition that every hyperedge of \(\mathcal{H}\) is a clique transversal in \(G\). Since each hyperedge \(e\) of \(\mathcal{H}\) has size at most \(k\), by Proposition 4.4 it can be tested in time \(\mathcal{O}(|V|^{k})\) whether \(e\) is a clique transversal in \(G\). Hence, the total running time of the described algorithm is \(\mathcal{O}(|E|\Delta^{2}|V|^{2}+|E||V|^{k})\). **Theorem 5.3**.: _For every positive integer \(k\), given a hypergraph \(\mathcal{H}=(V,E)\) with dimension at most \(k\) and maximum degree \(\Delta\), the Dimension-\(k\) Dual Conformality problem is solvable in time \(\mathcal{O}(|E||V|^{2}\Delta^{2}+|E||V|^{k})\)._ Proof.: We make use of the characterization of dually conformal hypergraphs given by Corollary 3.3. First we test condition (a) in time \(\mathcal{O}(|E||V|^{2}\Delta^{2}+|E||V|^{k})\) using Lemma 5.2. If condition (a) holds, we conclude that \(\mathcal{H}\) is not dually conformal. If the condition does not hold, then every maximal clique of the graph \(G=G(\mathcal{H}^{d})\) is a transversal of \(\mathcal{H}\), which means that \(\mathcal{H}\) is a valid input for the Restricted Dual Conformality problem. In this case, we test dual conformality of \(\mathcal{H}\) in time \(\mathcal{O}(k|E|\Delta^{2}|V|^{2})\) using Proposition 3.14. Since \(k\) is constant, the complexity simplifies to \(\mathcal{O}(|E|\Delta^{2}|V|^{2})\). ### The case of dimension three The case \(k=3\) of Theorem 5.3 is as follows. 
**Theorem 5.4**.: _Given a hypergraph \(\mathcal{H}=(V,E)\) with dimension at most \(3\) and maximum degree \(\Delta\), the Dimension-\(3\) Dual Conformality problem is solvable in time \(\mathcal{O}(|E||V|^{2}\Delta^{2}+|E||V|^{3})\)._ We now develop an alternative approach for recognizing dually conformal hypergraphs within the family of hypergraphs of dimension at most three, based on a reduction to \(2\)-Satisfiability. The running time of this algorithm matches that of Theorem 5.3. Recall that Corollary 3.3 gives two possible reasons why \(\mathcal{H}\) could fail to be dually conformal. A similar characterization is as follows. **Lemma 5.5**.: _Let \(\mathcal{H}\) be a hypergraph and let \(G=G(\mathcal{H}^{d})\). Then \(\mathcal{H}\) is not dually conformal if and only if one of the following two conditions holds._ 1. \(G\) _contains a maximal clique_ \(C\) _that is not a transversal of_ \(\mathcal{H}\)_, or_ 2. \(G\) _contains a clique_ \(C\) _and a vertex_ \(v\in C\) _such that for each hyperedge_ \(e\in E(\mathcal{H})\) _that contains_ \(v\) _we have_ \(|C\cap e|\geq 2\)_._ Proof.: By Corollary 3.3, the equivalence holds if \(G\) contains a maximal clique \(C\) that is not a transversal of \(\mathcal{H}\). Suppose now that every maximal clique of \(G\) is a transversal of \(\mathcal{H}\). In this case, by Corollary 3.3, it suffices to show that \(G\) contains a maximal clique \(C\) that is a transversal of \(\mathcal{H}\) but not a minimal one if and only if \(G\) contains a clique \(C^{\prime}\) and a vertex \(v\in C^{\prime}\) such that for all hyperedges \(e\in E(\mathcal{H})\) that contain \(v\) we have \(|C^{\prime}\cap e|\geq 2\). Suppose first that \(G\) contains a maximal clique \(C\) that is a transversal of \(\mathcal{H}\) but not a minimal one. Then there exists a vertex \(v\in C\) such that \(C\setminus\{v\}\) is a transversal of \(\mathcal{H}\). In particular, this implies that for all hyperedges \(e\in E(\mathcal{H})\) that contain \(v\) we have \(|C\cap e|\geq 2\). For the converse direction, suppose that \(G\) contains a clique \(C^{\prime}\) and a vertex \(v\in C^{\prime}\) such that for each hyperedge \(e\in E(\mathcal{H})\) that contains \(v\) we have \(|C^{\prime}\cap e|\geq 2\). Let \(C\) be a maximal clique in \(G\) such that \(C^{\prime}\subseteq C\). We claim that \(C\) is a transversal of \(\mathcal{H}\) but not a minimal one. The fact that \(C\) is a transversal of \(\mathcal{H}\) follows from the assumption that every maximal clique of \(G\) is a transversal of \(\mathcal{H}\). Furthermore, \(C\) is not a minimal transversal since \(C\setminus\{v\}\) is a transversal of \(\mathcal{H}\). To see this, consider an arbitrary hyperedge \(e\in E(\mathcal{H})\). * If \(v\in e\), then \(|C^{\prime}\cap e|\geq 2\) and hence \(|(C\setminus\{v\})\cap e|\geq|(C^{\prime}\setminus\{v\})\cap e|\geq 1\). * If \(v\not\in e\), then \((C\setminus\{v\})\cap e=C\cap e\), and \(C\cap e\neq\emptyset\) since \(C\) is a transversal of \(\mathcal{H}\). Thus, in either case, \(C\setminus\{v\}\) intersects \(e\). It follows that \(C\setminus\{v\}\) is a transversal of \(\mathcal{H}\), as claimed. Recall that condition (a) can be tested in polynomial time for any bounded dimension using Lemma 5.2. Next we show that for hypergraphs with dimension at most three, condition (c) can be tested in polynomial time using a reduction to \(2\)-Satisfiability, a well-known problem solvable in linear time (see Aspvall, Plass, and Tarjan [7]). 
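The per-vertex \(2\)-Satisfiability instances used in the proof of the lemma that follows can be written down almost verbatim. The sketch below is our own illustration: it assumes that the co-occurrence graph \(G=G(\mathcal{H}^{d})\) has already been computed, uses the networkx library only for the strongly connected components of the implication graph (the standard linear-time \(2\)-SAT test), and asks, for a fixed vertex \(v\), whether some clique \(K\subseteq N_{G}(v)\) meets every hyperedge of \(\mathcal{H}\) containing \(v\).

```python
from itertools import combinations
import networkx as nx

def two_sat_satisfiable(clauses):
    """Each clause is a list of one or two literals; a literal is a pair (variable, polarity).
    Implication-graph / strongly-connected-component satisfiability test."""
    D = nx.DiGraph()
    for c in clauses:
        lits = c if len(c) == 2 else c + c            # a unit clause (a) is treated as (a or a)
        (a, pa), (b, pb) = lits
        D.add_edge((a, not pa), (b, pb))              # ~a implies b
        D.add_edge((b, not pb), (a, pa))              # ~b implies a
    comp = {u: i for i, scc in enumerate(nx.strongly_connected_components(D)) for u in scc}
    return not any((x, True) in comp and (x, False) in comp
                   and comp[(x, True)] == comp[(x, False)] for (x, _) in comp)

def clique_in_neighborhood_meeting_edges(H_edges, G, v):
    """Is there a clique K inside N_G(v) with K meeting every hyperedge of H containing v?
    Assumes dimension at most 3, so each restricted hyperedge has at most two usable vertices."""
    N = set(G[v])
    parts = [set(e) & N for e in H_edges if v in e]
    if any(len(p) == 0 for p in parts):
        return False                                  # some hyperedge through v cannot be met
    clauses = [[(u, False), (w, False)]               # non-adjacent u, w cannot both be chosen
               for u, w in combinations(sorted(N), 2) if not G.has_edge(u, w)]
    clauses += [[(u, True) for u in p] for p in parts]   # each part has size one or two
    return two_sat_satisfiable(clauses)
```

By the characterization of Lemma 5.5, running this per-vertex check over all vertices of \(G\), together with the test for condition (a) from Lemma 5.2, decides whether \(\mathcal{H}\) is dually conformal in the dimension-three case.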
**Lemma 5.6**.: _There exists an algorithm running in time \(\mathcal{O}(|E||V|^{2}\Delta^{2}+|V|^{3})\) that tests whether for a given hypergraph \(\mathcal{H}=(V,E)\) with dimension at most \(3\) and maximum degree \(\Delta\), the graph \(G=G(\mathcal{H}^{d})\) contains a clique \(C\) and a vertex \(v\in C\) such that for each hyperedge \(e\in E\) that contains \(v\) it holds \(|C\cap e|\geq 2\)._ Proof.: By Proposition 3.8 the co-occurrence graph \(G=G(\mathcal{H}^{d})\) of \(\mathcal{H}^{d}\) can be constructed in time \(\mathcal{O}(|E|\Delta^{2}|V|^{2})\). We develop a polynomial-time algorithm to test, given a vertex \(v\) of \(G\), whether \(G\) contains a clique \(C\) such that \(v\in C\) and for each hyperedge \(e\in E\) that contains \(v\) we have \(|C\cap e|\geq 2\). Let \(e_{1},\ldots,e_{\ell}\) be the hyperedges of \(\mathcal{H}\) that contain \(v\). We need to decide if there is a clique \(K\) in \(G\) such that \(K\subseteq N_{G}(v)\) and \(K\cap e_{i}\neq\emptyset\) for all \(i\in\{1,\ldots,\ell\}\). For each \(i\in\{1,\ldots,\ell\}\), we compute in time \(\mathcal{O}(|V|)\) the intersection \(e_{i}\cap N_{G}(v)\). If \(e_{i}\cap N_{G}(v)=\emptyset\) for some \(i\in\{1,\ldots,\ell\}\), then the desired clique \(K\) does not exist. So let us assume that \(e_{i}\cap N_{G}(v)\neq\emptyset\) for all \(i\in\{1,\ldots,\ell\}\). In this case we determine the existence of a desired clique \(K\) by solving the following instance of \(2\)-Satisfiability: * For each vertex \(u\in N_{G}(v)\) there is one variable \(x_{u}\) (with the intended meaning that \(x_{u}\) takes value true in a satisfying assignment if and only if \(u\in K\)). * For every two distinct non-adjacent vertices \(u,w\in N_{G}(v)\), we introduce the clause \(\neg x_{u}\vee\neg x_{w}\) (specifying that not both \(u\) and \(w\) can be selected in the clique \(K\)). Furthermore, for every \(i\in\{1,\ldots,\ell\}\), we introduce the clause \(\bigvee_{u\in e_{i}\cap N_{G}(v)}x_{u}\) (specifying that at least one of the vertices in \(e_{i}\cap N_{G}(v)\) should belong to \(K\)). Note that for each \(i\in\{1,\ldots,\ell\}\), we have \(v\in e_{i}\) and \(|e_{i}|\leq 3\) since \(\mathcal{H}\) has dimension at most \(3\). Consequently, \(|e_{i}\cap N_{G}(v)|\leq|e_{i}\setminus\{v\}|\leq 2\) and hence all the clauses have length one or two. The instance of \(2\)-Satisfiability is constructed so that there is a clique \(K\) in \(G\) such that \(K\subseteq N_{G}(v)\) and \(K\cap e_{i}\neq\emptyset\) for all \(i\in\{1,\ldots,\ell\}\) if and only if the conjunction of all the clauses has a satisfying assignment. There are \(\mathcal{O}(\Delta)\) intersections \(e_{i}\cap N_{G}(v)\), \(i\in\{1,\ldots,\ell\}\), which can be computed in time \(\mathcal{O}(\Delta|V|)\). There are \(\mathcal{O}(|V|)\) variables and \(\mathcal{O}(|V|^{2}+\Delta)\) clauses, hence this is a polynomial-time reduction to the linear-time solvable \(2\)-Satisfiability problem. We solve an instance of \(2\)-Satisfiability for each vertex \(v\) of \(G\), and hence the time complexity of this part of the algorithm is \(\mathcal{O}(|V|(\Delta|V|+|V|+|V|^{2}+\Delta))=\mathcal{O}(|V|^{2}(|V|+\Delta))\), resulting in the total running time of \(\mathcal{O}(|E||V|^{2}\Delta^{2}+|V|^{3})\), as claimed. Lemmas 5.2, 5.5, and 5.6 provide an alternative proof of Theorem 5.4. ### The two-uniform case In this section we analyze in some more details the case \(k=2\) of Theorem 5.3, that is, the case of \(2\)-uniform hypergraphs. 
Note that in this case we are dealing simply with graphs without isolated vertices; in particular, we shall also use the standard graph theory terminology and notation. In the case \(k=2\), the characterization of dually conformal hypergraphs given by Lemma 5.5 can be simplified as follows. **Lemma 5.7**.: _Let \(\mathcal{H}\) be a \(2\)-uniform hypergraph and let \(G=G(\mathcal{H}^{d})\). Then \(\mathcal{H}\) is not dually conformal if and only if one of the following two conditions holds._ 1. \(G\) _contains a maximal clique_ \(C\) _that is not a transversal of_ \(\mathcal{H}\)_, or_ 2. \(G\) _contains a vertex_ \(v\) _such that the closed neighborhood of_ \(v\) _in_ \(\mathcal{H}\) _is a clique in_ \(G\)_._ Proof.: By Lemma 5.5, it is sufficient to show that condition \((c^{*})\) from Lemma 5.7 is equivalent to condition \((c)\) from Lemma 5.5. Since \(\mathcal{H}\) is \(2\)-uniform, the inequality \(|C\cap e|\geq 2\) in condition \((c)\) is equivalent to the inclusion \(e\subseteq C\). Thus, condition \((c)\) is equivalent to following condition: \(G\) contains a vertex \(v\) and a clique \(C\) such that \(C\) contains \(v\) as well as all hyperedges \(e\) of \(\mathcal{H}\) that contain \(v\). In graph theoretic terms, this means that \(G\) contains a vertex \(v\) and a clique \(C\) such that \(C\) contains the closed neighborhood of \(v\) in \(\mathcal{H}\). If this condition is satisfied, then \(N_{\mathcal{H}}[v]\) is a clique in \(G\), too, and condition \((c^{*})\) holds. Conversely, if condition \((c^{*})\) holds and \(v\) is a vertex in \(G\) such that \(N_{\mathcal{H}}[v]\) is a clique in \(G\), then we can take \(C=N_{\mathcal{H}}[v]\) and condition \((c)\) is satisfied. Using Lemma 5.7 we now prove the announced result. **Theorem 5.8**.: _Given a \(2\)-uniform hypergraph \(\mathcal{H}=(V,E)\) with maximum degree \(\Delta\), the Dimension-\(2\) Dual Conformality problem is solvable in time \(\mathcal{O}(|E||V|^{2}\Delta^{2})\)._ Proof.: Let \(\mathcal{H}\) be the input \(2\)-uniform hypergraph and let \(G=G(\mathcal{H}^{d})\) be the co-occurrence graph of \(\mathcal{H}^{d}\). By Proposition 3.8, \(G\) can be computed in time \(\mathcal{O}(|E||V|^{2}\Delta^{2})\). By Lemma 5.7, \(\mathcal{H}\) is not dually conformal if and only if one of the conditions \((a)\) and \((c^{*})\) from the lemma holds. By Lemma 5.2, condition (a) can be tested in time \(\mathcal{O}(|E||V|^{2}\Delta^{2})\). Since we know both graphs \(G\) and \(\mathcal{H}\), condition \((c^{*})\) can also be tested in polynomial time: for each vertex \(v\) of \(G\), we compute the closed neighborhood of \(v\) in \(\mathcal{H}\) and verify if it is a clique in \(G\). For a fixed vertex \(v\) of \(G\), this can be done in time \(\mathcal{O}(\Delta^{2})\), resulting in the total time complexity of \(\mathcal{O}(|V|\Delta^{2})\). **Remark 5.9**.: The time complexity of the algorithm given by Theorem 5.8 is dominated by the time needed to compute the co-occurrence graph of \(\mathcal{H}^{d}\). The complexity of the remaining steps is only \(\mathcal{O}(|E||V|^{2}+|V|\Delta^{2})\). Recall that by Corollary 4.3, a minimal clique transversal in a graph \(G\) has size one if and only if it consists of a universal vertex. Therefore, Proposition 5.1 and its proof imply the following. **Corollary 5.10**.: _For every \(2\)-uniform hypergraph \(\mathcal{H}\), the following statements are equivalent._ 1. \(\mathcal{H}\) _is dually conformal._ 2. 
_There exists a graph_ \(G\) _with_ \(\tau_{c}^{+}(G)=2\) _and without universal vertices such that_ \(\mathcal{H}\) _is the hypergraph of all minimal clique transversals of_ \(G\)_._ ## 6 Discussion We have initiated the study of dually conformal hypergraphs, that is, hypergraphs whose dual hypergraph is conformal. As our main result, we developed a polynomial-time algorithm for recognizing dual conformality in hypergraphs of bounded dimension. The main problem left open by our work is of course the problem of determining the complexity of Dual Conformality. In particular, the following questions are open. **Question 1**.: _Is Dual Conformality co-NP-complete? Is it in NP? Is it in P?_ One could approach these questions by studying the Dual Conformality problem in particular classes of hypergraphs, for example on hypergraphs derived from graphs (such as matching hypergraphs [1, 67], various clique [40, 48, 58, 59, 67], independent set [40, 67], neighborhood [17, 31], separator [18, 42, 65], and dominating set hypergraphs [17, 18], etc.). If there exists a type of hypergraphs derived from graphs and a class of graphs \(\mathcal{G}\) such that for each graph \(G\in\mathcal{G}\), the corresponding hypergraph can be computed in polynomial time but testing dual conformality is co-NP-complete, this would imply co-NP-completeness of Dual Conformality. In particular, given that the conformality property of Sperner hypergraphs is closely related to clique hypergraphs of graphs (cf. Theorem 2.6), it would be natural to investigate the complexity of Dual Conformality when restricted to clique hypergraphs of graphs. This leads to the following property of graphs. A graph \(G\) is _clique dually conformal (CDC)_ if its clique hypergraph is dually conformal. **Question 2**.: _What is the complexity of recognizing CDC graphs?_ As explained above, the question is particularly interesting for graph classes with polynomially many maximal cliques. As our preliminary investigations, we were able to develop polynomial-time algorithms for testing the CDC property in the classes of split graphs and triangle-free graphs. To keep the length of this paper manageable, we shall present these results in a separate publication. Recall that our results have implications for the upper clique transversal problem in graphs. The variant of the problem in which \(k\) is part of input is known to be \(\mathsf{NP}\)-hard, see [57]. In terms of the parameterized complexity of the problem (with \(k\) as the parameter), Theorem 4.1 shows that the problem is in \(\mathsf{XP}\). This motivates the following. **Question 3**.: _Is the \(k\)-Upper Clique Transversal problem with \(k\) as parameter \(\mathsf{W[1]}\)-hard?_ We conclude with some structural questions. **Question 4**.: _Is there a real number \(r\geq 1\) such that every conformal hypergraph \(\mathcal{H}\) satisfies \((\dim(\mathcal{H})\cdot\dim(\mathcal{H}^{d}))^{r}\geq|V(\mathcal{H})|\)?_ Note that we may without loss of generality restrict our attention to Sperner conformal hypergraphs. On the other hand, the conformality assumption in Question 4 is essential, as shown by the following construction by Vladimir Gurvich and Kazuhisa Makino (personal communication), generalizing a graph construction due to Costa, Haeusler, Laber, and Nogueira [23]. Consider integers \(d\geq 2\), \(\ell\geq 1\), and \(k>d\). Define a \(d\)-uniform hypergraph \(\mathcal{H}=(V,E)\) as follows. Consider a set \(W\) of \(k\) vertices. 
The hypergraph \(\mathcal{H}\) contains, as hyperedges, all \(d\)-subsets of \(W\) and \(\ell\binom{k}{d-1}\) other edges, obtained as follows. To every \((d-1)\)-subset of \(W\) let us assign a new vertex and add the obtained \(d\)-set to \(E\). Moreover, let us do this \(\ell\) times for each \((d-1)\)-set. Note that \(\mathcal{H}\) is not conformal. Furthermore, the number of vertices is \(|V|=k+\ell\binom{k}{d-1}\), while \(\dim(\mathcal{H})=d\) and \(\dim(\mathcal{H}^{d})=k+\ell-(d-1)\). In particular, taking an integer \(q\geq 2\) and setting \(d=q+1\), \(k=2q\), and \(\ell=1\), we obtain \(\dim(\mathcal{H})=\dim(\mathcal{H}^{d})=q+1\), while \(|V|=\binom{2q}{q}+2q\), which is exponential in \(q\). If \(d=2\), \(k>d\) is arbitrary, and \(\ell=k-1\), we obtain the same example as in [23]. Since the general case of Question 4 is equivalent to the Sperner case, Theorem 2.6 implies that the question can be posed equivalently in graph theoretic terms. Recall that for a graph \(G\), we denote by \(\omega(G)\) the maximum size of a clique in \(G\) and by \(\tau_{c}^{+}(G)\) the upper clique transversal number of \(G\). **Question 5**.: _Is there a real number \(r\geq 1\) such that every graph \(G\) satisfies_ \[\left(\omega(G)\cdot\tau_{c}^{+}(G)\right)^{r}\geq|V(G)|\,?\] A strongly related question for the class of CIS graphs (that is, graphs in which every maximal independent set is a clique transversal) was posed by Alcon, Gutierrez, and Milanic in [2]. Denoting by \(\alpha(G)\) the maximum size of an independent set in a graph \(G\), the question is as follows. **Question 6** (Alcon, Gutierrez, and Milanic [2]).: _Is there a real number \(r\geq 1\) such that every CIS graph \(G\) satisfies_ \[(\omega(G)\cdot\alpha(G))^{r}\geq|V(G)|\,?\] Note that a positive answer to Question 6 question would imply a positive answer to Question 5 for the class of CIS graphs. For general graphs, random graphs show that the analogue of Question 6 does not hold (see, e.g., [12]). On the other hand, the famous Erdos-Hajnal conjecture (see, e.g., the survey by Chudnovsky [19]) states that the analogue of Question 6 holds when restricted to any class of graphs not containing a fixed graph \(H\) as an induced subgraph (with the value of \(r\) depending on \(H\)). In contrast, every graph is an induced subgraph of a CIS graph (see [4]). ### Acknowledgements We are grateful to Kazuhisa Makino for helpful discussions related to Question 4. Part of the work for this paper was done in the framework of bilateral projects between Slovenia and the USA and between Slovenia and the Russian federation, partially financed by the Slovenian Research and Innovation Agency (BI-US/22-24-093, BI-US/22-24-149, BI-US/20-21-018, and BI-RU/19-21-029). The work of the third author is supported in part by the Slovenian Research and Innovation Agency (I0-0035, research program P1-0285 and research projects N1-0102, N1-0160, J1-3001, J1-3002, J1-3003, J1-4008, and J1-4084). This research of the second author was included in the HSE University Basic Research Program. The work of the fourth author is partially supported by JSPS KAKENHI Grant Number JP17K00017, 20H05964 and 21K11757, Japan.
2309.16822
Twist Angle Dependence of Exciton Resonances in WSe$_2$/MoSe$_2$ Moiré Heterostructures
Van der Waals heterostructures based on TMDC semiconducting materials have emerged as promising platforms due to their spin-valley properties, which can be efficiently controlled by the stacking twist angle. The twist angle drastically alters the interlayer excitonic response by determining the spatial modulation, confining moiré potential, and atomic reconstruction in those systems. Nonetheless, the impact of the interlayer twist angle on the band alignment of the monolayers composing the heterostructure has received scant attention in the current research. Here, we systematically investigate the twist-angle dependence of intra- and inter-layer excitons in twisted WSe2/MoSe2 heterobilayers. By performing photoluminescence excitation spectroscopy, we identify the twist-angle dependence of the interlayer emission response, where an energy redshift of about 100 meV was observed for increasing twist angles. The applied microscopic theory predicts, on the contrary, a blueshift, which suggests that additional features, such as atomic reconstruction, may also surpass the moiré potential confinement. These findings also point to the role of dielectric screening by relating the redshift response to the stacking layer order. Furthermore, our findings support the evidence of a band offset dependence on the twist angle for the adjacent monolayers composing the heterobilayer system. Our fundamental study of exciton resonances deepens the current understanding of the physics of twisted TMDC heterostructures and paves the way for future experiments and theoretical works.
Chirag Chandrakant Palekar, Joakim Hagel, Barbara Rosa, Samuel Brem, Ching-Wen Shih, Imad Limame, Martin von Helversen, Sefaattin Tongay, Ermin Malic, Stephan Reitzenstein
2023-09-28T20:01:34Z
http://arxiv.org/abs/2309.16822v1
# Twist Angle Dependence of Exciton Resonances ###### Abstract Van der Waals heterostructures based on TMDC semiconducting materials have emerged as promising materials due to their spin-valley properties efficiently contrived by the stacking-twist angle. The twist angle drastically alters the interlayer excitonic response by determining the spatial modulation, confining moire potential, and atomic reconstruction in those systems. Nonetheless, the impact of the interlayer twist angle on the band alignment of the monolayers composing the heterostructure has received scant attention in the current research. Here, we systematically investigate the twist-angle dependence of intra- and interlayer excitons in twisted WSe\({}_{2}\)/MoSe\({}_{2}\) heterobilayers. By performing photoluminescence excitation spectroscopy, we identify the twist-angle dependence of interlayer emission response, where an energy redshift of about 100 meV was observed for increasing twist angles. The applied microscopic theory predicts, on the contrary, a blueshift, which suggests that additional features, such as atomic reconstruction, may also surpass the moire potential confinement. Those findings also prompt the effects of dielectric screening by addressing the redshift response to the stacking layer order. Furthermore, our findings support the evidence of a band offset dependence on the twist angle for the adjacent monolayers composing the heterobilayer system. Our fundamental study of exciton resonances deepens the current understanding of the physics of twisted TMDC heterostructures and paves the way for future experiments and theoretical works. ## Introduction Transition metal dichalcogenides (TMDC) monolayers have first emerged as a novel class of semiconductors due to the appearance of a direct bandgap at two inequivalent \(\pm\)K valleys when thinned down to monolayer (ML) of material [1, 2]. Further unique physics, such as a strong spin-orbit coupling [3, 4], and a lack of inversion symmetry promote the lifting of the spin degeneracy and the locking of the spin and valley degrees of freedom, which allows the \(\pm\)K valleys to be individually addressed by circularly polarized light [5, 6]. Excitonic properties of TMDCs appear even more attractive when monolayers are vertically stacked to form van der Waals (vdW) heterostructures (HSs). Among their novelties is the presence of a type-II band alignment in MoSe\({}_{2}\)/WSe\({}_{2}\) HS which gives rise to spatially indirect interlayer excitons (IX) with, residing electrons and holes in conduction and valence bands of the adjacent layers. Moreover, the lattice mismatch and twist angle between the monolayers forms a moire superlattice in the system, which creates a periodic potential landscape capable of trapping the interlayer excitons [7, 8]. Those trapping effects, nonetheless, depend drastically on the twist angles that control the moire periodicity and, therefore, their confining potential [9, 10]. However, large twist angles leads to diminishing confining potential and delocalised excitons with altered optical response [9]. Additionally, it has been reported that a preferential stacking order of an asymmetric heterobilayer (e.g. WSe\({}_{2}\)/MoSe\({}_{2}\)/substrate and MoSe\({}_{2}\)/WSe\({}_{2}\)/substrate) alters the dielectric environment experienced by each constituent ML, and therefore, the optoelectronic response of the composed system [11]. 
Central to the physics of artificially stacked TMDC HS is the defined twist angle as well as the stacking order of the constitute layers, which appear as an effective and convenient tuning knob to control their optoelectronic properties [8, 12, 13]. Nonetheless, the influence of the twist angle on the band alignment of the monolayers composing the heterostructure has received scant attention in the recent research. Hence, twist angle dependent studies are highly desirable to gain detail insight into exciton resonances of the TMDC HSs. In this report, we systematically study mechanically stacked twisted WSe\({}_{2}\)/MoSe\({}_{2}\) heterobilayers with twist angles \(\theta\) varying between \(0^{0}\) and \(60^{0}\), aiming to understand twist angle dependent emission properties and excitonic resonances. By performing micro-photoluminescence excitation (\(\mu\)PLE) spectroscopy on twisted WSe\({}_{2}\)/MoSe\({}_{2}\) heterobilayers, we investigate the dependence of the twist angle on the intra- and interlayer excitonic resonances. Here, the PL response displays a drastic PL intensity reduction varying \(\theta\) from \(0^{0}\) to \(30^{0}\), where a redshift of about 100 meV in the emission energy is observed. Further we measured blueshift in emission energy of IX for \(\theta\) increasing towards \(60^{0}\). On the other hand, our microscopic theory suggests a blueshift in energy of interlayer excitons with increasing twist angle \(\theta\) up to 10\({}^{\circ}\), due to modulations in the moire potential with decreasing supercell size and delocalization of interlayer exciton in the heterobilayer. This indicates that additional band gap variations, based on atomic reconstruction or dielectric screening effects, not captured by the applied model are responsible for the observable redshift in energy of interlayer excitons. Additionally, \(\mu\)PLE measurements demonstrate a noticeable twist-angle dependence of intralayer exciton (X) resonances of WSe\({}_{2}\) and MoSe\({}_{2}\), revealing evidence of the monolayer band alignment dependence with twist angle in a TMDC HS system. Overall, we provide a fundamental and systematic study of the TMDC heterostructures, which sheds light on intriguing effects of twist angle-assisted tuning and manipulation of the excitonic resonances. ### Sample Fabrication The WSe\({}_{2}\)/MoSe\({}_{2}\) heterostructures were fabricated by employing the mechanical exfoliation [14] and dry-transfer method [15] techniques. Using a suitable low adhesive tape, the TMDC crystals are thinned and later exfoliated on a PMMA gel strip to assist the dry transfer technique. Monolayers of TMDC materials were thus identified by optical microscope images. Suitable WSe\({}_{2}\) and MoSe\({}_{2}\) monolayers were selected and aligned for the target twist angle considering the edges of each layer. Lastly, the monolayers were Figure 1: **Twisted WSe\({}_{2}\)/MoSe\({}_{2}\) heterobilayer.****a)** Schematic representing the stacking order of a WSe\({}_{2}\)/MoSe\({}_{2}\) heterobilayer on SiO\({}_{2}\)/Si substrate. **b)** IX PL emission from a WSe\({}_{2}\)/MoSe\({}_{2}\) heterobilayer with a twist angle of 56\({}^{\circ}\) at 4 K. The inset shows an optical micrograph of the fabricated twisted heterobilayer, highlighted with a red outline, consisting of MoSe\({}_{2}\) (green) and WSe\({}_{2}\) (blue). **c)** Illustration of conduction and valence band configuration around K point in Brillouin zone with momentum mismatch as result of twist angle (\(\theta\)). 
transferred onto a SiO\({}_{2}\) substrate, keeping the substrate temperature around 60 \({}^{\circ}\)C. Figure 1a shows a schematic of the WSe\({}_{2}\)/MoSe\({}_{2}\) HS system, in which the stacking order of the constituent monolayers is highlighted. ## Results In this work, we performed room temperature second harmonic generation (SHG) measurements to determine the twist angle between the MLs, and micro-photoluminescence (PL) to study the twist angle dependence of the interlayer exciton emission energy. All PL measurements are carried out at 4 K unless mentioned otherwise. Throughout our systematic research, we fabricated six high-quality WSe\({}_{2}\)/MoSe\({}_{2}\) HSs with distinct twist angles varying from R-type stacking (close to 0\({}^{\circ}\)) to H-type stacking (close to 60\({}^{\circ}\)). More fabrication details are given in the Methods section (Sample fabrication) and in the Supplementary Material, Fig. S1. Figure 1b shows the representative WSe\({}_{2}\)/MoSe\({}_{2}\) interlayer exciton photoluminescence extracted from the sample with \(\theta\) = 56\({}^{\circ}\), where one observes the emission peak at \(\approx\) 1.33 eV, in line with previous works reporting similar heterostructures [13, 16, 17, 18]. Aiming to achieve the highest photoluminescence quantum yield, we excited the sample in resonance with the WSe\({}_{2}\) neutral exciton emission energy E\({}_{\text{exc}}\) = 1.71 eV. As stated in the literature, the twist angle between stacked monolayers governs the momentum mismatch around the \(\pm\)K valleys in the Brillouin zone, which promotes the formation of momentum-direct (bright) or momentum-indirect (dark) interlayer excitons (dark excitons in HBLs) [19, 20, 21]. At 0\({}^{\circ}\) and 60\({}^{\circ}\) twist angle, i.e. R-type stacking or H-type stacking, respectively, the type-II band alignment of the two adjacent monolayers holds the minimum momentum mismatch at the \(\pm\)K points. In contrast, with the increase (decrease) of the twist angle from 0\({}^{\circ}\) (60\({}^{\circ}\)) towards the largest angle misalignment (\(\theta\) = 30\({}^{\circ}\)), the transition becomes more indirect as a result of the increasing momentum-space mismatch [22, 23]. Figure 1c depicts the two scenarios described above. ### Twist angle dependence It is well established that the lattice mismatch in the HS system and the twist angle lead to the formation of high- or low-symmetry stacking orientations. Such alignments actively alter the lateral interface between the WSe\({}_{2}\) and MoSe\({}_{2}\) monolayers by modulating the physical interlayer separation (in real space) and the local atomic registry as a consequence of the respective layer alignment [8, 9]. Therefore, the local atomic registry depends on the twist-angle alignment, which modulates the interlayer coupling conditions. Those variations can be noticed through optical properties of the IX, such as the PL intensity or emission energy. Figure 2: **Twist angle dependence of interlayer exciton emission in WSe\({}_{2}\)/MoSe\({}_{2}\) heterobilayers.****a)** Interlayer exciton PL emission from twisted WSe\({}_{2}\)/MoSe\({}_{2}\) heterobilayers with different twist angles. **b)** IX emission energy as a function of twist angle exhibiting the substantial redshift from 0\({}^{\circ}\) to 25\({}^{\circ}\). Inset: IX intensity as a function of twist angle with a pronounced minimum at an angle of 25\({}^{\circ}\). The interlayer exciton PL intensity drastically reduces with increasing (decreasing) twist angle from 0\({}^{\circ}\) (60\({}^{\circ}\)).
To obtain a better overview of the evolution of IX emission for different twist angles, we illustrate the normalized PL response of our samples in Fig. 2a. Figure 2b shows the extracted IX emission energy as a function of twist angle, in which we observe IX energy manifesting a systematic redshift to blueshift (positive parabolic appearance) on the order of 100 meV as the angle changes from 0\({}^{\circ}\) to 60\({}^{\circ}\). Here, we notice that the PL intensity of the IX with a twist angle close to 0\({}^{\circ}\) or 60\({}^{\circ}\) is a maximum, whereas it is drastically lower for the intermediate angles (see Fig. 2b inset). Moreover, it is worth mentioning that relatively longer exposure times (10 times longer) and higher excitation powers (more than twice) were used to get a reasonable signal-to-noise ratio in PL response for HSs with twist angles of 12\({}^{\circ}\) and 25\({}^{\circ}\). This PL intensity reduction can be understood as a weakening of interlayer coupling strength for intermediate twist angles [23]. In fact, the interlayer separation between the MLs gradually increases with twist angle (up to 30\({}^{\circ}\)), and so the associated interlayer coupling strength depletes gradually [24]. Consequently, the reduced the IX exciton population leads to weakened emission intensity as observed for \(\theta\) = 25\({}^{\circ}\)(Fig. 2b, inset). On the other hand, high symmetric stacking and minimal interlayer separation result in higher interlayer coupling strength at twist angles close to 0\({}^{\circ}\) and 60\({}^{\circ}\), which promotes the efficient charge transfer between the WSe\({}_{2}\)/MoSe\({}_{2}\) interfaces [23]. To obtain a better understanding of our experimental findings, the energetic positions of the IX have been modeled with a microscopic theory, distinguishing between different interlayer coupling mechanisms that can give rise to twist angle-dependent spectral shifts of the IX resonance [9, 25, 26, 27]. Microscopic access to the moire exciton energy landscape is gained by describing the periodic moire potential as a modification of the decoupled monolayer energies, which can be solved from the Wannier equation [9, 26, 27, 28]. These modifications include stacking-dependent alignment shifts and electron/hole tunneling. The resulting spectral shift is due to an electrostatic potential [29], giving rise to a renormalization of the band structure [9, 25, 28], and the tunneling is due to the interlayer wavefunction overlap [26, 28]. Since the exciton resonance in question is assigned to the bright K-K exciton, the tunneling of carriers is weak, and thus the alignment shift plays the crucial role [25, 28]. Our calculations predict a blueshift of about 40 meV for the low-lying bright interlayer exciton, stemming from the decrease in the supercell size, which suppresses the impact of the alignment shift on the exciton, consequently delocalizing the exciton in real space (see Supplementary Material, section 5). Interestingly, the theory results contradict the experimentally observed redshift (see figure 2b). This discrepancy could be due to distinct neglected effects, as atomic reconstruction, which in the low twist angle regime strongly affects the exciton energy landscape [30, 31]. 
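For orientation, the supercell-size argument above can be made quantitative with the standard geometric estimate for the moiré period of a nearly lattice-matched heterobilayer (quoted here only as background; the symbols \(a_{M}\), \(a\), and \(\delta\) are introduced for illustration and this estimate is not part of the analysis in this work): \[a_{M}\;\approx\;\frac{a}{\sqrt{\delta^{2}+\theta^{2}}},\] where \(a\) is the monolayer lattice constant, \(\delta\) the relative lattice mismatch between WSe\({}_{2}\) and MoSe\({}_{2}\), and \(\theta\) the twist angle in radians, measured from the nearest high-symmetry stacking (0\({}^{\circ}\) for R-type, 60\({}^{\circ}\) for H-type). Once \(\theta\) exceeds \(\delta\), the moiré period shrinks roughly as \(1/\theta\), so the supercell quickly becomes small compared to the exciton extension and the moiré potential is increasingly averaged out, consistent with the delocalization picture described above.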
Nonetheless, compared to our experimental results, it was reported that a MoSe\({}_{2}\)/WSe\({}_{2}\) heterostructure exposed to different top and bottom environments might experience a dielectric screening asymmetry that affects the optical response of the interlayer excitons [32, 33, 34]. It has been observed that transferring MoSe\({}_{2}\) onto WSe\({}_{2}\) leads to a blueshift followed by a redshift (negative parabolic trend) of the PL response as a function of twist angle [22], which is also in agreement with our theoretical predictions. On the other hand, as observed in our work (see Fig. 2) and discussed in the Supplementary Material of Ref. [29], samples stacked in the reversed order - with the WSe\({}_{2}\) ML placed onto the MoSe\({}_{2}\) ML - exhibit, on the contrary, a considerable redshift followed by a blueshift (positive parabolic trend) as a function of the twist angle. The overall observations suggest that the stacking order of the constituent monolayers plays a substantial part in the twist-angle dependence of the IX emission energies. Importantly, the redshift in IX emission is also observed for the HSs stacked on a few layers of hBN (for details see Supplementary Material, section 4). Such a scenario provides additional freedom to control the IX emission in twisted HS systems in addition to the deterministic twist angle. However, a straightforward understanding is still desired, and could potentially be achieved through spectroscopy measurements combined with further experiments, such as scanning probe microscopy techniques. ### Band offset modulation based on twist angle Aiming at a deeper understanding of our HS system, we performed twist-angle dependent \(\mu\)PLE measurements on all samples mentioned in Fig. 2a. The PLE signal of the IX emission from the HS was recorded as a function of excitation energy, ranging from 1.75 to 1.56 eV and under constant excitation laser power. Figure 3a shows a 2D false color PLE map of the IX emission, which features two distinct and characteristic resonances for the HS system with a twist angle of 56\({}^{\circ}\). The PLE resonances are associated with intralayer excitons of the constituent monolayers, \(X_{WSe_{2}}\) and \(X_{MoSe_{2}}\). The relative integrated IX intensity as a function of excitation energy is displayed in Fig. 3b, in which the \(X_{WSe_{2}}\) and \(X_{MoSe_{2}}\) resonances were extracted by fitting the data with Gaussian functions. Besides the widely reported twist angle dependent interlayer emission response of HSs, it is expected that the band structure of each monolayer will also be affected by the angle between the layers, likely due to proximity effects, as already mentioned in the previous section and in [32, 33, 34]. Considering those aspects and the fact that PLE data gives information about the absorption of the individual layers and, therefore, the constituent band structures, we decided to investigate the intralayer response as a function of the twist angle of the MoSe\({}_{2}\) and WSe\({}_{2}\) monolayers composing the HS. The resulting evolution of the exciton resonances as a function of \(\theta\) is displayed in Fig. 4a. We first noticed that both resonances, \(X_{WSe_{2}}\) and \(X_{MoSe_{2}}\), exhibit a negative parabolic trend with an apparent blueshift. Specifically, the \(X_{WSe_{2}}\) resonance exhibits a blueshift of up to 34 meV over the span of twist angles from 0\({}^{\circ}\) to 25\({}^{\circ}\).

Figure 3: **Micro-photoluminescence excitation (\(\mu\)PLE) measurements on a WSe\({}_{2}\)/MoSe\({}_{2}\) heterobilayer with a twist angle of 25\({}^{\circ}\). a) IX emission as a function of excitation energy (wavelength) under constant excitation power. b) Integrated intensity (black squares) as a function of the excitation wavelength exhibiting resonances associated with the intralayer excitons. The integrated intensity is fitted with a Gaussian function (blue line).**

For twist angles close to 60\({}^{\circ}\), the resonances display a pronounced redshift. This confirms that the intrinsic intralayer exciton resonances also undergo a modification of their optical properties as a function of twist angle. One cannot exclude other external effects, such as dielectric screening or local defects and strain, which may affect the Coulomb interaction of the excitons. However, the increase of the PLE resonance separation with increasing twist angle is consistently observed even in WSe\({}_{2}\)/MoSe\({}_{2}\) HSs stacked on an hBN/SiO\({}_{2}\) substrate (see Supplementary Material Fig. S5). Our findings may also reveal evidence of a twist angle dependence of the monolayer band alignment in a TMDC HS system. It is well established that the band offset of conventional semiconductors is a key parameter for optoelectronic device design because it controls the carrier occupation and, thus, the transport phenomena [35].

Figure 4: **Influence of twist angle on PLE resonance energies of a TMDC HS.** **a)** PLE resonance energies of \(X_{WSe_{2}}\) and \(X_{MoSe_{2}}\) for WSe\({}_{2}\)/MoSe\({}_{2}\) HSs with different twist angles. **b)** Inverse relationship between the PLE resonance separation as a function of twist angle and the IX emission energy. **c)** Schematic representation of the twist angle dependent change in band offset for twist angles 0\({}^{\circ}\), 10\({}^{\circ}\), 25\({}^{\circ}\) and 60\({}^{\circ}\).

Nevertheless, the band offset can be manipulated by altering the Fermi level of the individual material layers at thermal equilibrium, for instance, by varying the local density of carriers (e.g. electrostatic doping) [36, 37, 38, 39, 40]. In the TMDC family, however, further aspects must be considered when studying a heterojunction formed by few-layer or monolayer materials. A few theoretical and experimental reports have discussed the band alignment of different monolayer materials [34, 40], and its importance for the properties of 2D material heterostructures. However, the mechanism by which not only the choice of materials but also the angle between the layers forming the junction affects the band offset of those systems has not yet been established. To better understand the potential correlation between the band offset and twist angles in our systems, we present in Fig. 4b the two data sets corresponding to the interlayer and intralayer resonances. The blue graph in Fig. 4b shows the PLE resonance separation \(\left(X_{WSe_{2}}-X_{MoSe_{2}}\right)\) as a function of the twist angle extracted from the PLE data of Fig. 4a. In comparison, the red graph depicts the IX emission energy (the data already presented in Fig. 2b). We first note that both exciton complexes (intra- and inter-layer) respond inversely to the twist angle, which might indicate that, after stacking, the conduction and valence bands of the constituent materials forming the type-II alignment arrange differently for each twist angle.
The bandgap of TMDCs is also sensitive to their thickness [34, 41], which suggests that the proximity of the layers, already shown to depend strongly on the twist angle, may affect the band alignment of TMDC heterostructures as well. This scheme is illustrated in Fig. 4c. For the transitions lying at the \(\pm\)K valleys of the materials, the intralayer excitons respond inversely to the interlayer exciton as a function of the twist angle. Our understanding based on the theoretical and experimental outcomes might not explain the exact junction formation of heterostructures in distinct dielectric environments; nonetheless, it sheds light on how the band offset depends on the materials composing a heterobilayer as well as on the twist angle between the layers. ## Conclusion In this work, we fabricated high-quality twisted heterostructures consisting of WSe\({}_{2}\) and MoSe\({}_{2}\) monolayers to investigate the influence of twist angle on the intra- and inter-layer excitonic properties. The studied heterostructure samples have twist angles ranging from \(0^{\circ}\) to \(56^{\circ}\), and feature pronounced interlayer exciton emission at cryogenic temperature. The interlayer exciton features a redshift in emission energy for twist angles ranging from \(0^{\circ}\) to \(25^{\circ}\). In contrast, our microscopic theory model, which considers the moire potential effect, predicts a pronounced blueshift of the interlayer exciton emission. The experimental redshift can be attributed to the proximity effect of the constituent monolayers stacked in a specific order (WSe\({}_{2}\)/MoSe\({}_{2}\)). Furthermore, the photoluminescence excitation resonances also change systematically with the twist angle, which indicates a substantial influence of twist angle on the bandgap of the individual materials. The separation between the WSe\({}_{2}\) and MoSe\({}_{2}\) resonances also differs with a change of twist angle, suggesting an alteration in the coupling strength between the layers. This observation ultimately indicates changes in the band offset, which corresponds to the optically active type-II interlayer exciton transition, and also supports the experimentally determined reduced emission energy at twist angles approaching 30\({}^{\circ}\). Taken together, our results reveal that the TMDC heterostructure's band gap, and therefore the excitonic response, strongly depend not only on the symmetric atomic arrangements and confining moire potentials but also on the dielectric environment and twist angle. ## Acknowledgement: Financial support by the Deutsche Forschungsgemeinschaft (DFG) through project Re2974/26-1 is gratefully acknowledged. The Marburg group acknowledges support from the DFG via SFB 1083 and the regular project 512604469. ## Conflict of Interest: The authors declare no conflict of interest.
2309.11195
Imaging performance above 150 keV of the wide field monitor on board the ASTENA concept mission
A new detection system for X-/Gamma-ray broad energy passband detectors for astronomy has been developed. This system is based on Silicon Drift Detectors (SDDs) coupled with scintillator bars; the SDDs act as a direct detector of soft (<30 keV) X-ray photons, while hard X-/Gamma-rays are stopped by the scintillator bars and the scintillation light is collected by the SDDs. With this configuration, it is possible to build compact, position sensitive detectors with unprecedented energy passband (2 keV - 10/20 MeV). The X and Gamma-ray Imaging Spectrometer (XGIS) on board the THESEUS mission, selected for Phase 0 study for M7, exploits this innovative detection system. The Wide Field Monitor - Imager and Spectrometer (WFM-IS) of the ASTENA (Advanced Surveyor of Transient Events and Nuclear Astrophysics) mission concept consists of 12 independent detection units, also based on this new technology. For the WFM-IS, a coded mask provides imaging capabilities up to 150 keV, while above this limit the instrument will act as a full sky spectrometer. However, it is possible to extend imaging capabilities above this limit by alternatively exploiting the Compton kinematics reconstruction or by using the information from the relative fluxes measured by the different cameras. In this work, we present the instrument design and results from MEGAlib simulations aimed at evaluating the effective area and the imaging performances of the WFM-IS above 150 keV.
Lisa Ferro, Leo Cavazzini, Miguel Moita, Enrico Virgilli, Filippo Frontera, Lorenzo Amati, Natalia Auricchio, Riccardo Campana, Ezio Caroli, Cristiano Guidorzi, Claudio Labanti, Piero Rosati, John B. Stephen
2023-09-20T10:30:43Z
http://arxiv.org/abs/2309.11195v1
# Imaging performance above 150 keV of the Wide Field Monitor on board the ASTENA concept mission ###### Abstract A new detection system for X-/Gamma-ray broad energy passband detectors for astronomy has been developed. This system is based on Silicon Drift Detectors (SDDs) coupled with scintillator bars; the SDDs act as a direct detector of soft (\(<\)30 keV) X-ray photons, while hard X-/Gamma-rays are stopped by the scintillator bars and the scintillation light is collected by the SDDs. With this configuration, it is possible to build compact, position sensitive detectors with unprecedented energy passband (2 keV - 10/20 MeV). The X and Gamma-ray Imaging Spectrometer (XGIS) on board the THESEUS mission, selected for Phase 0 study for M7, exploits this innovative detection system. The Wide Field Monitor - Imager and Spectrometer (WFM-IS) of the ASTENA (Advanced Survey of Transient Events and Nuclear Astrophysics) mission concept consists of 12 independent detection units, also based on this new technology. For the WFM-IS, a coded mask provides imaging capabilities up to 150 keV, while above this limit the instrument will act as a full sky spectrometer. However, it is possible to extend imaging capabilities above this limit by alternatively exploiting the Compton kinematics reconstruction or by using the information from the relative fluxes measured by the different cameras. In this work, we present the instrument design and results from MEGAlib simulations aimed at evaluating the effective area and the imaging performances of the WFM-IS above 150 keV. hard X/soft Gamma-ray astronomy, position sensitive detectors, silicon drift detectors, scintillators, point-source localization Further author information: (Send correspondence to L. Ferro) Lisa Ferro: E-mail: [email protected] ## 1 Introduction The joint detection of a Gravitational Wave (GW) event and of a Gamma Ray Burst (GRB) has opened the era of multimessenger astrophysics [1], in which the information coming from those events will allow us to understand in an unprecedented way the most extreme phenomena and conditions in the universe, such as supernovae (SNe), GRBs, active galactic nuclei galaxies (AGNs), and even investigate the fundamental laws of physics, such as the constancy of the speed of light in vacuum. Furthermore, in the field of X and gamma-ray astrophysics, many crucial questions regarding the properties and nature of the same events are still unanswered. A new generation of instruments for hard X/soft gamma-ray astrophysics is necessary to reach the localization accuracy, imaging capabilities and sensitivity necessary both to investigate the high energy sky in a meaningful way and both to grant the synergy with the new ground and sky observatories and GW interferometers that will be operative in the next decades. With those targets in mind, a new detection system for X and Gamma ray astronomy has been developed. This detection system is based on the so-called "siswich" (Silicon sandwich) system, which exploits the coupling between Silicon Drift Detectors (SDDs) and scintillator bars to obtain detectors working in a very broad energy band (from some keV up to tens of MeV), with 3-D position sensitivity, spectroscopic capabilities and a very low background [2]. In this configuration, scintillator bars are read-out on top and bottom by SDDs, as shown in Fig. 1. 
Low energy (\(<30\) keV) X-rays are stopped and detected by the SDDs on top, while higher energy gamma rays are stopped inside the scintillator bars and the SDDs, on top and bottom, detect the scintillation light. Exploiting the fact that Gamma-rays will trigger both the top and bottom SDDs, while X-ray trigger only the top SDDs, in addition to the differences in the charges pulses shapes for the two kind of events (direct detection in the SDDs and detection of scintillation light), we are able to distinguish between the two type of events. This configuration has been proposed for the instrument XGIS (X/Gamma-ray Imaging Spectrometer) aboard the space mission THESEUS [3], selected by ESA for Phase 0 study for the M7 program. In this paper we study a possible evolution of the XGIS concept: the Wide Field Monitor - Imager and Spectrometer (WFM-IS), proposed as part of the payload for the concept mission ASTENA (Advanced Surveyor for Transient Events and Nuclear Astrophysics). ## 2 The Wide Field Monitor onboard ASTENA The ASTENA concept mission was submitted to the ESA long-term program "Voyage 2050" with two white papers [4, 5] describing its scope and capabilities. The ASTENA concept is based on innovative technologies and is designed not only to bring a quantum leap in terms of localization, imaging and spectroscopy in the field of X and soft gamma rays astrophysics, but also to be synergistic with the next generation of multimessenger observatories that will be available in the decades to come. The ASTENA payload consists in two instruments: a Narrow Field Telescope (NFT) and a Wide Field Monitor - Imager and Spectrometer (WFM-IS). The NFT will include a Laue lens, an innovative optics based on Bragg's law of diffraction, made up by thousands of crystals properly oriented to concentrate radiation in the energy band 50-700 keV to a focal point [6]. The WFM-IS, instead, will be based on the same concept of the THESEUS/XGIS, and will consist of twelve Position Sensitive Detectors (PSD) units of \(43\times 42\) cm\({}^{2}\) cross section, topped by a double scaled coded mask. The twelve PSD units will be placed around the main body of the spacecraft in groups of two, defining six different "camera pairs", oriented with an angle of \(15^{\circ}\) with respect to the axis of the spacecraft (Fig. 2). Each PSD unit is made up by \(4\times 8\) modules, each consisting of 205 hexagonal scintillator bars, with a distance between flats of 5 mm and a length that will be optimized to grant the best localization and spectroscopic performances. At the moment, we are studying Cesium Iodide (CsI(Tl)) scintillator bars, but we will also explore the possibility to use other scintillator materials such as GAGG(Ce). The scintillators are read-out on top by linear 0.4 mm thick multi-anode Silicon Drift Detectors (SDDs) and by hexagonal single-anode SDDs on bottom. Figure 1: Representation of a small section of a full PSD unit of the WFM-IS. The scintillator bars are drawn in green, while the top and bottom SDDs in purple. Low energy X-rays are stopped by the top SDDs, while higher energy photons are stopped inside the scintillators. The double scaled coded mask on top of each PSD unit will grant the instruments of imaging capabilities up to 150 keV, with a point source localization accuracy of about 1 arcmin. 
This is obtained by exploiting the combined pattern of a 1-D stainless steel coded mask 0.5 mm thick, for lower energy (\(<\)30 keV) photons, and a 2-D tungsten mask 1.0 mm thick, for higher energy (30-150 keV) photons. Above 150 keV, the coded masks become too transparent and the only way to get a rough point source localization is either by exploiting the Compton kinematics reconstruction or performing a triangulation exploiting the relative differences between the fluxes measured on the different cameras. With this work we are trying to investigate how to perform this type of analysis and to estimate the degree of localization accuracy that we can get above the limit of 150 keV. ## 3 Montecarlo Model A Monte Carlo model of the WFM-IS has been implemented using the MEGALib/Geomega package [7]. The current configuration geometry assumed for the WFM-IS is shown in Fig. 3; It consists of 12 PSD units, placed in a hexagon shape, with two units on each side of the hexagon. All the units are offset by 15\({}^{\circ}\) with respect to the center axis. In this simplified geometry we did not include the coded mask and the collimator since they are almost transparent at energies \(>\)150 keV. Furthermore, other secondary components have not been modeled as the presence of gaps between the pixels or the wrapping material around the scintillator crystals. Such elements have a minimal impact on the results here presented and will be considered in an advanced engineering phase of the project. We included in the model the top and bottom SDDs as passive layers of Silicon. On the sides of each two units, a veto made of CsI was placed. At the moment, the simulated length of the CsI bars is 3 cm, but we will increase it to 5 cm to obtain an higher detection efficiency. The depth resolution along it is calculated using the formula on Ref.[8]. In Fig. 3 it is also shown the reference system and camera numbers used throughout the document. ## 4 Results of the Simulations ### Effective Area and Event Multiplicity Analyses The detection of Compton events using the WFM-IS position sensitive detectors provides a means to localize sources. To achieve this, at least two Compton interactions are required for reconstruction. Although more than two events can enhance the localization accuracy, the benefits typically reach a point of diminishing returns, where the additional information gained from each event becomes progressively insignificant. In Fig. 4 left it is shown the absolute efficiency of event multiplicity as a function of the energy for the WFM-IS. For lower energies, the single events prevail, while above \(\sim\)1 MeV the higher multiplicity events start to prevail. Figure 2: Schematic drawing of the ASTENA spacecraft in-flight configuration. The twelve WFM-IS Position Sensitive Modules (light grey) surround the body of the spacecraft (yellow). The optics of the Narrow Field Telescope is shown in red. The effective area of the WFM-IS was evaluated from simulations, both for on-axis and off-axis sources, as the ratio between the number of reconstructed Compton events and the total number of simulated events, multiplied by the surface area from which the simulated events were generated. Fig. 4 right shows the effective area against the energy for an on-axis source for energies between 150 keV and 5 MeV. Between 200 keV and 5 MeV the result approximates a power law, reaching a plateau for lower and higher energies. 
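For reference, the effective-area estimate described above reduces to simple Monte Carlo bookkeeping: the fraction of reconstructed events times the area of the generation surface. The following Python sketch illustrates the computation; the energies, event counts and generation area are illustrative placeholders, not actual WFM-IS simulation output.

```python
import numpy as np

# Illustrative Monte Carlo bookkeeping: for each simulated energy, the number of
# photons thrown from a generation surface of area a_gen_cm2 and the number of
# events surviving Compton reconstruction. All values are placeholders.
energies_keV = np.array([150, 300, 600, 1000, 2000, 5000])
n_simulated = np.full_like(energies_keV, 1_000_000)
n_reconstructed = np.array([8_000, 30_000, 55_000, 70_000, 60_000, 40_000])
a_gen_cm2 = 4.0e4  # area of the surface from which the photons were generated

# Effective area = (reconstructed / simulated) * generation area,
# with a simple binomial error estimate on the reconstructed counts.
eff = n_reconstructed / n_simulated
a_eff_cm2 = eff * a_gen_cm2
a_eff_err = np.sqrt(eff * (1.0 - eff) / n_simulated) * a_gen_cm2

for e, a, da in zip(energies_keV, a_eff_cm2, a_eff_err):
    print(f"{e:5d} keV  A_eff = {a:7.1f} +/- {da:4.1f} cm^2")
```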
The effective area for off-axis sources was calculated as a function of the zenith angle \(\theta\) and azimuth angle \(\phi\), shown in Fig. 5.

Figure 3: Monte Carlo model of the WFM-IS. It consists of a total of 12 PSD units, placed in a hexagonal shape, with 2 units on each side of the hexagon. The black grids are the instrument sensitive volumes, while the purple ones are vetoes. The reference system and camera numbers are the ones used throughout the document.

Figure 4: Left: The absolute efficiency of events with different multiplicities for the WFM-IS as a function of the energy. Right: WFM-IS effective area as a function of the energy.

We can observe that the effective area decreases substantially with the increase of \(\theta\), especially for lower energies. This behavior can be exploited to localize the source. However, for higher energies, the effective area tends to be constant with \(\theta\), making it impossible to use this information for source localization. The same constant trend is found when we study the behavior of the effective area as a function of \(\phi\), for which the effective area remains constant independently of the value of \(\phi\), as observed in the right side of Fig. 5, in this case for all the energies. Due to the symmetry of the cameras, the result is \(90^{\circ}\) symmetric, and we therefore only show the result for \(\phi\) between \(0^{\circ}\) and \(90^{\circ}\). This result proves that the hexagonal positioning of the cameras is very effective in obtaining a sensitivity independent of the azimuth angle; however, it is not useful for locating the source. For that, we need to evaluate the data of each camera individually. ### Point Source Reconstruction Capabilities Point source reconstruction can be obtained either by studying the Compton kinematics or by exploiting the relative fluxes of the counts on the different cameras. Results on the Compton kinematics reconstruction are reported in the master thesis by Cavazzini (Ref. 7). Scintillator PSDs are a powerful tool for Compton telescopes, allowing us to obtain measurements of the direction and energy of the photons. Their main limitation is the relatively poor energy resolution, which leads to inaccuracies in the measurement of gamma-ray energies. Fig. 6 shows an example of an image obtained using the WFM-IS model with the MEGAlib tool Mimrec for an on-axis 1 MeV source. In Fig. 7 we also report the dependence on energy and offset angle \(\theta\) of the simulated Angular Resolution Measure (ARM) of the WFM-IS. The ARM is defined as the difference between the reconstructed and the true direction and it quantifies the accuracy of the localization of a source [9]. The angular resolution exhibits a high value at 200 keV due to the dominance of single events; however, it improves at higher energies, since high multiplicity events become increasingly prevalent. It is also worth noticing that the angular resolution decreases when the offset angle increases. This means that the instrument reconstructs off-axis images better than on-axis ones. This is due to the cameras' offset angle, which allows a better reconstruction at higher angles. To obtain a very rough determination of the source's azimuth angle, we can exploit the dependence of the count flux measured by each camera pair on the azimuth angle. We simulated a monochromatic photon wavefront with an energy of 1000 keV coming from a direction defined by the angles \((\theta,\phi)\).
We varied \(\theta\) between the values \(15^{\circ}\), \(35^{\circ}\), \(40^{\circ}\), \(45^{\circ}\), \(70^{\circ}\) and \(\phi\) in the range \([0^{\circ},360^{\circ}]\) in steps of \(10^{\circ}\). We fitted the counts vs \(\phi\) curve for each pair of cameras with a combination of one or more cosine functions. Figure 8 shows the fit on the six pairs of WFM cameras obtained for the zenith angles of \(15^{\circ}\) and \(40^{\circ}\). We took the fit results as the true dependence of the counts on each camera on the azimuth angle of the source. We used those results to reconstruct the position of the source on the sky from the value of the counts on the six different pairs of cameras. To do so, we found the angular intervals for each pair that allow us to have a number of counts compatible with the simulated counts within five times the error on the counts.

Figure 5: Left: WFM-IS effective area as a function of \(\theta\) for a fixed \(\phi=0^{\circ}\). Right: effective area as a function of \(\phi\) for \(\theta=30^{\circ}\).

Finally, the intersection between the six angular intervals, one per camera pair, gives us an angular region corresponding to a rough localization of the source. Figure 9 shows the results of the reconstruction of the azimuth angle (simulated azimuth angle vs reconstructed azimuth angle) for \(\theta=15^{\circ}\) and \(\theta=40^{\circ}\). The average error of the reconstruction is about \(10^{\circ}\). It is worth noting that the shape of the flux curves on the six camera pairs depends on the zenith angle of the source, so, with this technique, the quality of the \(\theta\)-localization will also impact the quality of the \(\phi\)-localization. For now, this effect was not taken into account, but we are working on understanding how the reconstruction of one angle impacts the other. In the near future, we plan to improve this technique and test it with different energies, polychromatic beams and, finally, with relevant scientific cases. ###### Acknowledgements. This work has been supported by the financial contribution of the AHEAD EU Horizon 2020 project (Integrated Activities in the High Energy Astrophysics Domain), grant agreement n. 871158. Figure 6: Reconstructed image for an on-axis 1 MeV source. Figure 7: Left: Angular resolution as a function of energy for an on-axis source. Right: Angular resolution as a function of offset angle \(\theta\) for a 1 MeV source.
2309.09916
Learning Nonparametric High-Dimensional Generative Models: The Empirical-Beta-Copula Autoencoder
By sampling from the latent space of an autoencoder and decoding the latent space samples to the original data space, any autoencoder can simply be turned into a generative model. For this to work, it is necessary to model the autoencoder's latent space with a distribution from which samples can be obtained. Several simple possibilities (kernel density estimates, Gaussian distribution) and more sophisticated ones (Gaussian mixture models, copula models, normalization flows) can be thought of and have been tried recently. This study aims to discuss, assess, and compare various techniques that can be used to capture the latent space so that an autoencoder can become a generative model while striving for simplicity. Among them, a new copula-based method, the Empirical Beta Copula Autoencoder, is considered. Furthermore, we provide insights into further aspects of these methods, such as targeted sampling or synthesizing new data with specific features.
Maximilian Coblenz, Oliver Grothe, Fabian Kächele
2023-09-18T16:29:36Z
http://arxiv.org/abs/2309.09916v1
# Learning Nonparametric High-Dimensional Generative Models: ###### Abstract By sampling from the latent space of an autoencoder and decoding the latent space samples to the original data space, any autoencoder can simply be turned into a generative model. For this to work, it is necessary to model the autoencoder's latent space with a distribution from which samples can be obtained. Several simple possibilities (kernel density estimates, Gaussian distribution) and more sophisticated ones (Gaussian mixture models, copula models, normalization flows) can be thought of and have been tried recently. This study aims to discuss, assess, and compare various techniques that can be used to capture the latent space so that an autoencoder can become a generative model while striving for simplicity. Among them, a new copula-based method, the _Empirical Beta Copula Autoencoder_, is considered. Furthermore, we provide insights into further aspects of these methods, such as targeted sampling or synthesizing new data with specific features. ## 1 Introduction Generating realistic sample points of various data formats has been of growing interest in recent years. Thus, new algorithms such as _Autoencoders (AEs)_ and _Generative Adversarial Networks (GANs)_Goodfellow et al. (2014) have emerged. GANs use a discriminant model, penalizing the creation of unrealistic data from a generator and learning from this feedback. On the other hand, AEs try to find a low-dimensional representation of the high-dimensional input data and reconstruct from it the original data. To turn an AE into a generative model, the low-dimensional distribution is modeled, samples are drawn, and thereupon new data points in the original space are constructed with the decoder. We call this low dimensional representation of the data in the autoencoder the _latent space_ in the following. Based on that, _Variational Autoencoders (VAEs)_ have evolved, optimizing for a Gaussian distribution in the latent space Kingma and Welling (2014). Adversarial autoencoders (AAEs) utilize elements of both types of generative models, where a discriminant model penalizes the distance of the encoded data from a prior (Gaussian) distribution (Makhzani et al., 2016). However, such strong (and simplifying) distributional assumptions as in the VAE or AAE can have a negative impact on performance, leading to a rich literature coping with the challenge of reducing the gap between approximate and true posterior distributions (e.g., Rezende and Mohamed 2015, Tomczak and Welling 2018, Kingma et al. 2016, Gregor et al. 2015, Cremer et al. 2018, Marino et al. 2018, Takahashi et al. 2019). In this paper we discuss more flexible approaches modeling the latent space without imposing restrictions on the underlying distribution. Recently, Tagasovska et al. 2019 presented the _Vine Copula Autoencoder (VCAE)_. Their approach comprises two building blocks, an autoencoder and a vine copula which models the dependence structure in latent space. By that, they were able to create realistic, new images with samples from the fitted vine copula model in the latent space. In this work, we want to elaborate on this idea and compare various methods to model the latent space of an autoencoder to turn it into a generative model. To this end, we analyze, amongst others, the usage of _Gaussian mixture models (GMM)_ as done by Ghosh et al. 2020, the vine copula approach by Tagasovska et al. 2019, and simple multivariate _Kernel Density Estimates_. 
Additionally, we introduce a new, non-parametric copula approach, the _Empirical Beta Copula Autoencoder (EBCAE)_. To get a deeper understanding of how this can turn a standard autoencoder into a generative model, we inspect resulting images, check the models for their ability to generalize and compare additional features. In this study we do not aim to beat the latest SOTA generative models but want to shed light on different modeling techniques in the latent space and their characteristics in a rather straightforward autoencoder setting, which may be applied in more sophisticated models as well. Thus, we strive for simplicity and take an alternative route to more and more complex models. We believe that such an analysis in a straightforward setting is essential for understanding the effects from different sampling methods, which may then be applied in more advanced generative models. We also check whether the methods may be a simple alternative to more complex models, such as normalization flows Rezende and Mohamed (2015) or diffusion models (see, e.g., Rombach et al. 2022, Vahdat et al. 2021). More specifically, we use the well-known Real NVP (Dinh et al., 2017) as an example from these more sophisticated machine learning models in the latent space but do not elaborate on these in detail. Note that in contrast to other methods (e.g., as proposed by Oring et al. 2021, Berthelot et al. 2019 or van den Oord et al. 2017), the investigated overall approach does not restrict or change the training of the autoencoder in any form. All models considered in this work are constructed in three steps, visualized in Figure 1. First, an autoencoder, consisting of an encoder \(f\) and a decoder \(g\), is trained to find a low-dimensional representation of the data \(X\). Second, the data in the latent space \(Y\) is used to learn the best fitting representation \(Y^{\prime}\) of it. This is where the examined models differ from each other by using different methods to model the latent space. Finally, we sample from the learned representation of the latent space and feed the samples into the decoder part of the autoencoder, creating new synthetic data samples. Generative models are a vivid part of the machine learning literature. For example, new GAN developments Varshney et al. (2021), Karras et al. (2021), Lee et al. (2021), Hudson and Zitnick (2021), developments in the field of autoencoders, Larsen et al. (2016), Yoon et al. (2021), Zhang et al. (2020), Shen et al. (2020) or developments in variational autoencoders Sohn et al. (2015), Havtorn et al. (2021), Masrani et al. (2019), Xu et al. (2019) are emerging. We again want to emphasize that for the models we consider, no prior is needed, nor the optimization approach is changed, i.e., the latent space is modeled after the training of the autoencoder post-hoc. Thus, the presented approach could be transferred to other, more sophisticated, state-of-the-art autoencoders, as hinted in Ghosh et al. 2020. The general idea of creating new data by sampling in the latent space of a generative model has already been used by, e.g., Tagasovska et al. 2019, Dai and Wipf 2019, Brehmer and Cranmer 2020 or Ghosh et al. 2020, but to the best of our knowledge, no analysis and comparison of such methods have been made so far. Closely related, more and more researchers specifically address the latent space of generative models Mishne et al. (2019), Fajtl et al. (2020), Moor et al. (2020), Oring et al. (2021), Hofert et al. (2021) in their work. 
There, especially hierarchical methods as suggested by Maaloe et al. (2019) seem to be promising. Further, autoencoders based on the Wasserstein distance have lately achieved excellent results by changing the regularization term of a VAE and using or learning a Gaussian mixture prior Tolstikhin et al. (2019); Mondal et al. (2021), analogously to our use of Gaussian mixtures fitting the latent space distribution.

Figure 1: Function scheme of simple generative autoencoders. 1. An encoder \(f\) encodes the data \(X\) to a low dimensional representation \(Y\). 2.1 \(Y\) is modeled by \(Y^{\prime}\), 2.2 Generate new synthetic samples of the latent space by sampling from \(Y^{\prime}\). 3. Decode the new samples with the decoder \(g\).

This work does not propose a new 'black-box algorithm' for generating data (although we present the new EBCAE) but analyses challenges and possible answers on how autoencoders can be turned into generative models by using well-understood tools of data modeling. One of our main findings is that it is hard to balance the trade-off between out-of-bound sampling and creating genuinely new pictures. We conclude that, besides a purely numerical evaluation and looking at new random samples of a generative model with a latent space, the decoded image of the nearest neighbor in the latent space from the training data should also be inspected. We demonstrate in our experiments that copula-based approaches may be promising alternatives to traditional modeling methods since they allow for the recombination of marginal distributions from one class with the dependence structure of another class, leading to new possibilities in synthesizing images; we also discuss targeted sampling. Our conclusion is intended to point out relevant aspects to the user and discusses the advantages and disadvantages of the models examined. The remainder of the paper is structured as follows. Section 2 introduces various methods for modeling the latent space. Besides traditional approaches, copula-based methods are introduced. Section 3 describes the implementation, evaluation, and results of the experiments carried out. In Section 4 we discuss the results and conclude the paper. Last, we provide additional experiments and insights for interested readers in the appendix. ## 2 Modeling the latent space In this section, we want to introduce and reflect on different methods to model the latent space in an autoencoder (Step 2 in Figure 1). All methods aim to fit the low-dimensional data \(Y\) as best as possible to be able to create new sample points in the latent space, which lead to new realistic images after passing the decoder. We first recap more 'traditional' statistical tools, followed by copulas as an intuitive and flexible tool for modeling high-dimensional data. We briefly explain how each approach can be used to model data in the latent space and how to obtain samples thereof. Note that we do not introduce our benchmark models, namely the standard plain vanilla _VAE_ and the _Real NVP_, and refer to the original papers instead (Kingma and Welling, 2014; Dinh et al., 2017). Pseudocode of the overall sampling approach is given in Appendix A (Algorithm 2). ### Traditional modeling methods We classify the _multivariate Gaussian distribution_, a _Kernel Density Estimation (KDE)_, and a _Gaussian Mixture Model (GMM)_ as traditional modeling methods and give a rather short treatment of each below. They are well known and can be studied in various statistics textbooks such as Hastie et al. 2001 or Bishop 2006.
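For illustration, all three traditional models treated below can be fitted to the encoded training data and sampled with a few lines of scikit-learn. The sketch assumes that the latent codes \(Y\) are available as a NumPy array of shape \((n,d)\) (filled here with placeholder values); it is meant as a minimal illustration under these assumptions, not necessarily the exact implementation used in the experiments.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV

# Y: latent codes of the training data, shape (n, d), e.g. produced by the encoder f.
# Placeholder codes are used here for illustration only.
rng = np.random.default_rng(0)
Y = rng.normal(size=(2000, 10))

# (1) Multivariate Gaussian: estimate mean and covariance, then sample.
mu, cov = Y.mean(axis=0), np.cov(Y, rowvar=False)
y_gauss = rng.multivariate_normal(mu, cov, size=1000)

# (2) Gaussian mixture with M components, fitted by maximum likelihood (EM).
gmm = GaussianMixture(n_components=10, covariance_type="full").fit(Y)
y_gmm, _ = gmm.sample(1000)

# (3) Multivariate KDE with a Gaussian kernel; the bandwidth is chosen by a
#     grid search with 10-fold cross-validation, as described in the text.
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.logspace(-1, 0.5, 10)}, cv=10)
kde = grid.fit(Y).best_estimator_
y_kde = kde.sample(1000)

# Each y_* array can then be passed through the decoder g to obtain new images.
```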
#### Multivariate Gaussian Probably the simplest method is to assume the data in the latent space to follow a multivariate Gaussian distribution. Thus, we estimate the covariance matrix \(\hat{\Sigma}\) and mean vector \(\hat{\mu}\) of the data \(Y\). In the second step, we draw samples thereof and pass them through the decoder to generate new images. Note that this is similar to the sampling procedure in a VAE, but without forcing the latent space to be Gaussian during training. #### GMM The _Gaussian Mixture Model (GMM)_ aims to model the density of the latent space by mixing \(M\) multivariate Gaussian distributions. Thus, the Gaussian mixture model has the form \[f(x)=\sum_{m=1}^{M}\alpha_{m}\phi(x;\mu_{m},\Sigma_{m}) \tag{1}\] where \(\alpha_{m}\) denotes the mixing parameter and \(\phi\) the density of the multivariate normal distribution with mean vector \(\mu_{m}\) and covariance matrix \(\Sigma_{m}\). The model is usually fit by maximum likelihood using the EM algorithm. By combining several Gaussian distributions, it is more flexible than estimating only one Gaussian distribution as above. A GMM can be seen as some kind of kernel method (Hastie et al., 2001), having a rather wide kernel. In the extreme case, i.e., where \(M\) equals the number of points the density is estimated on, a Gaussian distribution with zero variance is centered over each point. Kernel density estimation is introduced in the following. #### KDE _Kernel Density Estimation_ is a well-known non-parametric tool for density estimation. Put simply, a KDE places a density around each data point. The total resulting estimated density is constructed by \[f(x_{0})=\frac{1}{N\lambda}\sum_{i=1}^{N}K_{\lambda}(x_{0},x_{i}) \tag{2}\] with \(N\) being the total number of data points, \(\lambda\) the bandwidth, and \(K\) the used kernel. Note that the choice of bandwidth and kernel can affect the resulting estimated density. Kernel density estimation can be performed on univariate as well as multivariate data. In this work, we rely on the most commonly used kernel, the Gaussian kernel, and a bandwidth fitted via _Silverman's rule of thumb_ [Silverman, 1986] for the univariate KDEs (i.e. for estimating the marginal distributions of the latent space), while we use a grid search with 10-fold cross-validation in the multivariate case. We use kernel density estimation in multiple manners throughout this work. First, we use a multivariate KDE to model the density of the data in the latent space itself. In the case of a Gaussian kernel, it can be written as \[f(x)=\frac{1}{N\sqrt{(2\pi)^{d}|\Sigma|}}\sum_{i=1}^{N}e^{-\frac{1}{2}(x-x_{i})^{T}\Sigma^{-1}(x-x_{i})} \tag{3}\] where \(\Sigma\) represents the covariance matrix of the kernel, i.e., the matrix of bandwidths, and \(d\) the dimension of the latent space. Second, we ignore the dependence structure between margins and estimate the univariate densities of each dimension in the latent space by a KDE for each marginal distribution. In this way, we are able to find out whether explicitly modeling the dependence structure is necessary or not. We call this the _Independent modeling approach_, denoted by _Independent_ for short in the following. Last, we use univariate KDEs for modeling the marginal distributions of each dimension in the latent space and use them in the copula models described below. ### Copula based models Besides the traditional modeling methods introduced above, we apply copula based models.
In the following, we first introduce copulas as a tool for high-dimensional data, which allows us to model the latent space in our application. Then, we focus on the two copula-based methods to model the latent space of the autoencoder: the _vine copula_ and the _empirical beta copula_ approach. For detailed introductions to copulas, we refer the reader to Nelsen 2006, Joe 2014, Durante and Sempi 2015. _Copulas_ have been subject to an increasing interest in the _Machine Learning_ community over the last decades, see, e.g., Dimitriev and Zhou 2021, Janke et al. 2021, Messoudi et al. 2021, Ma et al. 2021, Letizia and Tonello 2020, Liu 2019, Kulkarni et al. 2018, Tran et al. 2015. In a nutshell, copula theory enables us to decompose any \(d\)-variate distribution function into \(d\) marginal univariate distributions and their joint dependence structure, given by the copula function. Thus, copulas "couple" multiple univariate distributions into one joint multivariate distribution. More formally, a \(d\)-variate copula \(C:[0,1]^{d}\rightarrow[0,1]\) is a \(d\)-dimensional joint distribution function whose margins are uniformly distributed on the unit interval. Decomposing and coupling distributions with copulas is formalized in Theorem 2.1 going back to Sklar 1959. **Theorem 2.1** (Sklar 1959).: _Consider a \(d\)-dimensional vector of random variables \(\mathbf{Y_{i}}=(Y_{i,1},\ldots,Y_{i,d})\) with joint distribution function \(F_{\mathbf{Y}}(y_{i})=P(Y_{1}\leq y_{i,1},\ldots,Y_{d}\leq y_{i,d})\) for \(i=1,\ldots,n\). The marginal distribution functions \(F_{j}\) are defined by \(F_{j}(y_{i,j})=P(Y_{j}\leq y_{i,j})\) for \(y_{i,j}\in\mathbb{R}\), \(i=1,\ldots,n\) and \(j=1,\ldots,d\). Then, there exists a copula \(\hat{C}\), such that_ \[F_{\mathbf{Y}}(y_{1},..,y_{d})=C(F_{1}(y_{1}),\ldots,F_{d}(y_{d}))\] _for \((y_{1},\ldots,y_{d})\in\mathbb{R}^{d}\). Vice versa, using any copula \(\tilde{C}\), it follows that \(\tilde{F}_{\mathbf{Y}}(y_{1},..,y_{d}):=\tilde{C}(F_{1}(y_{1}),\ldots,F_{d}(y_{d}))\) is a proper multivariate distribution function._ This allows us to construct multivariate distributions with the same dependence structure but different margins or multivariate distributions with the same margins but different couplings/pairings, i.e., dependence structures. The simplest estimator is given by the empirical copula. It can be estimated directly on the ranks of each marginal distribution by \[\hat{C}(\mathbf{u})=\frac{1}{n}\sum_{i=1}^{n}\prod_{j=1}^{d}\mathbf{1}\bigg{\{} \frac{r_{i,j}^{(n)}}{n}\leq u_{j}\bigg{\}} \tag{4}\] with \(\mathbf{u}=(u_{1},\ldots,u_{d})\in[0,1]^{d}\) and \(r_{i,j}^{(n)}\) denoting the rank of each \(y_{i,j}\) within \((y_{1,j},\ldots,y_{n,j})\), i.e., \[r_{i,j}^{(n)}=\sum_{k=1}^{n}\mathbf{1}\{y_{k,j}\leq y_{i,j}\}. \tag{5}\] Note that \(\mathbf{u}=(u_{1},\ldots,u_{d})\) represents a quantile level, hence a scaled rank. Simultaneously, the univariate margins can be estimated using a KDE so that the full distribution latent space is governed for. Note that it is not possible to draw new samples from the empirical copula directly as no random process is involved. In our applications, the latent space is typically equipped with dimensions \(\geq 2\). Although a variety of two-dimensional copula models exist, the amount of multivariate (parametric) copula models is somewhat limited. We present two solutions to this problem in the following, namely _vine copulas_ and the _empirical beta copula_. 
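Before turning to these two models, the rank-based construction can be made concrete in a few lines: the scaled ranks and the empirical copula of Formula (4) are computed directly from the latent codes. The NumPy/SciPy sketch below uses placeholder data and Gaussian KDE margins with Silverman's rule, as mentioned in the text; it only illustrates the objects involved and is not tied to the experiments reported later.

```python
import numpy as np
from scipy.stats import rankdata, gaussian_kde

# Placeholder latent codes Y of shape (n, d); in practice Y = f(X) from the encoder.
rng = np.random.default_rng(1)
Y = rng.normal(size=(2000, 10))
n, d = Y.shape

# Rank matrix r_{i,j} per latent dimension and scaled ranks r_{i,j}/n,
# i.e. the points at which the empirical copula places mass 1/n each.
ranks = np.column_stack([rankdata(Y[:, j], method="ordinal") for j in range(d)])
U = ranks / n

# Univariate margins estimated by Gaussian KDEs (Silverman's rule-of-thumb bandwidth).
margins = [gaussian_kde(Y[:, j], bw_method="silverman") for j in range(d)]

def empirical_copula(u, U=U):
    """Evaluate C_hat(u) of Formula (4): fraction of rows whose scaled ranks are all <= u."""
    return np.mean(np.all(U <= u, axis=1))

print(empirical_copula(np.full(d, 0.5)))
```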
**Vine Copula Autoencoder** _Vine copulas_ decompose the multivariate density as a cascade of bivariate building blocks organized in a hierarchical structure. This decomposition is not unique, and it influences the estimation procedure of the model. Here, we use _regular-vine (r-vine)_ models Czado (2019); Joe (2014) to model the 10-, 20- and 100-dimensional latent spaces of the autoencoders at hand. An r-vine is built from a sequence of linked trees \(T_{i}=(V_{i},E_{i})\), with nodes \(V_{i}\) and edges \(E_{i}\) for \(i=1,\ldots,d-1\), and follows distinct construction rules which we present in Appendix B. The \(d\)-dimensional copula density can then be written as the product of its bivariate building blocks: \[c(u_{1},\ldots,u_{d})=\prod_{i=1}^{d-1}\prod_{e\in E_{i}}c_{a_{e}b_{e};D_{e}}(u_{a_{e}|D_{e}},u_{b_{e}|D_{e}}) \tag{6}\] with conditioning set \(D_{e}\) and conditional probabilities, e.g., \(u_{a_{e}|D_{e}}=\mathbb{P}(U_{a_{e}}\leq u_{a_{e}}|D_{e})\). The conditioning set \(D_{e}\) includes all variables conditioned on at the respective position in the vine structure (see Appendix B). For each resulting two-dimensional copula of conditional variables, any parametric or non-parametric copula model (as done by Tagasovska et al. 2019) can be chosen. However, the construction and estimation of vine copulas is rather complicated. Hence, assuming independence for seemingly unimportant building blocks, so-called truncation, is regularly applied. Because of this, truncated vine copula models do not capture the complete dependence structure of the data, and their usage is not underpinned by asymptotic theory. We refer to Czado (2019); Czado and Nagler (2022); Aas (2016) for reviews of vine copula models. **Empirical Beta Copula Autoencoder** The _empirical beta copula_ (Segers et al., 2017) avoids the problem of choosing a single, parametric multivariate copula model due to its non-parametric nature. Further, and in contrast to the presented vine copula approach, it offers an easy way to model the full, non-truncated multivariate distribution based on the univariate ranks of the joint distribution and, thus, seems to be a reasonable choice to model the latent space. The empirical beta copula is closely related to the empirical copula (see Formula 4) and is a crucial element of the Empirical-Beta-Copula Autoencoder. It is solely based on the ranks \(r_{i,j}^{(n)}\) of the original data \(\mathbf{Y}\) and can be interpreted as a continuous counterpart of the empirical copula. It is defined by \[C^{\beta}(\mathbf{u})=\frac{1}{n}\sum_{i=1}^{n}\prod_{j=1}^{d}F_{n,r_{i,j}^{(n)}}(u_{j}) \tag{7}\] for \(\mathbf{u}=(u_{1},\ldots,u_{d})\in[0,1]^{d}\), where \[F_{n,r_{i,j}^{(n)}}(u_{j})=P(U_{(r_{i,j}^{(n)})}\leq u_{j}) \tag{8}\] \[=\sum_{p=r_{i,j}^{(n)}}^{n}\binom{n}{p}u_{j}^{p}(1-u_{j})^{(n-p)} \tag{9}\] is the cumulative distribution function of a _beta distribution_, i.e., \(\mathbb{B}(r_{i,j}^{(n)},\,n+1-r_{i,j}^{(n)})\). As \(r_{i,j}\) is the rank of the \(i^{\text{th}}\) element in dimension \(j\), \(U_{(r_{i,j}^{(n)})}\) represents the \(r_{i,j}^{\text{th}}\) order statistic of \(n\) i.i.d. uniformly distributed random variables on \([0,1]\). For example, if the rank of the \(i^{\text{th}}\) element in dimension \(j\) is 5, \(U_{(r_{i,j}^{(n)})}=U_{(5)}\) denotes the \(5^{\text{th}}\) order statistic of \(n\) i.i.d. uniformly distributed random variables.
The intuition behind the empirical beta copula is as follows: Recall that the marginal distributions of a copula are uniformly distributed on \([0,1]\) and, hence, the \(k^{\text{th}}\) smallest value of the scaled ranks \(r_{i,j}^{(n)}/n\) corresponds to the \(k^{\text{th}}\) order statistic \(U_{(k)}\). Such order statistics are known to follow a _beta distribution_ \(\mathbb{B}(k,\,n+1-k)\) [David and Nagaraja, 2003]. Consequently, the mathematical idea of the empirical beta copula is to replace each indicator function of the empirical copula with the cumulative distribution function of the corresponding rank \(r_{i,j}^{(n)}\). We argue that the empirical beta copula can be seen as the naturally extended version of the empirical copula; thus, it seems to be a good choice for dependence modeling. Segers et al. 2017 further demonstrate that the empirical beta copula outperforms the empirical copula both in terms of bias and variance. A theorem stating the asymptotic behavior of the empirical copula is given in Appendix C. Synthetic samples in the latent space \(y^{\prime}\) are created by reversing the modeling path. First, random samples from the copula model \(\mathbf{u}=(u_{1},\ldots,u_{d})\) are drawn. Then, the copula samples are transformed back to the natural scale of the data by the inverse probability integral transform of the marginal distributions, i.e., \(y^{\prime}_{j}=\hat{F}_{j}^{-1}(u_{j})\), where \(\hat{F}_{j}\) is the estimated marginal distribution function and \(u_{j}\) the \(j\)th element of the copula sample for \(j\in\{1,\ldots,d\}\). Algorithm 1 summarizes the procedure. ``` Input: Sample \(Y\subset\mathbb{R}^{n\times d}\), new sample size \(m\) begin Compute rank matrix \(R\in\mathbb{R}^{n\times d}\) from \(Y\) Estimate the marginals of \(Y\) with KDEs, \(\widehat{F}_{1}(y_{1}),\ldots,\widehat{F}_{d}(y_{d})\) for \(i=1,\ldots,m\) do Draw \(I\) uniformly at random from \(\{1,\ldots,n\}\) for \(j=1,\ldots,d\) do Draw \(u_{ij}\sim\mathbb{B}(R_{Ij},\,n+1-R_{Ij})\) Set \(u_{i}=(u_{i1},\ldots,u_{id})\) Rescale the margins by \(Y^{\prime}_{i}=\big(\widehat{F}_{1}^{-1}(u_{i1}),\ldots,\widehat{F}_{d}^{-1}(u_{id})\big)\) Output: New sample \(Y^{\prime}\) of size \(m\) ``` **Algorithm 1** Sampling from the Empirical Beta Copula We now present the experiments and results of a comparative study including all mentioned methodologies to model the latent space in the next section. ## 3 Experiments In this section, we present the results of our experiments. We use the same architecture for the autoencoder in all experiments for a given dataset and only exchange the modeling technique for the latent space between the algorithms. The architecture, as well as implementation details, are given in Appendix D. We further include a standard VAE and the Real NVP normalization flow approach modeling the latent space in our experiments to serve as benchmarks. ### Setup We first describe the overall methodology and the usage of the methods proposed in Section 2. We then introduce the data sets used and the evaluation framework. ### Methodology We train an autoencoder consisting of two neural nets, an _encoder_ \(f\), and a _decoder_ \(g\). The encoder \(f\) maps data \(X\) from the original space to a lower-dimensional space, while the decoder \(g\) reconstructs this low-dimensional data \(Y\) from the low-dimensional latent space to the original space (see Fig. 1). We train both neural nets in a way that the reconstruction loss is minimized, i.e., that the reconstructed data \(X^{\prime}=g(f(X))\) is as similar to the original data \(X\) as possible.
In the second step, we model the latent space \(Y\) data with a multivariate Gaussian distribution, a Gaussian mixture model, Kernel density estimates, the two presented copula methods and the Real NVP. Thus, we fit models with different flexibility and complexity while keeping the training process of the autoencoder untouched. Last, new samples are generated by decoding random samples from the learned model in the latent space. Note that such an approach is only reasonable when the underlying autoencoder has learned a relevant and interesting representation of the data and the latent space is smooth. We demonstrate this in Appendix E. #### Datasets We conduct experiments on one small-scale, one medium, and one large-scale dataset. The small-scale _MNIST_ dataset (LeCun et al., 2010) includes binary images of digits, while the medium-scale _SVHN_ dataset (Netzer et al., 2011) contains images of house numbers in Google Street View pictures. The large-scale _CelebA_ dataset (Liu et al., 2015) consists of celebrity images covering 40 different face attributes. We split data into a train set and a test set of 2000 samples which is a commonly used size for evaluation (Tagasovska et al., 2019; Xu et al., 2018). Note that the data sets cover different dimensionalities in the latent space, allowing for a throughout assessment of the methods under investigation. #### Evaluation Evaluation of results is performed in several ways. First, we visually compare random pictures generated by the models. Second, we evaluate the results with the framework proposed by Xu et al. 2018, since a log-likelihood evaluation is known to be incapable of assessing the quality (Theis et al., 2016) and unsuitable for non-parametric models. Based on their results, we choose five metrics in our experiments: The _earth mover distance (EMD)_, also known as _Wasserstein distance_(Vallender, 1974); the _mean maximum discrepancy (MMD)_(Gretton et al., 2007); the _1-nearest neighbor-based two-sample test (1NN)_, a special case of the classifier two-sample test (Lopez-Paz and Oquab, 2017); the _Inception Score_(Salimans et al., 2016); and the _Frechet inception distance_(Heusel et al., 2017) (the latter two over ResNet-34 softmax probabilities). In line with Tagasovska et al. 2019 and as proposed by Xu et al. 2018, we further apply the EMD, MMD, and 1NN over feature mappings in the convolution space over ResNet-34 features. For all metrics except the Inception Score, lower values are preferred. For more details on the metrics, we refer to Xu et al. 2018. Next, we evaluate the ability to generate new, realistic pictures by the different latent space modeling techniques. Therefore, we compare new samples with their nearest neighbor in the latent space stemming from the original data. This shows us whether the learned distribution covers the whole latent space, or stays too close to known examples, i.e., the model does not generalize enough. Finally, we compare other features of the tested models, such as their ability of targeted sampling and of recombining attributes. ### Results In the following, we show results for our various experiments. First, we present visual results for each of the methods investigated to gain a qualitative understanding of their differences. Second, we compare the methods in terms of performance metrics. Third, we evaluate the latent space and nearest neighbors in the latent space. Finally, we address computing times and discuss targeted sampling and recombination of image features. 
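To make the second and third steps concrete for the EBCAE, Algorithm 1 translates almost line by line into NumPy. In the sketch below the marginal inverse CDFs \(\hat{F}_{j}^{-1}\) are approximated by empirical quantiles of the latent codes rather than by the KDE-based margins used above, and the encoder/decoder calls in the usage comment are placeholders for the trained networks; it is a minimal sketch under these assumptions, not the exact implementation used in the experiments.

```python
import numpy as np
from scipy.stats import rankdata

def sample_empirical_beta_copula(Y, m, rng=None):
    """Sketch of Algorithm 1: draw m synthetic latent points from the empirical
    beta copula fitted to latent codes Y of shape (n, d). Marginal inverses are
    approximated by empirical quantiles instead of KDE-based margins."""
    rng = rng or np.random.default_rng()
    n, d = Y.shape
    # 1-based rank matrix R, one column per latent dimension
    R = np.column_stack([rankdata(Y[:, j], method="ordinal") for j in range(d)])
    Y_new = np.empty((m, d))
    for i in range(m):
        I = rng.integers(n)                  # pick one training point at random
        u = rng.beta(R[I], n + 1 - R[I])     # u_j ~ Beta(r_Ij, n + 1 - r_Ij), per dimension
        Y_new[i] = [np.quantile(Y[:, j], u[j]) for j in range(d)]  # y'_j = F_j^{-1}(u_j)
    return Y_new

# Usage (placeholders): Y = encoder(X_train); Y_new = sample_empirical_beta_copula(Y, 2000)
# X_new = decoder(Y_new)  # step 3: decode the synthetic latent samples
```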
Figure 2: Comparison of random, synthetic samples of different autoencoder models row by row for MNIST (left) and CelebA (right). Original input samples are given in the last row. ### Visual Results Figure 2 shows images generated from each method for MNIST and CelebA. The GMM model is composed of 10 components, and the KDE is constructed using a Gaussian kernel with a bandwidth fitted via a grid search and 10-fold cross-validation. The specifications of the Real NVPs are given in the Appendix. For the MNIST dataset, we observe the best results for the EBCAE (row 6) and KDE (row 3), while the other methods seem to struggle a bit. For CelebA, our visual observations are slightly different. All methods produce images that are clearly recognizable as faces. However, the Gaussian samples in row 1 and independent margins in row 2 create pictures with some unrealistic artefacts, blurry backgrounds, or odd colors. This is also the case for the GMM in row 4 and VCAE in row 5, but less severe. We believe that this comes from samples from an empty area in the latent space, i.e., where none of the original input pictures were projected to. In contrast to that, the samples in the latent space of the KDE, EBCAE, and Real NVP stay within these natural bounds, producing good results after passing the decoder (rows 3, 6, 8). Recall that all methods use the same autoencoder and only differ by means of sampling in the latent space. From our observations, we also conclude that the autoencoder for the CelebA dataset is less sensitive toward modeling errors in the latent space since all pictures are clearly recognizable as faces. In contrast, for the MNIST dataset, not all images clearly show numbers. Similar results for SVHN are presented in the Appendix. ### Numerical Results The numerical results computed from 2000 random samples displayed in Figure 3 prove that dependence truly matters within the latent space. Simultaneously, the KDE, GMM, and EBCAE perform consistently well over all metrics, delivering comparable results to the more complex Real NVP. In particular, the EBCAE outperforms the other methods, whereas the VCAE, Gauss model, and VAE usually cluster in the middle. We further report results over the number of samples in the latent space in Figure 9 in the Appendix. This, at first sight, unusual perspective visualizes the capability to reach good performance even for small sample sizes in latent space. In a small-sample regime, it is crucial to assess how fast a method adapts to data in the latent space and models it correctly. We see that all methods perform well for small sample sizes, i.e., \(n=200\). Similar experiments for MNIST and SVHN can be found in Appendix F.

Figure 3: Performance metrics of generative models on **CelebA**, reported over epochs computed from 2000 random samples. Note that they only differ in the latent space sampling and share the same autoencoder.

### Nearest Neighbour and Latent Space Evaluation Next, we evaluate the different modeling techniques in their ability to generate new, realistic images. For this, we focus on pictures from the CelebA dataset in Figure 4. First, we create new, random samples with the respective method (top row) and then compare these with their decoded nearest neighbor in the latent space (middle row). The bottom row displays the latent space nearest neighbor in the original data space before applying the autoencoder. By doing so, we are able to disentangle two effects.
First, the effect from purely encoding-decoding an image and, second, the effect of modeling the latent space. Thus, we can check whether new images are significantly different from the input, i.e., whether the distribution modeling the latent space merely reproduces images or generalizes to some extent. We observe that the samples from GMM, VCAE and the Real NVP substantially differ from their nearest neighbors. However, again they sometimes exhibit unrealistic colors and blurry backgrounds. The samples created from KDE and EBCAE look much more similar to their nearest neighbors in the latent space, indicating that these methods do not generalize to the extent of the other methods. However, their samples do not include unrealistic colors or features and seem to avoid sampling from areas where no data point of the original data is present. Thus, they stay in 'natural bounds'. Note that this effect apparently is not reflected in the numerical evaluation metrics. We, therefore, recommend that, in addition to a quantitative evaluation, a qualitative evaluation of the resulting images should always be performed. To further underpin this point, Figure 5 shows 2-dimensional TSNE-Embeddings (see, e.g.,van der Maaten and Hinton 2008) of the latent space for all six versions of the autoencoder (MNIST). Black points indicate original input data, and colored points are synthetic samples from the corresponding method. We see that the KDE, as well as the EBCAE, stay close to the original space. The samples from the GMM and Real NVP also seem to closely mimic the original data, whereas the other methods fail to do so. This visualization confirms our previous conjecture that some algorithms tend to sample from 'empty' areas in the latent space, leading to unrealistic results. Figure 4: Nearest neighbor evaluation of the six investigated modeling methods after decoding. **Top row:** Newly generated images. **Middle row:** Nearest neighbor of new image in the latent space of training samples after decoding. **Bottom row:** Original input training picture of nearest neighbor in latent space. Figure 5: TSNE embeddings of samples in the latent space of the **MNIST** dataset. Points from the original input training data \(Y\) are given in black, whereas new, synthetic samples \(Y^{\prime}\) in the latent space stemming from the different modeling methods are colored. ### Computing Times, Targeted Sampling and Recombination We also report computing times for learning and sampling of the different models for MNIST and CelebA in Table 1. Unsurprisingly, the more straightforward methods such as Gauss, Independence, KDE, and GMM, exhibit the lowest sampling times. The Real NVP shows the highest learning time as a neural network is fitted. However, we expect the difference to be much smaller once trained on an appropriate GPU. The times also reflect the complexities of the methods in the latent space dimensions. Last, we discuss other features of the tested methods, such as targeted sampling and recombination. In contrast to the other techniques, the KDE and EBCAE allow for targeted sampling. Thus, we can generate new images with any desired characteristic directly, e.g., only ones in a data set of images of numbers. In the case of the KDE, this simply works by sampling from the estimated density of the corresponding sub-group. 
In the case of the EBCAE, we randomly choose among rows in the rank matrix of original samples that share the desired specific attribute, i.e., we sample \(I\) in the first for-loop in Algorithm 1 conditional on the sub-group. Thus, newly generated samples stay close to the original input and therefore share the same main characteristics. Other approaches are also possible, however, they need further tweaks to the model, training, or sampling as the _conditional variational autoencoder_[21]. The second feature we discuss is recombination. By using copula-based models (VCAE and EBCAE), we can facilitate the decomposition idea and split the latent space in its dependence structure and margins, i.e., we combine the dependence structure of images with a specific attribute with the marginal distributions of images with different attributes. Therefore, copula-based methods allow controlling the attributes of created samples to some extent. Our experiments suggest that the dependence structure provides the basic properties of an image, while the marginal distributions are responsible for details (see, e.g., Figure 6). However, we want to point out that it is not generally clear what information is embedded in the dependence structure and what information is in the marginal distributions of latent space. This might also depend on the autoencoder and the dataset at hand. That said, using such a decomposition enables higher flexibility and hopefully fuels new methodological developments in this field. \begin{table} \begin{tabular}{l r r r r} & **CelebA** & **CelebA** & **MNIST** & **MNIST** \\ Method & Learn & Sample & Learn & Sample \\ \hline Gauss & \(<\)0.01 & 0.01 & 0.002 & 0.002 \\ Indep. & 4.10 & 0.07 & 0.393 & 0.003 \\ KDE & 75.25 & 0.01 & 13.958 & 0.001 \\ GMM & 1.35 & 0.03 & 0.115 & 0.004 \\ VCAE & 306.97 & 148.48 & 10.345 & 4.590 \\ EBCAE & 3.41 & 59.36 & 0.328 & 5.738 \\ Real NVP & 2541.19 & 3.69 & 341.608 & 0.477 \\ \hline \end{tabular} \end{table} Table 1: Modeling and sampling time in the **CelebA** and **MNIST** dataset of 2000 artificial samples based on a latent space of size \(n=2000\) in [s]. Figure 6: Samples from recombination experiment with the EBCAE. Glasses are removed by using the marginal distribution of the training data without glasses in the latent space. **Top row:** Samples created with the dependence structure in latent space from samples with glasses and marginal distributions in latent space from samples without glasses. **Middle row:** Nearest neighbor of newly created sample in the training data after decoding. **Bottom row:** Original input picture of nearest neighbor in latent space. ## 4 Discussion In this section, we want to discuss the results of our experiments and want to express some further thoughts. In summary, we observed that sampling from the latent space via the investigated methods is indeed a viable approach to turn an autoencoder into a generative model and may be promising for application in more advanced autoencoders. However, each modeling approach in this setting comes with its own restrictions, advantages, and problems. We witness a trade-off between the ability to generalize, i.e., to create genuinely new pictures, and sample quality, i.e., to avoid unrealistic colors or artefacts. In cases where new data points are sampled in the neighborhood to existing points (as in the KDE or EBCAE), the newly generated data stays in somehow natural bounds and provides realistic, but not completely new, decoded samples. 
On the other hand, modeling the latent space too generically leads to bad-quality images. We believe this is similar to leaving the feasible set of an optimization problem or sampling from a wrong prior. While being close to actual points of the original latent space, new samples stay within the feasible set. By moving away from these points, the risk of sampling from an unfeasible region and thus creating unrealistic new samples increases. Recombination via a copula-based approach of marginal distributions and dependence structures offers the possibility to detect new feasible regions in the latent space for the creation of realistic images. Also, interpolating by building convex combinations of two points in the latent space seems reasonable. However, without further restrictions during training (see, e.g., discussion in Ghosh et al. 2020), we cannot principally guarantee proper interpolation results. Further, we observe that the mentioned trade-off is not reflected by the performance metrics. Therefore, we strongly recommend not only checking quantitative results but also finding and analyzing the nearest neighbor in the original data to detect the pure reproduction of pictures. This also reveals that the development of further evaluation metrics could be beneficial. A closely related issue is the choice of a parametric vs. a non-parametric modeling method in the latent space. Parametric methods can place probability mass in the latent space, where no data point of the original input data was observed. Thus, parametric methods are able to generate (truly) new data, subject to their assumption. However, if the parametric assumption is wrong, the model creates samples from 'forbidden' areas in the latent space leading to unrealistic images. In spite of this, carefully chosen parametric models can be beneficial, and even a log-likelihood is computable and traceable (although we do not use it for training). Non-parametric methods avoid this human decision and possible source of error completely but are closely bound to the empirical distribution of the given input data. Consequently, such methods can miss important areas of the latent space but create more realistic images. Furthermore, adjusting parameters of the non-parametric models, such as increasing bandwidths or lowering truncation levels, offer possibilities to slowly overcome these limitations. Besides the major points above, the EBCAE and KDE offer an easy way of targeted sampling without additional training effort. This can be beneficial for various applications and is not as straightforward with other methods. Lastly, the investigated methods differ in their runtime. While vine copula learning and sampling is very time-intensive for high dimensions, the EBCAE is much faster but still outperformed by the competitors. For the non-copula methods, the GMM is really fast in both datasets while still capturing the dependence structure to some extent. In contrast to that, the Real NVP needs more time for training but is rather quick in generating new samples. To sum up, we can confirm that there are indeed simple methods to turn a plain autoencoder into a generative model, which may then also be beneficial in more complex generative models. We conclude that the optimal method to do so depends on the goals of the user. Besides runtime considerations, the specific application of the autoencoder matters. For example, if one is interested in targeted sampling, EBCAE or KDE should be applied. 
Recombination experiments call for a copula-based approach, whereas in all cases, the trade-off between generalization and out-of-bound sampling should be considered.
2305.00359
A Review of Deep Learning Techniques for Speech Processing
The field of speech processing has undergone a transformative shift with the advent of deep learning. The use of multiple processing layers has enabled the creation of models capable of extracting intricate features from speech data. This development has paved the way for unparalleled advancements in speech recognition, text-to-speech synthesis, automatic speech recognition, and emotion recognition, propelling the performance of these tasks to unprecedented heights. The power of deep learning techniques has opened up new avenues for research and innovation in the field of speech processing, with far-reaching implications for a range of industries and applications. This review paper provides a comprehensive overview of the key deep learning models and their applications in speech-processing tasks. We begin by tracing the evolution of speech processing research, from early approaches, such as MFCC and HMM, to more recent advances in deep learning architectures, such as CNNs, RNNs, transformers, conformers, and diffusion models. We categorize the approaches and compare their strengths and weaknesses for solving speech-processing tasks. Furthermore, we extensively cover various speech-processing tasks, datasets, and benchmarks used in the literature and describe how different deep-learning networks have been utilized to tackle these tasks. Additionally, we discuss the challenges and future directions of deep learning in speech processing, including the need for more parameter-efficient, interpretable models and the potential of deep learning for multimodal speech processing. By examining the field's evolution, comparing and contrasting different approaches, and highlighting future directions and challenges, we hope to inspire further research in this exciting and rapidly advancing field.
Ambuj Mehrish, Navonil Majumder, Rishabh Bhardwaj, Rada Mihalcea, Soujanya Poria
2023-04-30T00:17:42Z
http://arxiv.org/abs/2305.00359v3
# A Review of Deep Learning Techniques for Speech Processing ###### Abstract The field of speech processing has undergone a transformative shift with the advent of deep learning. The use of multiple processing layers has enabled the creation of models capable of extracting intricate features from speech data. This development has paved the way for unparalleled advancements in speech recognition, text-to-speech synthesis, automatic speech recognition, and emotion recognition, propelling the performance of these tasks to unprecedented heights. The power of deep learning techniques has opened up new avenues for research and innovation in the field of speech processing, with far-reaching implications for a range of industries and applications. This review paper provides a comprehensive overview of the key deep learning models and their applications in speech-processing tasks. We begin by tracing the evolution of speech processing research, from early approaches, such as MFCC and HMM, to more recent advances in deep learning architectures, such as CNNs, RNNs, transformers, conformers, and diffusion models. We categorize the approaches and compare their strengths and weaknesses for solving speech-processing tasks. Furthermore, we extensively cover various speech-processing tasks, datasets, and benchmarks used in the literature and describe how different deep-learning networks have been utilized to tackle these tasks. Additionally, we discuss the challenges and future directions of deep learning in speech processing, including the need for more parameter-efficient, interpretable models and the potential of deep learning for multimodal speech processing. By examining the field's evolution, comparing and contrasting different approaches, and highlighting future directions and challenges, we hope to inspire further research in this exciting and rapidly advancing field. Figure 1: Evolution of speech processing models over the years. ## 1. Introduction Humans employ language as a means to effectively convey their emotions and sentiments. Language encompasses a collection of words forming a vocabulary, accompanied by grammar, which dictates the appropriate usage of these words. It manifests in various forms, including written text, sign language, and spoken communication. Speech, specifically, entails the utilization of phonetic combinations of consonant and vowel sounds to articulate words from the vocabulary. Phonetics, in turn, pertains to the production and perception of sounds by individuals. Through speech, individuals are able to express themselves and convey meaning in their chosen language. Speech processing is a field dedicated to the study and application of methods for analyzing and manipulating speech signals. It encompasses a range of tasks, including automatic speech recognition (ASR) [390; 628], speaker recognition (SR) [31], and speech synthesis or text-to-speech [396].
In recent years, speech processing has garnered increasing significance due to its diverse applications in areas such as telecommunications, healthcare, and entertainment. Notably, statistical modeling techniques, particularly Hidden Markov Models (HMMs), have played a pivotal role in advancing the field [149; 442]. These models have paved the way for significant advancements and breakthroughs in speech processing research and development. Over the past few years, the field of speech processing has been transformed by introducing powerful tools, including deep learning. Figure 1 illustrates the evolution of speech processing models over the years, the rapid development of deep learning architecture for speech processing reflects the growing complexity and diversity of the field. This technology has revolutionized the analysis and processing of speech signals using deep neural networks (DNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs). These architectures have proven highly effective in various speech-processing applications, such as speech recognition, speaker recognition, and speech synthesis. This study comprehensively overviews the most critical and emerging deep-learning techniques and their potential applications in various speech-processing tasks. Deep learning has revolutionized speech processing by its ability to automatically learn meaningful features from raw speech signals, eliminating the need for manual feature engineering. This breakthrough has led to significant advancements in speech processing performance, particularly in challenging scenarios involving noise, as well as diverse accents and dialects. By leveraging the power of deep neural networks, speech processing systems can now adapt and generalize more effectively, resulting in improved accuracy and robustness in various applications. The inherent capability of deep learning to extract intricate patterns and representations from speech data has opened up new possibilities for tackling real-world speech processing challenges. Deep learning architectures have emerged as powerful tools in speech processing, offering remarkable improvements in various tasks. Pioneering studies, such as [185], have demonstrated the substantial gains achieved by deep neural networks (DNNs) in speech recognition accuracy compared to traditional HMM-based systems. Complementing this, research in [3] showcased the effectiveness of convolutional neural networks (CNNs) for speech recognition. Moreover, recurrent neural networks (RNNs) have proven their efficacy in both speech recognition and synthesis, as highlighted in [161]. Recent advancements in deep learning have further enhanced speech processing systems, with attention mechanisms [85] and transformers [554] playing significant roles. Attention mechanisms enable the model to focus on salient sections of the input signal, while transformers facilitate modeling long-range dependencies within the signal. These developments have led to substantial improvements in the performance and versatility of speech processing systems, unlocking new possibilities for applications in diverse domains. Although deep learning has made remarkable progress in speech processing, it still faces certain challenges that need to be addressed. These challenges include the requirement for substantial amounts of labeled data, the interpretability of the models, and their robustness to different environmental conditions. 
To provide a comprehensive understanding of the advancements in this domain, this paper presents an extensive overview of deep learning architectures employed in speech-processing applications. Speech processing encompasses the analysis, synthesis, and recognition of speech signals, and the integration of deep learning techniques has led to significant advancements in these areas. By examining the current state-of-the-art approaches, this paper aims to shed light on the potential of deep learning for tackling the existing challenges and further advancing speech processing research. The paper provides a comprehensive exploration of deep-learning architectures in the field of speech processing. It begins by establishing the background, encompassing the definition of speech signals, speech features, and traditional non-neural models. Subsequently, the focus shifts towards an in-depth examination of various deep-learning architectures specifically tailored for speech processing, including RNNs, CNNs, Transformers, GNNs, and diffusion models. Recognizing the significance of representation learning techniques in this domain, the survey paper dedicates a dedicated section to their exploration. Moving forward, the paper delves into an extensive range of speech processing tasks where deep learning has demonstrated substantial advancements. These tasks encompass critical areas such as speech recognition, speech synthesis, speaker recognition, speech-to-speech translation, and speech synthesis. By thoroughly analyzing the fundamentals, model architectures, and specific tasks within the field, the paper then progresses to discuss advanced transfer learning techniques, including domain adaptation, meta-learning, and parameter-efficient transfer learning. Finally, in the conclusion, the paper reflects on the current state of the field and identifies potential future directions. By considering emerging trends and novel approaches, the paper aims to shed light on the evolving landscape of deep learning in speech processing and provide insights into promising avenues for further research and development. #### Why this paper? Deep learning has become a powerful tool in speech processing because it automatically learns high-level representations of speech signals from raw audio data. As a result, significant advancements have been made in various speech-processing tasks, including speech recognition, speaker identification, speech synthesis, and more. These tasks are essential in various applications, such as human-computer interaction, speech-based search, and assistive technology for people with speech impairments. For example, virtual assistants like Siri and Alexa use speech recognition technology, while audiobooks and in-car navigation systems rely on text-to-speech systems. Given the wide range of applications and the rapidly evolving nature of deep learning, a comprehensive review paper that surveys the current state-of-the-art techniques and their applications in speech processing is necessary. Such a paper can help researchers and practitioners stay up-to-date with the latest developments and trends and provide insights into potential areas for future research. However, to the best of our knowledge, no current work covers a broad spectrum of speech-processing tasks. A review paper on deep learning for speech processing can also be a valuable resource for beginners interested in learning about the field. 
It can provide an overview of the fundamental concepts and techniques used in deep learning for speech processing and help them gain a deeper understanding of the field. While some survey papers focus on specific speech-processing tasks such as speech recognition, a broad survey would cover a wide range of other tasks such as speaker recognition, speech synthesis, and more. A broad survey would highlight the commonalities and differences between these tasks and provide a comprehensive view of the advancements made in the field. ## 2. Background Before moving on to deep neural architectures, we discuss basic terms used in speech processing, low-level representations of speech signals, and traditional models used in the field. ### Speech Signals Signal processing is a fundamental discipline that encompasses the study of quantities that exhibit variations in space or time. In the realm of signal processing, a quantity exhibiting spatial or temporal variations is commonly referred to as a signal. Specifically, sound signals are defined as variations in air pressure. Consequently, a speech signal is identified as a type of sound signal, namely pressure variations, generated by humans to facilitate spoken communication. Transducers play a vital role in converting these signals from one form, such as air pressure, to another form, typically an electrical signal. In signal processing, a signal that repetitively manifests after a fixed duration, known as a period, is classified as periodic. The reciprocal of this period represents the frequency of the signal. The waveform of a periodic signal defines its shape and concurrently determines its timbre, which pertains to the subjective perception of sound quality by humans. To facilitate the processing of speech, speech signals are commonly digitized. This entails converting them into a series of numerical values by measuring the signal's amplitude at consistent time intervals. The sampling rate, defined by the number of samples collected per second, determines the granularity of this digitization process. ### Speech Features Speech features are numerical representations of speech signals that are used for analysis, recognition, and synthesis. Broadly, speech features can be classified into two categories: time-domain features and frequency-domain features. **Time-domain** features are derived directly from the amplitude of the speech signal over time. These are simple to compute and often used in real-time speech-processing applications. Some common time-domain features include: * Energy: Energy is a quantitative measure of the amplitude characteristics of a speech signal over time. It is computed by squaring each sample in the signal and summing them within a specific time window. This captures the overall strength and dynamics of the signal, revealing temporal variations in intensity. The energy measure provides insights into segments with higher or lower amplitudes, aiding in speech recognition, audio segmentation, and speaker diarization. It also helps identify events and transitions indicative of changes in vocal activity. By quantifying amplitude variations, energy analysis contributes to a comprehensive understanding of speech signals and their acoustic properties. * Zero-crossing rate: The zero-crossing rate indicates how frequently the speech signal crosses the zero-axis within a defined time frame. It is computed by counting the number of polarity changes in the signal during a specific window (a minimal code sketch of these two features is given below).
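Before continuing with the remaining time-domain features, the following minimal NumPy sketch computes per-frame energy and zero-crossing rate as just described; the frame length and hop size (25 ms / 10 ms at 16 kHz) are example values, not tied to any particular system.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (e.g., 25 ms / 10 ms at 16 kHz)."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def short_time_energy(x, frame_len=400, hop=160):
    """Sum of squared samples within each frame."""
    frames = frame_signal(x, frame_len, hop)
    return np.sum(frames ** 2, axis=1)

def zero_crossing_rate(x, frame_len=400, hop=160):
    """Fraction of adjacent sample pairs whose polarity changes within each frame."""
    frames = frame_signal(x, frame_len, hop)
    sign_changes = np.abs(np.diff(np.sign(frames), axis=1)) > 0
    return sign_changes.mean(axis=1)
```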
* Pitch: Pitch refers to the perceived tonal quality in a speaker's voice, which is determined by analyzing the fundamental frequency of the speech signal. The fundamental frequency can be estimated through the application of pitch detection algorithms (Pitch, 1994) or by utilizing autocorrelation techniques (Pitch, 1995). * Linear predictive coding (LPC):Linear Predictive Coding (LPC) is a powerful technique that represents the speech signal as a linear combination of past samples, employing an autoregressive model. The estimation of model parameters is accomplished through methods like the Levinson-Durbin algorithm [54]. The obtained coefficients serve as a valuable feature representation for various speech-processing tasks. **Frequency-domain** features are derived from the signal represented in the frequency domain also known as its spectrum. A spectrum captures the distribution of energy as a function of frequency. Spectrograms are two-dimensional visual representations capturing the variations in a signal's spectrum over time. When compared against time-domain features, it is generally more complex to compute frequency-domain features as they tend to involve time-frequency transform operations such as Fourier transform. * Mel-spectrogram: A Mel spectrogram, also known as a Mel-frequency spectrogram or Melspectrogram, is a representation of the short-term power spectrum of a sound signal. It is widely used in audio signal processing and speech recognition tasks. It is obtained by converting the power spectrum of a speech signal into a mel-scale, which is a perceptual scale of pitches based on the human auditory system's response to different frequencies. The mel-scale divides the frequency range into a set of mel-frequency bands, with higher resolution in the lower frequencies and coarser resolution in the higher frequencies. This scale is designed to mimic the non-linear frequency perception of human hearing. To compute the Melspectrogram, the speech signal is typically divided into short overlapping frames. For each frame, the Fast Fourier Transform (FFT) is applied to obtain the power spectrum. The power spectrum is then transformed into the mel-scale using a filterbank that converts the power values at different frequencies to their corresponding mel-frequency bands. Finally, the logarithm of the mel-scale power values is computed, resulting in the Melspectrogram. Melspectrogram provides a time-frequency representation of the audio signal, where the time dimension corresponds to the frame index, and the frequency dimension represents the mel-frequency bands. It captures both the spectral content and temporal dynamics of the signal, making it useful for tasks such as speech recognition, music analysis, and sound classification. By using the Melspectrogram, the representation of the audio signal is transformed to a more perceptually meaningful domain, which can enhance the performance of various audio processing algorithms. It is particularly beneficial in scenarios where capturing the spectral patterns and frequency content of the signal is important for the analysis or classification task at hand. * Mel-frequency cepstral coefficients (MFCCs): Mel-frequency cepstral coefficients (MFCCs) are a feature representation widely utilized in various applications such as speech recognition, gesture recognition, speaker identification, and cetacean auditory perception systems. 
MFCCs capture the power spectrum of a sound over a short duration by utilizing a linear cosine transformation of a logarithmically-scaled power spectrum on a non-linear mel frequency scale. The MFCCs consist of a set of coefficients that collectively form a Mel-frequency cepstrum 1. With just 12 parameters related to the amplitude of frequencies, MFCCs provide an adequate number of frequency channels to analyze audio, while still maintaining a compact representation. The main objectives of MFCC extraction are to eliminate vocal fold excitation (F0) information related to pitch, ensure the independence of the extracted features, align with human perception of loudness and frequency, and capture the contextual dynamics of phones. The process of extracting MFCC features involves A/D conversion, pre-emphasis filtering, framing, windowing, Fourier transform, Mel filter bank application, logarithmic operation, discrete cosine transform (DCT), and liftering. By following these steps, MFCCs enable the extraction of informative audio features while avoiding redundancy and preserving the relevant characteristics of the sound signal. Other types of speech features include formant frequencies, pitch contour, cepstral coefficients, wavelet coefficients, and spectral envelope. These features can be used for various speech-processing tasks, including speech recognition, speaker identification, emotion recognition, and speech synthesis. In the field of speech processing, frequency-based representations such as Mel spectrogram and MFCC are widely used since they are more robust to noise as compared to temporal variations of the sound (Datta et al., 2017). Time-domain features can be useful when the task warrants this information (such as pauses, emotions, phoneme duration, and speech segments). It is noteworthy that the time-domain and frequency-domain features tend to capture different sets of information and thus can be used in conjunction to solve a task (Song et al., 2019; Wang et al., 2020; Wang et al., 2021). ### Traditional models for speech processing Traditional speech representation learning algorithms based on shallow models utilize basic nonparametric models for extracting features from speech signals. The primary objective of these models is to extract significant features from the speech signal through mathematical operations, such as Fourier transforms, wavelet transforms, and linear predictive coding (LPC). The extracted features serve as inputs to classification or regression models. The shallow models aim to extract meaningful features from the speech signal, enabling the classification or regression model to learn and make accurate predictions. * Gaussian Mixture Models (GMMs): Gaussian Mixture Models (GMMs) are powerful generative models employed to represent the probability distribution of a speech feature vector. They achieve this by combining multiple Gaussian distributions with different weights. GMMs have found widespread applications in speaker identification (Wang et al., 2020) and speech recognition tasks (Wang et al., 2021). Specifically, in speaker identification, GMMs are utilized to capture the distribution of speaker-specific features, enabling the recognition of individuals based on their unique characteristics. Conversely, in speech recognition, GMMs are employed to model the acoustic properties of speech sounds, facilitating accurate recognition of spoken words and phrases. 
GMMs play a crucial role in these domains, enabling robust and efficient analysis of speech-related data. * Support Vector Machines (SVMs): Support Vector Machines (SVMs) are a widely adopted class of supervised learning algorithms extensively utilized for various speech classification tasks (Wang et al., 2021). They are particularly effective in domains like speaker recognition (Wang et al., 2021; Wang et al., 2021; Wang et al., 2021) and phoneme recognition (Wang et al., 2021). SVMs excel in their ability to identify optimal hyperplanes that effectively separate different classes in the feature space. By leveraging this optimal separation, SVMs enable accurate classification and recognition of speech patterns. As a result, SVMs have become a fundamental tool in the field of speech analysis and play a vital role in enhancing the performance of speech-related classification tasks. * Hidden Markov Models (HMMs): Hidden Markov Models (HMMs) have gained significant popularity as a powerful tool for performing various speech recognition tasks, particularly ASR (Wang et al., 2021; Wang et al., 2021). In ASR, HMMs are employed to model the probability distribution of speech sounds by incorporating a sequential arrangement of hidden states along with corresponding observations. The training of HMMs is commonly carried out using the Baum-Welch algorithm, a variant of the Expectation Maximization algorithm, which enables effective parameter estimation and model optimization2. By leveraging HMMs in speech recognition, it becomes possible to predict the most likely sequence of speech sounds given an input speech signal. This enables accurate and efficient recognition of spoken language, making HMMs a crucial component in advancing speech recognition technology. Their flexibility and ability to model temporal dependencies contribute to their widespread use in ASR and various other speech-related applications, further enhancing our understanding and utilization of spoken language. Footnote 2: Wikipedia: Baum-Welch algorithm: [http://en.wikipedia.org/wiki/Baum%e2%80%93Welch_algorithm](http://en.wikipedia.org/wiki/Baum%e2%80%93Welch_algorithm) * The K-nearest neighbors (KNN) algorithm is a simple yet effective classification approach utilized in a wide range of speech-related applications, including speaker recognition [475] and language recognition. The core principle of KNN involves identifying the K-nearest neighbors of a given input feature vector within the training data and assigning it to the class that appears most frequently among those neighbors. This algorithm has gained significant popularity due to its practicality and intuitive nature, making it a reliable choice for classifying speech data in numerous real-world scenarios. By leveraging the proximity-based classification, KNN provides a straightforward yet powerful method for accurately categorizing speech samples based on their similarities to the training data. Its versatility and ease of implementation contribute to its widespread adoption in various speech-related domains, facilitating advancements in speaker recognition, language identification, and other applications in the field of speech processing. * Decision trees: Decision trees are widely employed in speech classification tasks as a class of supervised learning algorithms. Their operation involves recursively partitioning the feature space into smaller regions, guided by the values of the features. 
Within each partition, a decision rule is established to assign the input feature vector to a specific class. The strength of decision trees lies in their ability to capture complex decision boundaries by hierarchically dividing the feature space. By analyzing the values of the input features at each node, decision trees efficiently navigate the classification process. This approach not only provides interpretability, but also facilitates the identification of key features contributing to the classification outcome. Through their recursive partitioning mechanism, decision trees offer a flexible and versatile framework for speech classification. They excel in scenarios where the decision rules are based on discernible thresholds or ranges of feature values. The simplicity and transparency of decision trees make them a valuable tool for understanding and solving speech-related classification tasks. To summarize, conventional speech representation learning algorithms based on shallow models entail feature extraction from the speech signal, which is subsequently used as input for classification or regression models. These algorithms have found extensive applications in speech processing tasks like speech recognition, speaker identification, and speech synthesis. However, they have been progressively superseded by more advanced representation learning algorithms, particularly deep neural networks, due to their enhanced capabilities. ## 3. Deep Learning Architectures and Their Applications in Speech Processing Tasks Deep learning architectures have revolutionized the field of speech processing by demonstrating remarkable performance across various tasks. With their ability to automatically learn hierarchical representations from raw speech data, deep learning models have surpassed traditional approaches in areas such as speech recognition, speaker identification, and speech synthesis. These architectures have been instrumental in capturing intricate patterns, uncovering latent features, and extracting valuable information from vast amounts of speech data. In this section, we delve into the applications of deep learning architectures in speech processing tasks, exploring their potential, advancements, and the impact they have had on the field. By examining the key components and techniques employed in these architectures, we aim to provide insights into the current state-of-the-art in deep learning for speech processing and shed light on the exciting prospects it holds for future advancements in the field. ### Recurrent Neural Networks (RNNs) It is natural to consider Recurrent Neural Networks for various speech processing tasks since the input speech signal is inherently a dynamic process (Wang et al., 2017). RNNs can model time-varying (sequential) patterns that are otherwise hard to capture with standard feedforward neural architectures. Initially, RNNs were used in conjunction with HMMs, where the sequential data is first modeled by HMMs while localized classification is done by the neural network. However, such a hybrid model tends to inherit the limitations of HMMs; for instance, HMMs require task-specific knowledge and independence constraints on the observed states (Wang et al., 2017). To overcome the limitations of the hybrid approach, end-to-end systems completely based on RNNs became popular for sequence transduction tasks such as speech recognition and text-to-speech (Han et al., 2017; Wang et al., 2018).
Next, we discuss RNNs and their variants: #### 3.1.1 RNN Models Vanilla RNN. Given an input sequence of \(T\) states \((x_{1},\dots,x_{T})\) with \(x_{i}\in\mathbb{R}^{d}\), the output state at time \(t\) can be computed as \[h_{t}=\mathcal{H}(W_{hh}h_{t-1}+W_{xh}x_{t}+b_{h}) \tag{1}\] \[y_{t}=W_{hy}h_{t}+b_{y} \tag{2}\] where \(W_{hh},W_{xh},W_{hy}\) are weight matrices and \(b_{h},b_{y}\) are bias vectors. \(\mathcal{H}\) is a non-linear activation function such as Tanh, ReLU, or Sigmoid. RNNs maintain high-dimensional hidden states (note \(h_{t}\) in the above equations), which makes it possible for them to model sequences and helps overcome the limitations of feedforward neural networks. The state of the hidden layer is conditioned on the current input and the previous state, which makes the underlying operation recursive. Essentially, the hidden state \(h_{t-1}\) works as a memory of past inputs \(\{x_{k}\}_{k=1}^{t-1}\) that influence the current output \(y_{t}\). Bidirectional RNNs. For numerous tasks in speech processing, it is more effective to process the whole utterance at once. For instance, in speech recognition, one-shot input transcription can be more robust than transcribing based on partial (i.e., previous) context information (Kang et al., 2017). Vanilla RNNs are limited in such cases as they are unidirectional in nature, that is, the output \(y_{t}\) is obtained from \(\{x_{k}\}_{k=1}^{t}\) and is thus agnostic of what comes after time \(t\). Bidirectional RNNs (BRNNs) were proposed to overcome such shortcomings of RNNs (Wang et al., 2017). BRNNs encode both future and past (input) context in separate hidden layers. The outputs of the two RNNs are then combined at each time step, typically by concatenating them together, to create a new, richer representation that includes both past and future context. \[\overrightarrow{h}_{t}=\mathcal{H}(W_{\overrightarrow{hh}}\overrightarrow{h}_{t-1}+W_{\overrightarrow{xh}}x_{t}+b_{\overrightarrow{h}}) \tag{3}\] \[\overleftarrow{h}_{t}=\mathcal{H}(W_{\overleftarrow{hh}}\overleftarrow{h}_{t+1}+W_{\overleftarrow{xh}}x_{t}+b_{\overleftarrow{h}}) \tag{4}\] \[y_{t}=W_{\overrightarrow{hy}}\overrightarrow{h}_{t}+W_{\overleftarrow{hy}}\overleftarrow{h}_{t}+b_{y} \tag{5}\] where the high-dimensional hidden states \(\overrightarrow{h}_{t-1}\) and \(\overleftarrow{h}_{t+1}\) model the forward context from \(1,2,\ldots,t-1\) and the backward context from \(T,T-1,\ldots,t+1\), respectively. #### Long Short-Term Memory. Vanilla RNNs are observed to face another limitation: vanishing gradients, which prevent them from learning long-range context information. To overcome this, a variant of the RNN, named LSTM, was specifically designed to address the vanishing gradient problem and enable the network to selectively retain (or forget) information over longer periods of time [187]. This attribute is achieved by maintaining separate purpose-built memory cells in the network: the long-term memory cell \(c_{t}\) and the short-term memory cell \(h_{t}\).
In Equation (2), LSTM redefines the operator \(\mathcal{H}\) in terms of a forget gate \(f_{t}\), an input gate \(i_{t}\), and an output gate \(o_{t}\): \[i_{t}=\sigma(W_{xi}x_{t}+W_{hi}h_{t-1}+W_{ci}c_{t-1}+b_{i}), \tag{6}\] \[f_{t}=\sigma(W_{xf}x_{t}+W_{hf}h_{t-1}+W_{cf}c_{t-1}+b_{f}), \tag{7}\] \[c_{t}=f_{t}\odot c_{t-1}+i_{t}\odot\tanh{(W_{xc}x_{t}+W_{hc}h_{t-1}+b_{c})}, \tag{8}\] \[o_{t}=\sigma(W_{xo}x_{t}+W_{ho}h_{t-1}+W_{co}c_{t}+b_{o}), \tag{9}\] \[h_{t}=o_{t}\odot\tanh{(c_{t})}, \tag{10}\] where \(\sigma(x)=1/(1+e^{-x})\) is the logistic sigmoid activation function. \(c_{t}\) is a fusion of the information from the previous state of the long-term memory \(c_{t-1}\), the previous state of the short-term memory \(h_{t-1}\), and the current input \(x_{t}\). \(W\) and \(b\) denote weight matrices and biases. \(\odot\) is the element-wise vector multiplication or Hadamard operator. Bidirectional LSTMs (BLSTMs) can capture longer contexts in both forward and backward directions [158]. #### Gated Recurrent Units. Gated Recurrent Units (GRUs) aim to be a computationally efficient approximation of the LSTM by using only two gates (vs. three in the LSTM) and a single memory cell (vs. two in the LSTM). To control the flow of information over time, a GRU uses an update gate \(z_{t}\) to decide how much of the new input should be added to the previous hidden state and a reset gate \(r_{t}\) to decide how much of the previous hidden state information should be forgotten. \[z_{t}=\sigma(W_{xz}x_{t}+W_{hz}h_{t-1}), \tag{11}\] \[r_{t}=\sigma(W_{xr}x_{t}+W_{hr}h_{t-1}), \tag{12}\] \[h_{t}=(1-z_{t})\odot h_{t-1}+z_{t}\odot\tanh(W_{xh}x_{t}+W_{rh}(r_{t}\odot h_{t-1})), \tag{13}\] where \(\odot\) is the element-wise multiplication between two vectors (Hadamard product). RNNs and their variants are widely used in various deep learning applications like speech recognition, speech synthesis, and natural language understanding. Although seq2seq models based on recurrent architectures such as LSTM/GRU have made great strides in speech processing, they suffer from slow training speed due to their internal recurrence. Another drawback of the RNN family is their inability to accurately leverage information from temporally distant time steps. #### Connectionist Temporal Classification. Connectionist Temporal Classification (CTC) [159] is a scoring and output function commonly used to train LSTM networks for sequence-based problems with variable timing. CTC has been applied to several tasks, including phoneme recognition, ASR, and other sequence-based problems. One of the major benefits of CTC is its ability to handle an unknown alignment between input and output, simplifying the training process. When used in ASR [104; 105; 378], CTC eliminates the need for manual data labeling by assigning probability scores to the output given any input signal. This is particularly advantageous for tasks such as speech recognition and handwriting recognition, where the input and output can vary in size. CTC also solves the problem of having to specify the position of a character in the output, allowing for more efficient training of the neural network without post-processing the output. Finally, the CTC decoder can transform the neural network output into the final text without post-processing. #### 3.1.2. Application The utilization of RNNs in popular products such as Google's voice search and Apple's Siri to process user input and predict the output has been well-documented [177; 304].
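As a concrete reading of the LSTM gating equations above (Eqs. 6-10), the following minimal NumPy sketch performs a single LSTM step; the parameter names are illustrative, and the peephole weights \(W_{ci},W_{cf},W_{co}\) are treated as diagonal (hence element-wise products), which is the common choice for this formulation.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step per Eqs. (6)-(10); `p` maps names like 'Wxi', 'bi' to arrays."""
    i_t = sigmoid(p["Wxi"] @ x_t + p["Whi"] @ h_prev + p["Wci"] * c_prev + p["bi"])   # Eq. (6)
    f_t = sigmoid(p["Wxf"] @ x_t + p["Whf"] @ h_prev + p["Wcf"] * c_prev + p["bf"])   # Eq. (7)
    c_t = f_t * c_prev + i_t * np.tanh(p["Wxc"] @ x_t + p["Whc"] @ h_prev + p["bc"])  # Eq. (8)
    o_t = sigmoid(p["Wxo"] @ x_t + p["Who"] @ h_prev + p["Wco"] * c_t + p["bo"])      # Eq. (9)
    h_t = o_t * np.tanh(c_t)                                                          # Eq. (10)
    return h_t, c_t  # short-term and long-term memory passed to the next time step
```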
RNNs are frequently utilized in speech recognition tasks, such as the prediction of phonetic segments from audio signals [412]. They excel in use cases where context plays a vital role in outcome prediction and are distinct from CNNs as they utilize feedback loops to process a data sequence that informs the final output [412]. In recent times, there have been advancements in the architecture of RNNs, which have been primarily focused on developing end-to-end (E2E) models [302; 409] for ASR. These E2E models have replaced conventional hybrid models and have displayed substantial enhancements in speech recognition [302; 303]. However, a significant challenge faced by E2E RNN models is the synchronization of the input speech sequence with the output label sequence [158]. To tackle this issue, a loss function called CTC [159] is utilized for training RNN models, allowing for the repetition of labels to construct paths of the same length as the input speech sequence. An alternative method is to employ an Attention-based Encoder-Decoder (AED) model based on RNN architecture, which utilizes an attention mechanism to align the input speech sequence with the output label sequence. However, AED models tend to perform poorly on lengthy utterances. The development of Bimodal Recurrent Neural Networks (BRNN) has led to significant advancements in the field of Audiovisual Speech Activity Detection (AV-SAD) [531]. BRNNs have demonstrated immense potential in improving the performance of speech recognition systems, particularly in noisy environments, by combining information from various sources. By integrating separate RNNs for each modality, BRNNs can capture temporal dependencies within and across modalities. This leads to successful outcomes in speech-based systems, where integrating audio and visual modalities is crucial for accurate speech recognition. Compared to conventional audio-only systems, BRNN-based AV-SAD systems display superior performance, particularly in challenging acoustic conditions where audio-only systems might struggle. To enhance the performance of continuous speech recognition, LSTM networks have been utilized in hybrid architectures alongside CNNs [417]. The CNNs extract local features from speech frames that are then processed by LSTMs over time [417]. LSTMs have also been employed for speech synthesis, where they have been shown to enhance the quality of statistical parametric speech synthesis [417]. Aside from their ASR and speech synthesis applications, LSTM networks have been utilized for speech post-filtering. To improve the quality of synthesized speech, researchers have proposed deep learning-based post-filters, with LSTMs demonstrating superior performance over other post-filter types [99]. Bidirectional LSTM (Bi-LSTM) is another variant of RNN that has been widely used for speech synthesis [136]. Several RNN-based analysis/synthesis models such as WaveNet [402], SampleRNN [373], and Tacotron have been developed. These neural vocoder models can generate high-quality synthesized speech from acoustic features without requiring intermediate vocoding steps. ### Convolutional Neural Networks Convolutional neural networks (CNNs) are a specialized class of deep neural architecture consisting of one or more pairs of alternating convolutional and pooling layers. A convolution layer applies filters that process small local parts of the input, where these filters are replicated along the whole input space. 
A pooling layer converts convolution layer activations to low resolution by taking the maximum filter activation within a specified window and shifting across the activation map. CNNs are variants of fully connected neural networks widely used for processing data with a grid-like topology. For example, time-series data (a 1D grid) with samples at regular intervals or images (a 2D grid) of pixels constitute such grid-like structures. As discussed in Section 2, the speech spectrogram retains more information than hand-crafted features, including speaker characteristics such as vocal tract length differences across speakers, distinct speaking styles causing formants to undershoot or overshoot, etc.; these characteristics are expressed explicitly in the frequency domain. The spectrogram representation shows very strong correlations in time and frequency. Due to these characteristics of the spectrogram, it is a suitable input for a CNN processing pipeline that requires preserving locality along both the frequency and time axes. For speech signals, modeling such local correlations with CNNs is therefore beneficial. CNNs can also effectively extract structural features from the spectrogram and reduce the complexity of the model through weight sharing. This section will discuss the architecture of 1D and 2D CNNs used in various speech-processing tasks. #### 3.2.1 CNN Model Variants 2D CNN. Since spectrograms are two-dimensional visual representations, one can leverage CNN architectures widely used for visual data processing (images and videos) by performing convolutions in two dimensions. The mathematical equation for a 2D convolutional layer can be represented as: \[y_{i,j}^{(k)}=\sigma\bigg(\sum_{l=1}^{L}\sum_{m=1}^{M}x_{i+l-1,j+m-1}^{(l)}w_{l,m}^{(k)}+b^{(k)}\bigg) \tag{14}\] Here, \(x_{i,j}^{(l)}\) is the pixel value of the \(l^{th}\) input channel at the spatial location \((i,j)\), \(w_{l,m}^{(k)}\) is the weight of the \(m^{th}\) filter at the \(l^{th}\) channel producing the \(k^{th}\) feature map, and \(b^{(k)}\) is the bias term for the \(k^{th}\) feature map. The output feature map \(y_{i,j}^{(k)}\) is obtained by convolving the input image with the filters and then applying an activation function \(\sigma\) to introduce non-linearity. The convolution operation involves sliding the filter window over the input image, computing the dot product between the filter and the input pixels at each location, and producing a single output pixel. However, there are some drawbacks to using a 2D CNN for speech processing. One of the main issues is that 2D convolutions are computationally expensive, especially for large inputs. This is because 2D convolutions involve many multiplications and additions, and the computational cost grows quickly with the input size. To address this issue, a 1D CNN can be designed to operate directly on the speech signal without needing a spectrogram. 1D convolutions are much less computationally expensive than 2D convolutions because they only operate on one dimension of the input. This reduces the multiplications and additions required, making the network faster and more efficient. In addition, 1D feature maps require less memory during processing, which is especially important for real-time applications. A neural network's memory requirements are proportional to the size of its feature maps. By using 1D convolutions, the size of the feature maps can be significantly reduced, which can improve the efficiency of the network and reduce its memory requirements.
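To make the sliding-window operation explicit, the following minimal NumPy sketch applies a single 2D filter to a (frequency × time) spectrogram in "valid" mode; it is a single-channel simplification of Eq. (14) (no padding, stride 1, one filter), not an optimized implementation.

```python
import numpy as np

def conv2d_valid(spec, w, b=0.0):
    """Slide a (kh, kw) filter over a (freq, time) spectrogram and apply a ReLU."""
    kh, kw = w.shape
    H, W = spec.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # dot product between the filter and the local spectrogram patch, plus bias
            out[i, j] = np.sum(spec[i:i + kh, j:j + kw] * w) + b
    return np.maximum(out, 0.0)  # ReLU non-linearity

# Example: feat = conv2d_valid(np.random.rand(64, 100), np.random.randn(3, 3))
```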
1D CNN. A 1D CNN is essentially a special case of the 2D CNN in which the height of the filter equals the height of the spectrogram. Thus, the filter only slides along the temporal dimension, and the height of the resultant feature maps is one. As such, 1D convolutions are computationally less expensive and more memory efficient [261] than 2D CNNs. Several studies [6, 262, 245] have shown that 1D CNNs are preferable to their 2D counterparts in certain applications. For example, Alsabhan [12] found that the performance of predicting emotions with a 2D CNN model was lower compared to a 1D CNN model. 1D convolution is useful in speech processing for several reasons: * _Temporal modeling:_ Since speech signals are sequences of amplitudes sampled over time, 1D convolution can be applied along the temporal dimension to capture temporal variations in the signal. * _Robustness to distortion and noise:_ Since 1D convolution allows local feature extraction, the resultant features are often resilient to global distortions of the signal. For instance, a speaker might be interrupted in the middle of an utterance. Local features would still produce robust representations for the relevant spans, which is key to ASR, among many other speech processing tasks. On the other hand, speech signals are often contaminated with noise, making it difficult to extract meaningful information. 1D convolution followed by pooling layers can mitigate the impact of noise [180], improving the accuracy of speech recognition systems. The basic building block of a 1D CNN is the convolutional layer, which applies a set of filters to the input data. A convolutional layer employs a collection of adjustable parameters called filters to carry out convolution operations on the input data, resulting in a set of feature maps as the output, which represent the activation of each filter at each position in the input data. The size of the feature maps depends on the size of the input data, the size of the filters, and the number of filters used. The activation function used in a 1D CNN is typically a non-linear function, such as the rectified linear unit (ReLU) function. Given an input sequence \(x\) of length \(N\), a set of \(K\) filters \(W_{k}\) of length \(M\), and a bias term \(b_{k}\), the output feature map \(y_{k}\) of the \(k^{th}\) filter is given by \[y_{k}[n]=\text{ReLU}\Big(b_{k}+\sum_{m=0}^{M-1}W_{k}[m]\,x[n-m]\Big) \tag{15}\] where \(n\) ranges from \(M-1\) to \(N-1\), and the sum implements the convolution operation. After the convolutional layer, the output tensor is typically passed through a pooling layer, which reduces the size of the feature maps by down-sampling. The most commonly used pooling operation is max-pooling, which keeps the maximum value from a sliding window across each feature map. CNNs often replace previously popular methods like HMMs and GMM-UBM in various cases. Moreover, CNNs possess the ability to acquire features that remain robust despite variations in speech signals resulting from diverse speakers, accents, and background noise. This is made possible by three key properties of CNNs: locality, weight sharing, and pooling. The locality property enhances resilience against non-white noise by enabling the computation of effective features from cleaner portions of the spectrum. Consequently, only a smaller subset of features is affected by the noise, allowing higher network layers a better opportunity to handle the noise by combining higher-level features computed for each frequency band.
This locality-based improvement over standard fully connected neural networks, which process all input features in the lower layers, highlights the significance of locality. As a result, locality also reduces the number of network weights that must be learned.

#### 3.2.2 Application

CNNs have proven to be versatile tools for a range of speech-processing tasks. They have been successfully applied to speech recognition [4, 390], including hybrid NN-HMM models, and can be used for multi-class classification of words [5]. In addition, CNNs have been proposed for speaker recognition in emotional speech, with a constrained CNN model presented in [496]. CNNs, both 1D and 2D, have emerged as the core building block for various speech processing models, including acoustic models [162; 273; 483] in ASR systems. For instance, in 2020, researchers from Facebook AI proposed wav2vec 2.0 [483], a self-supervised framework in which a CNN encodes the raw speech signal into latent representations that are then fed into a Transformer-based context network. The system achieved state-of-the-art results on several benchmark datasets. Similarly, VGGVox [92] used a CNN with a VGG architecture to learn speaker embeddings from Mel spectrograms, achieving state-of-the-art results in speaker recognition. CNNs have also been widely used in developing state-of-the-art speech enhancement and text-to-speech architectures. For instance, the architectures proposed in [311; 541] for the Deep Noise Suppression (DNS) challenge [457] and Google's Tacotron 2 [491] are examples of models that use CNNs as their core building blocks. In addition to traditional tasks like ASR and speaker identification, CNNs have also been applied to non-traditional speech processing tasks like emotion recognition [230], Parkinson's disease detection [224], language identification [498] and sleep apnea detection [497]. In all these tasks, a CNN extracts features from the speech signal, which are then fed into the task classification model.

#### 3.2.3 Temporal Convolutional Neural Networks

Recurrent neural networks, including RNNs, LSTMs, and GRUs, have long been popular for deep-learning sequence modeling tasks. They are especially favored in the speech-processing domain. However, recent studies have revealed that certain CNN architectures can achieve state-of-the-art accuracy in tasks such as audio synthesis, word-level language modelling, and machine translation, as reported in [102; 233; 234]. The advantage of convolutional neural networks is that they enable faster training by allowing parallel computation. They can avoid common issues associated with recurrent models, such as the vanishing or exploding gradient problem or the inability to retain long-term memory. In a recent study, Bai et al. [30] proposed a generic Temporal Convolutional Neural Network (TCNN) architecture that can be applied to various speech-related tasks. This architecture combines the best practices of modern CNNs and has demonstrated comparable performance to recurrent architectures such as LSTMs and GRUs. The TCN approach could revolutionize speech processing by providing an alternative to the widely used recurrent neural network models.

#### 3.2.4 TCNN Model Variants

The architecture of the TCNN is based upon two principles: (1) there is no information "leakage" from the future to the past; (2) the architecture can map an input sequence of any length to an output sequence of the same length, similar to an RNN.
The TCN consists of dilated, causal 1D fully-convolutional layers with the same input and output lengths to satisfy the above conditions. In other words, a TCNN is simply a 1D fully-convolutional network (FCN) with causal convolutions, as shown in Figure 2.

Figure 2: TCNNs leverage causal and dilated convolutions to model temporal dependencies in sequential data. Causal convolutions ensure that future information is not used during training, while dilated convolutions increase the receptive field without increasing computational complexity. This makes TCNNs an effective and efficient solution for a wide range of tasks, including speech recognition, action recognition, and music analysis.

* _Causal Convolution [402]_: Causal convolution convolves the input at a specific time point \(t\) solely with the temporally-prior elements.
* _Dilated Convolution [629]_: By itself, a causal convolution filter has a limited range of perception, meaning it can only consider a fixed number of elements in the past. Therefore, it is challenging to learn any dependency between temporally distant elements for longer sequences. Dilated convolution ameliorates this limitation by repeatedly applying dilated filters to expand the range of perception, as shown in Figure 2. The dilation is achieved by uniformly inserting zeros between the filter weights. Consider a 1-D sequence \(x\in\mathbb{R}^{n}\) and a filter \(f:\{0,...,k-1\}\rightarrow\mathbb{R}\); the dilated convolution operation \(F_{d}\) on an element \(s\) of the sequence is defined as \[F_{d}(s)=(x*_{d}f)(s)=\sum_{i=0}^{k-1}f(i)\cdot x_{s-d\cdot i},\] (16) where \(k\) is the filter size, \(d\) is the dilation factor, and \(s-d\cdot i\) indexes elements in the past. The dilation step introduces a fixed step between every two adjacent filter taps. When \(d=1\), a dilated convolution acts as a normal convolution, whereas for larger dilation factors, the filter acts on a wide but non-contiguous range of inputs. Therefore, dilation effectively expands the receptive field of the convolutional network.

#### 3.2.5 Application

Recent studies have shown that the TCNN architecture not only outperforms traditional recurrent networks like LSTMs and GRUs in terms of accuracy but also possesses a set of advantageous properties, including:
* Parallelism is a key advantage of TCNNs over RNNs. In RNNs, time-step predictions depend on their predecessors' completion, which limits parallel computation. In contrast, TCNNs apply the same filter to each span in the input, allowing parallel application thereof. This feature enables more efficient processing of long input sequences compared to RNNs that process sequentially.
* The receptive field size can be modified in various ways to enhance the performance of TCNNs. For example, incorporating additional dilated convolutional layers, employing larger dilation factors, or augmenting the filter size are all effective methods. Consequently, TCNNs offer superior management of the model's memory size and are highly adaptable to diverse domains.
* When dealing with lengthy input sequences, LSTM and GRU models tend to consume a significant amount of memory to retain the intermediate outcomes for their numerous cell gates. On the other hand, TCNNs utilize shared filters throughout a layer, and the back-propagation route depends solely on the depth of the network. This makes TCNNs a more memory-efficient alternative to LSTMs and GRUs, especially in scenarios where memory constraints are a concern.
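The causal, dilated filtering of Eq. (16) that underlies these properties can be sketched as follows (assuming PyTorch; left-padding the input by \((k-1)d\) keeps the output causal and the same length as the input, and the specific channel counts and dilation factors are illustrative only):

```python
import torch
import torch.nn as nn

class CausalDilatedConv1d(nn.Module):
    """Causal, dilated 1D convolution: output at time t depends only on inputs <= t."""
    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation             # left-pad so no future leakage
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:     # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))             # pad only on the left (the past)
        return self.conv(x)                                  # same length as the input

if __name__ == "__main__":
    block = nn.Sequential(                                   # stacking dilations grows the receptive field
        CausalDilatedConv1d(8, kernel_size=3, dilation=1),
        nn.ReLU(),
        CausalDilatedConv1d(8, kernel_size=3, dilation=2),
        nn.ReLU(),
        CausalDilatedConv1d(8, kernel_size=3, dilation=4),
    )
    x = torch.randn(1, 8, 100)
    print(block(x).shape)   # torch.Size([1, 8, 100]) -- input and output lengths match
```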
TCNNs can perform real-time speech enhancement in the time domain. They have far fewer trainable parameters than earlier models, making them more efficient. TCNs have also been used for speech and music detection in radio broadcasts and for single-channel speech enhancement, and they have been trained as filter banks that extract features directly from the waveform to improve the performance of ASR.

### Transformers

While recurrence in RNNs (Section 3.1) is a boon for neural networks to model sequential data, it is also a bane, as the recurrence in time to update the hidden state intrinsically precludes parallelization. Additionally, although dedicated gated RNNs such as LSTM and GRU have helped to mitigate the vanishing gradient problem to some extent, it can still be a challenge to maintain long-term dependencies in RNNs. Proposed by Vaswani et al. [554], the Transformer solved a critical shortcoming of RNNs by allowing parallelization within the training sample, that is, facilitating the processing of the entire input sequence at once. Since then, the primary idea of using only the attention mechanism to construct an encoder and decoder has served as the basic recipe for many state-of-the-art architectures across the domains of machine learning. In this survey, we use **transformer** to denote architectures that are inspired by the original Transformer [554]. This section overviews the transformer's fundamental design proposed by Vaswani et al. [554] and its adaptations for different speech-related applications.

#### 3.3.1. Basic Architecture

The Transformer architecture [554] comprises an attention-based encoder and decoder, with each module consisting of a stack of identical blocks. Each block in the encoder and decoder consists of two sub-layers: a multi-head attention (MHA) mechanism and a position-wise fully connected feedforward network, as described in Figure 3. The MHA mechanism in the encoder allows each input element to attend to every other element in the sequence, enabling the model to capture long-range dependencies in the input sequence. The decoder typically uses a combination of MHA and encoder-decoder attention to attend to both the input sequence and the previously generated output elements. The feedforward network in each block of the Transformer provides non-linear transformations to the output of the attention mechanism. Next, we discuss the operations involved in transformer layers, that is, multi-head attention and the position-wise feedforward network:

Attention in Transformers. The attention mechanism, first proposed by Bahdanau et al. [28], has revolutionized sequence modeling and transduction models in various tasks of NLP, speech, and computer vision. Broadly, it allows the model to focus on specific parts of the input or output sequence, without being limited by the distance between the elements. We can describe the attention mechanism as the mapping of a query vector and a set of key-value vector pairs to an output. Precisely, the output vector is computed as a weighted summation of value vectors, where the weight of a value vector is obtained by computing the compatibility between the query vector and the key vector.
Let each query and key be \(d_{k}\)-dimensional and each value be \(d_{v}\)-dimensional. Specific to the Transformer, the compatibility function between a query and each key is computed as their dot product scaled by \(1/\sqrt{d_{k}}\). To obtain the weights on the values, the scaled dot products are passed through a softmax function: \[\text{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{softmax}\left(\frac{ \mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{k}}}\right)\mathbf{V} \tag{17}\] Here multiple queries, keys, and value vectors are packed together in matrix form, respectively denoted by \(\mathbf{Q}\in\mathbb{R}^{N\times d_{k}}\), \(\mathbf{K}\in\mathbb{R}^{M\times d_{k}}\), and \(\mathbf{V}\in\mathbb{R}^{M\times d_{v}}\). \(N\) and \(M\) represent the lengths of queries and keys (or values). Scaling the dot-product attention is critical to tackle the issue of small softmax gradients as \(d_{k}\) increases [554]. Instead of performing a single attention operation in each transformer block, multiple attention operations in lower-dimensional spaces have been observed to work better [554]. This observation gave rise to **Multi-Head Attention**: For \(h\) heads and model dimension \(d_{m}\), the \(d_{m}\)-dimensional query, key, and values are projected \(h\) times to \(d_{k}\), \(d_{k}\), and \(d_{v}\) dimensions using learnable linear projections3. Each head performs the attention operation as per Equation (17). The \(h\) resulting \(d_{v}\)-dimensional outputs are concatenated and projected back to \(d_{m}\) using another projection matrix: Footnote 3: Projection weights are neither shared across heads nor across query, key, and values. \[\text{MultiHeadAttn}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{ Concat}(\text{head}_{1},...\text{ head}_{h})\mathbf{W}^{O}, \tag{18}\] \[\text{with}\ \ \text{head}_{i}=\text{Attention}(\mathbf{Q} \mathbf{W}_{i}^{Q},\mathbf{K}\mathbf{W}_{i}^{K},\mathbf{V}\mathbf{W}_{i}^{V}) \tag{19}\] where \(\mathbf{W}_{i}^{Q},\mathbf{W}_{i}^{K}\in\mathbb{R}^{d_{m}\times d_{k}},\mathbf{W}_{i}^{V}\in\mathbb{R}^{d_{m}\times d_{v}},\mathbf{W}^{O}\in\mathbb{R}^{hd_{v}\times d_{m}}\) are learnable projection matrices. Intuitively, multiple attention heads allow the model to jointly attend to information from different representation subspaces, e.g., capturing longer-term versus shorter-term dependencies.

Position-wise FFN. The position-wise FFN consists of two dense layers. It is referred to as position-wise since the same two dense layers are applied to each position in the sequence, which is equivalent to applying two \(1\times 1\) convolution layers.

Figure 3: Illustrations of attention (left) and multi-headed attention (right).

Residual Connection and Normalization. Residual connections and layer normalization are employed around each module to facilitate building deep models. For example, each encoder block output can be defined as follows: \[H^{{}^{\prime}}=\text{LayerNorm}(\text{SelfAttention}(X)+X) \tag{20}\] \[H=\text{LayerNorm}(\text{FFN}(H^{{}^{\prime}})+H^{{}^{\prime}}) \tag{21}\] SelfAttention(.) denotes the attention module with \(\mathbf{Q}=\mathbf{K}=\mathbf{V}=\mathbf{X}\), where \(\mathbf{X}\) is the output of the previous layer. Transformer-based architectures have proven superior to recurrent architectures such as RNNs and LSTMs/GRUs across many sequence modeling tasks.
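The following is a minimal sketch of Eqs. (17)-(19) (assuming PyTorch; the function names, the choice \(d_{k}=d_{v}=d_{m}/h\), and the random projection matrices are illustrative only):

```python
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    """Eq. (17): softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)      # (..., N, M) compatibility scores
    weights = torch.softmax(scores, dim=-1)                 # attention weights over the keys
    return weights @ V                                       # (..., N, d_v)

def multi_head_attention(X, W_q, W_k, W_v, W_o, h):
    """Eqs. (18)-(19): project, attend per head, concatenate, project back."""
    N, d_m = X.shape
    d_k = d_m // h
    Q, K, V = X @ W_q, X @ W_k, X @ W_v                     # self-attention: Q, K, V all come from X
    split = lambda T: T.view(N, h, d_k).transpose(0, 1)     # (h, N, d_k): one slice per head
    heads = scaled_dot_product_attention(split(Q), split(K), split(V))
    concat = heads.transpose(0, 1).reshape(N, d_m)           # concatenate the h heads
    return concat @ W_o

if __name__ == "__main__":
    N, d_m, h = 6, 16, 4
    X = torch.randn(N, d_m)
    W = [torch.randn(d_m, d_m) * 0.1 for _ in range(4)]      # stand-ins for learnable projections
    print(multi_head_attention(X, *W, h=h).shape)             # torch.Size([6, 16])
```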
One of the major difficulties when applying the Transformer to speech applications is that it requires more complex configurations (e.g., optimizer, network structure, data augmentation) than conventional RNN-based models. Speech signals are continuous-time signals with much higher dimensionality than text data. This high dimensionality poses significant computational challenges for the Transformer architecture, which was originally designed for sequential text data. Speech signals also have temporal dependencies, which means that the model needs to be able to process and learn from the entire signal rather than from short, isolated segments. Also, speech signals are inherently variable and complex. The same sentence can be spoken differently, even by the same person at different times. This variability requires the model to be robust to differences in pitch, accent, and speed of speech.

#### 3.3.2. Application

Recent advancements in NLP that have led to a paradigm shift in the field are largely attributed to foundation models, which are primarily transformer-based architectures with self-attention as a key ingredient [42]. These recent models have demonstrated human-level performance in several professional and academic benchmarks. For instance, GPT-4 scored within the top 10% of test takers on a simulated version of the Uniform Bar Examination [405]. While speech processing has not yet seen a comparable paradigm shift driven by foundation models, transformers have nevertheless contributed significantly to advances in the field, including but not limited to the following tasks: automatic speech recognition, speech translation, speech synthesis, and speech enhancement, most of which we discuss in detail in Section 5. RNNs and Transformers are two widely adopted neural network architectures employed in the domains of Natural Language Processing (NLP) and speech processing. While RNNs process input words sequentially and preserve a hidden state vector over time, Transformers analyze the entire sentence in parallel and incorporate an internal attention mechanism. This unique feature makes Transformers more efficient than RNNs [244]. Moreover, Transformers employ an attention mechanism that evaluates the relevance of other input tokens in encoding a specific token. This is particularly advantageous in machine translation, as it allows the Transformer to incorporate contextual information, thereby enhancing translation accuracy [244]. To achieve this, Transformers combine word vector embeddings and positional encodings, which are subsequently passed through a sequence of encoders and decoders. These fundamental differences between RNNs and Transformers establish the latter as a promising option for various natural language processing tasks [244]. A comparative study on transformers vs. RNNs in speech applications [244] found that transformer neural networks achieve state-of-the-art performance in neural machine translation and other natural language processing applications. The study compared and analysed transformers and conventional RNNs in a total of 15 ASR, one multilingual ASR, one ST, and two TTS applications, and found that transformer neural networks outperformed RNNs in most of the applications tested. Another survey of transformer-based models in speech processing found that transformers have an advantage in comprehending speech, as they analyse the entire sentence simultaneously, whereas RNNs process input words one by one.
Transformers have been successfully applied in end-to-end speech processing, including automatic speech recognition (ASR), speech translation (ST), and text-to-speech (TTS) [309]. In 2018, the Speech-Transformer was introduced as a no-recurrence sequence-to-sequence model for speech recognition. To reduce the length mismatch between the input feature sequence and the output token sequence, the architecture adds convolutional neural network (CNN) layers that down-sample the features before feeding them to the transformer. In a later study [388], the authors proposed a method to improve the performance of end-to-end speech recognition models based on transformers. They integrated the connectionist temporal classification (CTC) objective with the transformer-based model to achieve better accuracy and used language models to incorporate additional context and mitigate recognition errors. In addition to speech recognition, the transformer model has shown promising results in TTS applications. The Transformer-based TTS model generates Mel spectrograms, which are then converted to audio by a WaveNet vocoder [309]. Several neural network-based TTS models, such as Tacotron 2, DeepVoice 3, and Transformer TTS, have outperformed traditional concatenative and statistical parametric approaches in terms of speech quality [309; 426; 491]. One of the strengths of Transformer-based architectures for neural speech synthesis is their high efficiency while considering the global context [162; 492]. The Transformer TTS model has shown advantages in training and inference efficiency over RNN-based models such as Tacotron 2 [491]; it can speed up training by about 4.25 times [309]. Moreover, MultiSpeech, a multi-speaker TTS model based on the Transformer [309], has demonstrated the effectiveness of synthesizing more robust and better-quality multi-speaker voices than a naive Transformer-based TTS. In contrast to the strengths of Transformer-based architectures in neural speech synthesis, large language models based on Transformers such as BERT [109], GPT [444], XLNet [618], and T5 [448] have limitations when it comes to speech processing. One of the issues is that these models require discrete tokens as input, necessitating the use of a tokenizer or a speech recognition system, which introduces errors and noise. Furthermore, pre-training on large-scale text corpora can lead to domain mismatch problems when processing speech data. To address these limitations, dedicated frameworks have been developed for learning speech representations using transformers, including wav2vec [483], data2vec [24], Whisper [443], VALL-E [562], UniSpeech [565], SpeechT5 [16], etc. We discuss some of them as follows.
* Speech representation learning frameworks, such as wav2vec, have enabled significant advancements in speech processing tasks. One recent framework, w2v-BERT [585], combines contrastive learning and MLM to achieve self-supervised speech pre-training on discrete tokens. Fine-tuning wav2vec models with limited labeled data has also been demonstrated to achieve state-of-the-art results in speech recognition tasks [25]. Moreover, XLS-R [20], another model based on wav2vec 2.0, has shown state-of-the-art results in various tasks, domains, data regimes, and languages, by leveraging multilingual data augmentation and contrastive learning techniques on a large scale.
These models learn universal speech representations that can be transferred across languages and domains, thus representing a significant advancement in speech representation learning.
* Transformers have been increasingly popular in the development of frameworks for learning representations from multi-modal data, such as speech, images, and text. Among these frameworks, data2vec [24] is a self-supervised training approach that aims to learn joint representations to capture cross-modal correlations and transfer knowledge across modalities. It has outperformed other unsupervised methods for learning multi-modal representations on benchmark datasets. However, for tasks that require domain-specific models, such as speech recognition or speaker identification, domain-specific models may be more effective, particularly when dealing with data in specific domains or languages. The self-supervised training approach of data2vec enables cost-effective and scalable learning of representations without requiring labeled data, making it a promising framework for various multi-modal learning applications.
* The field of speech recognition has undergone a revolutionary change with the advent of the Whisper model [443]. This innovative solution has proven to be highly versatile, providing exceptional accuracy for various speech-related tasks, even in challenging environments. The Whisper model achieves its outstanding performance through a minimalist approach to data pre-processing and weak supervision, which allows it to deliver state-of-the-art results in speech processing. The model is capable of performing multilingual speech recognition, translation, and language identification, thanks to its training on a diverse audio dataset. Its multitasking model can cater to various speech-related tasks, such as transcription, voice assistants, education, entertainment, and accessibility. One of the unique features of Whisper is its minimalist approach to data pre-processing, which eliminates the need for significant standardization and simplifies the speech recognition pipeline. The resulting models generalize well to standard benchmarks and deliver competitive performance without fine-tuning, demonstrating the potential of advanced machine learning techniques in speech processing.
* Text-to-speech synthesis has been a topic of interest for many years, and recent advancements have led to the development of new models such as VALL-E [562]. VALL-E is a novel text-to-speech synthesis model that has gained significant attention due to its unique approach to the task. Unlike traditional TTS systems, VALL-E treats the task as a conditional language modelling problem and leverages a large amount of semi-supervised data to train a generalized TTS system. It can generate high-quality personalized speech from a 3-second acoustic prompt of an unseen speaker and provides diverse outputs for the same input text. VALL-E also preserves the acoustic environment and the speaker's emotions from the acoustic prompt, without requiring additional structure engineering, pre-designed acoustic features, or fine-tuning. Furthermore, VALL-E X is an extension of VALL-E that enables cross-lingual speech synthesis, representing a significant advancement in TTS technology.

A timeline highlighting the development of large Transformer-based models for speech processing is shown in Figure 4.
The size of the models has grown exponentially, with significant breakthroughs achieved in speech recognition, synthesis, and translation. These large models have set new performance benchmarks in the field of speech processing but also pose significant computational and data requirements for training and inference.

### Conformer

The Transformer architecture, which utilizes a self-attention mechanism, has successfully replaced recurrent operations in previous architectures. Over the past few years, various Transformer variants have been proposed. Architectures combining Transformers and CNNs have recently shown promising results on speech-processing tasks. To efficiently model both local and global dependencies of an audio sequence, several attempts have been made to combine CNNs and Transformers. One such architecture is the Conformer [162], a convolution-augmented transformer for speech recognition. The Conformer outperforms RNNs, previous Transformers, and CNN-based models, achieving state-of-the-art performance in speech recognition. The Conformer model consists of several building blocks, including convolutional layers, self-attention layers, and feedforward layers. The architecture of the Conformer model can be summarized as follows:
* Input Layer: The Conformer model takes as input a sequence of audio features, such as MFCCs or Mel spectrograms.
* Convolutional Layers: Local features are extracted from the audio signal by processing the input sequence through convolutional layers.
* Self-Attention Layers: The Conformer model incorporates self-attention layers following the convolutional layers. Self-attention is a mechanism that enables the model to focus on various sections of the input sequence while making predictions. This is especially advantageous for speech recognition because it facilitates capturing long-term dependencies in the audio signal.
* Feedforward Layers: After the self-attention layers, the Conformer model applies a sequence of feedforward layers intended to further process the output of the self-attention layers and prepare it for the final prediction.
* Output Layer: Finally, the output from the feedforward layers undergoes a softmax activation function to generate the final prediction, typically a sequence of character labels or phonemes.

The Conformer model has emerged as a promising neural network architecture for various speech-related research tasks, including but not limited to speech recognition, speaker recognition, and language identification. In a recent study by Gulati et al. [162], the Conformer model was shown to significantly outperform previous state-of-the-art models, particularly in speech recognition. This highlights the potential of the Conformer model as a key tool for advancing speech-related research.

Figure 4: Timeline highlighting notable large Transformer models developed for speech processing, along with their corresponding parameter sizes.

#### 3.4.1. Application

The Conformer model stands out among other speech recognition models due to its ability to efficiently model both local and global dependencies of an audio sequence. This is crucial for speech recognition, language translation, and audio classification [1; 2; 162]. The model achieves this through self-attention and convolution modules, combining the strengths of CNNs and Transformers.
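A simplified sketch of a single Conformer block is given below (assuming PyTorch; it follows the macaron arrangement of two half-step feedforward modules around self-attention and a depthwise convolution module, but omits dropout and the relative positional encoding used in the original Conformer, and all layer sizes are illustrative):

```python
import torch
import torch.nn as nn

class ConformerBlock(nn.Module):
    """Simplified Conformer block: half-step FFN -> MHSA -> conv module -> half-step FFN."""
    def __init__(self, d_model: int = 144, n_heads: int = 4, kernel_size: int = 31):
        super().__init__()
        self.ffn1 = self._ffn(d_model)
        self.attn_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv_norm = nn.LayerNorm(d_model)
        self.conv = nn.Sequential(                 # convolution module: pointwise -> depthwise -> pointwise
            nn.Conv1d(d_model, 2 * d_model, 1),
            nn.GLU(dim=1),
            nn.Conv1d(d_model, d_model, kernel_size, padding=kernel_size // 2, groups=d_model),
            nn.BatchNorm1d(d_model),
            nn.SiLU(),
            nn.Conv1d(d_model, d_model, 1),
        )
        self.ffn2 = self._ffn(d_model)
        self.final_norm = nn.LayerNorm(d_model)

    @staticmethod
    def _ffn(d_model: int) -> nn.Sequential:
        return nn.Sequential(nn.LayerNorm(d_model), nn.Linear(d_model, 4 * d_model),
                             nn.SiLU(), nn.Linear(4 * d_model, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:     # x: (batch, frames, d_model)
        x = x + 0.5 * self.ffn1(x)                            # macaron-style half-step FFN
        a = self.attn_norm(x)
        x = x + self.attn(a, a, a, need_weights=False)[0]     # self-attention for global context
        c = self.conv(self.conv_norm(x).transpose(1, 2)).transpose(1, 2)
        x = x + c                                              # convolution for local context
        x = x + 0.5 * self.ffn2(x)
        return self.final_norm(x)

if __name__ == "__main__":
    feats = torch.randn(2, 100, 144)       # 2 utterances, 100 frames of 144-dim features
    print(ConformerBlock()(feats).shape)    # torch.Size([2, 100, 144])
```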
While CNNs capture local information in audio sequences, the self-attention mechanism captures global dependencies [2]. The Conformer model has achieved remarkable performance in speech recognition tasks, setting benchmarks on datasets such as LibriSpeech and AISHELL-1. Despite these successes, challenges in speech synthesis and recognition persist, including difficulties generating natural-sounding speech in non-English languages and real-time speech generation. To address these limitations, Wang et al. [658] proposed a novel approach that combines noisy student training with SpecAugment and large Conformer models pre-trained on the Libri-Light dataset using the wav2vec 2.0 pre-training method. This approach achieved state-of-the-art word error rates on the LibriSpeech dataset. Recently, Wang et al. [575] developed Conformer-LHUC, an extension of the Conformer model that employs learning hidden unit contribution (LHUC) for speaker adaptation. Conformer-LHUC has demonstrated exceptional performance in elderly speech recognition and shows promise for the clinical diagnosis and treatment of Alzheimer's disease. Several enhancements have been made to Conformer-based models to address high word error rates without a language model, as documented in [336]. Wu [598] proposed a deep sparse Conformer to improve its long-sequence representation capabilities. Furthermore, Burchi and Timofte [49] have recently enhanced the noise robustness of the Efficient Conformer architecture by processing both audio and visual modalities. In addition, Conformer-based transducer models [252] have been adopted for real-time speech recognition [412] due to their ability to process audio data much more quickly than conventional recurrent neural network (RNN) models.

### Sequence to Sequence Models

The sequence-to-sequence (seq2seq) model in speech processing is popularly used for ASR, ST, and TTS tasks. The general architecture of the seq2seq model involves an encoder-decoder network that learns to map an input sequence to an output sequence of varying length. In the case of ASR, the input sequence is the speech signal, which is processed by the encoder network to produce a feature-vector representation of the input signal. The decoder network takes this feature representation as input and produces the corresponding text sequence. This can be achieved through a stack of RNNs [434], Transformer [116], or Conformer [162] layers in the encoder and decoder networks. The sequence-to-sequence model has also emerged as a potent tool in speech translation: it can be trained end-to-end to efficiently map speech spectrograms in one language to their corresponding spectrograms in another. A notable advantage of this approach is that it eliminates the need for an intermediate text representation, resulting in improved efficiency. Additionally, seq2seq models have been successfully implemented in speech generation tasks, where they reverse the ASR approach. In such applications, the text sequence serves as the input, with the encoder network creating a feature-vector representation of the text; the decoder network then leverages this representation to generate the desired speech signal. Karita et al. [244] conducted an extensive study comparing the performance of transformer and traditional RNN models on 15 Automatic Speech Recognition (ASR) benchmarks, a multilingual ASR benchmark, a Speech Translation (ST) benchmark, and two Text-to-Speech (TTS) benchmarks.
In addition, they proposed a shared Sequence-to-Sequence (S2S) architecture for ASR, TTS, and ST tasks, which is depicted in Figure 5.
* Encoder \[X_{0} =\text{Encoder-PreNet}(X),\] \[X_{e} =\text{Encoder-Main}(X_{0})\] (22) where \(X\) is the sequence of speech features (e.g., Mel spectrogram) for ASR and ST and the phoneme or character sequence for TTS.
* Decoder \[Y_{0}[1:t-1] =\text{Decoder-PreNet}(Y[1:t-1]),\] \[Y_{d}[t] =\text{Decoder-Main}(X_{e},Y_{0}[1:t-1]),\] (23) \[Y_{post}[1:t] =\text{Decoder-PostNet}(Y_{d}[1:t]),\] During the training stage, the input to the decoder is the ground-truth target sequence \(Y[1:t-1]\). The Decoder-Main module is utilized to produce the next target frame, which is accomplished by utilizing the encoded sequence \(X_{e}\) and the target prefix \(Y_{0}[1:t-1]\). The decoder is mostly unidirectional for sequence generation and often uses an attention mechanism [28] to produce the output.

Seq2seq models have been widely used in speech processing, initially based on RNNs. However, RNNs face the challenge of processing long sequences, which can lead to the loss of the initial context by the end of the sequence [244]. To overcome this limitation, the transformer architecture has emerged, leveraging self-attention mechanisms to handle sequential data. The transformer has shown remarkable performance in tasks such as ASR, ST, and speech synthesis. As a result, the use of RNN-based seq2seq models has declined in favour of the transformer-based approach.

Figure 5: Unified formulation for the Sequence-to-Sequence architecture in speech applications [244]. \(X\) and \(Y\) are the source and target sequences, respectively.

#### 3.5.1. Application

Seq2seq models have been used for speech processing tasks such as voice conversion [210; 528], speech synthesis [567; 583; 210; 398], and speech recognition. The field of ASR has seen significant progress, with several advanced techniques emerging as popular options. These include the CTC approach, which has been further developed and improved upon through recent advancements [160], as well as attention-based approaches that have also gained traction [85]. The growing interest in these techniques has increased the use of seq2seq models in the speech community.
* Attention-based Approaches: The attention mechanism is a crucial component of sequence-to-sequence models, allowing them to effectively weigh input acoustic features during decoding [355; 28]. Attention-based seq2seq models utilize previously generated output tokens and the complete input sequence to factorize the joint probability of the target sequence into individual time steps. The attention mechanism is conditioned on the current decoder states and runs over the encoder output representations to incorporate information from the input sequence into the decoder output. Incorporating attention mechanisms in seq2seq models has resulted in impressive performance in various speech processing tasks, such as speech recognition [591; 389; 434; 539], text-to-speech [400; 491; 620], and voice conversion [528; 210]. These models have demonstrated competitiveness with traditional state-of-the-art approaches. Additionally, attention-based seq2seq models have been used for confidence estimation tasks in speech recognition, where confidence scores generated by a speech recognizer can assess transcription quality [312].
Furthermore, these models have been explored for few-shot learning, which has the potential to simplify the training and deployment of speech recognition systems [183]. * Connectionist Temporal Classification: While attention-based methods create a soft alignment between input and target sequences, approaches that utilize CTC loss aim to maximize log conditional likelihood by considering all possible monotonic alignments between them. These CTC-based Seq2Seq models have delivered competitive results across various ASR benchmarks [162; 365; 524; 182] and have been extended to other speech-processing tasks such as voice conversion [648; 339; 655], speech synthesis [648] etc. Recent studies have concentrated on enhancing the performance of Seq2Seq models by combining CTC with attention-based mechanisms, resulting in promising outcomes. This combination remains a subject of active investigation in the speech-processing domain. ### Reinforcement Learning Reinforcement learning (RL) is a machine learning paradigm that trains an agent to perform discrete actions in an environment and receive rewards or punishments based on its interactions. The agent aims to learn a policy that maximizes its long-term reward. In recent years, RL has become increasingly popular and has been applied to various domains, including robotics, game playing, and natural language processing. RL has been utilized in speech recognition, speaker diarization, and speech enhancement tasks in the speech field. One of the significant benefits of using RL for speech tasks is its ability to learn directly from raw audio data, eliminating the need for hand-engineered features. This can result in better performance compared to traditional methods that rely on feature extraction. By capturing intricate patterns and relationships in the audio data, RL-based speech systems have the potential to enhance accuracy and robustness. #### 3.6.1. Basic Models The utilization of deep reinforcement learning (DRL) in speech processing involves the environment (a set of states \(S\)), agent, actions (\(A\)), and reward (\(r\)). The semantics of these components depends on the task at hand. For instance, in ASR tasks, the environment can be composed of speech features, the action can be the choices of phonemes, and the reward could be the correctness of those phonemes given the input. Audio signals are one-dimensional time-series signals that undergo pre-processing and feature extraction procedures. Pre-processing steps include noise suppression, silence removal, and channel equalization, improving audio signal quality and creating robust and efficient audio-based systems. Previous research has demonstrated that pre-processing improves the performance of deep learning-based audio systems [288]. Feature extraction is typically performed after pre-processing to convert the audio signal into meaningful and informative features while reducing their number. MFCCs and spectrograms are popular feature extraction choices in speech-based systems [288]. These features are then given to the DRL agent to perform various tasks depending on the application. For instance, consider the scenario where a human speaks to a DRL-trained machine, where the machine must act based on features derived from audio signals. * _Value-based DRL:_ Given the state of the environment (\(s\)), a value function \(Q:S\times A\rightarrow\mathbb{R}\) is learned to estimate overall future reward \(Q(s,a)\) should an action \(a\) be taken. 
This value function is parameterized with deep networks such as CNNs or Transformers.
* _Policy-based DRL:_ As opposed to value-based RL, policy-based RL methods learn a policy function \(\pi:S\to A\) that chooses the best possible action (\(a\)) based on the reward.
* _Model-based DRL:_ Unlike the previous two approaches, model-based RL learns the dynamics of the environment in terms of the state transition probabilities, i.e., a function \(M:S\times A\times S\rightarrow\mathbb{R}\). Given such a model, policy or value functions are then optimized.

#### 3.6.2 Application

In speech-related research, deep reinforcement learning can be used for several purposes, including:

##### Speech recognition and Emotion modeling

Deep reinforcement learning (DRL) can be used to train speech recognition systems [88; 89; 231; 451; 534] to transcribe speech accurately. In this case, the system receives an audio input and outputs a text sequence corresponding to the spoken words. The environmental states might be learned from the input audio features, the actions might be the generated phonemes, and the reward could be the similarity between the generated and gold phonemes, quantified by edit distance. Several works have also achieved promising results for non-native speech recognition [446]. DRL pre-training has shown promise in reducing training time and enhancing performance in various Human-Computer Interaction (HCI) applications, including speech recognition [451]. Recently, researchers have suggested using a reinforcement learning algorithm to develop a Speech Enhancement (SE) system that effectively improves ASR performance. Because ASR systems are often complicated and composed of non-differentiable units, such as acoustic and language models, the ASR system's recognition outcomes are employed to define the objective function for optimizing the SE model. Besides ASR and SE, some studies have also focused on SER using DRL algorithms [243; 282; 452].

##### Speaker identification

Similarly, for speaker identification tasks, the actions can be the choice of speaker, and a binary reward can reflect the correctness of that choice.

##### Speech synthesis and coding

Likewise, the states can be the input text, the actions can be the generated audio, and the reward could be the similarity between the gold and generated Mel spectrograms. Deep reinforcement learning has several advantages over traditional machine learning techniques. It can learn from raw data without needing hand-engineered features, making it more flexible and adaptable. It can also learn from feedback, making it more robust and able to handle noisy environments. However, deep reinforcement learning also has some challenges that must be addressed. It requires a lot of data to train and can be computationally expensive. It also requires careful selection of the reward function to ensure that the system learns the desired behavior.

### Graph Neural Network

Over the past few years, the field of Graph Neural Networks (GNNs) has witnessed a remarkable expansion as a widely adopted approach for analysing and learning from data on graphs. GNNs have demonstrated their potential in various domains, including computer science, physics, mathematics, chemistry, and biology, by delivering successful outcomes. Furthermore, in recent times, the speech-processing domain has also witnessed the growth of GNNs.

#### 3.7.1. Basic Models
Speech processing involves analysing and processing audio signals, and GNNs can be useful in this context when we represent the audio data as a graph. In this section, we explain the architecture of GNNs for speech processing. The standard GNN pipeline is shown in Figure 6; depending on the application, the GNN layer can consist of graph convolutional layers, graph attention layers, or graph transformer layers.

Figure 6: A standard experimental pipeline for GNNs, which embeds the graph node and edge features, applies several GNN layers to compute convolutional features, and finally predicts with a task-specific MLP layer.

Graph Representation of Speech Data. The first step in using GNNs for speech processing is representing the speech data as a graph. One way to do this is to represent the speech signal as a sequence of frames, each representing a short segment of the audio signal. We can then represent each frame as a node in the graph, with edges connecting adjacent frames.

Graph Convolutional Layers. Once the speech data is represented as a graph, we can use graph convolutional layers to learn representations of the graph nodes. Graph convolutional layers are similar to traditional convolutional layers, but instead of operating on a grid-like structure, they operate on graphs. These layers learn to aggregate information from neighboring nodes to update the features of each node.

Graph Attention Layers. Graph attention layers can be combined with graph convolutional layers to give more importance to certain nodes in the graph. Graph attention layers learn to assign weights to neighboring nodes based on their features, which can help capture important patterns in speech data. Several works have used graph attention layers for neural speech synthesis [338] or speaker verification [227] and diarization [277].

Recurrent Layers. Recurrent layers can be used in GNNs for speech processing to capture temporal dependencies between adjacent frames in the audio signal. Recurrent layers allow the network to maintain an internal state that carries information from previous time steps, which can be useful for modeling the dynamics of speech signals.

Output Layers. The output layer of a GNN for speech processing can be a classification layer that predicts a label for the speech data (e.g., phoneme or word) or a regression layer that predicts a continuous value (e.g., pitch or loudness). The output layer can be a traditional fully connected layer or a graph pooling layer that aggregates information from all the nodes in the graph.

#### 3.7.2. Application

The advantages of using GNNs for speech processing tasks include their ability to represent the dependencies and interrelationships between various entities, which is suitable for speech processing tasks such as speaker diarization [499, 500, 571], speaker verification [228, 494], speech synthesis [338, 520, 521], or speech separation [558, 576], which require the analysis of complex data representations. Unlike standard neural networks, GNNs retain a state representing information from their neighborhood with arbitrary depth. GNNs can be used to model the relationship between phonemes and words: by treating the phoneme sequence as a graph, GNNs can learn to recognize words in spoken language. GNNs can also be used to model the relationship between different acoustic features, such as pitch, duration, and amplitude, in speech signals, improving speech recognition accuracy.
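As a concrete illustration of the frame-graph idea, the following is a minimal sketch (assuming PyTorch; the chain-graph construction, Kipf-Welling-style symmetric normalization, and feature sizes are illustrative choices, not the method of any specific cited work) of one graph-convolution step over frame-level features:

```python
import torch

def frame_chain_adjacency(n_frames: int) -> torch.Tensor:
    """Build the adjacency of a chain graph: each frame node is connected to its neighbours."""
    A = torch.zeros(n_frames, n_frames)
    idx = torch.arange(n_frames - 1)
    A[idx, idx + 1] = 1.0
    A[idx + 1, idx] = 1.0
    return A

def gcn_layer(H: torch.Tensor, A: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """One graph-convolution step: normalized neighbourhood aggregation followed by a linear map."""
    A_hat = A + torch.eye(A.size(0))                   # add self-loops
    deg = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))             # symmetric normalization D^-1/2 A D^-1/2
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

if __name__ == "__main__":
    n_frames, d_in, d_hidden = 50, 40, 64
    H = torch.randn(n_frames, d_in)                    # one node per short frame, e.g. MFCC features
    A = frame_chain_adjacency(n_frames)
    W = torch.randn(d_in, d_hidden) * 0.1
    print(gcn_layer(H, A, W).shape)                    # torch.Size([50, 64])
```

Richer graphs (e.g., connecting frames across channels or speakers) only change the adjacency construction; the aggregation step stays the same.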
GNNs have shown promising results in multichannel speech enhancement, where they are used for extracting clean speech from noisy mixtures captured by multiple microphones [542]. The authors of a recent study [391] propose a novel approach to multichannel speech enhancement by combining Graph Convolutional Networks (GCNs) with spatial filtering techniques such as the Minimum Variance Distortionless Response (MVDR) beamformer. The algorithm aims to extract speech and noise from noisy signals by computing the Power Spectral Density (PSD) matrices of the noise and the speech signal of interest and then obtaining optimal weights for the beamformer using a frequency-time mask. The proposed method combines the MVDR beamformer with a super-Gaussian joint maximum a posteriori (SGJMAP) based SE gain function and a GCN-based separation network. The SGJMAP-based SE gain function is used to enhance the speech signals, while the GCN-based separation network is used to further separate the speech from the noise.

### Diffusion Probabilistic Model

Diffusion probabilistic models, inspired by non-equilibrium thermodynamics [186, 508], have proven to be highly effective for generating high-quality images and audio. These models create a Markov chain of diffusion steps (\(x_{t}\sim q(x_{t}|x_{t-1})\)) from the original data (\(x_{0}\)) to the latent variable \(x_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) by gradually adding pre-scheduled noise to the data. The reverse diffusion process then reconstructs the desired data samples (\(x_{0}\)) from the noise \(x_{T}\), as shown in Figure 7. Unlike VAE or flow models, diffusion models keep the dimensionality of the latent variables fixed. While mostly used for image and audio synthesis, diffusion models have potential applications in speech-processing tasks, such as speech synthesis and enhancement. This section offers a comprehensive overview of the fundamental principles of diffusion models and explores their potential uses in the speech domain.

#### 3.8.1 Forward diffusion process

Given clean speech data \(x_{0}\sim q_{data}(x_{0})\), the forward process defines a Markov chain \[q(x_{1},...,x_{T}|x_{0})=\prod_{t=1}^{T}q(x_{t}|x_{t-1}). \tag{24}\] At every time step \(t\), \(q(x_{t}|x_{t-1}):=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\mathbf{ I})\), where \(\{\beta_{t}\in(0,1)\}_{t=1}^{T}\) is a pre-defined noise schedule. As the forward process progresses, the data sample \(x_{0}\) loses its distinguishable features, and as \(T\rightarrow\infty\), \(x_{T}\) approaches a standard Gaussian distribution.

#### 3.8.2 Reverse diffusion process

The reverse diffusion process is defined by a Markov chain from \(x_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) to \(x_{0}\) and parameterized by \(\theta\): \[p_{\theta}(x_{0},...,x_{T-1}|x_{T})=\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t}) \tag{25}\] where \(x_{T}\sim\mathcal{N}(0,I)\) and the transition probability \(p_{\theta}(x_{t-1}|x_{t})\) is learnt through noise estimation. This process eliminates the Gaussian noise added in the forward diffusion process.

#### 3.8.3 Application

Diffusion models have emerged as a leading approach for generating high-quality speech in recent years [67; 204; 218; 431; 432; 269]. These non-autoregressive models transform white noise signals into structured waveforms via a Markov chain with a fixed number of steps. One such model, FastDiff, has achieved impressive results in high-quality speech synthesis [204].
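With \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\), the forward process of Eq. (24) admits the closed form \(q(x_{t}|x_{0})=\mathcal{N}(\sqrt{\bar{\alpha}_{t}}\,x_{0},(1-\bar{\alpha}_{t})\mathbf{I})\). A minimal sketch of this forward corruption (assuming PyTorch; the linear schedule values and waveform stand-in are illustrative, not taken from any cited model) is:

```python
import torch

def make_schedule(T: int = 1000, beta_1: float = 1e-4, beta_T: float = 0.02):
    """Linear noise schedule beta_t and the cumulative products alpha_bar_t."""
    betas = torch.linspace(beta_1, beta_T, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    return betas, alpha_bars

def q_sample(x0: torch.Tensor, t: int, alpha_bars: torch.Tensor) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, with eps ~ N(0, I)."""
    eps = torch.randn_like(x0)
    return alpha_bars[t].sqrt() * x0 + (1.0 - alpha_bars[t]).sqrt() * eps

if __name__ == "__main__":
    _, alpha_bars = make_schedule()
    x0 = torch.randn(1, 16000)               # stand-in for one second of clean speech at 16 kHz
    for t in (0, 100, 999):
        xt = q_sample(x0, t, alpha_bars)
        print(t, float(xt.std()))             # the signal is progressively buried in Gaussian noise
```

Training a model for the reverse process of Eq. (25) then amounts to predicting the added noise \(\epsilon\) from \(x_{t}\) and \(t\), which is what vocoders such as DiffWave and FastDiff (discussed next) learn to do.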
By leveraging a stack of time-aware diffusion processes, FastDiff can generate high-quality speech samples 58 times faster than real-time on a V100 GPU, making it practical for speech synthesis deployment for the first time. It also outperforms other competing methods in end-to-end text-to-speech synthesis. Another powerful diffusion probabilistic model proposed for audio synthesis is DiffWave [269]. It is non-autoregressive and generates high-fidelity audio for different waveform generation tasks, such as neural vocoding conditioned on Mel spectrograms, class-conditional generation, and unconditional generation. DiffWave delivers speech quality on par with the strong WaveNet vocoder [402] while synthesizing audio much faster. Diffusion models have shown great promise in speech processing, particularly in speech enhancement [347; 348; 440; 487]. Recent advances in diffusion probabilistic models have led to the development of a new speech enhancement algorithm that incorporates the characteristics of the noisy speech signal into the diffusion and reverse processes [349]. This new algorithm is a generalized form of the probabilistic diffusion model, known as the conditional diffusion probabilistic model. During its reverse process, it can adapt to non-Gaussian real noises in the estimated speech signal. In addition, Qiu et al. [440] propose SRTNet, a novel method for speech enhancement that uses the diffusion model as a module for stochastic refinement. The proposed method comprises a joint network of deterministic and stochastic modules, forming the "enhance-and-refine" paradigm. The paper also includes a theoretical demonstration of the proposed method's feasibility and presents experimental results to support its effectiveness.

Figure 7: The Diffusion Probabilistic Model is a generative model that progressively transforms a noise distribution into the target data distribution through a series of diffusion steps, where the noise level decreases as the process continues. The model is trained by maximizing the likelihood of the data distribution and can be used for tasks such as speech synthesis, enhancement, and denoising.

## 4. Speech Representation Learning

The process of speech representation learning is essential for extracting pertinent and practical characteristics from speech signals, which can be utilized for various downstream tasks such as speaker identification, speech recognition, and emotion recognition. While traditional methods for engineering features have been extensively used, recent advancements in deep-learning-based techniques utilizing supervised or unsupervised learning have shown remarkable potential in this field. Nonetheless, a novel approach founded on self-supervised representation learning has surfaced, aiming to uncover the inherent structure of speech data and acquire representations that capture it. This approach surpasses traditional feature engineering methods and can considerably increase the accuracy and effectiveness of downstream tasks. The primary objective of this new paradigm is to uncover informative and meaningful features from speech signals and outperform existing approaches. Therefore, this approach is considered a promising direction for future research in speech representation learning. This section provides a comprehensive overview of the evolution of speech representation learning with neural networks.
We will examine various techniques and architectures developed over the years, including the emergence of unsupervised representation learning methods like autoencoders, generative adversarial networks (GANs), and self-supervised representation learning frameworks. We will also examine the difficulties and constraints associated with these techniques, such as data scarcity, domain adaptation, and the interpretability of learned representations. Through a comprehensive analysis of the advantages and limitations of different representation learning approaches, we aim to provide insights into how to harness their power to improve the accuracy and robustness of speech processing systems.

### Supervised Learning

In supervised representation learning, the model is trained using annotated datasets to learn a mapping between input data and output labels. The set of parameters that define the mapping function is optimized during training to minimize the difference between the predicted and true output labels in the training data. The goal of supervised representation learning is to enable the model to learn a useful representation, or features, of the input data that can be used to accurately predict the output label for new, unseen data. For instance, supervised representation learning in speech processing can use CNNs to learn speech features from spectrograms. CNNs can identify patterns in spectrograms relevant to speech recognition, such as those corresponding to different phonemes or words. Unlike CNNs, which typically require spectrogram input, RNNs can directly take the raw speech signal as input and learn to extract features or representations that are relevant for speech recognition or other speech-processing tasks. Learning speaker representations typically involves minimizing a loss function; Chung et al. [91] compare the effectiveness of commonly used loss functions for speaker recognition tasks, and we distill this comparison in Table 1. Additionally, a new angular variant of the prototypical loss is introduced in their work. Results from extensive experimental validation on the VoxCeleb1 test set indicate that the GE2E and prototypical networks outperform other models in terms of performance.

#### 4.1.1. Deep speaker representations

Speaker representation is a critical aspect of speech processing, allowing machines to analyze and process various aspects of a speaker's voice, including pitch, intonation, accent, and speaking style. In recent years, deep neural networks (DNNs) have shown great promise in learning robust features for speaker recognition. This section reviews deep learning-based techniques for speaker representation learning that have demonstrated significant improvements over traditional methods. These deep speaker representations can be applied to a range of speaker-recognition tasks beyond verification and identification, including diarization [287; 572; 637], voice conversion [86; 323; 594], multi-speaker TTS [476; 607; 418], speaker adaptation [84], etc. To provide a comprehensive overview, we analyze deep embeddings from the perspectives of input representation (raw waveform [226; 454] or Mel spectrogram [507]), network architecture [108; 325], temporal pooling strategies [384], and loss functions [91; 505; 569]. In the following subsection, we introduce two representative deep embeddings: the \(d\)-vector [552] and the \(x\)-vector [506; 507]. These embeddings have been widely adopted in recent years and have demonstrated state-of-the-art performance in various speaker-recognition tasks.
By understanding the strengths and weaknesses of different deep learning-based techniques for speaker-representation learning, we can better leverage their power to improve the accuracy and robustness of speaker-recognition systems.
* **\(d\)-vector:** The \(d\)-vector technique, proposed by Variani et al. (2014) [552], serves as a frame-level speaker embedding method, as illustrated in Figure 8. In this approach, during the training phase, each frame within a training utterance is labeled with the speaker's true identity. This transforms the training process into a classification task, where a maxout Deep Neural Network (DNN) classifies the frames based on the speaker's identity. The DNN employs softmax as the output layer to minimize the cross-entropy loss between the ground-truth frame labels and the network's output. During the testing phase, the \(d\)-vector technique extracts the output activation of each frame from the last hidden layer of the DNN, serving as the deep embedding feature for that frame. To generate a compact representation called the \(d\)-vector, the technique computes the average of the deep embedding features from all frames within an utterance. The underlying hypothesis is that the compact representation space developed using a development set can effectively generalize to unseen speakers during the testing phase [552].

Figure 8: \(d\)-vector model architecture.

* **\(x\)-vector:** The \(x\)-vector [506; 507] is a segment-level speaker embedding and an advancement over the \(d\)-vector method, as it incorporates additional modeling of temporal and phonetic information in speech signals, resulting in improved performance compared to the \(d\)-vector. The \(x\)-vector employs an aggregation process to move from frame-by-frame speaker labeling to utterance-level speaker labeling, as highlighted in Figure 9. The network structure of the \(x\)-vector, depicted in Figure 9, consists of time-delay layers for extracting frame-level speech embeddings, a statistics pooling layer that concatenates the mean and standard deviation of the frame-level embeddings into a segment-level feature, and a standard feedforward network that classifies the segment-level feature into its speaker. The \(x\)-vector is the segment-level speaker embedding generated from the feedforward network's second-to-last hidden layer. The authors in [470; 616] have also demonstrated the significance of data augmentation in enhancing the performance of the \(x\)-vector.
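The statistics pooling step that turns variable-length frame sequences into a fixed-size segment-level feature can be sketched as follows (assuming PyTorch; the embedding dimensions, number of speakers, and module name are illustrative, and the time-delay layers that would normally produce the frame-level embeddings are omitted):

```python
import torch
import torch.nn as nn

class StatsPooling(nn.Module):
    """x-vector-style statistics pooling: concatenate mean and std over the frame axis."""
    def forward(self, frames: torch.Tensor) -> torch.Tensor:   # (batch, frames, dim)
        mean = frames.mean(dim=1)
        std = frames.std(dim=1)
        return torch.cat([mean, std], dim=-1)                   # (batch, 2 * dim)

if __name__ == "__main__":
    frame_embeddings = torch.randn(8, 300, 512)     # 8 utterances, 300 frame-level embeddings each
    segment = StatsPooling()(frame_embeddings)       # fixed-size segment-level representation
    speaker_logits = nn.Linear(1024, 200)(segment)   # classify among 200 training speakers
    print(segment.shape, speaker_logits.shape)        # torch.Size([8, 1024]) torch.Size([8, 200])
```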
\begin{table} \begin{tabular}{l l l} \hline \hline Loss Function & Objective Type & Description \\ \hline Softmax & Classification & \(L_{S}=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{\exp(\mathbf{W}_{y_{i}}^{T}\mathbf{x}_{i}+b_{y_{i}})}{\sum_{j=1}^{C}\exp(\mathbf{W}_{j}^{T}\mathbf{x}_{i}+b_{j})}\) \\ AM-Softmax (CosFace) [569] & Classification & \(L_{C}=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{\exp(s(\cos\theta_{y_{i},i}-m))}{\exp(s(\cos\theta_{y_{i},i}-m))+\sum_{j\neq y_{i}}\exp(s\cos\theta_{j,i})}\) \\ AAM-Softmax (ArcFace) [103] & Classification & \(L_{A}=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{\exp(s\cos(\theta_{y_{i},i}+m))}{\exp(s\cos(\theta_{y_{i},i}+m))+\sum_{j\neq y_{i}}\exp(s\cos\theta_{j,i})}\) \\ Triplet [484] & Metric learning [640] & \(L_{T}=\frac{1}{N}\sum_{j=1}^{N}\max\left(0,\;\|\mathbf{x}_{j,0}-\mathbf{x}_{j,1}\|_{2}^{2}-\|\mathbf{x}_{j,0}-\mathbf{x}_{k\neq j,1}\|_{2}^{2}+m\right)\) \\ Prototypical [505] & Metric learning [505] & \(L_{P}=-\frac{1}{N}\sum_{j=1}^{N}\log\frac{\exp(\mathbf{S}_{j,j})}{\sum_{k=1}^{N}\exp(\mathbf{S}_{j,k})}\) \\ Generalized end-to-end (GE2E) [561] & Metric learning [573] & \(L_{G}=-\frac{1}{N}\sum_{j,i}\log\frac{\exp(\mathbf{S}_{j,i,j})}{\sum_{k=1}^{N}\exp(\mathbf{S}_{j,i,k})}\) \\ Angular Prototypical & Metric learning & \(L_{AP}=-\frac{1}{N}\sum_{j=1}^{N}\log\frac{\exp(w\cos\theta_{j,j}+b)}{\sum_{k=1}^{N}\exp(w\cos\theta_{j,k}+b)}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of loss functions commonly used to train speaker recognition models, including their formulation [91]. Here \(\mathbf{x}_{i}\) denotes an embedding with label \(y_{i}\), \(\theta_{j,i}\) the angle between an embedding and a class or prototype centroid, \(s\) (or \(w,b\)) a scale (and bias), \(m\) a margin, and \(\mathbf{S}\) a similarity matrix between query embeddings and prototypes.

Figure 9: \(x\)-vector model architecture. \(x_{1},x_{2},\ldots,x_{T}\) are the spectral features, such as Mel spectrograms, of the speech utterance.

### Unsupervised learning

Unsupervised representation learning for speech processing has gained significant emphasis over the past few years. As with the visual modality in CV and the text modality in NLP, the speech (audio) modality introduces its own unique challenges. Unsupervised speech representation learning is concerned with learning useful speech representations without using annotated data. Usually, the model is first pre-trained on a task for which plenty of data is available. The model is then fine-tuned or used to extract input representations for a smaller model targeting tasks with limited data. One approach to addressing the unique challenges of unsupervised speech representation learning is to use probabilistic latent variable models (PLVMs), which assume that the data is produced by an unknown generative process. A PLVM specifies a joint distribution \(p(x,z)\) over unobserved stochastic latent variables \(z\) and observed variables \(x\). By factorizing the joint distribution into modular components, it becomes possible to learn rich structural representations and to reason about observed and unobserved factors of variation in complex datasets such as speech within a probabilistic framework. The likelihood of a PLVM given data \(x\) can be written as

\[p(x)=\int p(x|z)p(z)dz. \tag{26}\]

Probabilistic latent variable models provide a powerful way to learn a representation that captures the underlying relationships between observed and unobserved variables, without requiring explicit supervision or labels. These models involve unobserved latent variables that must be inferred from the observed data, typically using probabilistic inference techniques such as Markov Chain Monte Carlo (MCMC) methods.
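The marginal likelihood in Eq. (26) is generally intractable for flexible neural generative models. A standard workaround, which underlies the variational autoencoders discussed next, is to introduce an approximate posterior \(q_{\phi}(z|x)\) and to maximize the evidence lower bound (ELBO) instead:

\[\log p_{\theta}(x)\;\geq\;\mathbb{E}_{q_{\phi}(z|x)}\big[\log p_{\theta}(x|z)\big]-D_{\mathrm{KL}}\big(q_{\phi}(z|x)\,\|\,p(z)\big),\]

where the first term rewards accurate reconstruction of the observed speech features and the KL term regularizes the learned latent representation toward the prior.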
In the context of representation learning, variational autoencoders (VAEs) are commonly used with latent variable models for various speech processing tasks, leveraging the power of probabilistic modeling to capture complex patterns in speech data.

Figure 10. Overview of the difference between probabilistic latent variable models and self-supervised learning. In latent variable models, the functions \(f(\cdot)\) and \(g(\cdot)\) learn the parameters of the distributions \(p\) and \(q\), and the latent variable \(z\) is used for representation learning.

### Semi-supervised Learning

Semi-supervised learning can be viewed as a process of optimizing a model using both labeled and unlabeled data. The set of labeled data points, denoted by \(X_{L}\), contains \(N_{L}\) items, where each item is represented as \((x_{i},y_{i})\) with \(y_{i}\) being the label of \(x_{i}\). On the other hand, the set of unlabeled data points, denoted by \(X_{U}\), consists of \(N_{U}\) items, represented as \(x_{N_{L}+1},x_{N_{L}+2},\ldots,x_{N_{L}+N_{U}}\). In semi-supervised learning, the objective is to train a model \(f_{\theta}\) with parameters \(\theta\) that minimizes the expected loss over the entire dataset. The loss function \(L(y,f_{\theta}(x))\) quantifies the deviation between the model's prediction \(f_{\theta}(x)\) and the ground-truth label \(y\). The expected loss can be mathematically expressed as:

\[\mathcal{L}(\theta)=E_{(x,y)\sim p_{data}(x,y)}\left[L(y,f_{\theta}(x))\right] \tag{27}\]

where \(p_{data}(x,y)\) is the underlying data distribution. In semi-supervised learning, the loss function is typically decomposed into two parts: a supervised loss term that is only defined on the labeled data, and an unsupervised loss term that is defined on both labeled and unlabeled data. The supervised loss term is calculated as follows:

\[\mathcal{L}_{sup}=\frac{1}{N_{L}}\sum_{(x,y)\in X_{L}}L(y,f_{\theta}(x)) \tag{28}\]

The unsupervised loss term leverages the unlabeled data to encourage the model to learn meaningful representations that capture the underlying structure of the data. One common approach is consistency regularization, which encourages the model to produce similar outputs for similar (e.g., slightly perturbed) inputs by minimizing the distance between the corresponding model outputs. Another widely used regularizer is the entropy minimization term, which encourages confident predictions on unlabeled data and can be expressed as:

\[\mathcal{L}_{unsup}=-\frac{1}{N_{U}}\sum_{x_{i}\in X_{U}}\sum_{j=1}^{|\mathcal{Y}|}p_{\theta}(y_{j}|x_{i})\log p_{\theta}(y_{j}|x_{i}) \tag{29}\]

where \(p_{\theta}(y_{j}|x_{i})\) is the predicted probability of the \(j\)-th label for the unlabeled data point \(x_{i}\) and \(|\mathcal{Y}|\) is the number of classes. Finally, the overall objective function for semi-supervised learning can be expressed as \(\mathcal{L}=\mathcal{L}_{sup}+\alpha\mathcal{L}_{unsup}\), where \(\alpha\) is a hyperparameter that controls the weight of the unsupervised loss term. The goal is to find the optimal parameters \(\theta\) that minimize this objective function. Semi-supervised learning thus involves learning a model from both labeled and unlabeled data by minimizing a combination of supervised and unsupervised loss terms. By leveraging the additional unlabeled data, semi-supervised learning can improve the generalization and performance of the model in downstream tasks. Semi-supervised learning techniques are increasingly being employed to enhance the performance of DNNs across a range of downstream tasks in speech processing, including ASR, TTS, etc.
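As a concrete illustration of the combined objective above, the sketch below (PyTorch; the model, batch tensors, and the value of \(\alpha\) are placeholders) computes the supervised cross-entropy on a labeled batch and the entropy-minimization term of Eq. (29) on an unlabeled batch. In practice, the weight \(\alpha\) is often increased gradually during training so that the unsupervised term does not dominate before the classifier becomes reliable.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_labeled, y_labeled, x_unlabeled, alpha=0.5):
    """L = L_sup + alpha * L_unsup, where L_unsup is the average prediction
    entropy on unlabeled examples (cf. Eqs. 27-29)."""
    # supervised term: cross-entropy on the labeled batch
    loss_sup = F.cross_entropy(model(x_labeled), y_labeled)

    # unsupervised term: entropy of the predictive distribution
    probs = F.softmax(model(x_unlabeled), dim=-1)
    loss_unsup = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()

    return loss_sup + alpha * loss_unsup
```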
The primary objective of such approaches is to leverage large unlabelled datasets to augment the performance of supervised tasks that rely on labelled datasets. The recent advancements in speech recognition have led to a growing interest in the integration of semi-supervised learning methods to improve the performance of ASR and TTS systems [34; 89; 229; 605; 657; 658]. This approach is particularly beneficial in scenarios where labelled data is scarce or expensive to acquire. In fact, for many languages around the globe, labelled data for training ASR models are often inadequate, making it challenging to achieve optimal results. Thus, using a semi-supervised learning model trained on abundant resource data can offer a viable solution that can be readily extended to low-resource languages. Semi-supervised learning has emerged as a valuable tool for addressing the challenges of insufficient annotations and poor generalization [165]. Research in various domains, including image quality assessment [341], has demonstrated that leveraging both labelled and unlabelled data through semi-supervised learning can lead to improved performance and generalization. In the domain of speech quality assessment, several studies [488] have exploited the generalization capabilities of semi-supervised learning to enhance performance. Moreover, semi-supervised learning has gained significant attention in other areas of speech processing, such as end-to-end speech translation [428]. By leveraging large amounts of unlabelled data, semi-supervised learning approaches have demonstrated promising results in improving the performance and robustness of speech translation models. This highlights the potential of semi-supervised learning to address the limitations of traditional supervised learning approaches in a variety of speech processing tasks. ### Self-supervised representation learning (SSRL) Self-supervised representation learning (SSRL) is a machine learning approach that focuses on achieving robust and in-depth feature learning while minimizing reliance on extensively annotated datasets, thus reducing the annotation bottleneck (Krizhevsky et al., 2014; Sutskever et al., 2015). SSRL comprises various techniques that allow models to be trained without needing human-annotated labels (Krizhevsky et al., 2014; Sutskever et al., 2015). One of the key advantages of SSRL is its ability to operate on unlabelled datasets, which reduces the need for large annotated datasets (Krizhevsky et al., 2014; Sutskever et al., 2015). In recent years, self-supervised learning has progressed rapidly, with some methods approaching or surpassing the efficacy of fully supervised learning methods. Self-supervised learning methods typically involve pretext tasks that generate pseudo labels for discriminative model training without actual labeling. The difference between self-supervised representation learning and unsupervised representation is highlighted in Figure 10. In contrast to unsupervised representation learning, SSRL techniques are designed to generate these pseudo labels for model training. The ability of SSRL to achieve robust and in-depth feature learning without relying heavily on annotated datasets holds great promise for the continued development of machine learning techniques. SSRL differs from supervised learning mainly in terms of its data requirements. 
While supervised learning relies on labeled data, where the model learns from input-output pairs, SSL generates its own labels from the input data, eliminating the need for labeled data (Sutskever et al., 2015). The SSL approach trains the model to predict a portion of the input data, which is then utilized as a label for the task at hand (Sutskever et al., 2015). Although SSRL is an unsupervised learning technique, it seeks to tackle tasks commonly associated with supervised learning without relying on labeled data (Sutskever et al., 2015).

#### 4.4.1. Generative Models

This method involves instructing a model to produce samples resembling the input data without explicitly learning the labels, creating valuable representations applicable to other tasks. The detailed architecture for generative models, with three different variants, is shown in Figure 11.

Figure 11. Generative approaches to self-supervised learning.

The earliest self-supervised method, predicting masked inputs using surrounding data, originated from the text field in 2013 with word2vec. The continuous bag of words (CBOW) concept of word2vec predicts a central word based on its neighbors, resembling ELMo and BERT's masked language modeling (MLM). These non-autoregressive generative approaches differ in their use of advanced structures, such as bidirectional LSTMs (for ELMo) and transformers (for BERT), with recent models producing contextual embeddings. In the context of speech, Mockingjay [330] applied masking to all feature dimensions in the speech domain, whereas TERA [329] applied masking only to a particular subset of feature dimensions. A summary of generative self-supervised approaches, along with the data used for training the models, is outlined in Table 2. We further discuss the different generative approaches highlighted in Figure 11 as follows:

* Auto-encoding Models: Auto-encoding models have garnered significant attention in the domain of self-supervised learning, particularly Autoencoders (AEs) and Variational Autoencoders (VAEs). AEs consist of an encoder and a decoder that work together to reconstruct the input while disregarding less important details, prioritizing the extraction of meaningful features. VAEs, a probabilistic variant of AEs, have found wide-ranging applications in the field of speech modeling. Furthermore, the vector-quantized variational autoencoder (VQ-VAE) [550] has been developed as an extended generative model. The VQ-VAE parameterizes the posterior distribution to represent discrete latent representations. Remarkably, the VQ-VAE has demonstrated notable success in generative spoken language modeling. By combining a discrete latent space with self-supervised learning, its performance is further improved.
* Autoregressive models: Autoregressive generative self-supervised learning uses the autoregressive predictive coding (APC) technique [95] to model the probability distribution of a sequence of data points. This approach aims to predict the next data point in a sequence based on the previous data points. Autoregressive models typically use RNNs or a transformer architecture as the base model. The authors of [402] introduce a generative model for raw audio called WaveNet, based on PixelCNN [549]. To enhance the model's ability to handle long-range temporal dependencies, the authors incorporate dilated causal convolutions [402]. They also utilize gated residual blocks and skip connections to improve the model's expressivity.
* Masked Reconstruction: The concept of masked reconstruction is influenced by the masked language model (MLM) task proposed in BERT [109]. This task involves masking specific tokens in input sentences with learned masking tokens or other input tokens, and training the model to reconstruct these masked tokens from the non-masked ones. Recent research \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Reference} & \multirow{2}{*}{Task (Metric)} & Pre-Training & \multicolumn{2}{c}{Dataset} \\ \cline{3-5} & & & Dataset (hours) & Training & Test \\ \hline \multirow{2}{*}{Mockingjay} & [330] & PC & LS (360h) & LS (360h) & LS (test-clean) \\ \cline{2-5} & & SR & LS (360h) & LS (100h) & LS (100h) \\ \hline \multirow{2}{*}{PASE} & [416] & ASR & LS (50 hr) & DIRHA & DIRHA \\ \hline \multirow{2}{*}{PASE+} & [456] & ASR & LS (50 hr) & DIRHA & DIRHA \\ & & & & CHiME-5 & CHiME-5 \\ \hline \multirow{2}{*}{DeCoAR} & \multirow{2}{*}{[326]} & \multirow{2}{*}{ASR} & LS (100h, 360h, 460 h, 960h) & LS (100h, 360h, 460 h, 960h) & LS (test-clean) \\ & & & WSJ si284 & WSJ si284 & LS (test-other) \\ \hline \hline \end{tabular} \end{table} Table 2. Summary of _generative self-supervised_ approaches and proposed models for speech processing with associated metrics and training Data. **ASR**: Automatic Speech Recognition, **PR**: Phoneme Recognition. **PC**: Phoneme Classification, **SR**: Speaker Recognition, **LS**: LibriSpeech. has explored similar pretext tasks for speech representation learning that help models develop contextualized representations capturing information from the entire input, like the DeCoAR model [326]. This approach assists the model in comprehending input data better, leading to more precise and informative representations. #### 4.4.2 Contrastive Models The technique involves training a model to differentiate between similar and dissimilar pairs of data samples, which helps the model acquire valuable representations that can be utilized for various tasks, as shown on Figure 12. The fundamental principle of contrastive learning is to generate positive and negative pairs of training samples based on the comprehension of the data. The model must learn a function that assigns high similarity scores to two positive samples and low similarity scores to two negative samples. Therefore, generating appropriate samples is crucial for ensuring that the model comprehends the fundamental features and structures of the data. Table 3 outlines popular contrastive self-supervised models used for different speech-processing tasks. We discuss Wav2Vec 2.0 since it has achieved state-of-the-art results in different downstream tasks. * Wav2Vec 2.0 [26] is a framework for self-supervised learning of speech representations that is one of the current state-of-the-art models for ASR [26]. The training of the model occurs in two stages. Initially, the model operates in a self-supervised mode during the first phase, where it uses unlabelled data and aims to achieve the best speech representation possible. The second phase is fine-tuning a particular dataset for a specific purpose. Wav2Vec 2.0 takes advantage of self-supervised training and uses convolutional layers to extract features from raw audio. 
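The contrastive objective underlying CPC-style models and wav2vec 2.0 is typically an InfoNCE-type loss: the similarity between a context representation and the true (quantized) target at a masked position is maximized relative to a set of distractors. A minimal sketch is given below (PyTorch; the tensor shapes, temperature, and distractor-sampling strategy are illustrative assumptions rather than the exact recipe of any specific model).

```python
import torch
import torch.nn.functional as F

def info_nce_loss(context, positives, negatives, temperature=0.1):
    """Contrastive (InfoNCE-style) loss.
    context:   (B, D)    context-network outputs at masked positions
    positives: (B, D)    true latent targets for those positions
    negatives: (B, K, D) K distractor targets per position
    """
    pos = F.cosine_similarity(context, positives, dim=-1) / temperature               # (B,)
    neg = F.cosine_similarity(context.unsqueeze(1), negatives, dim=-1) / temperature  # (B, K)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)                                 # (B, 1+K)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)       # positive = index 0
    return F.cross_entropy(logits, labels)
```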
In the speech field, researchers have explored different approaches to avoid overfitting, including augmentation techniques like Speech SimCLR [220] and the use of positive and negative pairs through methods like Contrastive Predictive Coding (CPC) (Ooster and Meyer [404]), Wav2vec (v1, v2.0) (Schneider et al. [483]), VQ-wav2vec (Baevski et al. [25]), and Discrete BERT [23]." In the Figure 12. Contrastive Self-supervised learning: Contrastive Predictive Coding. graph field, researchers have developed approaches like Deep Graph Infomax (DGI) (Velickovic et al., 2019 [556]) to learn representations that maximize the mutual information between local patches and global structures while minimizing mutual information between patches of corrupted graphs and the original graph's global representation. #### 4.4.3. Predictive Models In training predictive models, the primary concept involves creating simpler objectives or targets to minimize the need for data generation. However, the most critical and difficult aspect is ensuring that the task's difficulty level is appropriate for the model to learn effectively. Predictive SSRL methods have been leveraged in ASR through transformer-based models to acquire meaningful representations [23, 193, 329] and have proven transformative in exploiting the growing abundance of data [150]. Table 4 highlight popularly used SSRL methods along with the data used for training \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Reference} & \multirow{2}{*}{Task} & Pre-Training & \multicolumn{2}{c}{Dataset} \\ \cline{4-5} & & & Dataset (hours) & Training & Test \\ \hline \multirow{2}{*}{CPC} & [403] & PC & LS (100h) & LS (100h) & LS (100h) \\ \cline{3-5} & & SR & LS (100h) & LS (100h) & LS (100h) \\ \hline \multirow{2}{*}{Modified CPC} & [465] & PC & \begin{tabular}{c} LS (100h, 360h) \\ ZeroSpeech2017(45h) \\ \end{tabular} & CV-Dataset & CV-Dataset \\ \hline \multirow{4}{*}{Bidirectional CPC} & \multirow{4}{*}{ASR} & WSJ (80h) & WSJ (80h) & \multirow{4}{*}{WSJ (test92, test93)} \\ & & LS (960h) & LS (960h) & WSJ (test92, test93) \\ & & TIMIT (5h) & TIMIT (5h) & LS (test-clean, test-other) \\ & & SSA (1h) & SSA (1h) & TED3 (dev, test) \\ & & TED3 (440h) & TED3 (440h) & SwithBoard (eval2000) \\ \cline{3-5} & & & SwithBoard (310h) & SwithBoard (310h) & \\ \cline{3-5} & & ASR-Multi & Audio Set (2500h) & Audio Set (2500h) & \multirow{4}{*}{OpenSLR} \\ & & AVSpeech (3100h) & AVSpeech (3100h) & & ALFFA \\ & & CV-Dataset (430h & CV-Dataset (430h) & \\ \hline \multirow{2}{*}{wav2vec} & \multirow{2}{*}{[483]} & ASR & LS 80/86h & \multirow{2}{*}{WSJ (si284)} & \multirow{2}{*}{WSJ (eval92)} \\ & & LS 960h + WSJ (si284) & & & \\ \cline{3-5} & & PR & TIMIT & TIMIT & TIMIT \\ \hline \multirow{2}{*}{wav2vec 2.0} & \multirow{2}{*}{[26]} & ASR & LS (960h) & LS (test-clean) & LS (test-other) \\ & & & LL (60000h) & & LS (test-other) \\ \cline{3-5} & & PR & LS (960h) & & \\ \cline{3-5} & & PR & LL (60000h) & TIMIT & TIMIT \\ \hline \multirow{2}{*}{vq-wav2vec 2.0} & \multirow{2}{*}{[25]} & ASR & LS (960h) & WSJ (si284) & WSJ (eval92) \\ \cline{3-5} & & PR & LS (960h) & TIMIT & TIMIT \\ \hline \multirow{2}{*}{wav2vec-C} & \multirow{2}{*}{[474]} & ASR & Alexa-10k & Alexa-eval & Alexa-eval \\ \hline \multirow{2}{*}{wav2v-BERT} & \multirow{2}{*}{[96]} & ASR & LL (60000h) & LS (960h) & LS (test) \\ & & & LS (dev) & LS (dev-other) \\ \hline \multirow{2}{*}{Speech SimCLR} & \multirow{2}{*}{[220]} & ASR & LS (960h) & \multirow{2}{*}{WJS (si284)} & 
\multirow{2}{*}{WSJ (si284)} \\ & & ASR & WSJ (si284) & & \\ \cline{3-5} & & & TED2 & & \\ \cline{3-5} & & PR & LS (960h) & & \\ \cline{3-5} & & PR & WSJ (si284) & TIMIT & TIMIT \\ \cline{3-5} & & & TED2 & & \\ \hline \multirow{2}{*}{UnSpeech} & \multirow{2}{*}{[381]} & \multirow{2}{*}{ASR-Multi} & LL (60000h) & \multirow{2}{*}{SUPERB} & \multirow{2}{*}{SUPERB} \\ & & & GigaSpeech (10000h) & & \\ \cline{3-5} & & & VP (24000h) & & \\ \hline \hline \end{tabular} \end{table} Table 3. Summary of _contrastive self-supervised_ approaches and proposed models for speech processing with associated metrics and training data. **ASR**: Automatic Speech Recognition, **PR**: Phoneme Recognition, **PC**: Phoneme Classification, **SR**: Speaker Recognition, **LS**: LibriSpeech, **LL**: LibriLight, **WSJ**: Wall Street Journal. these models. In the following section, we briefly discuss three popular predictive SSRL approaches that are used widely in various downstream tasks.

* The direct application of BERT-type training to speech input presents challenges due to the unsegmented and unstructured nature of speech. To overcome this obstacle, a pioneering model known as Discrete BERT [23] has been developed. This model converts continuous speech input into a sequence of discrete codes, facilitating code representation learning. The discrete units are obtained from a pre-trained vq-wav2vec model [25], and they serve as both inputs and targets within a standard BERT model. The architecture of Discrete BERT, illustrated in Figure 13 (a), incorporates a softmax-normalized output layer. During training, a categorical cross-entropy loss is employed, with a masked view of the original speech input utilized for predicting the code representations. Remarkably, the Discrete BERT model has exhibited impressive efficacy in self-supervised speech representation learning: even with a mere 10-minute fine-tuning set, it achieved a Word Error Rate (WER) of 25% on the standard test-other subset. This approach effectively tackles the challenge of directly applying BERT-type training to continuous speech input and holds substantial potential for significantly enhancing speech recognition accuracy.
* The HuBERT [193] and TERA [329] models are two self-supervised approaches for speech representation learning. HuBERT uses an offline clustering step to provide aligned target labels for a BERT-like prediction loss, with the prediction loss applied only over the masked regions, as outlined in Figure 13 (b). This encourages the model to learn a combined acoustic and language model over the continuous inputs. On the other hand, TERA is a self-supervised speech pre-training method that reconstructs acoustic frames from their altered counterparts, using a stochastic policy to alter the input along various dimensions, including time, frequency, and magnitude. These alterations help extract feature-based speech representations that can be fine-tuned as part of downstream models. Microsoft has introduced the UniSpeech-SAT [72] and WavLM [71] models, which follow the HuBERT framework. These models have been designed to enhance speaker representation and improve various downstream tasks. The key focus of these models is data augmentation during the pre-training stage, resulting in superior performance. The WavLM model has exhibited outstanding effectiveness in diverse downstream tasks, such as automatic speech recognition, phoneme recognition, speaker identification, and emotion recognition.

Figure 13: Predictive Self-supervised learning: (a) Discrete BERT (b) HuBERT.
It is worth highlighting that this model currently holds the top position on the SUPERB leaderboard (Kumar et al., 2017), which evaluates speech representations' performance in terms of reusability. Self-supervised learning has emerged as a widely adopted and effective technique for speech processing tasks due to its ability to train models with large amounts of unlabeled data. A comprehensive overview of self-supervised approaches, evaluation metrics, and training data is provided in Table 4 for speech recognition, speaker recognition, and speech enhancement. Researchers and practitioners can use this resource to select appropriate self-supervised methods and datasets to enhance their speech-processing systems. As self-supervised learning techniques continue to advance and refine, we can expect significant progress and advancements in speech processing. ## 5. Speech Processing Tasks In recent times, the field of speech processing has gained significant attention due to its rapid evolution and its crucial role in modern technological applications. This field involves the use of diverse techniques and algorithms to analyse and understand spoken language, ranging from basic speech recognition to more complex tasks such as spoken language understanding and speaker identification. Since speech is one of the most natural forms of communication, speech processing has become a critical component of many applications such as virtual assistants, call centres, and speech-to-text transcription. In this section, we provide a comprehensive overview of the various speech-processing tasks and the techniques used to achieve them, while also discussing the current challenges and limitations faced in this field and its potential for future development. The assessment of speech-processing models depends greatly on the calibre of datasets employed. By utilizing standardized datasets, researchers are enabled to objectively gauge the efficacy of varying approaches and identify scopes for advancement. The selection of evaluation metrics plays a critical role in this process, hinging on the task at hand and the desired outcome. Therefore, it is essential that researchers conduct a meticulous appraisal of different metrics to make informed decisions. This paper offers a thorough summary of frequently utilized datasets and metrics across diverse downstream tasks, as presented in Table 5 and, Table 6. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Reference} & \multirow{2}{*}{Task (Metric)} & Pre-Training & \multicolumn{2}{c}{Dataset} \\ \cline{3-6} & & & Dataset (hours) & Training & Test \\ \hline \multirow{4}{*}{BEST-RQ} & \multirow{2}{*}{[78]} & \multirow{2}{*}{ASR} & \multirow{2}{*}{LL (60000h)} & \multirow{2}{*}{LS (960h)} & LS (test) \\ & & & & & LS (test-other) \\ & & & & & LS (dev) \\ \cline{3-6} & & & LL (60000h) & & \\ & & ASR-Multi & & GigaSpeech (10000h) & SUPERB & SUPERB \\ & & & VP (24000h) & & & \\ \hline data2vec & [24] & ASR & LS (960h) & LS (10m, 1h, 100h, 960h) & LS (960h) \\ \hline Discrete BERT & [23] & ASR & LS (960h) & LS (100h) & LS (test) \\ & & & LS (960h) & LS (960h) & LS (test-other) \\ \hline HuBERT & [625] & ASR & LS (960h) & LS (960h) & LS (test) \\ & & & LL (60000h) & LS (960h) & LS (test-other) \\ \hline WavLM & [71] & ASR & LL (60000h) & SUPERB & SUPERB \\ \hline \hline \end{tabular} \end{table} Table 4. 
Summary of _predictive self-supervised_ approaches and proposed models for speech processing with associated metrics and training Data. **ASR**: Automatic Speech Recognition, **PR**: Phoneme Recognition. **PC**: Phoneme Classification, **SR**: Speaker Recognition, **LL**: LibriLight, **LS**: LibriSpeech. ### Automatic speech recognition (ASR) & conversational multi-speaker AST #### 5.1.1. Task Description Automatic speech recognition (ASR) technology enables machines to convert spoken language into text or commands, serving as a cornerstone of human-machine communication and facilitating a wide range of applications such as speech-to-speech translation and information retrieval (Shen et al., 2017). ASR involves multiple intricate steps, starting with the extraction and analysis of acoustic features, including spectral and prosodic features, which are then employed to recognize spoken words. Next, an acoustic model matches the extracted features to phonetic units, while a language model predicts the most probable sequence of words based on the recognized phonetic units. Ultimately, the acoustic and language model outcomes are merged to produce the transcription of spoken words. Deep learning techniques have gained popularity in recent years, allowing for improved accuracy \begin{table} \begin{tabular}{l l l l l l l l l l l l l} \hline \hline Dataset & Language & Langlab (hours) & ASR & PR & PC & SR & SV & SER & IC & TTS & VC & ST & SS \\ \hline \hline TMIT Acoustic-Phonetic Continuous Speech Corpus & English & 5.4 & ✓ & ✓ & ✓ & & & & & & & \\ \hline Lip Reading Sentences 2 (RXS2) & English & & ✓ & & & & & & & & & \\ \hline LibriSpeech (L5) & English & 1000 & ✓ & ✓ & ✓ & ✓ & & & & & \\ \hline GigaSpeech & English & 10000 & ✓ & & & & & & & & & \\ \hline Fleurs & Multilingual & 12 & ✓ & & & & & & & & \\ \hline LibriTTS & English & 585 & & & & & & & ✓ & ✓ \\ \hline L2ARCTC & English & 11.2 & & & & & & & ✓ & ✓ \\ \hline CMUARCTC & English & 20 & & & & & & & ✓ & ✓ \\ \hline Wall Street Journal (WS) & English & & ✓ & ✓ & ✓ & & & & & & \\ \hline VanPourali (WP) & Multilingual & 1800 & ✓ & & & & & & & & \\ \hline BABEL (BBL) & Multilingual & & ✓ & & & & & & & & \\ \hline Common Voice (CV-dataset) & Multilingual & 9253 & ✓ & ✓ & ✓ & & & & & & \\ \hline CSTR VCTK & English & & & & & & & & & & \\ \hline HUB 5 & English & 2000 & ✓ & & & & & & & & \\ \hline CHIME-5 & English & 50.12 & & & & & & & & & \\ \hline TID-LIUM 3 (TED 3) & English & 452 & & & & & & & & & \\ \hline TED-LIUM 2 (TED 2) & English & 118 & & & & & & & & & \\ \hline AISHELL-1 & Mandarin & 520 & ✓ & & & & & & & & \\ \hline AISHELL-3 & Mandarin & 85 & & & & & & ✓ & ✓ \\ \hline AISHELL-4 & Mandarin & 120 & & & & & & ✓ & ✓ & & \\ \hline Arabic Speech Corpus & Arabic & 3.7 & ✓ & ✓ & ✓ & & & & & & \\ \hline Persian Consomant Wood Combination & Persian & - & ✓ & ✓ & ✓ & & & & & & \\ \hline ALFFA & Multilingual & 5.2+18.3 & & & & & & & & & \\ \hline OperaSR-multi & Multilingual & 4.4+265.9 & & & & & & & & & \\ \hline VCTK & English & 44 & & & & & & & & & \\ \hline VoxCeleb1/2 & English & & & & & & ✓ & ✓ & & & \\ \hline Fluent Speech Commands (PSC) & English & 14.7 & & & & & & ✓ & & \\ \hline Emotional Speech Dataset (ISD) & English & 29 & & & & & ✓ & & & \\ \hline Interactive Emotional Dyadic Motion Capture (EMOCAP) & English & 12 & & & & & ✓ & & & \\ \hline Multimodal EmotionLines Dataset ( MEDL) & English & - & & & & & ✓ & & & \\ \hline LineaSepech En-Fr & English/French & - & & & & & & & ✓ \\ \hline CoViST-2 & Multilingual & 2880 & & & & & & 
& ✓ \\ \hline LibriLight (L1) & English & 60000 & ✓ & ✓ & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 5. Comparative analysis of speech processing datasets: This table summarizes the essential features of different speech-processing datasets, including their typical applications in various speech-processing tasks. **ASR**: Automatic Speech Recognition, **PR**: Phoneme Recognition. **PC**: Phoneme Classification, **SR**: Speaker Recognition, **SV**: Speaker Verification, **SER**: Speech Emotion Recognition, **IC**: Intent Classification, **TTS**: Text-to-Speech, **VC**: Voice Conversion, **ST**: Speech Translation, **SS**: Speech Separation in ASR systems [26; 443]. This paper provides an overview of the key components involved in ASR and highlights the role of deep learning techniques in enhancing the technology's accuracy. Most speech recognition systems that use deep learning aim to simplify the processing pipeline by training a single model to directly map speech signals to their corresponding text transcriptions. Unlike traditional ASR systems that require multiple components to extract and model features, such as HMMs and GMMs, end-to-end models do not rely on hand-designed components [19; 305]. Instead, end-to-end ASR systems use DNNs to learn acoustic and linguistic representations directly from the input speech signals [305]. One popular type of end-to-end model is the encoder-decoder model with attention. This model uses an encoder network to map input audio signals to hidden representations, and a decoder network to generate text transcriptions from the hidden representations. During the decoding process, the attention mechanism enables the decoder to selectively focus on different parts of the input signal [305]. End-to-end ASR models can be trained using various techniques such as CTC [245], which is used to train models without explicit alignment between the input and output sequences, and RNNs, which are commonly used to model temporal dependencies in sequential data such as speech signals. Transfer learning-based approaches can also improve end-to-end ASR performance by leveraging pre-trained models or features [106; 327; 489]. While end-to-end ASR models have shown promising results in various applications, there is still room for improvement to achieve human-level performance [106; 137; 236; 237; 625]. Nonetheless, deep learning-based end-to-end ASR architecture offers a promising and efficient approach to speech recognition that can simplify the processing pipeline and improve recognition accuracy. #### 5.1.2 Dataset The development and evaluation of ASR systems are heavily dependent on the availability of large datasets. As a result, ASR is an active area of research, with numerous datasets used for this purpose. In this context, several popular datasets have gained prominence for use in ASR systems. 
\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Tasks & Metric & Description & Score range & Evaluation dataset \\ \hline Automatic speech recognition & WER & Word Error Rate & 0-1 & TIMIT \\ & CER & Character Error Rate & 0-1 & LibriSpeech \\ \hline Phoneme recognition & Accuracy & Classification accuracy & 0-1 & TIMIT \\ \hline Phoneme classification & F1-score & Harmonic mean of precision and recall & 0-1 & TIMIT \\ \hline Speaker recognition & EER & Equal Error Rate & 0-1 & VoxCeleb1 \\ \hline Speaker verification & FAR/FRR & False Acceptance Rate / False Rejection Rate & 0-1 & VoxCeleb1 \\ \hline Speech emotion recognition & Accuracy & Classification accuracy & 0-1 & IEMOCAP, ESD \\ \hline Intent classification & F1-score & Harmonic mean of precision and recall & 0-1 & ATIS, SNIPS \\ \hline Text-to-speech & MOS & Mean Opinion Score & 1-5 & LJSpeech, LibriTTS \\ \hline Voice conversion & MOS & Mean Opinion Score & 1-5 & VCC 2016 \\ \hline Speech translation & BLEU & Bilingual Evaluation Understudy & 0-1 & MuST-C \\ \hline Speech separation & SI-SDRi & Scale-Invariant Signal-to-Distortion Ratio improvement (dB) & -20-30 & WSJ0-2mix \\ \hline Speech enhancement & PESQ & Perceptual Evaluation of Speech Quality & -0.5-4.5 & NOIZEUS \\ \hline Voice activity detection & F1-score & Harmonic mean of precision and recall & 0-1 & QUT-NOISE \\ \hline \end{tabular} \end{table} Table 6. Comprehensive Evaluation Metrics for Speech Processing Tasks. This table provides a comprehensive overview of the evaluation metrics used to assess the performance of speech-based systems across various tasks such as ASR, speaker verification, and TTS. The table highlights the specific metrics employed for each task, along with the score range and commonly used datasets.

* Common Voice: Mozilla's Common Voice project [17] is dedicated to producing an accessible, unrestricted collection of human speech for the purpose of training speech recognition systems. This ever-expanding dataset features contributions from more than \(9,000\) speakers spanning \(60\) different languages.
* LibriSpeech: LibriSpeech [410] is a corpus of approximately 1,000 hours of read English speech created from audiobooks in the public domain. It is widely used for speech recognition research and is notable for its high audio quality and clean transcriptions.
* VoxCeleb: VoxCeleb [92] is a large-scale dataset containing over 1 million short audio clips of celebrities speaking, which can be used for speaker recognition and verification research. It includes a diverse range of speakers from different backgrounds and professions.
* TIMIT: The TIMIT corpus [153] is a widely used speech dataset consisting of recordings of 630 speakers representing eight major dialects of American English, each reading ten phonetically rich sentences. It has been used as a benchmark for speech recognition research since its creation in 1986.
* CHiME-5: The CHiME-5 dataset [33] is a collection of recordings made in a domestic environment to simulate a real-world speech recognition scenario. It includes 6.5 hours of audio from multiple microphone arrays and is designed to test the performance of ASR systems in noisy and reverberant environments.

Other notable datasets include Google's Speech Commands Dataset [589], the Wall Street Journal dataset4, and TED-LIUM [468]. Footnote 4: [https://www.ldc.upenn.edu/](https://www.ldc.upenn.edu/)

#### 5.1.3 Models

The use of RNN-based architectures in speech recognition has many advantages over traditional acoustic models.
One of the most significant benefits is their ability to capture long-term temporal dependencies [244] in speech data, enabling them to model the dynamic nature of speech signals. Additionally, RNNs can effectively process variable-length audio sequences, which is essential in speech recognition tasks where the duration of spoken words and phrases can vary widely. RNN-based models can efficiently identify and segment phonemes, detect and transcribe spoken words, and can be trained end-to-end, eliminating the need for intermediate steps. These features make RNN-based models particularly useful in real-time applications, such as speech recognition on mobile devices or in smart homes [117; 178], where low latency and high accuracy are crucial. In the past, RNNs were the go-to model for ASR. However, their limited ability to handle long-range dependencies prompted the adoption of the Transformer architecture. For example, in 2019, Google's Speech-to-Text API transitioned to a Transformer-based architecture that surpassed the previous RNN-based model, especially in noisy environments and for longer sentences, as reported in [651]. Additionally, Facebook AI Research introduced wav2vec 2.0, a self-supervised learning approach that leverages a Transformer-based architecture to learn speech representations from unlabeled audio. wav2vec 2.0 has significantly outperformed previous RNN-based models and achieved state-of-the-art results on several benchmark datasets. The Transformer was first proposed for the ASR task in [116], where the authors add CNN layers before feeding the preprocessed speech features to the transformer input. By incorporating more CNN layers, it becomes feasible to diminish the gap between the lengths of the input and output sequences, given that the number of frames in the audio exceeds the number of tokens in the text; this has a favorable impact on the training process. The change to the original architecture is minimal, and the model achieves a competitive word error rate (WER) of 10.9% on the Wall Street Journal (WSJ) speech recognition dataset (Table 7). Despite its numerous advantages, the Transformer in its pristine state has several issues when applied to ASR. RNNs, with their overall training speed (i.e., faster convergence) and better WER due to effective joint training and decoding methods, were still a strong option. The authors in [116] propose the Speech Transformer, which has the advantage of faster iteration time, but slower convergence compared to RNN-based ASR. However, integrating the Speech Transformer with a naive language model (LM) is challenging. To address this issue, various improvements to the Speech Transformer architecture have been proposed in recent years. For example, [245] suggests incorporating the Connectionist Temporal Classification (CTC) loss into the Speech Transformer. CTC is a popular technique used in speech recognition to align input and output sequences of varying lengths with one-to-many or many-to-one mappings. It introduces a blank symbol representing gaps between output symbols and computes the loss by summing the probabilities of all alignment paths that map to the target sequence. The loss encourages the model to assign high probabilities to the correct output symbols and low probabilities to incorrect ones, allowing the model to predict output sequences of varying lengths. The CTC loss is commonly used with RNNs such as LSTMs and GRUs, which are well-suited for sequential data.
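As an illustration of how this objective is used in practice, the snippet below computes a CTC loss with PyTorch's built-in implementation (`torch.nn.CTCLoss`); the tensor sizes, vocabulary size, and the use of index 0 for the blank symbol are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

T, N, C = 50, 4, 30   # input frames, batch size, vocabulary size (incl. blank)
S = 12                # maximum target length

# log-probabilities from an acoustic model, shape (T, N, C)
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=-1)

# integer targets; index 0 is reserved for the blank and never used as a label
targets = torch.randint(1, C, (N, S), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(5, S + 1, (N,), dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # gradients flow back into the acoustic model
```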
CTC loss is a powerful tool for training neural networks to perform sequence-to-sequence tasks where the input and output sequences have varying lengths and mappings between them are not one-to-one. Various other improvements have also been proposed to enhance the performance of Speech Transformer architecture and integrate it with the naive language model, as the use of the transformer directly for ASR has not been effective in exploiting the correlation among the speech frames. The sequence order of speech, which the recurrent processing of input features can represent, is an important distinction. The degradation in performance for long sentences is reported using absolute positional embedding (AED) [85]. The problems associated with long sequences can become more acute for transformer [672]. To address this issue, a transition was made from absolute positional encoding to relative positional embeddings [672]. Whereas authors in [537] replace positional embeddings with pooling layers. In a considerably different approach, the authors in [383] propose a novel way of combining positional embedding with speech features by replacing positional encoding with trainable convolution layers. This update further improves the stability of optimization for large-scale learning of transformer networks. The above works confirmed the superiority of their techniques against sinusoidal positional encoding. In 2016, Baidu introduced a hybrid ASR model called Deep Speech 2 [13] that uses both RNNs and Transformers. The model also uses CNNs to extract features from the audio signal, followed \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Architecture} & Extra & \multirow{2}{*}{WER\(\downarrow\)} & \multirow{2}{*}{WER\(\downarrow\)} & \multirow{2}{*}{Model} & \multirow{2}{*}{Architecture} & Extra & \multirow{2}{*}{WER\(\approx 1\)} \\ & & Training Data & & & & & & Training Data \\ \hline \multicolumn{8}{l}{LibriSpeech test} \\ \hline Conformer * Wire2we2.26 [45] & Conformer * Wire2we2.0 & Y & 1.4 & 2.6 & wire2we2.26 [20] & Transformer + CNN & Y & **3.3** \\ w2k-BERT XQL [79] & CNN-Transformer & Y & 1.4 & 2.5 & w-new2we2we [20] & Transformer + CNN & Y & 11.6 \\ Speechformer (HBi)[58] & Conformer & Y & 1.7 & 3.3 & LSTM + Transformer & Big [45] & LSTM & N & 14.5 \\ Speechformer (HBi)[58] & Conformer & N & 2.6 & 4.0 & & Common Voice & & \\ ConformerHBi +SpeSpeformer [114] & LSTM-CNN & Y & 1.7 & 3.4 & Speechformer (HBi) [58] & Conformer & N & 10.8 \\ Conformer (HBi)[58] & Conformer & N & 1.9 & 4.1 & Whigper [445] & & N & 9.5 \\ ConductNet [100] & Conformer + wire2we2.0 & N & 1.9 & 3.4 & & W3P & real92 & & \\ Supersformer [255] & Conformer & N & 2.47 & 5.97 & Speechformer (1080) [56] & Conformer & N & 1.3 \\ LSTM Transformer [634] & LSTM & N & 2.23 & 5.6 & ultra-chain [633] & TDNN & N & 2.32 \\ Transformer Transformer [331] & Transformer & N & 2.0 & 4.2 & & GigSpeech & & \\ Whigper [445] & & N & 2.7 (ZS) & 5.6 (ZS) & Conformer/Transformer-AID [41] & Conformer & N & 10.80 \\ \hline \hline \end{tabular} \end{table} Table 7: Table summarizing the performance of different ASR models in terms of WER% on five different datasets (LibriSpeech test, LibriSpeech clean, TIMIT, Common Voice, WS) eval92, and GigaSpeech) also highlighting the use of extra data during training. ZS stands for Zero-Shot Performance. by a stack of RNNs to model the temporal dependencies and a Transformer-based decoder to generate the output sequence. 
This approach achieved state-of-the-art results on several benchmark datasets such as LibriSpeech, VoxForge, WSJeval92 etc. The transition of ASR models from RNNs to Transformers has significantly improved performance, especially for long sentences and noisy environments. The Transformer architecture has been widely adopted by different companies and research groups for their ASR models, and it is expected that more organizations will follow this trend in the upcoming years. One of the advanced speech models that leverage this architecture is the Universal Speech Model (USM) [656] developed by Google, which has been trained on over 12 million hours of speech and 28 billion sentences of text in more than 300 languages. With its 2 billion parameters, USM can recognize speech in both common languages like English and Mandarin and less-common languages. Other popular acoustic models for speech recognition include Quartznet [273], Citrinet [365], and Conformer [162]. These models can be chosen and switched based on the specific use case and performance requirements of the speech recognition pipeline. For example, Conformer-based acoustic models are preferred for addressing robust ASR, as shown in a recent study. Another study found that Conformer-15 is more effective in handling real-world data and can produce up to 43% fewer errors on noisy data than other popular ASR models. Additionally, fine-tuning pre-trained models such as BERT [109] and GPT [444] has been explored for ASR tasks, leading to state-of-the-art performance on benchmark datasets like LibriSpeech (refer to Table 7). An open-source toolkit called Vosk6 provides pre-trained models for multiple languages optimized for real-time and efficient performance, making it suitable for applications that require such performance. Footnote 5: [https://www.assemblyai.com/blog/conformer-1/](https://www.assemblyai.com/blog/conformer-1/) Footnote 6: [https://alphacephei.com/vosk/lm](https://alphacephei.com/vosk/lm) The field of speech recognition has made significant progress by adopting unsupervised pre-training techniques, such as those utilized by Wav2Vec 2.0 [26]. Another recent advancement in automatic speech recognition (ASR) is the whisper model, which has achieved human-level \begin{table} \begin{tabular}{l|c c} \hline \hline Dataset & wav2vec2.0 Large & Whisper Large \\ \hline Common Voice & 29.9 & **9.0** \\ Fleurs En & 14.6 & **4.4** \\ Tedlium & 10.5 & **4.0** \\ CHiME6 & 65.8 & **25.5** \\ VoxPopuli En & 17.9 & **7.3** \\ Switchboard & 28.3 & **13.8** \\ CallHome & 34.8 & **17.6** \\ LibriSpeech Clean & 2.7 & **2.7** \\ LibriSpeech Other & 6.2 & **5.2** \\ \hline \hline \end{tabular} \end{table} Table 8: Comparison of performance between wav2vec2.0 Large and Whisper on different datasets. The zero-shot Whisper model consistently outperforms wav2vec2.0 Large on several datasets, indicating significant performance differences. accuracy when transcribing the LibriSpeech dataset. These two cutting-edge frameworks, Wav2Vec 2.0 and whisper, currently represent the state-of-the-art in ASR. The whisper model is trained on an extensive supervised dataset, including over 680,000 hours of audio data collected from the web, which has made it more resilient to various accents, background noise, and technical jargon. The whisper model is also capable of transcribing and translating audio in multiple languages, making it a versatile tool. 
OpenAI has released inference models and code, laying the groundwork for the development of practical applications based on the whisper model. In contrast to its predecessor, Wav2Vec 2.0 is a self-supervised learning framework that trains models on unlabeled audio data before fine-tuning them on specific datasets. It uses a contrastive predictive coding (CPC) loss function to learn speech representations directly from raw audio data, requiring less labeled data. The model's performance has been impressive, achieving state-of-the-art results on several ASR benchmarks. These advances in unsupervised pre-training techniques and the development of novel ASR frameworks like Whisper and Wav2Vec 2.0 have greatly improved the field of speech recognition, paving the way for new real-world applications. In summary, the Table 8 highlights the varying effectiveness of wav2vec2.0 large and whisper models across different datasets. ### Neural Speech Synthesis #### 5.2.1. Task Description Neural speech synthesis is a technology that utilizes artificial intelligence and deep learning techniques to create speech from text or other inputs. Its applications are widespread, including in healthcare, where it can be used to develop assistive technologies for those who are unable to communicate due to neurological impairments. To generate speech, deep neural networks like CNNs, RNNs, transformers, and diffusion models are trained using phonemes and the mel spectrum. The process involves several components, such as text analysis, acoustic models, and vocoders, as shown in Figure 14. Acoustic models convert linguistic features into acoustic features, which are then used by the vocoder to synthesize the final speech signal. Various architectures, including neural vocoders based on GANs like HiFi-GAN (Zhu et al., 2017), are used by the vocoder to generate speech. Neural speech synthesis also enables manipulation of voice, pitch, and speed of speech signals using frameworks such as Fastspeech2 (Fastspeech2, 2018) and NANSY/NANSY++ (Fastspeech et al., 2018; Dosovitskiy et al., 2018). These frameworks use information bottleneck to disentangle analysis features for controllable synthesis. The research in neural speech synthesis can be classified into two prominent approaches: autoregressive and non-autoregressive models. Autoregressive models generate speech one element at a time, sequentially, while non-autoregressive models generate all the elements simultaneously, in parallel. Table 9 outlines the different architecture proposed under each category. The evaluation of synthesized speech is of paramount importance for assessing its quality and fidelity. It serves as a means to gauge the effectiveness of different speech synthesis techniques, algorithms, and parameterization methods. In this regard, the application of statistical tests has emerged as a valuable approach to objectively measure the similarity between synthesized speech and natural speech (Wav et al., 2018). These tests complement the traditional Mean Opinion Score (MOS) evaluations and provide quantitative insights into the performance of speech synthesis systems. Additionally, widely used objective metrics such as Mel Cepstral Distortion (MCD) and Word Error Rate (WER) contribute to the comprehensive evaluation of synthesized speech, enabling researchers and practitioners to identify areas for improvement and refine the synthesis process. 
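As an example of such an objective metric, MCD is typically computed from mel-cepstral coefficient sequences extracted from the synthesized and reference utterances after time alignment (e.g., via dynamic time warping). The sketch below (NumPy) implements the standard per-frame computation, assuming the two cepstral sequences are already aligned and skipping the 0th (energy) coefficient.

```python
import numpy as np

def mel_cepstral_distortion(mcc_ref: np.ndarray, mcc_syn: np.ndarray) -> float:
    """Average MCD in dB between two aligned mel-cepstral sequences of
    shape (num_frames, num_coeffs); the 0th coefficient is skipped."""
    diff = mcc_ref[:, 1:] - mcc_syn[:, 1:]
    per_frame = (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return float(np.mean(per_frame))

# toy usage with random, already-aligned 25-dimensional cepstra
ref = np.random.randn(200, 25)
syn = ref + 0.05 * np.random.randn(200, 25)
print(f"MCD: {mel_cepstral_distortion(ref, syn):.2f} dB")
```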
By employing these objective metrics and statistical tests, the evaluation of synthesized speech becomes a rigorous and systematic process, enhancing the overall quality and fidelity of speech synthesis techniques. #### 5.2.2. Datasets The field of neural speech synthesis is rapidly advancing and relies heavily on high-quality datasets for effective training and evaluation of models. One of the most frequently utilized datasets in this field is the LJ Speech [217], which features about 24 hours of recorded speech from a single female speaker reading passages from the public domain LJ Speech Corpus. This dataset is free and has corresponding transcripts, making it an excellent choice for text-to-speech synthesis tasks. Moreover, it has been used as a benchmark for numerous neural speech synthesis models, including Tacotron [583], WaveNet [402], and DeepVoice [18, 156]. Apart from the LJ Speech dataset, several other datasets are widely used in neural speech synthesis research. The CMU Arctic [267] and L2 Arctic [661] datasets contain recordings of English speakers with diverse accents reading passages designed to capture various phonetic and prosodic aspects of speech. The LibriSpeech [410], VoxCeleb [92], TIMIT Acoustic-Phonetic Continuous Speech Corpus [153], and Common Voice Dataset [17] are other valuable datasets that offer ample opportunities for training and evaluating text-to-speech synthesis models. #### 5.2.3. Models Neural network-based text-to-speech (TTS) systems have been proposed using neural networks as the basis for speech synthesis, particularly with the emergence of deep learning. In Statistical Parametric Speech Synthesis (SPSS), early neural models replaced HMMs for acoustic modeling. The first modern neural TTS model, WaveNet [402], generated waveforms directly from linguistic features. Other models, such as DeepVoice 1/2 [18, 156], used neural network-based models to follow the three components of statistical parametric synthesis. End-to-end models, including Tacotron 1 & 2 [491, 583], Deep Voice 3, and FastSpeech 1 & 2 [458, 460], simplified text analysis modules and utilized mel-spectrograms to simplify acoustic features with character/phoneme sequences as input. Fully end-to-end TTS systems, such as ClariNet [425], FastSpeech 2 [458], and EATS [114], are capable of directly generating waveforms from text inputs. Compared to concatenative synthesis 7 and statistical parametric synthesis, neural network-based speech synthesis offers several advantages including superior voice quality, naturalness, intelligibility, and reduced reliance on human preprocessing and feature development. Therefore, end-to-end TTS systems represent a promising direction for advancing the field of speech synthesis. 
\begin{table} \begin{tabular}{c c c} \hline \hline Method & Text-To-Speech & Vocoder \\ \hline \multirow{4}{*}{Autoregressive Model} & Tacotron [583], Tacotron2 [691], Deep Voice 1,2,3 & WaveNet [402], WaveRNN [232], WaveGAN [420] \\ & Transformer-TTS [309], DuTA [627], Flowtron [548] & LPCNet [546], GAN-TTS [38], MultiBand-WaveRNN [627] \\ & RouTrans [310], DeviceTTS [211], Wave-Tacotron [590] & Improved LPCNet [545], Bunched LPCNet2 [414] \\ & Apple TTS [9] & \\ \hline \multirow{8}{*}{Non-Autoregressive Model} & ParaNet [421], FastSpeech [460], DIR-T [317], EATS [115] & \\ & FastSpeech2 [458], FastPitch [284], Glow-TTS [250] & \\ & Flow-TTS [376], SpeedySpeech [544] & Parallel WaveNet [401], WaveGlow [435], Parallel WaveGAN [608] \\ & Parallel Tacotron [126], EVAE-TTS [296] & MelGAN [275], MultiBand-MelGAN [612], VocGAN [614], WaveGrad [67] \\ & Parallel Tacotron 2 [126], Grad-TTS [431], VITS [251] & DiffWave [269], HiFi-GAN [268], StyleMelGAN [386], Fre-GAN [254] \\ & RAD-TTS [493], WaveGrad2 [69], DelightfulTTS [342] & SIFTNet [241], Avocodo [32] \\ & PortaSpeech [459], DiffGAN-TTS [340], JETS [318] & \\ & WavThruVec [502], FastDiff [204], CLONE [343] & \\ \hline \hline \end{tabular} \end{table} Table 9. Exploring the Landscape of TTS and Vocoder Architectures: Autoregressive and Non-Autoregressive Models.

Transformer models have become increasingly popular for generating mel-spectrograms in TTS systems [309; 458]. These models are preferred over RNN structures in end-to-end TTS systems because they improve training and inference efficiency [309; 460]. In a study conducted by Li et al. [309], a multi-head attention mechanism replaced both the RNN structures and the vanilla attention mechanism of Tacotron 2 [491]. This approach addressed the long-distance dependency problem and improved parallelization. Phoneme sequences were used as input to generate the mel-spectrogram, and speech samples were synthesized using WaveNet as a vocoder. Results showed that the transformer-based TTS approach was 4.25 times faster than Tacotron 2 and achieved similar MOS (Mean Opinion Score) performance. Aside from the work mentioned above, there are other studies based on the Tacotron architecture. For example, Skerry-Ryan et al. [503] and Wang et al. [584] proposed Tacotron-based models for prosody control. These models use a separate encoder to compute style information from reference audio that is not provided in the text. Another noteworthy work is the Global Style Token (GST) approach [584], which improves on style embeddings by adding an attention layer to capture a wider range of acoustic styles. The FastSpeech [460] algorithm aims to improve the inference speed of TTS systems. To achieve this, it utilizes a feedforward network based on 1D convolutions and the self-attention mechanism of transformers to generate mel-spectrograms in parallel. Additionally, it solves the issue of the sequence-length mismatch between the mel-spectrogram sequence and its corresponding phoneme sequence by employing a length regulator based on a duration predictor (a minimal sketch of this mechanism is given below). The FastSpeech model was evaluated on the LJSpeech dataset and demonstrated significantly faster mel-spectrogram generation than the autoregressive transformer model while maintaining comparable performance. FastPitch builds on FastSpeech by conditioning the TTS model on the fundamental frequency or pitch contour, which improves convergence and eliminates the need for knowledge distillation of mel-spectrogram targets in FastSpeech.
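To illustrate the length-regulation idea used by FastSpeech-style models, the sketch below (PyTorch; the shapes, the integer durations, and the absence of the duration-predictor network are simplifications) expands a sequence of phoneme-level hidden states according to predicted per-phoneme durations so that its length matches the target mel-spectrogram.

```python
import torch

def length_regulator(hidden: torch.Tensor, durations: torch.Tensor) -> torch.Tensor:
    """Expand phoneme-level states to frame level.
    hidden:    (num_phonemes, hidden_dim) encoder outputs
    durations: (num_phonemes,) predicted number of mel frames per phoneme
    returns:   (sum(durations), hidden_dim) frame-level states
    """
    return torch.repeat_interleave(hidden, durations, dim=0)

# toy usage: 5 phonemes with 8-dimensional states; in a real system the
# durations would come from a separately trained duration predictor
states = torch.randn(5, 8)
durs = torch.tensor([3, 1, 4, 2, 5])
print(length_regulator(states, durs).shape)  # torch.Size([15, 8])
```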
FastSpeech 2 [458] represents a transformer-based Text-to-Speech (TTS) system that addresses the limitations of its predecessor, FastSpeech, while effectively handling the challenging one-to-many mapping problem in TTS. It introduces the utilization of a broader range of speech information, including energy, pitch, and more accurate duration, as conditional inputs. Furthermore, FastSpeech 2 trains the system directly on a ground-truth target, enhancing the quality of the synthesized speech. Additionally, a simplified variant called FastSpeech 2s has been proposed in [61], eliminating the requirement for intermediate Mel-spectrograms and enabling the direct generation of speech from text during inference. Experimental evaluations conducted on the LJSpeech dataset demonstrated that both FastSpeech 2 and FastSpeech 2s offer a streamlined training pipeline, resulting in fast, robust, and controllable speech synthesis compared to FastSpeech. Furthermore, in addition to the transformer-based TTS systems like FastSpeech 2 and FastSpeech 2s, researchers have also been exploring the potential of Variational Autoencoder (VAE) based TTS models [163; 251; 296; 196]. These models can learn a latent representation of speech signals from textual input and may be able to produce high-quality speech with less training data and greater control over the generated speech characteristics. For example, authors in [251] used a conditional variational autoencoder (CVAE) to model the acoustic features of speech and an adversarial loss to improve the naturalness of the generated speech. This approach involved conditioning the CVAE on the linguistic features of the input text and using an adversarial loss to match the distribution of the generated speech to that of natural speech. Results from this method have shown promise in generating speech that exhibits natural prosody and intonation. WaveGrad [67] and DiffWave [269] have emerged as significant contributions in the field, employing diffusion models to generate raw waveforms with exceptional performance. In contrast, GradTTS [431] and DiffTTS [218] utilize diffusion models to generate mel features rather than raw waveforms. Addressing the intricate challenge of one-shot many-to-many voice conversion, DiffVC [432] introduces a novel solver based on stochastic differential equations. Expanding the scope of sound generation to include singing voice synthesis, DiffSinger [334] introduces a shallow diffusion mechanism. Additionally, Diffsound [611] proposes a sound generation framework that incorporates text conditioning and employs a discrete diffusion model, effectively resolving concerns related to unidirectional bias and accumulated errors. EdiTTS [525] introduces a diffusion-based audio model that is specifically tailored for the text-to-speech task. Its innovative approach involves the utilization of the denoising reversal process to incorporate desired edits through coarse perturbations in the prior space. Similarly, Guided-TTS [249] and Guided-TTS2 [257] stand as early text-to-speech models that have effectively harnessed diffusion models for sound generation. Furthermore, Levkovitch et al. [301] have made notable contributions by combining a voice diffusion model with a spectrogram domain conditioning technique. This combined approach facilitates text-to-speech synthesis, even with previously unseen voices during the training phase, thereby enhancing the model's versatility and capabilities. 
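The diffusion-based acoustic models and vocoders surveyed above share the same basic training recipe: corrupt the clean target with Gaussian noise at a random timestep and train a network to predict that noise. The sketch below illustrates this generic objective in PyTorch; the `model(x_t, t, cond)` signature and the precomputed `alphas_cumprod` noise schedule are assumptions for illustration, not the interface of any specific system such as WaveGrad or DiffWave.

```python
import torch

def diffusion_training_step(model, x0, cond, alphas_cumprod):
    """One denoising-diffusion training step in the spirit of diffusion TTS/vocoder
    models: sample a timestep, corrupt the clean signal x0 with Gaussian noise,
    and train the network to predict that noise given a conditioner (e.g. a mel
    spectrogram or text encoding)."""
    alphas_cumprod = alphas_cumprod.to(x0.device)
    batch = x0.size(0)
    t = torch.randint(0, len(alphas_cumprod), (batch,), device=x0.device)
    a_bar = alphas_cumprod[t].view(batch, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward (noising) process
    pred_noise = model(x_t, t, cond)                        # network predicts the injected noise
    return torch.nn.functional.mse_loss(pred_noise, noise)
```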
InferGrad [74] enhances the diffusion-based text-to-speech model by incorporating the inference process during training, particularly when a limited number of inference steps are available. This improvement results in faster and higher-quality sampling. SpecGrad [264] introduces adaptations to the time-varying spectral envelope of diffusion noise based on conditioning log-mel spectrograms, drawing inspiration from signal processing techniques. ItoTTS [597] presents a unified framework that combines text-to-speech and vocoder models, utilizing linear SDEs (Stochastic Differential Equations) as its fundamental principle. ProDiff [206] proposes a progressive and efficient diffusion model specifically designed for generating high-quality text-to-speech synthesis. Unlike traditional diffusion models that require a large number of iterations, ProDiff parameterizes the model by predicting clean data and incorporates a teacher-synthesized mel-spectrogram as a target to minimize data discrepancies and improve the sharpness of predictions. Finally, BinauralGrad [299] explores the application of diffusion models in binaural audio synthesis, aiming to generate binaural audio from monaural audio sources. It accomplishes this through a two-stage diffusion-based framework. #### 5.2.4. Alignment Improving the alignment of text and speech in TTS architectures has been the focus of recent research [22; 29; 35; 64; 225; 316; 375; 377; 431; 459; 490; 493; 646]. Traditional TTS models require external aligners to provide attention alignments of phoneme-to-frame sequences, which can be complex and inefficient. Although autoregressive TTS models use an attention mechanism to learn these alignments online, these alignments tend to be brittle and often fail to generalize to long utterances and out-of-domain text, resulting in missing or repeating words.

Figure 14. Neural Text-to-speech (TTS) pipeline: a diagram showing the main modules of a typical TTS system. The system takes text input and processes it through various stages to generate speech output. The text analysis module tokenizes the input text and generates linguistic features such as phonemes and prosody. The acoustic model module then converts these linguistic features into acoustic features, such as mel spectrograms, using a neural network. Finally, the waveform generation module synthesizes the speech waveform from the acoustic features using another neural network.

In their study [121], the authors presented a novel text encoder network that includes an additional objective function to explicitly align text and speech encodings. The text encoder architecture is straightforward, consisting of an embedding layer, followed by two bidirectional LSTM layers that maintain the input's resolution. The study utilized the same subword segmentation for the input text as for the ASR output targets. While RNN models with soft attention mechanisms have proven to be highly effective in various tasks, including speech synthesis, their use in online settings results in quadratic time complexity, because the entire input sequence is traversed to generate each element of the output sequence. In [447], the authors proposed an end-to-end differentiable method for learning monotonic alignments, enabling the computation of attention in linear time. Several further enhancements, such as the approach in [79], have been proposed in recent years to improve alignment in TTS models.
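As a rough illustration of the text encoder described in [121] above (an embedding layer followed by two bidirectional LSTM layers that preserve the input resolution), a PyTorch sketch could look as follows; the layer sizes are illustrative assumptions rather than values reported in the original study.

```python
import torch

class AlignedTextEncoder(torch.nn.Module):
    """Sketch of a simple text encoder: an embedding layer followed by two
    bidirectional LSTM layers that keep the input resolution, producing one
    encoding per subword token (sizes are illustrative)."""
    def __init__(self, vocab_size, embed_dim=256, hidden=256):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, embed_dim)
        self.bilstm = torch.nn.LSTM(embed_dim, hidden, num_layers=2,
                                    bidirectional=True, batch_first=True)

    def forward(self, token_ids):            # token_ids: [batch, n_tokens]
        x = self.embed(token_ids)
        out, _ = self.bilstm(x)              # [batch, n_tokens, 2 * hidden]
        return out
```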
Additionally, in [21], the authors introduced a generic alignment learning framework that can be easily extended to various neural TTS models. The use of normalizing flows has been introduced to address output diversity issues in parallel TTS architectures. This technique is utilized to model the duration of speech, as evidenced by studies conducted in [250; 493; 377]. One such flow-based generative model is Glow-TTS [250], developed specifically for parallel TTS without the need for an external aligner. The model employs the generic Glow architecture previously used in computer vision and vocoder models to produce mel-spectrograms from text inputs, which are then converted to speech audio. Glow-TTS has demonstrated superior synthesis speed over the autoregressive model, Tacotron 2, while maintaining comparable speech quality. Recently, a new TTS model called EfficientTTS [377] has been introduced. This model outperforms previous models such as Tacotron 2 and Glow-TTS in terms of speech quality, training efficiency, and synthesis speed. The EfficientTTS model uses a multi-head attention mechanism to align input text and speech encodings, enabling it to generate high-quality speech with fewer parameters and faster synthesis speed. Overall, the introduction of normalizing flows and the development of models such as Glow-TTS and EfficientTTS have significantly improved the quality and efficiency of TTS systems.

Figure 15: The architecture of the Generative Spoken Language Model GSLM introduced by Meta in [281]. The GSLM model operates through a three-part architecture. Firstly, the encoder takes the speech waveform and transforms it into distinct units, represented as S2u. Secondly, the decoder reverses this mapping by converting the units back to the original waveform, represented as u2S. Finally, the language model is unit-based and captures the distribution of unit sequences, which can be viewed as a form of pseudo-text.

#### 5.2.5. Speech Resynthesis Speech resynthesis is the process of generating speech from a given input signal. The input signal can be in various forms, such as a digital recording, text, or other types of data. The aim of speech resynthesis is to create an output that closely resembles the original signal in terms of sound quality, prosody, and other acoustic characteristics. Speech resynthesis is an important research area with various applications, including speech enhancement [194, 363, 526] and voice conversion [362].
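As the next paragraphs discuss, recent resynthesis systems such as GSLM first map speech into discrete units discovered without supervision (the S2u stage in Figure 15). A minimal sketch of such a speech-to-unit step is shown below; it assumes frame-level features from a pretrained self-supervised encoder (for example a HuBERT-style model, not shown) and uses plain k-means clustering, which is one common choice rather than the exact procedure of any specific system.

```python
import torch
from sklearn.cluster import KMeans

def learn_unit_inventory(feats: torch.Tensor, n_units: int = 100) -> KMeans:
    """Fit a k-means codebook over frame-level self-supervised features.
    feats: [n_frames, dim] tensor produced by a pretrained speech encoder."""
    km = KMeans(n_clusters=n_units, n_init=10, random_state=0)
    km.fit(feats.numpy())
    return km

def speech_to_units(feats: torch.Tensor, km: KMeans):
    """Quantize each frame to its nearest cluster id and collapse consecutive
    repeats, yielding a pseudo-text unit sequence."""
    ids = km.predict(feats.numpy())
    deduped = [int(ids[0])] + [int(u) for prev, u in zip(ids, ids[1:]) if u != prev]
    return deduped
```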
Recent advancements in speech resynthesis have revolutionized the field by incorporating self-supervised discrete representations to generate disentangled representations of speech content, prosodic information, and speaker identity. These techniques enable the generation of speech in a controlled and precise manner, as seen in [281, 437, 495]. The objective is to generate high-quality speech that maintains or degrades acoustic cues, such as phonotactics, syllabic rhythm, or intonation, from natural speech recordings. Such discrete representations underpin the GSLM [281] architecture for acoustic modeling, speech recognition, and synthesis, as outlined in Figure 15. GSLM comprises a discrete speech encoder, a generative language model, and a speech decoder, all trained without supervision. It is the only prior work addressing the generative aspect of speech pre-training, which builds a text-free language model using discovered units. #### 5.2.6. Voice Conversion Voice conversion modifies a speaker's voice in a given audio sample so that it sounds like another individual while preserving the linguistic content. TTS and voice conversion share a common objective of generating natural speech. While models based on RNNs and CNNs have been successfully applied to voice conversion, the use of the transformer has shown promising results. The Voice Transformer Network (VTN) [210] is a seq2seq voice conversion (VC) model based on the transformer architecture with TTS pre-training. Seq2seq VC models are attractive as they can convert prosody, and the VTN is a novel approach in this field that has been proven effective in converting speech from a source to a target without changing the linguistic content. ASR- and TTS-based voice conversion is a promising approach to voice conversion [532]. It involves using an ASR model to transcribe the source speech into a linguistic representation and then using a TTS model to synthesize the target speech with the desired voice characteristics [430]. However, this approach overlooks the modeling of prosody, which plays an important role in speech naturalness and conversion similarity. To address this issue, researchers have proposed to directly predict prosody from the linguistic representation in a target-speaker-dependent manner [649]. Other researchers have explored using a mix of ASR and TTS features to improve the quality of voice conversion [647; 665; 209; 664]. CycleGAN [238; 239; 240], VAE [82; 235; 595], and VAE combined with a generative adversarial network [191] are other popular approaches for non-parallel voice conversion. CycleGAN-VC [238] uses a cycle-consistent adversarial network to convert the source voice to the target voice and can generate high-quality speech without any extra data, modules, or alignment procedure. Several improvements and modifications have also been proposed in recent years [191; 239; 240]. VAE-based voice conversion is a promising approach that can generate high-quality speech with a small amount of training data [82; 235; 595].
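The cycle-consistency idea behind CycleGAN-based voice conversion can be summarized in a few lines: converting source features to the target domain and back should reconstruct the original input, which removes the need for parallel data. The sketch below shows this reconstruction term only (the adversarial and identity losses of the full objective are omitted), and the generator interfaces are hypothetical.

```python
import torch

def cycle_consistency_loss(gen_src_to_tgt, gen_tgt_to_src, src_feats, tgt_feats):
    """L1 cycle loss used (alongside adversarial losses) in CycleGAN-style VC:
    the source -> target -> source and target -> source -> target round trips
    should reproduce the original acoustic features."""
    src_cycle = gen_tgt_to_src(gen_src_to_tgt(src_feats))
    tgt_cycle = gen_src_to_tgt(gen_tgt_to_src(tgt_feats))
    return (torch.nn.functional.l1_loss(src_cycle, src_feats)
            + torch.nn.functional.l1_loss(tgt_cycle, tgt_feats))
```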
#### 5.2.7. Vocoders The field of audio synthesis has undergone significant advancements in recent years, with various approaches proposed to enhance the quality of synthesized audio. Prior studies have concentrated on improving discriminator architectures or incorporating auxiliary training losses. For instance, MelGAN introduced a multi-scale discriminator that uses window-based discriminators at different scales and applies average pooling to downsample the raw waveform. It enforces the correspondence between the input mel spectrogram and the synthesized waveform using an L1 feature matching loss from the discriminator. In contrast, GAN-TTS [38] utilizes an ensemble of discriminators that operate on random windows of different sizes and enforce the mapping between the conditioner and the waveform adversarially using conditional discriminators. Another approach, Parallel WaveGAN [608], extends the single short-time Fourier transform loss to multiple resolutions and employs it as an auxiliary loss for GAN training. Recently, some researchers have improved MelGAN by integrating this multi-resolution short-time Fourier transform loss. HiFi-GAN reuses the multi-scale discriminator from MelGAN and introduces the multi-period discriminator for high-fidelity synthesis. UnivNet employs a multi-resolution discriminator that takes multi-resolution spectrograms as input and can enhance the spectral structure of a synthesized waveform. In contrast, CARGAN integrates partial autoregression into the generator to enhance pitch and periodicity accuracy. The recent generative models for modeling raw audio can be categorized into the following groups. * Autoregressive models: Although WaveNet is renowned for its exceptional ability to generate high-quality speech, including natural-sounding intonation and prosody, other neural vocoders have emerged as potential alternatives in recent years. For instance, LPCNet [546] employs a combination of linear predictive coding (LPC) and deep neural networks (DNNs) to generate speech of similar quality while being computationally efficient and capable of producing low-bitrate speech. Similarly, SampleRNN [373], an unconditional end-to-end model, has demonstrated potential as it leverages a hierarchical RNN architecture and is trained end-to-end to generate raw speech of high quality. * Generative Adversarial Network (GAN) vocoders: Numerous vocoders have been created that employ Generative Adversarial Networks (GANs) to generate speech of exceptional quality. These GAN-based vocoders, which include MelGAN [275] and HiFi-GAN [268], are capable of producing high-fidelity raw audio by conditioning on mel spectrograms. Furthermore, they can synthesize audio at speeds several hundred times faster than real-time on a single GPU, as evidenced by research conducted in [39; 268; 275; 608]. * Diffusion-based models: In recent years, several novel architectures based on diffusion have been proposed. Two prominent examples are WaveGrad [68] and DiffWave [269], which build upon prior work on score matching and diffusion probabilistic models. SpecGrad [264] additionally uses adaptive noise spectral shaping to adapt the diffusion noise; this adaptation, achieved through time-varying filtering, improves sound quality, particularly in high-frequency bands. Other examples of diffusion-based vocoders include InferGrad [74] and PriorGrad [293].
InferGrad incorporates the inference process into training to reduce inference iterations while maintaining high quality. SpecGrad adapts the diffusion noise distribution to a given acoustic feature and uses adaptive noise spectral shaping to generate high-fidelity speech waveforms. * Flow-based models: Parallel WaveNet, WaveGlow, and related models [258; 294; 354; 427; 435] are based on normalizing flows and are capable of generating high-fidelity speech in real-time. While flow-based vocoders generally perform worse than autoregressive vocoders with regard to modeling the density of speech signals, recent research [354] has proposed new techniques to improve their performance. Universal neural vocoding is a challenging task that has achieved limited success to date. However, recent advances in speech synthesis have shown a promising trend toward improving zero-shot performance by scaling up model sizes. Despite its potential, this approach has yet to be extensively explored. Nonetheless, several approaches have been proposed to address the challenges of universal vocoding. For example, WaveRNN has been utilized in previous studies to achieve universal vocoding (Lorenzo-Trueba et al. [344]; Paul et al. [419]). Another approach, developed by Jiao et al. [221], constructs a universal vocoder using a flow-based model. Additionally, the GAN vocoder has emerged as a promising candidate for this task, as suggested by You et al. [626]. #### 5.2.8. Controllable Speech Synthesis Controllable speech synthesis [122; 276; 460; 543; 547; 584] is a rapidly evolving research area that focuses on generating natural-sounding speech with the ability to control various aspects of speech, including pitch, speed, and emotion. Controllable speech synthesis is positioned in the emerging field of affective computing at the intersection of three disciplines: expressive speech analysis [533], natural language processing, and machine learning. This field aims to develop systems capable of recognizing, interpreting, and generating human-like emotional responses in interactions between humans and machines. Expressive speech analysis is a critical component of this field. It provides mathematical tools to analyze speech signals and extract various acoustic features, including pitch, loudness, and duration, that convey emotions in speech. Natural language processing is also crucial to this field, as it helps to process the text input and extract the meaning and sentiment of the words. Finally, machine learning techniques are used to model and control the expressive features of the synthesized speech, enabling the systems to produce more expressive and controllable speech [11; 205; 274; 295; 337; 408; 515; 548; 666]. In the last few years, notable advancements have been achieved in this field [164; 248; 450], and several approaches have been proposed to enhance the quality of synthesized speech. For example, some studies propose using deep learning techniques to synthesize expressive speech and conditional generation models to control the prosodic features of speech [248; 450]. Others propose using motion matching-based algorithms to synthesize gestures from speech [164]. #### 5.2.9. Disentangling and Transferring The importance of disentangled representations for neural speech synthesis cannot be overstated, as it has been widely recognized in the literature that this approach can greatly improve the interpretability and expressiveness of speech synthesis models [195; 360; 436].
Disentangling multiple styles or prosody information during training is crucial to enhance the quality of expressive speech synthesis and control. Various disentangling techniques have been developed using adversarial and collaborative games, the VAE framework, bottleneck reconstructions, and frame-level noise modeling combined with adversarial training. For instance, Ma et al. [360] have employed adversarial and collaborative games to enhance the disentanglement of content and style, resulting in improved controllability. Hsu et al. [195] have utilized the VAE framework with adversarial training to separate speaker information from noise. Qian et al. [436] have introduced speech flow, which can disentangle rhythm, pitch, content, and timbre through three bottleneck reconstructions. In another work based on adversarial training, Zhang et al. [642] have proposed a method that disentangles noise from the speaker by modeling the noise at the frame level. Developing high-quality speech synthesis models that can handle noisy data and generate accurate representations of speech is a challenging task. To tackle this issue, Zhang et al. [650] propose a novel approach involving multi-length adversarial training. This method allows for modeling different noise conditions and improves the accuracy of pitch prediction by incorporating discriminators on the mel-spectrogram. By replacing the traditional pitch predictor model with this approach, the authors demonstrate significant improvements in the fidelity of synthesized speech. #### 5.2.10. Robustness Neural TTS models can present issues with robustness, leading to low-quality audio samples for unseen or atypical text. In response, Li et al. proposed RobuTrans [310], a robust transformer that converts input text to linguistic features before feeding it to the encoder. This model also includes modifications to the attention mechanism and position embedding, resulting in improved MOS scores compared to other TTS models. Another approach to enhancing robustness is the s-Transformer, introduced by Wang et al. [579], which models speech at the segment level, allowing it to capture long-term dependencies and use segment-level encoder-decoder attention. This technique performs similarly to the standard transformer while exhibiting robustness for extra-long sentences. Lastly, Zheng et al. [670] proposed an approach that combines a local recurrent neural network with the transformer to capture sequential and local information in sequences. Evaluation on a 20-hour Mandarin speech corpus demonstrated that this model outperforms the transformer alone. In their recent paper [610], the authors proposed a novel method for extracting dynamic prosody information from audio recordings, even in noisy environments. Their approach employs probabilistic denoising diffusion models and knowledge distillation to learn speaking style features from a teacher model, resulting in a highly accurate reproduction of prosody and timbre. This model shows great potential in applications such as speech synthesis and recognition, where noise-robust prosody information is crucial. Other noteworthy advances in the development of robust TTS systems include the work by [493], which focuses on a robust speech-text alignment module, as well as the use of normalizing flows for diverse speech synthesis. #### 5.2.11. Low-Resource Neural Speech Synthesis High-quality paired text and speech data are crucial for building high-quality Text-to-Speech (TTS) systems [147].
Unfortunately, most languages are not supported by popular commercialized speech services due to the lack of sufficient training data [604]. To overcome this challenge, researchers have developed TTS systems under low data resource scenarios using various techniques [127; 147; 538; 604]. Several techniques have been proposed by researchers to enhance the efficiency of low-resource/Zero-shot TTS systems. One of these is the use of semi-supervised speech synthesis methods that utilize unpaired training data to improve data efficiency, as suggested in a study by Liu et al. [328]. Another method involves cascading pre-trained models for ASR, MT, and TTS to increase data size from unlabelled speech, as proposed by Nguyen et al. (2019). In addition, researchers have employed crowdsourced acoustic data collection to develop TTS systems for low-resource languages, as shown in a study by Butryna et al. (2019). Huang et al. (2020) introduced a zero-shot style transfer approach for out-of-domain speech synthesis that generates speech samples exhibiting a new and distinctive style, such as speaker identity, emotion, and prosody. ### Speaker recognition #### 5.3.1. Task Description Speech signal consists of information on various characteristics of a speaker, such as origin, identity, gender, emotion, etc. This property of speech allows speech-based speaker profiling with a wide range of applications in forensics, recommendation systems, etc. The research on recognizing speakers is extensive and aims to solve two major tasks: speaker identification (what is the identity?) and speaker verification (is the speaker he/she claims to be?). Speaker recognition/verification tasks require extracting a fixed-length vector, called speaker embedding, from unconstrained utterances. These embeddings represent the speakers and can be used for identification or verification tasks. Recent state-of-the-art speaker-embedding-extractor models are based on DNNs and have shown superior performance on both speaker identification and verification tasks. * **Speaker Recognition** (SR) relies on speaker identification as a key aspect, where an unknown speaker's speech sample is compared to speech models of known speakers to determine their identity. The primary aim of speaker identification is to distinguish an individual's identity from a group of known speakers. This process involves a detailed analysis of the speaker's voice characteristics such as pitch, tone, accent, and other pertinent features to establish their identity. Recent advancements in deep learning techniques have significantly enhanced speaker identification, leading to the creation of accurate, efficient, and end-to-end models. Various deep learning-based models such as CNNs, RNNs, and their combinations have demonstrated exceptional performance in several subtasks of speaker identification, including verification, identification, diarization, and robust recognition (Zhou et al., 2019; Wang et al., 2019; Wang et al., 2019). * **Speaker Verification** (SV) is a process that involves confirming the identity of a speaker through their speech. It differs from speaker identification, which aims to identify unknown speakers by comparing their voices with that of registered speakers in a database. Speaker verification verifies whether a speaker is who they claim to be by comparing their voice with an available speaker template. 
Deep learning-based speaker verification relies on speaker representations in the form of embeddings: low-dimensional vectors learned from speech signals that capture speaker characteristics, such as pitch and speaking style, and can be used to compare different speech signals and determine their similarity. #### 5.3.2. Dataset The VoxCeleb datasets (VoxCeleb 1 & 2) [92] are widely used in speaker recognition research. They consist of speech data collected from publicly available media, employing a fully automated pipeline that incorporates computer vision techniques. The pipeline retrieves videos from YouTube and applies active speaker verification using a two-stream synchronization CNN. Speaker identity is further confirmed through CNN-based facial recognition. Another commonly employed dataset is TIMIT, which comprises recordings of phonetically balanced English sentences spoken by a diverse set of speakers. TIMIT is commonly used for evaluating speech recognition and speaker identification systems, as referenced in (Kumar et al., 2019). Other noteworthy datasets in the field include the SITW database [371], which provides hand-annotated speech samples for benchmarking text-independent speaker recognition technology, and the RSR2015 database [286], which contains speech recordings acquired in a typical office environment using multiple mobile devices. Additionally, the RedDots project [291] and VOICES corpus [463] offer unique collections of offline voice recordings in furnished rooms with background noise, while the CN-CELEB database [135] focuses on a specific person of interest extracted from bilibili.com using an automated pipeline followed by human verification. The BookTubeSpeech dataset [424] was also collected using an automated pipeline from BookTube videos, and the Hi-MIA database [438] was designed specifically for far-field scenarios using multiple microphone arrays. The FFSVC20 challenge [439] and DIHARD challenge [471] are speaker verification and diarization research initiatives focusing on far-field and robustness challenges, respectively. Finally, the LibriSpeech dataset [410], originally intended for speech recognition, is also useful for speaker recognition tasks due to its included speaker identity labels. #### 5.3.3. Models Speaker identification (SI) and verification (SV) are crucial research topics in the field of speech technology due to their importance in various applications such as security [125], forensics [270], biometric authentication [170], and speaker diarization [601]. Speaker recognition has become more popular with technological advancements, including the Internet of Things (IoT), smart devices, voice assistants, smart homes, and humanoids. Therefore, a significant quantity of research has been conducted in this field, and many methods have been developed, making the state-of-the-art in this field quite mature and versatile. However, it has become increasingly challenging to provide an overview of the various methods due to the high number of studies in the field. A neural network approach for speaker verification was first attempted by Variani et al. [553] in 2014, utilizing four fully connected layers for speaker classification. Their approach successfully verified speakers from short-duration utterances by obtaining a \(d\)-vector, computed by averaging the output of the last hidden layer across frames.
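A toy version of the d-vector recipe just described (a frame-level network, averaging over frames, and cosine scoring against an enrolment utterance) might look as follows; the network size and the decision threshold are illustrative assumptions, not values taken from [553].

```python
import torch
import torch.nn.functional as F

class DVectorNet(torch.nn.Module):
    """Tiny stand-in for a frame-level speaker network (hypothetical sizes)."""
    def __init__(self, n_mels=40, hidden=256, layers=4):
        super().__init__()
        dims = [n_mels] + [hidden] * layers
        self.mlp = torch.nn.Sequential(*[
            m for i in range(layers)
            for m in (torch.nn.Linear(dims[i], dims[i + 1]), torch.nn.ReLU())
        ])

    def forward(self, frames):                       # frames: [n_frames, n_mels]
        h = self.mlp(frames)                         # frame-level activations
        return F.normalize(h.mean(dim=0), dim=0)     # average over frames -> d-vector

def verify(model, enrol_frames, test_frames, threshold=0.6):
    """Cosine similarity between enrolment and test d-vectors, plus accept/reject."""
    with torch.no_grad():
        score = torch.dot(model(enrol_frames), model(test_frames)).item()
    return score, score >= threshold
```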
Although various attempts have been made to directly learn speaker representation from raw waveforms by other researchers (Jung et al. [226], Ravanelli and Bengio [454]), other well-designed neural networks like CNNs and RNNs have been proposed for speaker verification tasks by Ye and Yang [621]. Nevertheless, the field still requires more powerful deep neural networks for superior extraction of speaker features. Speaker verification has seen notable advancements with the advent of more powerful deep neural networks. One such model is the \(x\)-vector-based system proposed by Snyder et al. [507], which has gained widespread popularity due to its remarkable performance. Since its introduction, the \(x\)-vector system has undergone significant architectural enhancements and optimized training procedures [103]. The widely-used ResNet [176] architecture has been incorporated into the system to improve its performance further. Adding residual connections between frame-level layers has been found to improve the embeddings [152; 634]. This technique has also aided in faster convergence of the back-propagation algorithm and mitigated the vanishing gradient problem [176]. Tang et al. [530] proposed further improvements to the \(x\)-vector system. They introduced a hybrid structure based on TDNN and LSTM to generate complementary speaker information at different levels. They also suggested a multi-level pooling strategy to collect the speaker information from global and local perspectives. These advancements have significantly improved speaker verification systems' performance and paved the way for further developments in the field. Desplanques et al. [108] propose a state-of-the-art architecture for speaker verification utilizing a Time Delay Neural Network (TDNN) called ECAPA-TDNN. The paper presents a range of enhancements to the existing \(x\)-vector architecture that leverages recent developments in face verification and computer vision. Specifically, the authors suggest three major improvements. Firstly, they propose restructuring the initial frame layers into 1-dimensional Res2Net modules with impactful skip connections, which can better capture the relationships between different time frames. Secondly, they introduce Squeeze-and-Excitation blocks to the TDNN layers, which help highlight the most informative channels and improve feature discrimination. Lastly, the paper proposes channel attention propagation and aggregation to efficiently propagate attention weights through multiple TDNN layers, further enhancing the model's ability to discriminate between speakers. Additionally, the paper presents a new approach that utilizes ECAPA-TDNN from the speaker recognition domain as the backbone network for a multiscale channel adaptive module. The proposed method achieves promising results, demonstrating the effectiveness of the proposed architecture in speaker verification. Overall, ECAPA-TDNN offers a comprehensive solution to speaker verification by introducing several novel contributions that improve the existing \(x\)-vector architecture, which has been state-of-the-art in speaker verification for several years. The proposed approach also achieves promising results, suggesting that the proposed architecture can effectively tackle the challenges of speaker verification. The attention mechanism is a powerful method for obtaining a more discriminative utterance-level feature by explicitly selecting frame-level representations that better represent speaker characteristics. 
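One widely used realization of this idea is attentive statistics pooling, in which learned attention weights determine how much each frame contributes to the utterance-level mean and standard deviation, as in ECAPA-TDNN-style systems. The module below is a simplified sketch (illustrative sizes, without the channel- and context-dependent refinements of ECAPA-TDNN), not the exact published module.

```python
import torch

class AttentiveStatsPooling(torch.nn.Module):
    """Weight frame-level features by learned attention, then concatenate the
    attention-weighted mean and standard deviation into an utterance embedding."""
    def __init__(self, channels, bottleneck=128):
        super().__init__()
        self.attention = torch.nn.Sequential(
            torch.nn.Conv1d(channels, bottleneck, kernel_size=1),
            torch.nn.Tanh(),
            torch.nn.Conv1d(bottleneck, channels, kernel_size=1),
        )

    def forward(self, x):                      # x: [batch, channels, frames]
        w = torch.softmax(self.attention(x), dim=2)
        mean = (x * w).sum(dim=2)
        var = (x ** 2 * w).sum(dim=2) - mean ** 2
        std = var.clamp(min=1e-6).sqrt()
        return torch.cat([mean, std], dim=1)   # [batch, 2 * channels]
```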
Recently, the Transformer model with a self-attention mechanism has become effective in various application fields, including speaker verification. The Transformer architecture has been extensively explored for speaker verification. TESA [370] is an architecture based on the Transformer's encoder, proposed as a replacement for conventional PLDA-based speaker verification to capture speaker characteristics better. TESA outperforms PLDA on the same dataset by utilizing the next sentence prediction task of BERT [109]. Zhu et al. [675] proposed a method to create fixed-dimensional speaker verification representation using a serialized multi-layer multi-head attention mechanism. Unlike other studies that redesign the inner structure of the attention module, their approach strictly follows the original Transformer, providing simple but effective modifications. ### Speaker Diarization #### 5.4.1. Task Description Speaker diarization is a critical component in the analysis of multi-speaker audio data, and it addresses the question of "who spoke when." The term "diarize" refers to the process of making a note or keeping a record of events, as per the English dictionary. A traditional speaker diarization system comprises several crucial components that work together to achieve accurate and efficient speaker diarization. In this section, we will discuss the different components of a speaker diarization system (Figure 16) and their role in achieving accurate speaker diarization. * _Acoustic Features Extraction_: In the analysis of multi-speaker speech data, one critical component is the extraction of acoustic features [14; 536]. This process involves extracting features such as pitch, energy, and MFCCs from the audio signal. These acoustic features play a crucial role in identifying different speakers by analyzing their unique characteristics. * _Segmentation_: Segmentation is a crucial component in the analysis of multi-speaker audio data, where the audio signal is divided into smaller segments based on the silence periods between speakers [14; 536]. This process helps in reducing the complexity of the problem and makes it easier to identify different speakers in smaller segments * _Speaker Embedding Extraction_: This process involves obtaining a low-dimensional representation of each speaker's voice, which is commonly referred to as speaker embedding. This is achieved by passing the acoustic features extracted from the speech signal through a deep neural network, such as a CNN or RNN[506]. * _Clustering_: In this component, the extracted speaker embeddings are clustered based on similarity, and each cluster represents a different speaker [14, 536]. This process commonly uses unsupervised clustering algorithms, such as k-means clustering. * _Speaker Classification_: In this component, the speaker embeddings are classified into different speaker identities using a supervised classification algorithm, such as SVM or MLP [14, 536]. * _Re-segmentation_: This component is responsible for refining the initial segmentation by adjusting the segment boundaries based on the classification results. It helps in improving the accuracy of speaker diarization by reducing the errors made during the initial segmentation. Various studies focus on traditional speaker diarization systems [14, 536]. This paper will review the recent efforts toward deep learning-based speaker diarizations techniques. #### 5.4.2. 
Dataset

Figure 16. Speaker diarization system diagram showcasing the process of identifying and differentiating multiple speakers in an audio recording using various techniques such as VAD, segmentation, clustering and re-segmentation.

* _NIST SRE 2000 (Disk-8) or CALLHOME dataset_: The NIST SRE 2000 (Disk-8) corpus, also referred to as the CALLHOME dataset, is a frequently utilized resource for speaker diarization in contemporary research papers. Originally released in 2000, this dataset comprises conversational telephone speech (CTS) collected from diverse speakers representing a wide range of ages, genders, and dialects. It includes 500 sessions of multilingual telephonic speech, each containing two to seven speakers, with two primary speakers in each conversation. The dataset covers various topics, including personal and familial relationships, work, education, and leisure activities. The audio recordings were obtained using a single microphone and had a sampling rate of 8 kHz, with 16-bit linear quantization. * _Directions into Heterogeneous Audio Research (DIHARD) Challenge and dataset_: The DIHARD Challenge, organized by the National Institute of Standards and Technology (NIST), aims to enhance the accuracy of speech recognition and diarization in challenging acoustic environments, such as crowded spaces, distant microphones, and reverberant rooms. The challenge comprises tasks requiring advanced machine-learning techniques, including speaker diarization, recognition, and speech activity detection. The DIHARD dataset used in the challenge comprises over 50 hours of speech from more than 500 speakers, gathered from diverse sources like meetings, broadcast news, and telephone conversations. These recordings feature various acoustic challenges, such as overlapping speech, background noise, and distant or reverberant speech, captured through different microphone setups. To aid in the evaluation process, the dataset has been divided into separate development and evaluation sets. The assessment metrics used to gauge performance include diarization error rate (DER), as well as accuracy in speaker verification, identification, and speech activity detection. * _Augmented Multi-party Interaction (AMI) database_: The AMI database is a collection of audio and video recordings that capture real-world multi-party conversations in office environments. The database was developed as part of the AMI project, which aimed to develop technology for automatically analyzing multi-party meetings. The database contains over 100 hours of audio and video recordings of meetings involving four to seven participants, totaling 112 meetings. The meetings were held in multiple offices and were designed to reflect the kinds of discussions that take place in typical business meetings. The audio recordings were captured using close-talk microphones placed on each participant and additional microphones placed in the room to capture ambient sound. The video recordings were captured using multiple cameras placed around the room. In addition to the audio and video recordings, the database also includes annotations that provide additional information about the meetings, including speaker identities, speech transcriptions, and information about the meeting structure (e.g., turn-taking patterns). The AMI database has been used extensively in research on automatic speech recognition, speaker diarization, and other related speech and language processing topics.
* _VoxSRC Challenge and VoxConverse corpus_: The VoxCeleb Speaker Recognition Challenge (VoxSRC) is an annual competition designed to assess the capabilities of speaker recognition systems in identifying speakers from speech recorded in real-world environments. The challenge provides participants with a dataset of audio and visual recordings of interviews, news shows, and talk shows featuring famous individuals. The VoxSRC encompasses several tracks, including speaker diarization, and comprises a development set (20.3 hours, 216 recordings) and a test set (53.5 hours, 310 recordings). Recordings in the dataset may feature between one and 21 speakers, with a diverse range of ambient noises, such as background music and laughter. To facilitate the speaker diarization track of the VoxSRC-21 and VoxSRC-22 competitions, VoxConverse, an audio-visual diarization dataset containing multi-speaker clips of human speech sourced from YouTube videos, is available, and additional details are provided on the project website 8. Footnote 8: [https://www.robots.ox.ac.uk/~vgg/data/voxconverse/](https://www.robots.ox.ac.uk/~vgg/data/voxconverse/) * _LibriCSS_: The LibriCSS corpus is a valuable resource for researchers studying speech separation, recognition, and speaker diarization. The corpus comprises 10 hours of multichannel recordings captured using a 7-channel microphone array in a real meeting room. The audio was played from the LibriSpeech corpus, and each of the ten sessions was subdivided into six 10-minute mini-sessions. Each mini-session contained audio from eight speakers and was designed to have different overlap ratios ranging from 0% to 40%. To make research easier, the corpus includes baseline systems for speech separation and Automatic Speech Recognition (ASR) and a baseline system that integrates speech separation, speaker diarization, and ASR. These baseline systems have already been developed and made available to researchers. * _Rich Transcription Evaluation Series_: The Rich Transcription Evaluation Series dataset is a collection of speech data used for speaker diarization evaluation. The Rich Transcription Fall 2003 Evaluation (RT-03F) was the first evaluation in the series focused on "Who Said What" tasks. The dataset has been used in subsequent evaluations, including the Second DIHARD Diarization Challenge, which used the Jaccard index to compute the JER (Jaccard Error Rate) for each pair of segmentations. The dataset supports data-driven spoken language processing methods, with speaker diarization accuracy calculated at the utterance level. It includes rules, evaluation methods, and baseline systems to promote reproducible research in the field, and it has been used in various speaker diarization systems and their subtasks in the context of broadcast news and CTS data. * _CHiME-5/6 challenge and dataset_: The CHiME-5/6 challenge is a speech processing challenge focusing on distant multi-microphone conversational speech diarization and recognition in everyday home environments. The challenge provides a dataset of recordings from everyday home environments, including dinner recordings originally collected for and exposed during the CHiME-5 challenge. The dataset is designed to be representative of natural conversational speech. The challenge features two audio input conditions: single-channel and multichannel.
Participants are provided with baseline systems for speech enhancement, speech activity detection (SAD), and diarization, as well as results obtained with these systems for all tracks. The challenge aims to improve the robustness of diarization systems to variations in recording equipment, noise conditions, and conversational domains. * _AMI meeting corpus (audio conditions)_: The AMI corpus provides two types of audio recordings: one recorded using lapel microphones for individual speakers and the other using omnidirectional microphone arrays placed on the table. It is an ideal dataset for evaluating speaker diarization systems integrated with the ASR module. AMI's value proposition is further enhanced by providing forced alignment data, which captures the timings at the word and phoneme levels and speaker labeling. Finally, it's worth noting that each meeting session involves a small group of three to five speakers. #### 5.4.3. Models Speaker diarization has been a subject of research in the field of audio processing, with the goal of separating speakers in an audio recording. In recent years, deep learning has emerged as a powerful technique for speaker diarization, leading to significant advancements in this field. In this article, we will explore some of the recent developments in deep learning architecture for speaker diarization, focusing on different modules of speaker diarization as outlined in Figure 16. Through this discussion, we will highlight major advancements in each module. * _Segmentation and clustering_: Speaker diarization systems typically use a range of techniques for segmenting speech, such as identifying speaker change, uniform speaker segmentation, ASR-based word segmentation, and supervised speaker turn detection. However, each approach has its own benefits and drawbacks. Uniform speaker segmentation involves dividing speech into segments of equal length, which can be difficult to optimize to capture speaker turn boundaries and include enough speaker information. ASR-based word segmentation identifies word boundaries using automatic speech recognition, but the resulting segments may be too brief to provide adequate speaker information. Supervised speaker turn detection, on the other hand, involves a specialized model that can accurately identify speaker turn timestamps. While this method can achieve high accuracy, it requires labeled data for training. These techniques have been widely discussed in previous research, and choosing the appropriate one depends on the specific requirements of the application. * The authors in [98] propose a real-time speaker diarization system that combines incremental clustering and local diarization applied to a rolling window of speech data and is designed to handle overlapping speech segments. The proposed pipeline is designed to utilize end-to-end overlap-aware segmentation to detect and separate overlapping speakers. * In another related work, authors in [643] introduce a novel speaker diarization system with a generalized neural speaker clustering module as the backbone. * In a recent study conducted by Park et al. [415], a new framework for spectral clustering is proposed that allows for automatic parameter tuning of the clustering algorithm in the context of speaker diarization. The proposed technique utilizes normalized maximum eigengap (NME) values to determine the number of clusters and threshold parameters for each row in an affinity matrix during spectral clustering. The authors demonstrated that their method outperformed existing state-of-the-art methods on two different datasets for speaker diarization.
* Bayesian HMM clustering of x-vector sequences (VBx) diarization approach, which clusters x-vectors using a Bayesian hidden Markov model (BHMM) [285], combined with a ResNet101 (He et al. [176]) \(x\)-vector extractor achieves superior results on CALLHOME [111], AMI [53] and DIHARD II [472] datasets * _Speaker Embedding Extraction and Classification_: * Attentive Aggregation for Speaker Diarization [278]: This approach uses an attention mechanism to aggregate embeddings from multiple frames and generate speaker embeddings. The speaker embeddings are then used for clustering to identify speaker segments. * End-to-End Speaker Diarization with Self-Attention [145]: This method uses a self-attention mechanism to capture the correlations between the input frames and generates embeddings for each frame. The embeddings are then used for clustering to identify speaker segments. * Wang et al. [577] present an innovative method for measuring similarity between speaker embeddings in speaker diarization using neural networks. The approach incorporates past and future contexts and uses a segmental pooling strategy. Furthermore, the speaker embedding network and similarity measurement model are jointly trained. The paper extends this framework to target-speaker voice activity detection (TS-VAD) [372]. The proposed method effectively learns the similarity between speaker embeddings by considering both past and future contexts. * Time-Depth Separable Convolutions for Speaker Diarization [266]: This approach uses time-depth separable convolutions to generate embeddings for each frame, which are then used for clustering to identify speaker segments. The method is computationally efficient and achieves state-of-the-art performance on several benchmark datasets. * Numerous studies in this field centre around developing a re-segmentation strategy for diarization systems that can effectively handle both voice activity and overlapped speech detection. This approach can also be a post-processing step to identify and assign overlapped speech regions accurately. Notable examples of such works include those by Bullock et al. [47] and Bredin and Laurent [45]. * _End-to-End Neural Diarization_: In addition to the above work, end-to-end speaker diarization systems have gained the attention of the research community due to their ability to handle speaker overlaps and their optimization to minimize diarization errors directly. In one such work, the authors propose end-to-end neural speaker diarization that does not rely on clustering and instead uses a self-attention-based neural network to directly output the joint speech activities of all speakers for each segment [145]. Following the trend, several other works propose enhanced architectures based on self-attention [324; 630] ### Speech-to-speech translation #### 5.5.1 Task Description Speech-to-text translation (ST) is the process of converting spoken language from one language to another in text form. Traditionally, this has been achieved using a cascaded structure that incorporates automatic speech recognition (ASR) and machine translation (MT) components. However, a more recent end-to-end (E2E) method [15, 62, 639, 669, 165, 478, 522, 639] has gained popularity due to its ability to eliminate issues with error propagation and high latency associated with cascaded methods [63, 516]. The E2E method uses an audio encoder to analyze audio signals and a text decoder to generate translated text. 
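Schematically, such a direct (E2E) speech translation model pairs an audio encoder with a text decoder in a single network. The skeleton below sketches this structure in PyTorch with a convolutional subsampler, a Transformer encoder-decoder, and a token classifier; all dimensions and the overall layout are illustrative assumptions rather than the architecture of any specific system cited here.

```python
import torch

class SpeechTranslationModel(torch.nn.Module):
    """Minimal direct speech-to-text-translation skeleton: a convolutional
    subsampler plus Transformer encoder over audio features, and a Transformer
    decoder over target-language tokens."""
    def __init__(self, n_mels=80, d_model=256, vocab=8000):
        super().__init__()
        self.subsample = torch.nn.Sequential(
            torch.nn.Conv1d(n_mels, d_model, kernel_size=3, stride=2, padding=1),
            torch.nn.GELU(),
            torch.nn.Conv1d(d_model, d_model, kernel_size=3, stride=2, padding=1),
            torch.nn.GELU(),
        )
        self.transformer = torch.nn.Transformer(
            d_model=d_model, nhead=4, num_encoder_layers=6,
            num_decoder_layers=6, batch_first=True)
        self.tok_emb = torch.nn.Embedding(vocab, d_model)
        self.out = torch.nn.Linear(d_model, vocab)

    def forward(self, mels, tgt_tokens):
        # mels: [batch, frames, n_mels]; tgt_tokens: [batch, tgt_len]
        src = self.subsample(mels.transpose(1, 2)).transpose(1, 2)
        mask = torch.nn.Transformer.generate_square_subsequent_mask(
            tgt_tokens.size(1)).to(mels.device)
        dec = self.transformer(src, self.tok_emb(tgt_tokens), tgt_mask=mask)
        return self.out(dec)                 # [batch, tgt_len, vocab] logits
```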
One notable advantage of ST systems is that they allow for more natural and fluent communication than other language translation methods. By translating speech in real-time, ST systems can capture the subtleties of speech, including tone, intonation, and rhythm, which are essential for effective communication. Developing ST systems is a highly intricate process that involves integrating various technologies such as speech recognition, natural language processing, and machine translation. One significant obstacle in ST is the variation in accents and dialects across different languages, which can significantly impact the accuracy of the translation. #### 5.5.2. Dataset There are numerous datasets available for the end-to-end speech translation task, with some of the most widely used ones being MuST-C [56], IWSLT [481], and CoVoST 2 [564]. These datasets cover a variety of languages, including English, German, Spanish, French, Italian, Dutch, Portuguese, Romanian, Arabic, Chinese, Japanese, Korean, and Russian. For instance, TED-LIUM [468] is a suitable dataset for speech-to-text, text-to-speech, and speech-to-speech translation tasks, as it contains transcriptions and audio recordings of TED talks in English, French, German, Italian, and Spanish. Another open-source dataset is Common Voice, which covers several languages, including English, French, German, Italian, and Spanish. Additionally, VoxForge9 is designed for acoustic model training and includes speech recordings and transcriptions in several languages, including English, French, German, Italian, and Spanish. LibriSpeech [410] is a dataset of spoken English specifically designed for speech recognition and speech-to-text translation tasks. Lastly, How2 [124] is a multimodal machine translation dataset that includes speech recordings, text transcriptions, and video and image data, covering English, German, Italian, and Spanish. These datasets have been instrumental in training state-of-the-art speech-to-speech translation models and will continue to play a crucial role in further advancing the field. Footnote 9: [http://www.voxforge.org/](http://www.voxforge.org/) #### 5.5.3. Models End-to-end speech translation models are a promising approach for direct speech translation, in which a single sequence-to-sequence model maps source speech to the target without explicit intermediate stages. In 2017, researchers demonstrated that end-to-end models outperform cascade models [3]. One study published in 2019 provides an overview and comparison of different end-to-end architectures for speech-to-text translation, as well as the usage of an additional connectionist temporal classification (CTC) loss for better convergence [27]. In 2019, Google introduced Translatotron [219], an end-to-end speech-to-speech translation system. Translatotron uses a single sequence-to-sequence model that maps source-language speech directly to target-language speech; no transcripts or other intermediate text representations are used during inference. The system was validated by measuring the BLEU score, computed with text transcribed by a speech recognition system. Though the results lag behind a conventional cascade system, the feasibility of end-to-end direct speech-to-speech translation was demonstrated [219]. In a recent publication from 2020, researchers presented a study on an end-to-end speech translation system.
This system incorporates pre-trained models such as Wav2Vec 2.0 and mBART, along with coupling modules between the encoder and decoder. The study also introduces an efficient fine-tuning technique, which selectively trains only 20% of the total parameters [622]. The system developed by the UPC Machine Translation group actively participated in the IWSLT 2021 offline speech translation task, which aimed to develop a system capable of translating English audio recordings from TED talks into German text. E2E ST is often improved by pretraining the encoder and/or decoder with transcripts from speech recognition or text translation tasks [639, 63, 110, 633]. Consequently, it has become the standard approach used in various toolkits [660, 214, 563, 669]. However, transcripts are not always available, and the significance of pretraining for E2E ST is rarely studied. Zhang et al. [638] explored the effectiveness of E2E ST trained solely on speech-translation pairs and proposed an algorithm for training from scratch. The proposed system outperforms previous studies in four benchmarks covering 23 languages without pretraining. The paper also discusses neural acoustic feature modeling, which extracts acoustic features directly from raw speech signals to simplify inductive biases and enhance speech description. ### Speech enhancement #### 5.6.1. Task Description In situations where there is ambient noise present, speech recognition systems can encounter difficulty in correctly interpreting spoken language signals, resulting in reduced performance [123]. One possible solution to address this issue is the development of speech enhancement systems that can eliminate noise and other types of signal distortion from spoken language, thereby improving signal quality. These systems are frequently implemented as a preprocessing step to enhance the accuracy of speech recognition and can serve as an effective approach for enhancing the performance of ASR systems in noisy environments. This section will delve into the significance of speech enhancement technology in boosting the accuracy of speech recognition. #### 5.6.2. Dataset One popular dataset for speech enhancement tasks is AISHELL-4, which comprises authentic Mandarin speech recordings captured during conferences using an 8-channel circular microphone array. According to [144], AISHELL-4 is composed of 211 meeting sessions, each featuring 4 to 8 speakers, for a total of 120 hours of content. This dataset is of great value for research into multi-speaker processing, such as speaker diarization and speech recognition, owing to its realistic acoustics and rich natural speech characteristics. Another popular dataset used for speech enhancement is the dataset from the Deep Noise Suppression (DNS) challenge [457], a large-scale dataset of noisy speech signals and their corresponding clean speech signals. The DNS dataset contains over \(10,000\) hours of noisy speech signals and over \(1,000\) hours of clean speech signals, making it useful for training deep learning models for speech enhancement. The Voice Bank Corpus (VCTK) is another dataset containing speech recordings from 109 speakers, each recording approximately 400 sentences. The dataset contains clean and noisy speech recordings, making it useful for training speech enhancement models. These datasets provide realistic acoustics, rich natural speech characteristics, and large-scale noisy and clean speech signals, making them useful for training deep learning models.
#### 5.6.3 Models

Several classical algorithms have been reported in the literature for speech enhancement, including spectral subtraction [41], Wiener and Kalman filtering [319, 480], MMSE estimation [128], comb filtering [222], subspace methods [171], and phase spectrum compensation [407]. However, classical algorithms such as spectral subtraction and Wiener filtering approach the problem in the spectral domain and are restricted to stationary or quasi-stationary noise. Neural network-based approaches inspired by other areas such as computer vision [10, 146, 188] and generative adversarial networks [142, 321, 469, 596], or developed for general audio processing tasks [588, 157], have outperformed the classical approaches. Various neural network models based on different architectures, including fully connected neural networks [606], deep denoising autoencoders [346], CNNs [143], LSTMs [77], and Transformers [263], have effectively handled diverse noisy conditions. Diffusion-based models have also shown promising results for speech enhancement [298, 349, 623] and have led to novel speech enhancement algorithms such as the Conditional Diffusion Probabilistic Model (CDiffuSE), which incorporates characteristics of the observed noisy speech signal into the diffusion and reverse processes [349]. CDiffuSE is a generalized formulation of the diffusion probabilistic model that can adapt to non-Gaussian real noises in the estimated speech signal. Another diffusion-based model for speech enhancement is StoRM [298], which stands for Stochastic Regeneration Model. It uses a predictive model to remove vocalizing and breathing artifacts while producing high-quality samples using a diffusion process, even in adverse conditions. StoRM has shown great ability at bridging the performance gap between predictive and generative approaches for speech enhancement. Furthermore, the authors in [623] propose a cold diffusion process, an advanced iterative variant of the diffusion process, to recover clean speech from noisy speech. According to the authors, it can be utilized to restore high-quality samples from arbitrary degradations. Table 10 summarizes the performance of different speech enhancement algorithms on the Deep Noise Suppression (DNS) Challenge dataset using different metrics.

\begin{table} \begin{tabular}{|l|c c c c|l|} \hline **Model** & **PESQ-WB** & **PESQ-NB** & **SI-SDR-WB** & **SI-SDR-NB** & **Architecture** \\ \hline FRCRN [664] & 3.23 & - & - & - & U-Net + CRN \\ Sudo rm -rf [541] & 2.95 & - & 19.7 & - & UConvBlock + CNN \\ DCTCRN-P [311] & 2.82 & - & - & - & CNN \\ PoCoNet [216] & 2.7885 & - & - & - & - \\ FullSubNet [172] & 2.777 & 3.305 & 17.29 & - & LSTM \\ RNN-Modulation [559] & 2.75 & - & - & - & GRU \\ Conv-TasNet-SNR [271] & 2.73 & - & - & - & CNN \\ Sudo rm-rf [540] & 2.69 & - & 18.6 & - & UConvBlock + CNN \\ RemixIT [541] & 2.34 & - & 16.0 & - & UConvBlock \\ SN-Net [668] & - & 3.39 & - & 19.52 & CNN \\ DCCRN-E-Aug [202] & - & 3.214 & - & - & CNN + LSTM \\ DTLN [592] & - & 3.04 & 16.34 & - & LSTM \\ DCCRN-E [202] & - & 3.04 & - & - & CNN + LSTM \\ \hline \end{tabular} \end{table} Table 10: Performance of different speech enhancement algorithms on the Deep Noise Suppression (DNS) Challenge dataset. The table showcases improvements in PESQ-WB, PESQ-NB, SI-SDR-WB, and SI-SDR-NB metrics, and identifies the top-performing methods in each category.
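As a point of reference for the neural systems in Table 10, the classical spectral-subtraction baseline mentioned at the beginning of this subsection can be written in a few lines. The sketch below is a minimal illustration, assuming the first frames of the recording contain only noise and using an arbitrary spectral floor; it is not a tuned implementation of any of the cited systems.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_frames=10, floor=0.05, nperseg=512):
    """Magnitude spectral subtraction with a noise estimate from the leading frames."""
    _, _, X = stft(noisy, fs=fs, nperseg=nperseg)                   # complex spectrogram
    mag, phase = np.abs(X), np.angle(X)
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)   # stationary noise estimate
    clean_mag = np.maximum(mag - noise_mag, floor * mag)            # subtract and apply a spectral floor
    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return enhanced
```

Because the noise estimate is fixed, this kind of method degrades quickly for non-stationary interference, which is precisely the limitation that motivates the learned approaches above.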
### Audio Super Resolution

#### 5.7.1. Task Description

Audio super-resolution is a technique that involves predicting the missing high-resolution components of low-resolution audio signals. Achieving this task can be difficult due to the continuous nature of audio signals. Current methods typically approach super-resolution by treating audio as discrete data and focusing on fixed scale factors. In order to accomplish audio super-resolution, deep neural networks are trained using pairs of low- and high-quality audio examples. During testing, the model predicts missing samples within a low-resolution signal. Some recent deep network approaches have shown promise by framing the problem as a regression issue either in the time or frequency domain [320]. These methods have been able to achieve impressive results.

#### 5.7.2. Datasets

This section provides an overview of the diverse datasets utilized in the Audio Super Resolution literature. One of the most frequently used datasets is MUSDB18, specifically designed for music source separation and enhancement. This dataset encompasses more than 150 songs with distinct tracks for individual instruments. Another prominent dataset is UrbanSound8K, which comprises over 8,000 environmental sound files collected from 10 different categories, making it ideal for evaluating Audio Super Resolution algorithms in noisy environments. Furthermore, the VoiceBank dataset is another essential resource for evaluating Audio Super Resolution systems, comprising over 10,000 speech recordings from five distinct speakers. This dataset offers a rich source of information for assessing speech processing systems, including Audio Super Resolution. Another dataset, LibriSpeech, features more than 1000 hours of spoken words from several books and speakers, making it valuable for evaluating Audio Super Resolution algorithms that enhance the quality of spoken words. Finally, the TED-LIUM dataset, which includes over 140 hours of speech recordings from various speakers giving TED talks, provides a real-world setting for evaluating Audio Super Resolution algorithms for speech enhancement. By using these datasets, researchers can evaluate Audio Super Resolution systems for a wide range of audio signals and improve the generalizability of these algorithms for real-world scenarios.

#### 5.7.3. Models

Audio super-resolution has been extensively explored using deep learning architectures [8, 40, 168, 253, 290, 320, 333, 392, 453, 624]. One notable paper by Rakotonirina [453] proposes a novel network architecture that integrates convolution and self-attention mechanisms for audio super-resolution. Specifically, they use Attention-based Feature-Wise Linear Modulation (AFiLM) [453] to modulate the activations of the convolutional model. In another recent work by Yoneyama et al. [624], the super-resolution task is decomposed into domain adaptation and resampling processes to handle acoustic mismatch in unpaired low- and high-resolution signals. To address this, they jointly optimize the two processes within the CycleGAN framework. Moreover, the Time-Frequency Network (TFNet) [320] proposed a deep network that achieves promising results by modeling the task as a regression problem in either the time or frequency domain. To further enhance audio super-resolution, the paper proposes a time-frequency network that combines time and frequency domain information. Finally, recent advancements in diffusion models have introduced new approaches to neural audio upsampling.
Specifically, Lee and Han [290], and Han and Lee [168] propose NU-Wave 1 and 2 diffusion probabilistic models, respectively, which can produce high-quality waveforms with a sampling rate of 48kHz from coarse 16kHz or 24kHz inputs. These models are a promising direction for improving audio super-resolution. ### Voice Activity Detection (VAD) #### 5.8.1. Task Description Due to the increasing sophistication of mobile devices like smartphones, speech-controlled applications have become incredibly popular. These apps offer a hands-free method for controlling home devices, facilitating telephony, and allowing drivers to safely use their vehicle's infotainment systems while on the go. However, accurately distinguishing between noise and human speech is critical for these applications to work without interruption. To overcome this issue, Voice Activity Detection (VAD) systems have been created to recognize speech presence or absence, thus ensuring consistent and effective operation. #### 5.8.2. Datasets Voice activity detection models can be trained and evaluated using various datasets, each with unique features. The TIMIT dataset is popular, providing, 6300 phonetically transcribed utterances from 630 speakers. On the other hand, CHiME-5 is designed for speech separation and recognition in real-world environments and includes multichannel recordings of 20 speakers in locations such as cafes, buses, and pedestrian areas. Despite its primary purpose, CHiME-5 is widely used for voice activity detection. AURORA-4 is specifically designed to evaluate the robustness of ASR systems and contains over \(10,000\) in noisy speech utterances recorded in environments like car noise, babble noise, and street noise. It is also extended to VAD for evaluating challenging scenarios. DEMAND is a suitable dataset for evaluating VAD algorithms as it includes over 1200 artificially created noise signals with various noise types like white noise, pink noise, and cafe noise. Finally, VoxCeleb contains over 100,000 utterances from more than 6,000 speakers, primarily designed for speaker recognition systems evaluation, but it can also be used for voice activity detection. #### 5.8.3. Models Recent advances in deep learning have greatly improved the performance of voice activity detection (VAD), particularly in noisy environments [380, 462]. To further improve VAD accuracy, researchers have explored various deep learning architectures, including NAS-VAD [462] and self-attentive VAD [223]. NAS-VAD employs neural architecture search to reduce the need for human effort in network design and has demonstrated superior performance in terms of AUC and F1-score compared to other models. Similarly, self-attentive VAD uses a self-attention mechanism to capture long-term dependencies in input signals and has also outperformed other models on the TIMIT dataset. Additionally, a deep neural network (DNN) system has been proposed for automatic speech detection in audio signals [380]. This system uses MLPs, RNNs, and CNNs, with CNNs delivering the best performance. Furthermore, a hybrid acoustic-lexical deep learning approach has been proposed for deception detection, combining both acoustic and lexical features. ### Speech Quality Assessment #### 5.9.1. Task Description Speech quality assessment is a crucial process that involves the objective evaluation of speech signals using various metrics and measures. 
The primary aim of this assessment is to determine the level of intelligibility and comprehensibility of speech to a human listener. Although human evaluation is considered the gold standard for assessing speech quality, it can be time-consuming, expensive, and not scalable. Mean opinion score (MOS) is the most commonly used and reliable method of obtaining human judgments for speech quality estimation. Accurate speech quality assessment is essential in the development and design of real-world applications such as ASR, Speech Enhancement, and VoIP. #### 5.9.2. Datasets The speech quality assessment algorithms are evaluated using several datasets, each with unique characteristics. The TIMIT Acoustic-Phonetic Continuous Speech Corpus [153] has clean speech recordings and artificially generated degraded versions for speech synthesis and quality assessment research. The NOIZEUS dataset [203] is designed for evaluating noise reduction and speech quality assessment algorithms, with clean speech and artificially degraded versions containing various types of noise and distortion. The ETSI Aurora databases [361] are used for evaluating speech enhancement techniques and quality assessment algorithms, containing speech recordings with different types of distortions like acoustic echo and background noise. Furthermore, for training and validation, the clean speech recordings from the DNS Challenge [457] can be used along with the noise dataset such as FSDK50 [138] for additive noise degradation. #### 5.9.3. Models Current objective methods such as Perceptual Evaluation of Speech Quality (PESQ) [466] and Perceptual Objective Listening Quality Assessment (POLQA) [36] for evaluating the quality of speech mostly rely on the availability of the corresponding clean reference. These methods fail in real-world scenarios where the ground truth clean reference is unavailable. In recent years, several attempts to automatically estimate the MOS using neural networks for performing quality assessment and predicting ratings or scores have attracted much attention [55, 57, 118, 119, 404, 514]. These approaches outperform traditional approaches without the need for a clean reference. However, they lack robustness and generalization capabilities, limiting their use in real-world applications. The authors in [404] explore Deep machine listening for Estimating Speech Quality (DESQ) for predicting the perceived speech quality based on phoneme posterior probabilities obtained using a deep neural network. In recent years, there have been several quality assessment frameworks developed to estimate speech quality, such as NORESQA [369] based on non-matching reference (NMR). NORESQA takes inspiration from the human ability to assess speech quality even when the content is non-matching. Additionally, NORESQA introduces two new metrics - NORESQA-score, which is based on SI-SDR for speech, and NORESQA-MOS, which evaluates the Mean Opinion Score (MOS) of a speech recording using non-matching references. A recent extension to NORESQA, known as NORESQA-MOS, has been proposed in [368]. The primary difference between these frameworks is that while NORESQA estimates speech quality using non-matching references through NORESQA-score and NORESQA-MOS, NORESQA-MOS is specifically designed to assess the MOS of a given speech recording using NMRs. ### Speech Separation #### 5.10.1. Task Description Speech separation refers to separating a mixed audio signal into its sources, including speech, music, and background noise. 
The problem is often referred to as the cocktail party problem [175], as it mimics the difficulty of listening to a conversation in a noisy room with multiple speakers. This problem is particularly relevant in real-world scenarios such as phone conversations, meetings, and live events, where various extraneous sounds may contaminate speech. Traditionally, speech separation has been studied as a signal-processing problem, where researchers have focused on developing algorithms to separate sources based on their spectral characteristics [557, 635]. However, recent advances in machine learning have led to a new approach that formulates speech separation as a supervised learning problem [181, 352, 587]. This approach has seen a significant improvement in performance with the advent of deep neural networks, which can learn complex relationships between input features and output sources.

#### 5.10.2 Datasets

The WSJ0-2mix dataset comprises mixtures of two Wall Street Journal corpus (WSJ) speakers. It consists of a training set of 30,000 mixtures and a test set of 5,000 mixtures, and it has been widely used to evaluate speech separation algorithms. CHiME-4 is a dataset that contains recordings of multiple speakers in real-world environments, such as a living room, a kitchen, and a cafe, and is designed to test algorithms in challenging acoustic environments. TIMIT-2mix is a dataset based on the TIMIT corpus, consisting of mixtures of two speakers, and includes a training set of 462 mixtures and a test set of 400 mixtures. The dataset provides a more controlled environment than CHiME-4 to test speech separation algorithms. LibriMix is derived from the LibriSpeech corpus and includes mixtures of up to four speakers, with a training set of 100,000 mixtures and a test set of 1,000 mixtures, providing a more realistic and challenging environment than WSJ0-2mix. Lastly, the MUSDB18 dataset contains mixtures of music tracks separated into individual stems, including vocals, drums, bass, and other instruments. It consists of a training set of 100 songs and a test set of 50 songs. Despite not being specifically designed for that purpose, it has been used as a benchmark for evaluating speech separation algorithms.

#### 5.10.3 Models

Deep Clustering++ [181], first proposed in 2015, employs deep neural networks to extract features from the input signal and cluster similar feature vectors in a latent space to separate different speakers. The model's performance is improved using spectral masking and a permutation invariant training method. The advantage of this model is its ability to handle multiple speakers, but it also has a high computational cost. Chimera++ [587] is another effective model that combines deep clustering with mask-inference networks in a multi-objective training scheme, jointly optimizing an embedding (deep clustering) objective and a signal-reconstruction (mask inference) objective. This multitask formulation yields accurate separation, but the model has a relatively long training time. TasNet v2 [352] operates directly on the time-domain waveform: a learned encoder-decoder representation replaces the spectrogram, and a separation network estimates a mask for each source in that learned basis. The model is trained with permutation invariant training (PIT) [265], which enables it to separate multiple sources accurately. TasNet v2 achieves state-of-the-art performance in various speech separation tasks with high separation accuracy, but its disadvantage is its relatively high computational cost.
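Permutation invariant training, which most of the systems in this subsection rely on, can be summarized compactly. The sketch below is a minimal utterance-level PIT loss for a small number of sources, using MSE as the per-source criterion purely for illustration (SI-SNR is the more common choice in the cited works); the tensor shapes and criterion are assumptions, not the exact objective of [265].

```python
import itertools
import torch

def pit_mse_loss(est: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    """Utterance-level permutation invariant training loss.
    est, ref: (batch, n_sources, time); returns the batch-averaged best-permutation loss."""
    B, S, _ = est.shape
    # pairwise losses: pw[b, i, j] = MSE between estimate i and reference j
    pw = ((est.unsqueeze(2) - ref.unsqueeze(1)) ** 2).mean(dim=-1)           # (B, S, S)
    rows = list(range(S))
    # evaluate every speaker assignment and keep the best one per utterance
    perm_losses = torch.stack(
        [pw[:, rows, list(p)].mean(dim=-1) for p in itertools.permutations(rows)],
        dim=1)                                                                # (B, S!)
    return perm_losses.min(dim=1).values.mean()
```

Because the number of permutations grows factorially, exact PIT is typically used for two or three sources; assignment-based variants such as Hungarian PIT in Table 11 scale the same idea to larger speaker counts.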
The variant of TasNet based on CNNs is proposed in [353]. The model is called Conv-TasNet and can generate a time-frequency mask for each source to obtain the separated source's signal. Compared to previous models, Conv-TasNet has faster processing time but lower accuracy. In recent research, encoder-decoder architectures have been explored for effectively separating source signals. One promising approach is the Hybrid Tasnet architecture [613], which utilizes an encoder to extract features from the input signal and a decoder to generate the independent sources. This hybrid architecture captures both short-term and long-term dependencies in the input signal, leading to improved separation performance. However, it should be noted that this model's higher computational cost should be considered when selecting an appropriate separation method. Dual-path RNN [351] uses RNN architecture to perform speech separation. The model uses a dual-path structure [351] to capture low-frequency and high-frequency information in the input signal. Dual-path RNN achieves impressive performance in various speech separation tasks. The advantage of this model is its ability to capture low-frequency and high-frequency information, but its disadvantage is its high computational cost. Gated DualPathRNN [387] is a variant of Dual-path RNN that employs gated recurrent units (GRUs) to improve the model's performance. The model uses a gating mechanism to control the flow of information in the recurrent network, allowing it to capture long-term dependencies in the input signal. Gated DualPathRNN achieves state-of-the-art performance in various speech separation tasks. The advantage of this model is its ability to capture long-term dependencies, but its disadvantage is its higher computational cost than other models. Wavesplit [633] employs a Wave-U-Net [517] architecture to perform speech separation. The model uses a fully convolutional neural network to extract features from the input signal and generate a time-frequency mask for each source. Wavesplit achieves impressive performance in various speech separation tasks. The advantage of this model is its high separation accuracy and relatively fast processing time, but its disadvantage is its relatively high memory usage. Numerous studies have investigated the application of Transformer architecture in the context of speech separation. One such study is SepFormer [518], which has yielded encouraging outcomes on the WSJ0-2mix and WSJ0-3mix datasets, as evidenced by the data presented in Table 11. Additionally, MossFormer [663] is another cutting-edge architecture that has successfully pushed the boundaries of monaural speech separation across multiple speech separation benchmarks. It is worth noting that although both models employ attention mechanisms, MossFormer integrates a blend of convolutional modules to further amplify its performance. Diffusion models have been proven to be highly effective in various machine learning tasks related to computer vision, as well as speech-processing tasks. The recent development of DiffSep [482] for speech separation, which is based on score-matching of a stochastic differential equation, has shown competitive performance on the VoiceBank-DEMAND dataset. Additionally, Separate And Diffuse [357], another diffusion-based model that utilizes a pretrained diffusion model, currently represents the state-of-the-art performance in various speech separation benchmarks (refer to Table 11). 
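Most comparisons in Table 11 are reported as SI-SDR improvement (SI-SDRi), i.e., the gain in scale-invariant signal-to-distortion ratio of the separated signal over the unprocessed mixture. A short reference implementation of the metric, following its usual definition (the zero-mean normalization is a common convention rather than something mandated by the cited papers), is given below.

```python
import numpy as np

def si_sdr(estimate: np.ndarray, reference: np.ndarray, eps: float = 1e-8) -> float:
    """Scale-invariant signal-to-distortion ratio in dB."""
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # optimal scaling of the reference that best explains the estimate
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    distortion = estimate - target
    return 10.0 * np.log10((np.sum(target ** 2) + eps) / (np.sum(distortion ** 2) + eps))

def si_sdr_improvement(estimate, mixture, reference):
    """SI-SDRi: metric of the estimate minus the metric of the raw mixture."""
    return si_sdr(estimate, reference) - si_sdr(mixture, reference)
```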
These advancements demonstrate the significant potential of diffusion models in advancing the field of machine learning and speech processing. ### Spoken Language Understanding #### 5.11.1 Task Description Spoken Language Understanding (SLU) is a rapidly developing field that brings together speech processing and natural language processing to help machines comprehend human speech and respond appropriately. The ultimate goal of SLU is to bridge the gap between human and machine \begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline Model & Architecture & WSJ0-2mix & WSJ0-3mix & WSJ0-5mix & Libri2Mix & Libri5Mix & Libri10Mix & Libri2Mix & WHAM \\ \hline Separate And Diffuse [357] & Diffusion & 23.9 & 20.9 & - & 21.5 & 14.2 & 9 & 5.2 & - \\ MossFormer (L) [663] & Transformer & 22.8 & 21.2 & - & - & - & - & - & - \\ MossFormer (M) [663] & Transformer & 22.5 & 20.8 & - & - & - & - & - & 17.3 \\ SepFormer [518] & Transformer & 22.3 & 19.5 & - & - & - & - & - & - \\ Sandglasset [283] & Transformer + LSTM & 21.0 & 19.5 & - & - & - & - & - & - \\ Hungarian PIT [120] & RNN & - & - & 13.22 & - & 12.72 & 7.78 & 4.26 & - \\ TDANet (L) [308] & Transformer + CNN & - & - & - & 17.4 & - & - & - & 15.2 \\ TDANet [308] & Transformer + CNN & - & - & - & 16.9 & - & - & - & 14.8 \\ Sepit [356] & CNN & 22.4 & 20.1 & - & - & 13.7 & 8.2 & - & - \\ Gated DualPathRNN [387] & CNN + LSTM & 20.12 & 16.85 & 10.56 & - & - & - & - & - \\ Dual-path RNN [351] & LSTM & 18.8 & - & - & - & - & - & - & - \\ Conv-Tasnet [353] & CNN & 15.3 & - & - & - & - & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 11. Table comparing the performance of different speech separation methods using SI-SDRi metrics on various speech separation benchmarks. understanding. Typically, SLU tasks involve identifying the domain or topic of a spoken utterance, determining the speaker's intent or goal in making the utterance, and filling in any relevant slots or variables associated with that intent. For example, consider the spoken utterance, "_What is the weather like in San Francisco today?_" An SLU system would need to identify the domain (weather), the intent (obtaining current weather information), and the specific slot to be filled (location-San Francisco) to generate an appropriate response. By improving SLU capabilities, we can enable more effective communication between humans and machines, making interactions more natural and efficient. Data-driven methods are frequently utilized to achieve these tasks, employing large datasets to train models capable of accurately recognizing and interpreting spoken language. Among these methods, machine learning techniques, such as deep neural networks, are widely employed, given their exceptional ability to handle complex and ambiguous speech data. The SLU task may be subdivided into the following categories for greater clarity. * _Keyword Spotting_: Keyword Spotting (KS) is a technique used in speech processing to identify specific words or phrases within spoken language. It involves analysing audio recordings and detecting instances of pre-defined keywords or phrases. This technique is commonly used in applications such as voice assistants, where the system needs to recognize specific commands or questions from the user. * _Intent Classification_: Intent Classification (IC) is a spoken language understanding task that involves identifying the intent behind a spoken sentence. 
It is usually implemented as a pipeline process, with a speech recognition module followed by text processing that classifies the intents. However, end-to-end intent classification using speech has numerous advantages compared to the conventional pipeline approach using AST followed by NLP modules. * _Slot Filling_: Slot Filling (SF) is a widely used technique in Speech Language Understanding (SLU) that enables the extraction of important information, such as names, dates, and locations, from a user's speech. The process involves identifying the specific pieces of information that are relevant to the user's request and placing them into pre-defined slots. For instance, if a user asks for the weather in a particular city, the system will identify the city name and fill it into the appropriate slot, thereby providing an accurate and relevant response. #### 5.11.2 Dataset * Keyword Spotting Datasets: * _Coucke et al._[100]: This dataset is a speech command recognition dataset that consists of 105,000 spoken commands in English, with each command being one of 35 keywords. The dataset is designed to be highly varied and challenging, with a diverse set of speakers and background noise conditions. * _Leroy et al._[300]: This dataset is a federated learning-based keyword spotting dataset, it is composed of data from multiple sources that are trained together without sharing the raw data. The dataset consists of audio recordings from multiple devices and environments, with the goal of improving the robustness of KS across different devices and settings * _Auto-KWS_[570]: This dataset is automatically generated using TTS approach. The dataset consists of 1000 keywords spoken by 100 different synthetic voices, with variations in accent, gender, and age. * _Speech Commands_[589]: This data is a large-scale dataset for KS task that consists of over \(100,000\) spoken commands in English, with each command belonging to 35 different keywords. The dataset is specifically designed to be highly varied and challenging, with a diverse set of speakers and background noises. It is commonly used as a benchmark dataset for KS research. * Intent Classification and Slot Filling * _ATIS_[179]: The Airline Travel Information System (ATIS) dataset is a collection of spoken queries and responses related to airline travel, such as flight reservations, flight status, and airport information. The dataset is annotated with both intent labels (e.g. "flight booking", "flight status inquiry") and slot labels (e.g. depart city, arrival city, date). The ATIS dataset has been used extensively as a benchmark for natural language understanding models. * _SNIPS_[101]: SNIPS is a dataset of voice commands designed for building a natural language understanding system. It consists of thousands of examples of spoken requests, each annotated with the intent of the request (e.g. "play music", "set an alarm", etc.). The dataset is widely used for training IC and SF models. * _Fluent Speech Commands_[350]: It is a dataset of voice commands for controlling smart home devices, such as lights, thermostats, and locks. The dataset consists of over 1,5000 spoken commands, each labeled with the intended devices and action (e.g. "turn on the living room lights", "set the thermostat to 72 degrees"). The dataset is designed to have variations in speaker accent, background noise, and device placement. 
* _MIT-Restaurant and MIT-Movie_ [335]: These are two datasets created by researchers at MIT for training natural language understanding models from restaurant and movie information requests. The datasets contain spoken and text-based queries, each labeled with the intent of the request (e.g. "find a nearby Italian restaurant", "get information about the movie Inception") and relevant slot information (e.g. restaurant type, movie name, etc.). The datasets are widely used for benchmarking natural language understanding models.

#### 5.11.3 Models

* _Keyword Spotting:_ The state-of-the-art techniques for keyword spotting in speech involve deep learning models, such as CNNs [467] and transformers [37]. Wav2Keyword is one of the popular models based on the Wav2Vec2.0 architecture [486] and has achieved SOTA results on Speech Commands datasets V1 and V2. Another model that achieves SOTA classification accuracy on the Google Speech Commands dataset is the Keyword Transformer (KWT) [486]. KWT uses a transformer model and achieves 98.6% and 97.7% accuracy on the 12- and 35-word tasks, respectively. KWT also has low latency and can be used on mobile devices.
* The DIET architecture, as introduced in [48], is a transformer-based multitask model that addresses intent classification and entity recognition simultaneously. DIET allows for the seamless integration of various pre-trained embeddings such as BERT, GloVe, and ConveRT. Results from experiments show that DIET outperforms fine-tuned BERT and has the added benefit of being six times faster to train.
* Chang et al. [59] investigated the effectiveness of prompt tuning on the GSLM architecture and showcased its competitiveness on various SLU tasks, such as KS, IC, and SF. Impressively, this approach achieves comparable results with fewer trainable parameters than full fine-tuning. Despite being a popular and effective technique in numerous NLP tasks, prompt tuning has not received much attention in the speech community. Additionally, other researchers have pursued a different path by utilizing pre-trained wav2vec2.0 and different adapters [315] to attain state-of-the-art outcomes.

Despite the remarkable progress made in the field of SLU, accurately comprehending human speech in real-life situations continues to pose significant challenges. These challenges are amplified by the presence of diverse accents, dialects, and linguistic variations. In a notable study, Vanzo et al. (2017) emphasize the significance of SLU in facilitating effective human-robot interaction, particularly within the context of house service robots. The authors delve into the specific obstacles encountered in this domain, which encompass handling noisy and unstructured speech, accommodating various accents and speech variations, and deciphering complex commands involving multiple actions. To overcome these obstacles, ongoing research endeavors are dedicated to developing innovative solutions that enhance the precision and efficacy of SLU systems. By addressing these challenges, the aim is to enable more robust and accurate speech comprehension in diverse real-life scenarios. Recent studies, including the comprehensive analysis of the performance of different models and techniques for Keyword Spotting (KS) and Slot Filling (SF) tasks on the Google Speech Commands and ATIS benchmark datasets (Table 12), have furnished valuable insights into the strengths and limitations of such approaches in SLU.
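To make the dominant recipe concrete, the following is a minimal sketch of a keyword-spotting (or intent-classification) head on top of a pre-trained wav2vec 2.0 encoder, in the spirit of the systems above; the checkpoint name, mean pooling, and frozen-encoder strategy are illustrative choices rather than the exact configuration of Wav2Keyword or KWT.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class SpeechCommandClassifier(nn.Module):
    """Pre-trained wav2vec 2.0 encoder + mean pooling + linear classifier."""
    def __init__(self, n_classes: int = 35, checkpoint: str = "facebook/wav2vec2-base"):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(checkpoint)
        self.head = nn.Linear(self.encoder.config.hidden_size, n_classes)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) at 16 kHz
        hidden = self.encoder(waveform).last_hidden_state   # (batch, frames, dim)
        return self.head(hidden.mean(dim=1))                # class logits

model = SpeechCommandClassifier()
for p in model.encoder.parameters():      # optionally freeze the encoder when labeled data is scarce
    p.requires_grad = False
```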
Capitalizing on these findings and leveraging the latest advances in deep learning and speech recognition could help us continue to expand the frontiers of spoken language understanding and drive further innovation in this domain. ### Audio/visual multimodal speech processing The process of speech perception in humans is intricate and involves multiple sensory modalities, including auditory and visual cues. The generation of speech sounds involves articulators such as the tongue, lips, and teeth, whose movements are critical for producing different speech sounds and visible to others. The importance of visual cues becomes more pronounced for individuals with hearing impairments who depend on lip-reading to comprehend spoken language, while individuals with normal hearing can also benefit from visual cues in noisy environments. When investigating language comprehension and communication, it is essential to consider both auditory and visual information, as studies have demonstrated that visual information can assist in distinguishing between acoustically similar sounds that differ in articulatory characteristics. A comprehensive understanding of the interaction between these sensory modalities can lead to the development of assistive technologies for individuals with hearing impairments and enhance communication strategies in challenging listening environments. #### 5.12.1. Task Description The tasks under audiovisual multimodal processing can be subdivided into the following categories. * _Lip-reading_: Lip-reading is a remarkable ability that allows us to comprehend spoken language from silent videos. However, it is a challenging task even for humans. Recent \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Keyword Spotting on Google Speech Commands (Accuracy \(\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{ \boldsymbol{ }}}}}}}}}}}}}\) } } \\ \hline Model & Reference & Google Speech Commands V1 12 & Google Speech Commands V2 12 & Google Speech Commands V2 35 & Model & Reference & ATIS \\ \hline TripletLoss-v015 & (Sudig et al., 2017) & 98.56 & 98.37 & 97.8 & CTRAN & (Sudig et al., 2017) & 0.9560 \\ WireXWS & (Sudig et al., 2017) & 97.9 & 98.5 & 97.8 & Bi-modal with a decoder & (Sudig et al., 2017) & 0.9699 \\ KWT-3 & (Sudig et al., 2017) & 97.47 \(\pm\)0.15 & 98.54 \(\pm\)0.07 & 97.49 \(\pm\)0.09 & Joint BERT & (Sudig et al., 2017) & 0.961 \\ KWT-1 & (Sudig et al., 2017) & 97.27 \(\pm\)0.08 & 98.48 \(\pm\)0.08 & 97.27 \(\pm\)0.08 & Joint BERT + CRF & (Sudig et al., 2017) & 0.96 \\ KWT-2 & (Sudig et al., 2017) & 97.26 \(\pm\)0.18 & 98.68 \(\pm\)0.10 & 96.95 \(\pm\)0.14 & SF-ID & (Sudig et al., 2017) & 0.958 \\ Attention RNN & (Sudig et al., 2017) & 95.6 & 96.9 & 93.9 & Capsule-NLU & (Sudig et al., 2017) & 0.952 \\ \hline \hline \end{tabular} \end{table} Table 12. Comprehensive performance analysis of various models for Keyword Spotting (KS) and Slot Filling (SF) tasks, evaluated on two benchmark datasets: Google Speech Commands for KS and ATIS for SF. advancements in deep learning technology have enabled the development of neural network-based lip-reading models to accomplish this task with high accuracy. These models take silent facial videos as input and produce the corresponding speech audio or characters as output. 
The potential applications of automatic lip-reading models are vast and diverse, including enabling videoconferencing in noisy environments, using surveillance videos as long-range listening devices, and facilitating conversations in noisy social settings. Developing these models could significantly improve our daily lives.
* _Audiovisual speech separation_: Recent years have witnessed a growing interest in audiovisual speech separation, driven by the remarkable human capacity to selectively focus on a specific sound source amidst background noise, commonly known as the "cocktail party effect." This phenomenon poses a significant challenge in computer speech recognition, prompting the development of automatic speech separation techniques aimed at isolating individual speech sources from complex audio signals. In a noteworthy study, Ephrat et al. (2018) proposed that audiovisual speech separation surpasses audio-only approaches by leveraging visual cues from a speaker's face to resolve ambiguity in speech signals. By integrating visual information, the model's ability to disentangle overlapping speech signals is enhanced. The implications of automatic speech separation extend across diverse applications, including assistive technologies for individuals with hearing impairments and head-mounted devices designed to facilitate effective communication in noisy meeting scenarios.
* _Talking face generation_: Generating a realistic talking face of a target character, synchronized with a given speech and ensuring smooth transitions between facial images, is the objective of talking face generation. This task has garnered substantial interest and poses a significant challenge due to the dynamic nature of facial movements, which depend on both visual information (input face image) and acoustic information (input speech audio) to achieve accurate lip-speech synchronization. Despite its challenges, talking face generation holds immense potential for various applications, including teleconferencing, creating virtual characters with specific facial expressions, and enhancing speech comprehension. In recent years, significant advancements have been made in the field of talking face generation, as evidenced by notable studies [65, 133, 134, 513, 671].

#### 5.12.2. Datasets

Several datasets are widely used for audiovisual multimodal research, including VoxCeleb, TCD-TIMIT [173], etc. We briefly discuss some of them in the following section.
* _TCD-TIMIT [173]_: This is an extensive and diverse audiovisual dataset that encompasses both audio and video recordings of 600 distinct sentences spoken by 60 participants. The dataset features a wide range of speakers with different genders, accents, and backgrounds, making it highly suitable for talker-independent speech recognition research. The audio recordings are of exceptional quality, captured using high-fidelity microphones with a sampling rate of 48kHz. Meanwhile, the video footage is of 720p resolution and includes depth information for every frame.
* _LipReading in the Wild (LRW) [93]:_ The LRW is a comprehensive audiovisual dataset that encompasses 500 distinct words spoken by more than 1000 speakers. This dataset has been segmented into distinct training, evaluation, and test sets to facilitate efficient research. Additionally, the LRW-1000 dataset [617] is a related large-scale benchmark featuring a 1000-word vocabulary.
Researchers can benefit from pre-trained weights included with this dataset, simplifying the evaluation process. Overall, these datasets are highly regarded in the scientific community for their size and versatility in supporting research related to speech recognition and natural language processing * _LRS2 and LRS3_10: The LRS2 and LRS3 datasets are additional examples of audiovisual speech recognition datasets that have been gathered from videos captured in real-world settings. Each of these datasets has its own distinct train/test split and includes cropped face tracks as well as corresponding audio clips sourced from British television. Both datasets are considered to be of significant value to researchers in the field of speech recognition, particularly those focused on audiovisual analysis. Footnote 10: [https://www.robots.ox.ac.uk/vgg/data/lip_reading/lrs2.html](https://www.robots.ox.ac.uk/vgg/data/lip_reading/lrs2.html) * _GRID [97]:_ This dataset comprises high-fidelity audio and video recordings of more than 1000 sentences spoken by 34 distinct speakers, including 18 males and 16 females. The sentences were gathered using the prompt "put red at G9 now" and are widely employed in research related to audio-visual speech separation and talking face synthesis. The dataset is considered to be of exceptional quality and is highly sought after in the scientific community. #### 5.12.3 Models In recent years, there has been a remarkable surge in the development of algorithms tailored for multimodal tasks. Specifically, significant attention has been devoted to the advancement of neural networks for Text-to-Speech (TTS) applications [251; 458; 459; 460]. The integration of visual and auditory modalities through multimodal processing has played a pivotal role in enhancing various tasks relevant to our daily lives. Lip-reading, for instance, has witnessed notable progress in recent years, whether accompanied by audio or not. Son et al. have made a significant contribution to this field with their hybrid model [511]. Combining convolutional neural networks (CNN), long short-term memory (LSTM) networks, and an attention mechanism, their model captures correlations between lip videos and audio, enabling accurate character generation. Additionally, the authors introduce a new dataset called LRS, which facilitates the development of lip-reading models. Another noteworthy model, LiRA [359], focuses on self-supervised learning for lip-reading. It leverages lip image sequences and audio waveforms to derive high-level representations during the pre-training stage, achieving word-level and sentence-level lip-reading capabilities. In the realm of capturing human emotions expressed through acoustic signals, Ephrat et al. [129] propose an innovative model that frames the task as an acoustic regression problem instead of a visual-to-text modeling approach. Their work emphasizes the advantages of this perspective. Furthermore, Vid2Speech [131], a CNN-based model, takes facial image sequences as input and generates corresponding speech audio waveforms. It employs a two-tower CNN model that processes facial grayscale images while calculating optical flow between frames. Additionally, other models such as those based on mutual information maximization [667] and spatiotemporal fusion [653] have been proposed for the lip-reading task, further expanding the methodologies explored in this domain. 
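Abstracting away from the specific systems above, most visual-only lip-reading pipelines share the same skeleton: a spatiotemporal convolutional front-end over cropped lip regions feeding a sequence model that emits per-frame character (or word) probabilities, often trained with a CTC or attention-based loss. The sketch below is a deliberately small illustration of that skeleton; the layer sizes, pooling, and character inventory are arbitrary assumptions, not the configuration of any cited model.

```python
import torch
import torch.nn as nn

class LipReader(nn.Module):
    """3D-CNN front-end over lip crops + bidirectional GRU emitting per-frame character logits."""
    def __init__(self, n_chars: int = 40, hidden: int = 256):
        super().__init__()
        self.frontend = nn.Sequential(                          # input: (B, 1, T, H, W) grayscale crops
            nn.Conv3d(1, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 4, 4)),                 # keep the time axis, pool space to 4x4
        )
        self.rnn = nn.GRU(32 * 4 * 4, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_chars)        # per-frame logits, e.g. for a CTC loss

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        feats = self.frontend(video)                            # (B, 32, T, 4, 4)
        b, c, t, h, w = feats.shape
        feats = feats.permute(0, 2, 1, 3, 4).reshape(b, t, c * h * w)
        out, _ = self.rnn(feats)
        return self.classifier(out)                             # (B, T, n_chars)
```

Audio-visual variants of this skeleton simply concatenate (or attend over) an acoustic feature stream alongside the visual features before the sequence model, which is also the basic fusion pattern behind the separation systems discussed next.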
In an early attempt to develop algorithms for audiovisual speech separation, the authors of [130] proposed a CNN-based architecture that encodes facial images and speech spectrograms to compute a complex mask for speech separation. Additionally, they introduced the AVspeech dataset in this work. AV-CVAE [393] utilizes a conditional VAE to detect the lip movements of the speaker and predict separated speech. In a deviation from speech signals, [385] focuses on audiovisual singing separation and employs a two-stream CNN architecture, Y-Net [374], to process audio and video separately. This work introduces a large dataset of solo singing videos for audiovisual singing separation. The VisualSpeech [151] architecture takes a face image sequence and mixed audio of lip movement as input and predicts a complex mask. It also proposes a cross-modal embedding space to facilitate the correlation of audio and visual modalities. Finally, FaceFilter [94] uses still images as visual information, and other methods for the audiovisual speech separation task are proposed in [10; 146; 379]. The rise of Deepfake videos on the internet has led to a surge in demand for creating realistic talking faces for various applications, such as video production, marketing, and entertainment. Previously, the conventional approach involved manipulating 3D meshes to create specific faces, which was time-consuming and limited to certain identities. However, recent advancements in deep generative models have made significant progress. For example, DAVS [671] introduced an end-to-end trainable deep neural network capable of learning a joint audiovisual representation, which uses adversarial training to disentangle the latent space. Another architecture proposed by ATVGnet [65] consists of an audio transformation network (AT-net) and a visual generation network (VG-net) for processing acoustic and visual information, respectively. This method introduced a regression-based discriminator, a dynamically adjustable pixel-wise loss, and an attention mechanism. In [674], a novel framework for talking face generation was presented, which discovers audiovisual coherence through an asymmetrical mutual information estimator. Furthermore, the authors in [133] proposed an end-to-end approach based on generative adversarial networks that use noisy speech for talking face generation. In addition, alternative methods based on conditional recurrent adversarial networks and speech-driven talking face generation were introduced in [134; 513]. ## 6. Advanced Transfer Learning Techniques for Speech Processing ### Domain Adaptation #### 6.1.1. Task Description Domain adaptation is a field that deals with adapting a model trained on a labeled dataset from a source domain to a target domain, where the source domain differs from the target domain. The goal of domain adaptation is to reduce the performance gap between the source and target domains by minimizing the difference between their distributions. In speech processing, domain adaptation has various applications such as speech recognition [44; 87; 200; 292; 395], speaker verification [76; 184; 578; 600; 645], and speech synthesis [602; 631]. This section explores the use of domain adaptation in these tasks by reviewing recent literature on the subject. Specifically, we discuss the techniques used in domain adaptation, their effectiveness, and the challenges that arise when applying them to speech processing. #### 6.1.2. 
Models Various techniques have been proposed to adapt a deep learning model for speech processing tasks. An example of a technique is reconstruction-based domain adaptation, which leverages an additional reconstruction task to generate a communal representation for all the domains. The Deep Reconstruction Classification Network (DRCN) [154] is an illustration of such an approach, as it endeavors to address both tasks concurrently: (i) classification of the source data and (ii) reconstruction of the input data. Another technique used in domain adaptation is the domain-adversarial neural network architecture, which aims to learn domain-invariant features using a gradient reversal layer [51; 574; 654]. Different domain adaptation techniques are successfully applied to different speech processing tasks, such as speaker recognition [44; 200; 313; 395] and verification [75; 76; 306; 645; 673], where the goal is to verify the identity of a speaker using their voice. One approach for domain adaptation in speaker verification is to use adversarial domain training to learn speaker-independent features insensitive to variations in the recording environment [75]. Domain adaptation has also been applied to speech recognition [213; 367; 519; 631] to improve speech recognition accuracy in a target domain. One recent approach for domain adaptation in ASR is prompt-tuning [112], which involves fine-tuning the ASR system on a small amount of data from the new domain. Another approach is to use adapter modules for transducer-based speech recognition systems [364; 479], which can balance the recognition accuracy of general speech and improve recognition on adaptation domains. The Machine Speech Chain integrates both end-to-end (E2E) ASR and neural text-to-speech (TTS) into one circle [631]. This integration can be used for domain adaptation by fine-tuning the E2E ASR on a small amount of data from the new domain and then using the TTS to generate synthetic speech in the new domain for further training. In addition to domain adaptation techniques used in speech recognition, there has been growing interest in adapting text-to-speech (TTS) models to specific speakers or domains. This research direction is critical, especially in low-resource settings where collecting sufficient training data can be challenging. Several recent works have proposed different approaches for speaker and domain adaptation in TTS, such as AdaSpeech [66; 599; 609]. ### Meta Learning #### 6.2.1. Task Description Meta-learning is a branch of machine learning that focuses on improving the learning algorithms used for tasks such as parameter initialization, optimization strategies, network architecture, and distance metrics. This approach has been demonstrated to facilitate faster fine-tuning, better performance convergence, and the ability to train models from scratch, which is especially advantageous for speech-processing tasks. Meta-learning techniques have been employed in various speech-processing tasks, such as low-resource ASR [192; 215], SV [644], TTS [208] and domain generalization for speaker recognition [242]. Meta-learning has the potential to improve speech processing tasks by learning better learning algorithms that can adapt to new tasks and data more efficiently. Meta-learning can also reduce the cost of model training and fine-tuning, which is particularly useful for low-resource speech processing tasks. 
Further investigation is required to delve into the full potential of meta-learning in speech processing and to develop more effective meta-learning algorithms for different speech-processing tasks.

#### 6.2.2. Models

In low-resource ASR, meta-learning is used to quickly adapt to unseen target languages by formulating ASR for different languages as different tasks and meta-learning the initialization parameters from many pretraining languages [192; 501]. The proposed approach, MetaASR [192], significantly outperforms the state-of-the-art multitask pretraining approach on all target languages with different combinations of pretraining languages. In speaker verification, meta-learning is used to improve SV training by introducing two methods that strengthen the backbone embedding network [73]. The proposed methods obtain consistent improvements over the existing meta-learning training framework [279]. Meta-learning has proven to be a promising approach in various speech-related tasks, including low-resource ASR and speaker verification. In addition to these tasks, meta-learning has also been applied to few-shot speaker-adaptive TTS and language-agnostic TTS, demonstrating its potential to improve performance across different speech technologies. Meta-TTS [208] is an example of a meta-learning model used for few-shot speaker-adaptive TTS. It can synthesize high-speaker-similarity speech from a few enrolment samples with fewer adaptation steps. Similarly, a language-agnostic meta-learning approach is proposed in [358] for low-resource TTS.

### Parameter-Efficient Transfer Learning

Transfer learning has played a significant role in the recent progress of speech processing. Fine-tuning large pre-trained models, such as those trained on LibriSpeech or Common Voice, has been widely used for transfer learning in speech processing. However, fine-tuning all parameters for each downstream task can be computationally expensive. To overcome this challenge, researchers have been exploring parameter-efficient transfer learning techniques that optimize only a fraction of the model parameters, aiming to improve training efficiency. This section investigates these parameter-efficient transfer learning techniques in speech processing, evaluates their effectiveness in improving training efficiency without sacrificing performance, and discusses the challenges and opportunities associated with these techniques, highlighting their potential to advance the field of speech processing.

#### 6.3.1. Adapters

In recent years, retrofitting adapter modules with a few parameters to pre-trained models has emerged as an effective approach in speech processing. This involves optimizing the adapter modules while keeping the pre-trained parameters frozen for downstream tasks. Recent studies (Li et al., 2023; Liu et al., 2021; Li et al., 2021, 2020) have shown that adapters often outperform fine-tuning while using only a fraction of the total parameters. Different adapter architectures are available, such as bottleneck adapters (Houlsby et al., 2019), tiny attention adapters (Zhao et al., 2022), prefix-tuning adapters (Li and Liang, 2021), and LoRA adapters (Hu et al., 2022), among others. Next, we will review the different approaches for parameter-efficient transfer learning.
The different approaches are illustrated in Figure 17 and Figure 18.

Figure 17. Transformer architecture and Adapter, Prefix Tuning, and LoRA.

Figure 18. The architecture of the 1D convolution layer-based lightweight adapter. \(k\) is the kernel size of the 1D convolution. \(*\) denotes depth-wise convolution.

**Adapter Tuning.** Adapters are a type of neural module that can be retrofitted onto a pre-trained language model, with significantly fewer parameters than the original model. One such type is the bottleneck or standard adapter [189; 423]. The adapter takes an input vector \(h\in\mathbb{R}^{d}\) and down-projects it to a lower-dimensional space with dimensionality \(m\) (where \(m<d\)), applies a non-linear function \(g(\cdot)\), and then up-projects the result back to the original \(d\)-dimensional space. Finally, the output is obtained by adding a residual connection.

\[\mathbf{h}\leftarrow\mathbf{h}+g(\mathbf{h}\mathbf{W}_{\text{down}})\mathbf{W}_{\text{up}} \tag{30}\]

where \(\mathbf{W}_{\text{down}}\in\mathbb{R}^{d\times m}\) and \(\mathbf{W}_{\text{up}}\in\mathbb{R}^{m\times d}\) are the down- and up-projection matrices, respectively. Previous studies have empirically shown that a two-layer feedforward neural network with a bottleneck is effective. In this work, we follow the experimental settings outlined in [423] for the adapter, which is inserted after the feedforward layer of every transformer module, as depicted in Figure 17.

**Prefix tuning.** Recent studies have suggested modifying the attention module of the Transformer model to improve its performance in natural language processing tasks. This approach involves adding learnable vectors to the pre-trained multi-head attention keys and values at every layer, as depicted in Figure 17. Specifically, two sets of learnable prefix vectors, \(\mathbf{P}_{K}\) and \(\mathbf{P}_{V}\), are concatenated with the original key and value matrices \(\mathbf{K}\) and \(\mathbf{V}\), while the query matrix \(\mathbf{Q}\) remains unchanged. The resulting matrices are then used for multi-head attention, where each head of the attention mechanism is computed as follows:

\[\text{head}_{i}=\text{Attn}(\mathbf{Q}\mathbf{W}_{Q}^{(i)},[\mathbf{P}_{K}^{(i)},\mathbf{K}\mathbf{W}_{K}^{(i)}],[\mathbf{P}_{V}^{(i)},\mathbf{V}\mathbf{W}_{V}^{(i)}]) \tag{31}\]

where \(\text{Attn}(\cdot)\) is scaled dot-product attention given by:

\[\text{Attn}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{k}}}\right)\mathbf{V} \tag{32}\]

The attention heads in each layer are modified by prefix tuning, with only the prefix vectors \(\mathbf{P}_{K}\) and \(\mathbf{P}_{V}\) being updated during training. This approach provides greater control over the transmission of acoustic information between layers and effectively activates the pre-trained model's knowledge.

**LoRA.** LoRA is a novel approach proposed by Hu et al. [198], which aims to approximate weight updates in the Transformer by injecting trainable low-rank matrices into its layers.
In this method, a pre-trained weight matrix \(\mathbf{W}\in\mathbb{R}^{d\times k}\) is updated by a low-rank decomposition \(\mathbf{W}+\Delta\mathbf{W}=\mathbf{W}+\mathbf{W}_{\text{down}}\mathbf{W}_{\text{up}}\), where \(\mathbf{W}_{\text{down}}\in\mathbb{R}^{d\times r}\) and \(\mathbf{W}_{\text{up}}\in\mathbb{R}^{r\times k}\) are tunable parameters and \(r\) represents the rank of the decomposition matrices, with \(r<d\). Specifically, for a given input \(\mathbf{x}\) to the linear projection in the multi-headed attention layer, LoRA modifies the projection output \(\mathbf{h}\) as follows:

\[\mathbf{h}\leftarrow\mathbf{h}+s\cdot\mathbf{x}\mathbf{W}_{\text{down}}\mathbf{W}_{\text{up}} \tag{33}\]

In this work, LoRA is integrated into four locations of the multi-head attention layer, as illustrated in Figure 17. Thanks to its lightweight nature, the pre-trained model can accommodate many small modules for different tasks, allowing for efficient task switching by replacing the modules. Additionally, LoRA incurs no inference latency and achieves a convergence rate that is comparable to that of training the original model, unlike fully fine-tuned models [198].

**Convolutional Adapter.** CNNs have become increasingly popular in the field of speech processing due to their ability to learn task-specific information and combine channel-wise information within local receptive fields. To further improve the efficiency of CNNs for speech processing tasks, Li et al. [315] proposed a lightweight adapter, called the ConvAdapter, which uses three 1D convolutional layers, layer normalization, and a squeeze-and-excite module [201], as shown in Figure 18. By utilizing depth-wise convolution, which requires fewer parameters and is more computationally efficient, the authors were able to achieve better performance while using fewer resources. In this approach, the ConvAdapter is added to the same location as the Bottleneck Adapter (Figure 17).

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{IC} & \multicolumn{2}{c}{PR} & \multicolumn{2}{c}{SF} \\ \cline{2-9} & \multirow{2}{*}{\#Parameters} & FS & \multirow{2}{*}{\#Parameters} & LS & \multirow{2}{*}{\#Parameters} & \multicolumn{2}{c}{SNIPS} \\ \cline{3-4} \cline{6-7} & & ACC\% \(\uparrow\) & & PER \(\downarrow\) & & \multicolumn{2}{c}{F1 \% \(\uparrow\)} & \multicolumn{2}{c}{CER \(\downarrow\)} \\ \hline Fine-Tuning & 315707288 & 99.60 & 311304394 & 0.0577 & 311375119 & 93.89 & 0.1411 \\ Adapter & 25471256 (8.06\%) & 99.39 & 25278538 (8.01\%) & 0.1571 & 25349263 (8.14\%) & 92.60 & 0.1666 \\ Prefix Tuning & 1743128 (0.55\%) & 93.43 & 1550410 (0.49\%) & 0.1598 & 1621135 (0.50\%) & 62.32 & 0.6041 \\ LoRA & 3807512 (1.20\%) & 99.68 & 3614794 (1.16\%) & 0.1053 & 3685519 (1.18\%) & 90.61 & 0.2016 \\ ConvAdapter & 3672344 (1.16\%) & 95.60 & 3479626 (1.11\%) & 0.1532 & 3550351 (1.14\%) & 59.27 & 0.6405 \\ \hline \hline \end{tabular} \end{table} Table 14: Results on the SURE benchmark for full fine-tuning and other parameter-efficient training methods on pre-trained Wav2Vec 2.0 for the IC, PR, and SF tasks on **FS**: Fluent Speech [350], **LS**: LibriSpeech [410], and SNIPS datasets, respectively.
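For concreteness, the update in Eq. (33) can be implemented by wrapping an existing projection layer. The sketch below is a minimal PyTorch illustration in which the rank, scaling factor, and initialization (random down-projection, zero-initialized up-projection so training starts exactly at the pre-trained model) are conventional choices rather than prescriptions from the works above.

```python
import math
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pre-trained projection W plus a trainable low-rank update (cf. Eq. 33)."""
    def __init__(self, linear: nn.Linear, r: int = 8, scale: float = 1.0):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():            # the pre-trained weights stay frozen
            p.requires_grad = False
        d, k = linear.in_features, linear.out_features
        self.w_down = nn.Parameter(torch.randn(d, r) / math.sqrt(r))   # W_down
        self.w_up = nn.Parameter(torch.zeros(r, k))                    # W_up, zero init
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x) + self.scale * (x @ self.w_down @ self.w_up)

# wrapping, e.g., the query projection of one attention layer
q_proj = LoRALinear(nn.Linear(768, 768), r=8)
```

Only `w_down` and `w_up` receive gradients, so storing a task amounts to storing these two small matrices, which is what makes task switching cheap.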
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\#Parameters} & \multicolumn{2}{c}{ETS (acc \% / w-f1) \(\uparrow\)} & \multicolumn{2}{c}{SR (acc \%) \(\uparrow\)} & \multicolumn{2}{c}{ASR (wer) \(\downarrow\)} & \multicolumn{2}{c}{KS (acc \%) \(\uparrow\)} \\ \cline{3-4} \cline{6-7} & & ESD & MELD & ESD & VCTK & ESD & FLEURS & LS & Speech Command \\ \hline Fine Tuning & 315,703,947 & **96.53** & 42.93 & 99.00 & 92.36 & 0.2295 & **0.135** & **0.0903** & 99.08 \\ Adapter & 25,467,915 (8.08\%) & 94.07 & 41.58 & 98.87 & 96.32 & 0.2290 & 0.214 & 0.2425 & 99.19 \\ Prefix Tuning & 1,739,787 (**0.55\%**) & 90.00 & **44.21** & **99.73** & **98.49** & **0.2255** & 0.166 & 0.1022 & 98.86 \\ LoRA & 3,804,171 (1.20\%) & 90.00 & **47.05** & 99.00 & 97.61 & 0.2428 & 0.149 & 0.1014 & 98.28 \\ ConvAdapter & 2,952,539 (0.94\%) & 91.87 & 46.30 & 99.60 & 97.61 & 0.2456 & 0.2062 & 0.2958 & **98.99** \\ \hline \hline \end{tabular} \end{table} Table 13: Evaluation of full fine-tuning and various parameter-efficient training methods on pre-trained Wav2Vec 2.0 on the SURE benchmark. Percentages denote the fraction of trainable parameters; the reported parameter counts correspond to the KS task. On MELD, results are reported using weighted-F1 (w-f1) to account for class imbalance, with the best performance in bold and the second best underlined. See Li et al. (2023) [315].

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Parameters (\%)} & \multicolumn{2}{c}{LTS} & \multicolumn{2}{c}{L2ARTIC} \\ \cline{3-6} & & MCD \(\downarrow\) & WER \(\downarrow\) & MCD \(\downarrow\) & WER \(\downarrow\) \\ \hline Fine-tuning & 35802977 & 6.2038 & 0.2655 & 6.71469 & 0.2141 \\ Adapter & 659200 & 6.1634 & 0.3143 & 6.544 & 0.2504 \\ Prefix & 153600 & 6.2523 & 0.3334 & 7.4264 & 0.3244 \\ LoRA & 81920 & 6.8319 & 0.3786 & 7.0698 & 0.3291 \\ Convadapter & 108800 & 6.9202 & 0.3365 & 6.9712 & 0.3227 \\ \hline \hline \end{tabular} \end{table} Table 15: Results on the SURE benchmark for the TTS task. MCD and WER are the metrics used to compare fine-tuning and the other parameter-efficient approaches.

Table 13, Table 14, and Table 15 present the results of various speech processing tasks in the SURE benchmark. The findings demonstrate that the adapter-based methods perform comparably to full fine-tuning. However, no particular adapter type shows a significant advantage over the others across these benchmark tasks and datasets.

#### 6.3.2. Knowledge Distillation (KD)

Knowledge distillation involves training a smaller model to mimic the behavior of a larger and more complex model. This can be done by training the smaller model to predict the outputs of the larger model or to match the larger model's hidden representations. Knowledge distillation is effective in reducing the computational cost of training and inference. Cho et al. (2018) conducted knowledge distillation (KD) by directly applying it to the downstream task. One way to improve this approach is to use KD as pre-training for various downstream tasks, thus allowing for knowledge reuse. A noteworthy result was achieved by Denisov and Vu (2017), who used KD in pretraining; they did so by initializing an utterance encoder with a trained ASR model's backbone, followed by a trained NLU backbone.
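As a concrete illustration of the distillation objectives just described, the following sketch combines a cross-entropy term, a KL term on the teacher's softened outputs, and an optional term matching hidden representations. It assumes PyTorch; the temperature and loss weights are illustrative placeholders rather than the settings used in the cited studies.

```python
# Minimal sketch (assumes PyTorch). Weights and temperature are placeholders.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels,
                      student_hidden=None, teacher_hidden=None,
                      temperature=2.0, alpha=0.5, beta=0.1):
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    loss = (1.0 - alpha) * ce + alpha * kl
    if student_hidden is not None and teacher_hidden is not None:
        # optional: also mimic the teacher's hidden representations
        loss = loss + beta * F.mse_loss(student_hidden, teacher_hidden)
    return loss


if __name__ == "__main__":
    s_logits, t_logits = torch.randn(8, 10), torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    s_h, t_h = torch.randn(8, 256), torch.randn(8, 256)
    print(distillation_loss(s_logits, t_logits, labels, s_h, t_h).item())
```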
This method can be enhanced by applying knowledge distillation directly to a wav2vec 2.0 encoder without ASR training, combined with a trained NLU module. Kim et al. (2019) implemented a more complex architecture, utilizing KD in both the pretraining and fine-tuning stages.

#### 6.3.3. Model Compression

Researchers have also explored various architectural modifications to existing models to make them more parameter-efficient. One such approach is _pruning_ (Kumar et al., 2019; Zhang et al., 2020), where, motivated by the lottery-ticket hypothesis (LTH) (Kumar et al., 2019), task-irrelevant parameters are masked when an importance score, such as a parameter norm, falls below some threshold. Another form of compression is _low-rank factorization_ (Kumar et al., 2019), where the parameter matrices are factorized into lower-rank matrices with far fewer parameters. Finally, _quantization_ is a popular approach to reduce the model size and improve energy efficiency with a minimal performance penalty. It involves transforming 32-bit floating-point model weights into integers with fewer bit-counts (Zhou et al., 2019), such as 8-bit, 4-bit, 2-bit, and even 1-bit, through scaling and shifting. The activations are quantized in the same manner, based on the input. Lai et al. (2020) iteratively prune and subsequently fine-tune wav2vec2.0 on downstream tasks to obtain improved results over fine-tuned wav2vec2.0. Winata et al. (2019) employ low-rank transformers to cut the model size in half and increase the inference speed by 1.35 times. Peng et al. (2019) employ KD and quantization to make wav2vec2.0 twice as fast, twice as energy efficient, and 4.8 times smaller at the cost of a 7% increase in WER. Without the KD step, the model is 3.6 times smaller with a mere 0.1% WER degradation.

## 7. Conclusion and Future Research Directions

The rapid advancements in deep learning techniques have revolutionized speech processing, enabling significant progress in speech recognition, speaker recognition, and speech synthesis. This paper provides a comprehensive review of the latest developments in deep learning techniques for speech-processing tasks. We begin by examining the early developments in speech processing, including representation learning and HMM-based modeling, before presenting a concise summary of fundamental deep learning techniques and their applications in speech processing. Furthermore, we discuss key speech-processing tasks, highlight the datasets used in these tasks, and present the latest and most relevant research works utilizing deep learning techniques. We envisage several lines of development in speech processing: 1. _Large Speech Models:_ In addition to the advancements made with wav2vec2.0, further progress in ASR and TTS involves the development of larger and more comprehensive models, along with the utilization of larger datasets. By leveraging these resources, it becomes possible to create TTS models that exhibit enhanced naturalness and human-like prosody. One promising approach is adversarial training, where a discriminator is employed to distinguish between machine-generated speech and reference speech. This adversarial framework helps TTS models generate speech that closely resembles human recordings, providing a significant step forward toward more realistic and higher-quality synthesized speech.
By exploring these avenues, researchers aim to push the boundaries of speech synthesis technology, ultimately enhancing the overall performance and realism of TTS systems. 2. _Multilingual Models:_ Self-supervised learning has emerged as a transformative approach in the field of speech recognition, particularly for low-resource languages characterized by scarce or unavailable labeled datasets. The recent development of the XLS-R model, a state-of-the-art self-supervised speech recognition model, represents a significant milestone in this domain. With a remarkable scale of over 2 billion parameters, the XLS-R model has been trained on a diverse dataset spanning 128 languages, surpassing its predecessor in terms of language coverage. The notable advantage of scaling up larger multilingual models like XLS-R lies in the substantial performance improvements they offer. As a result, these models are poised to outperform single-language models and hold immense promise for the future of speech recognition. By harnessing the power of self-supervised learning and leveraging multilingual datasets, the XLS-R model showcases the potential for addressing the challenges posed by low-resource languages and advancing the field of speech recognition to new heights. 3. _Multimodal Speech Models:_ Traditional speech and text models have typically operated within a single modality, focusing solely on either speech or text inputs and outputs. However, as the scale of generative models continues to grow exponentially, the integration of multiple modalities becomes a natural progression. This trend is evident in the latest developments, such as the unveiling of groundbreaking language models like GPT-4 [405] and Kosmos-I [207], which demonstrate the ability to process both images and text jointly. These pioneering multimodal models pave the way for the emergence of large-scale architectures that can seamlessly handle speech and other modalities in a unified manner. The convergence of multiple modalities within a single model opens up new avenues for comprehensive understanding and generation of multimodal content, and it is highly anticipated that we will witness the rapid development of large multimodal models tailored for speech and beyond in the near future. 4. _In-Context Learning:_ Utilizing mixed-modality models opens up possibilities for the development of in-context learning approaches for a wide range of speech-related tasks. This paradigm allows the tasks to be explicitly defined within the input, along with accompanying examples. Remarkable progress has already been demonstrated in large language models (LLMs), including notable works such as InstructGPT [406], FLAN-T5 [90], and LLaMA [535]. These models showcase the efficacy of in-context learning, where the integration of context-driven information empowers the models to excel in various speech tasks. By leveraging mixed-modality models and incorporating contextual cues, researchers are advancing the boundaries of speech processing capabilities, paving the way for more versatile and context-aware speech systems. 5. _Controllable Speech Generation:_ An intriguing application stemming from the aforementioned concept is controllable text-to-speech (TTS), which allows for fine-grained control over various attributes of the synthesized speech. Attributes such as tone, accent, age, gender, and more can be precisely controlled through in-context text guidance. 
This controllability in TTS opens up exciting possibilities for personalization and customization, enabling users to tailor the synthesized speech to their specific requirements. By leveraging advanced models and techniques, researchers are making significant strides in developing controllable TTS systems that provide users with a powerful and flexible speech synthesis experience. 6. _Parameter-efficient Learning:_ With the increasing scale of LLMs and speech models, it becomes imperative to adapt these models with minimal parameter updates. This necessitates the development of specialized adapters that can efficiently update the emerging mixed-modality large models. Additionally, model compression techniques have proven to be practical solutions to the challenges posed by these large models. Recent research [280, 422, 593] has demonstrated the effectiveness of model compression, highlighting the sparsity that exists within these models, particularly for specific tasks. By employing model compression techniques, researchers can reduce the computational requirements and memory footprint of these models while preserving their performance, making them more practical and accessible for real-world applications. 7. _Explainability:_ Explainability remains elusive for these large networks as they grow. Researchers continue to work on explaining the functioning and learning dynamics of such networks. Recently, much work has been done on the fine-tuning and in-context learning dynamics of large text models under the neural-tangent-kernel (NTK) asymptotic framework [366]. Such an exploration is yet to be done in the speech domain. Moreover, explainability could be built in as an inductive bias in the architecture. To this end, brain-inspired architectures [382] are being developed, which may shed more light on this aspect of large models. 8. _Neuroscience-inspired Architectures:_ In recent years, there has been significant research exploring the parallels between speech-processing architectures and the intricate workings of the human brain [382]. These studies have unveiled compelling evidence of a strong correlation between the layers of speech models and the functional hierarchy observed in the human brain. This intriguing finding has served as a catalyst for the development of neuroscience-inspired speech models that demonstrate comparable performance to state-of-the-art (SOTA) models [382]. By drawing inspiration from the underlying principles of neural processing in the human brain, these innovative speech models aim to enhance our understanding of speech perception and production while pushing the boundaries of performance in the field of speech processing. 9. _Text-to-Audio Models for Text-to-Speech:_ Lately, transformer- and diffusion-based text-to-audio (TTA) model development has become an exciting area of research. Until recently, most of these models [155, 272, 332, 580, 611] overlooked speech in favour of general audio. In the future, however, such models will likely strive to be equally performant on both general audio and speech. To that end, current TTS methods will likely become an integral part of those models. Recently, Suno-AI [523] has aimed at striking a good balance between general audio and speech, although their implementation is not public, nor have they provided a detailed paper.

## Acknowledgement

This research is supported by the Ministry of Education, Singapore, under its AcRF Tier-2 grant (Project no. T2MOE2008, and Grantor reference no.
MOE-T2EP20220-0017), and by A*STAR under its RIE 2020 AME programmatic grant (project reference no. RGAST2003). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore.
2309.11181
The Bass functional of martingale transport
An interesting question in the field of martingale optimal transport, is to determine the martingale with prescribed initial and terminal marginals which is most correlated to Brownian motion. Under a necessary and sufficient irreducibility condition, the answer to this question is given by a $\textit{Bass martingale}$. At an intuitive level, the latter can be imagined as an order-preserving and martingale-preserving space transformation of an underlying Brownian motion starting with an initial law $\alpha$ which is tuned to ensure the marginal constraints. In this article we study how to determine the aforementioned initial condition $\alpha$. This is done by a careful study of what we dub the $\textit{Bass functional}$. In our main result we show the equivalence between the existence of minimizers of the Bass functional and the existence of a Bass martingale with prescribed marginals. This complements the convex duality approach in a companion paper by the present authors together with M. Beiglb\"ock, with a purely variational perspective. We also establish an infinitesimal version of this result, and furthermore prove the displacement convexity of the Bass functional along certain generalized geodesics in the $2$-Wasserstein space.
Julio Backhoff-Veraguas, Walter Schachermayer, Bertram Tschiderer
2023-09-20T10:06:33Z
http://arxiv.org/abs/2309.11181v1
# The Bass functional of martingale transport ###### Abstract. An interesting question in the field of martingale optimal transport is to determine the martingale with prescribed initial and terminal marginals which is most correlated to Brownian motion. Under a necessary and sufficient irreducibility condition, the answer to this question is given by a _Bass martingale_. At an intuitive level, the latter can be imagined as an order-preserving and martingale-preserving space transformation of an underlying Brownian motion starting with an initial law \(\alpha\) which is tuned to ensure the marginal constraints. In this article we study how to determine the aforementioned initial condition \(\alpha\). This is done by a careful study of what we dub the _Bass functional_. In our main result we show the equivalence between the existence of minimizers of the Bass functional and the existence of a Bass martingale with prescribed marginals. This complements the convex duality approach in a companion paper by the present authors together with M. Beiglbock, with a purely variational perspective. We also establish an infinitesimal version of this result, and furthermore prove the displacement convexity of the Bass functional along certain generalized geodesics in the 2-Wasserstein space. _Keywords:_ Optimal transport, Brenier's theorem, Benamou-Brenier, Stretched Brownian motion, Bass martingale. _Mathematics Subject Classification (2010):_ Primary 60G42, 60G44; Secondary 91G20. We thank Ben Robinson for his valuable comments during the preparation of this paper. WS and BT acknowledge support by the Austrian Science Fund (FWF) through projects P 35197 and P 35519, and JB acknowledges support by the FWF through projects Y 00782 and P 36835.

## 1. Introduction
### Main results

For \(p_{1},p_{2}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) the maximal covariance is defined as \[\operatorname{MCov}(p_{1},p_{2})\coloneqq\sup_{q\in\mathsf{Cpl}(p_{1},p_{2})}\int\langle x_{1},x_{2}\rangle\,q(dx_{1},dx_{2}), \tag{1.4}\] where \(\mathsf{Cpl}(p_{1},p_{2})\) denotes the set of couplings between \(p_{1}\) and \(p_{2}\). Maximizing the covariance between \(p_{1}\) and \(p_{2}\) is equivalent to minimizing their expected squared distance; see also (2.3) below.

**Definition 1.4**.: We introduce the _Bass functional_ \[\mathcal{P}_{2}(\mathbb{R}^{d})\ni\alpha\longmapsto\mathcal{V}(\alpha)\coloneqq \operatorname{MCov}(\alpha*\gamma,\nu)-\operatorname{MCov}(\alpha,\mu). \tag{1.5}\] In our first main result we derive a novel formulation of problem (1.1), which characterizes the Bass measure \(\hat{\alpha}\) in (1.3) as the optimizer of the Bass functional (1.5).

**Theorem 1.5**.: _Let \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) with \(\mu\leq_{\mathrm{c}}\nu\). 
Then_ \[P(\mu,\nu)=\inf_{\alpha\in\mathcal{P}_{2}(\mathbb{R}^{d})}\mathcal{V}(\alpha). \tag{1.6}\] _The right-hand side of (1.6) is attained by \(\hat{\alpha}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) if and only if there is a Bass martingale from \(\mu\) to \(\nu\) with Bass measure \(\hat{\alpha}\in\mathcal{P}_{2}(\mathbb{R}^{d})\)._ The proof of Theorem 1.5 is given in Section 3. In Section 4 we will show the following infinitesimal version of Theorem 1.5, which constitutes our second main result: **Theorem 1.6**.: _Let \((M_{t})_{0<t<1}\) be an \(\mathbb{R}^{d}\)-valued martingale bounded in \(L^{2}\), which is given by the stochastic integral_ \[M_{t}=M_{0}+\int_{0}^{t}\sigma_{s}\;dB_{s},\qquad 0\leq t\leq 1,\] _where \((\sigma_{t})_{0<t<1}\) is a progressively measurable process. Denote by \(\mu_{t}\) the law of \(M_{t}\). For Lebesgue-a.e. \(0\leq t\leq 1\) we have, for each \(\alpha\in\mathcal{P}_{2}(\mathbb{R}^{d})\), the inequality_ \[\mathbb{E}\big{[}\mathrm{tr}(\sigma_{t})\big{]}\leq\liminf_{h\to 0}\tfrac{1}{h} \Big{(}\operatorname{MCov}(\alpha*\gamma^{h},\mu_{t+h})-\operatorname{MCov}( \alpha,\mu_{t})\Big{)}. \tag{1.7}\] We note that, for a Bass martingale \((\hat{M}_{t})_{0<t<1}\) of the form \[d\hat{M}_{t}=\hat{\sigma}_{t}(\hat{M}_{t})\;dB_{t},\] with associated Bass measure \(\hat{\alpha}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) and diffusion function \(\hat{\sigma}_{t}\colon\mathbb{R}^{d}\to\mathbb{R}^{d\times d}\), we have, for Lebesgue-a.e. \(0\leq t\leq 1\), the equality \[\mathbb{E}\big{[}\mathrm{tr}\big{(}\hat{\sigma}_{t}(\hat{M}_{t})\big{)}\big{]} =\frac{d}{dt}\operatorname{MCov}(\hat{\alpha}*\gamma^{t},\hat{\mu}_{t}),\] where \(\hat{\mu}_{t}=\operatorname{Law}(\hat{M}_{t})\). This exhibits the sharpness of (1.7) and shows that Theorem 1.6 is an infinitesimal analogue of Theorem 1.5. In our final main result we discuss convexity properties of the Bass functional \(\alpha\mapsto\mathcal{V}(\alpha)\) defined in (1.5). **Theorem 1.7**.: _We have the following results:_ 1. _If_ \(d=1\)_, then_ \(\mathcal{V}\) _is displacement convex, i.e., convex along the geodesics given by McCann interpolations_ _[_37_]__._ 2. _If_ \(d\geq 1\)_, then_ \(\mathcal{V}\) _is displacement convex along generalized geodesics with base_ \(\mu\)_._ The proof of this result, together with a discussion on the various forms of convexity stated therein (see e.g. [2, 43, 37]), and a treatment of the strict convexity of \(\mathcal{V}\), are deferred to Section 5. We merely stress here that the Bass functional fails to be convex, and can even be concave, if we consider convex combinations of measures in the usual linear sense. ### Related literature Optimal transport as a field in mathematics goes back to Monge [38] and Kantorovich [33], who established its modern formulation. The seminal results of Benamou, Brenier, and McCann [15, 16, 13, 35, 36] form the basis of the modern theory, with striking applications in a variety of different areas, see the monographs [43, 44, 1, 41]. We are interested in transport problems where the transport plan satisfies an additional martingale constraint. This additional requirement arises naturally in finance (e.g. [8]), but is of independent mathematical interest. For example there are notable consequences for the study of martingale inequalities (e.g. [14, 29, 40]) and the Skorokhod embedding problem (e.g. [7, 32, 12]). Early articles on this topic of _martingale optimal transport_ include [30, 8, 42, 23, 21, 17]. 
The study of irreducibility of a pair of marginals \((\mu,\nu)\) was initiated by Beiglbock and Juillet [11] in dimension one and extended in the works [24, 20, 39] to multiple dimensions. Continuous-time martingale optimal transport problems have received much attention in the recent years; see e.g. [9, 19, 26, 28, 25, 18, 27]. In this paper we concern ourselves with the specific structure given by the martingale Benamou-Brenier problem, introduced in [4] in probabilistic language and in [31] in PDE language, and subsequently studied through the point of view of duality theory in [5]. In the context of market impact in finance, the same kind of problem appeared independently in a work by Loeper [34]. It was also shown in [4] that the optimizer \(\hat{M}\) of the problem (MBB) is the process whose evolution follows the movement of Brownian motion as closely as possible with respect to an _adapted Wasserstein distance_ (see e.g. [3, 22]) subject to the given marginal constraints. ###### Contents * 1 Introduction * 1.1 Martingale optimization problem * 1.2 Bass martingales and structure of stretched Brownian motion * 1.3 Main results * 1.4 Related literature * 2 Preliminaries * 2.1 Dual viewpoint * 2.2 Static martingale optimal transport * 2.3 Structure of optimizers * 3 A variational characterization of Bass measures * 4 An infinitesimal version of Theorem 1.5 * 5 Displacement convexity of the Bass functional ## 2. Preliminaries In this short section we give a more detailed review of some of the main results in [5], which will be useful for the coming discussions and proofs. ### Dual viewpoint As established in [5, Theorem 1.4], the problem (1.1) admits a dual formulation with a particularly appealing structure: **Theorem 2.1**.: _Let \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) with \(\mu\leq_{\mathrm{c}}\nu\). The value \(P(\mu,\nu)\) of the problem (1.1) is equal to_ \[D\left(\mu,\nu\right)\coloneqq\inf_{\begin{subarray}{c}\psi\in L^{1}(\nu),\\ \psi\ \mathrm{convex}\end{subarray}}\left(\int\psi\ d\nu-\int\left(\psi^{*}*\gamma \right)^{*}d\mu\right) \tag{2.1}\] _and is attained by a convex function \(\hat{\psi}\) if and only if \((\mu,\nu)\) is irreducible. In this case, the (unique) optimizer to (MBB), (1.1) is given by the Bass martingale with associated convex function \(\hat{\upsilon}=\hat{\psi}^{*}\) and Bass measure \(\hat{\alpha}=\nabla(\hat{\psi}^{*}*\gamma)^{*}(\mu)\)._ Note that the symbol \(*\) used as a superscript denotes the convex conjugate of a function. We also remark that attainment of \(D(\mu,\nu)\) has to be understood in a "relaxed" sense, since the optimizer \(\hat{\psi}\) is not necessarily \(\nu\)-integrable; see [5, Proposition 4.2]. ### Static martingale optimal transport We fix \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) with \(\mu\preceq_{\mathrm{c}}\nu\) and consider a static / discrete-time version of the continuous-time martingale optimization problem (1.1), to wit \[\tilde{P}(\mu,\nu)\coloneqq\sup_{\pi\in\mathsf{MT}(\mu,\nu)}\int\mathrm{MCov}( \pi_{x},\gamma)\,\mu(dx). \tag{2.2}\] The collection of martingale transports \(\mathsf{MT}(\mu,\nu)\) consists of those couplings \(\pi\in\mathsf{Cpl}(\mu,\nu)\) that satisfy \(\mathrm{bary}(\pi_{x})\coloneqq\int y\,\pi_{x}(dy)=x\), for \(\mu\)-a.e. \(x\in\mathbb{R}^{d}\). 
Here, the family of probability measures \(\{\pi_{x}\}_{x\in\mathbb{R}^{d}}\subseteq\mathcal{P}_{2}(\mathbb{R}^{d})\) is obtained by disintegrating the coupling \(\pi\) with respect to its first marginal \(\mu\), i.e., \(\pi(dx,dy)=\pi_{x}(dy)\,\mu(dx)\). By [4, Theorem 2.2] the value \(\tilde{P}(\mu,\nu)\) of (2.2) is finite and equals \(P(\mu,\nu)\), as defined in (1.1). Furthermore, there exists a unique optimizer \(\hat{\pi}\in\mathsf{MT}(\mu,\nu)\) of (2.2) and if \((\hat{M}_{t})_{0<t<1}\) is the stretched Brownian motion from \(\mu\) to \(\nu\), then the law of \((\hat{M}_{0},\hat{M}_{1})\) equals \(\hat{\pi}\). As already alluded to, maximizing the maximal covariance is equivalent to minimizing the squared quadratic Wasserstein distance, modulo adding constants. More precisely, in the present setting we have the relation \[\inf_{\pi\in\mathsf{MT}(\mu,\nu)}\int\mathcal{W}_{2}^{2}(\pi_{x},\gamma)\,\mu (dx)=d+\int\,|y|^{2}\,d\nu(y)-2\tilde{P}(\mu,\nu),\] where the quadratic Wasserstein distance \(\mathcal{W}_{2}(\,\cdot\,,\,\cdot\,)\) between two probability measures \(p_{1},p_{2}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) is defined as \[\mathcal{W}_{2}(p_{1},p_{2})\coloneqq\sqrt{\inf_{q\in\mathsf{Cpl}(p_{1},p_{2 })}\int\,|x_{1}-x_{2}|^{2}\,q(dx_{1},dx_{2})}. \tag{2.3}\] In these terms, the value of (MBB) can be expressed as \[MT(\mu,\nu)=\inf_{\pi\in\mathsf{MT}(\mu,\nu)}\int\,\mathcal{W}_{2}^{2}(\pi_{x },\gamma)\,\mu(dx)-\int\,|x|^{2}\,d\mu(x).\] ### Structure of optimizers From [5, Theorem 6.6] we recall the following characterization of the dual optimizer \(\hat{\psi}\) of (2.1) and of the primal optimizer \(\hat{\pi}\in\mathsf{MT}(\mu,\nu)\) of (2.2). **Lemma 2.2**.: _Let \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) with \(\mu\preceq_{\mathrm{c}}\nu\). Suppose that a Bass martingale \((\hat{M}_{t})_{0<t<1}\) from \(\mu\) to \(\nu\) with Bass measure \(\hat{\alpha}\in\mathcal{P}(\mathbb{R}^{d})\) and associated convex function \(\hat{v}\) exists. Then the Legendre transform \(\hat{v}^{*}\) is equal to the dual optimizer \(\hat{\psi}\) of (2.1) and \(\mathrm{Law}(\hat{M}_{0},\hat{M}_{1})\) is equal to the primal optimizer \(\hat{\pi}\) of (2.2). Furthermore, we have \(\hat{\alpha}=\nabla\hat{\phi}(\mu)\), where_ \[\nabla\hat{\phi}(x)=(\nabla\hat{v}*\gamma)^{-1}(x)=\nabla(\hat{v}*\gamma)^{*} (x), \tag{2.4}\] _for \(\mu\)-a.e. \(x\in\mathbb{R}^{d}\), and_ \[\hat{\pi}_{x}=\mathrm{Law}(\hat{M}_{1}\mid\hat{M}_{0}=x)=\nabla\hat{v}(\gamma \nabla_{\hat{\phi}(x)}), \tag{2.5}\] _where \(\gamma\nabla_{\hat{\psi}(x)}\) denotes the \(d\)-dimensional Gaussian distribution with barycenter \(\nabla\hat{\phi}(x)\) and covariance matrix \(I_{d}\)._ We set \(\hat{u}\coloneqq\hat{v}\ast\gamma\), so that \(\nabla\hat{u}(\hat{\alpha})=\mu\). Recalling (1.3), we summarize the relationships between the optimizers in the following diagram: Finally, we prove the equivalence between the identities (1.3) and the existence of a Bass martingale from \(\mu\) to \(\nu\). **Lemma 2.3**.: _Let \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) with \(\mu\preceq_{\mathrm{c}}\nu\). There is a Bass martingale \(\hat{M}\) with Bass measure \(\hat{\alpha}\in\mathcal{P}(\mathbb{R}^{d})\) from \(\mu=\mathrm{Law}(\hat{M}_{0})\) to \(\nu=\mathrm{Law}(\hat{M}_{1})\) if and only if there is a convex function \(\hat{v}\colon\mathbb{R}^{d}\to\mathbb{R}\) satisfying the identities_ \[(\nabla\hat{v}\ast\gamma)(\hat{\alpha})=\mu\qquad\text{ and }\qquad\nabla\hat{v}(\hat{ \alpha}\ast\gamma)=\nu. 
\tag{2.6}\] _Moreover, the Bass martingale \(\hat{M}\) can be expressed as_ \[\hat{M}_{t}=\nabla\hat{v}_{t}(B_{t}),\qquad 0\leqslant t\leqslant 1. \tag{2.7}\] Proof.: Let \(\hat{M}\) be a Bass martingale in the sense of Definition 1.2. We first prove (2.7). Let \(A\subseteq\mathbb{R}^{d}\) be a Borel set. We have to show that \[\mathbb{E}\big{[}\nabla\hat{v}(B_{1})\,\mathbf{1}_{\{B_{t}\in A\}}\big{]}= \mathbb{E}\big{[}(\nabla\hat{v}\ast\gamma^{1-t})(B_{t})\,\mathbf{1}_{\{B_{t} \in A\}}\big{]}. \tag{2.8}\] Denote by \(\varphi_{t}(x,y)\) the Gaussian kernel, for \(t\in(0,1]\) and \(x,y\in\mathbb{R}^{d}\). Then the left-hand side of (2.8) can be expressed as \[\int\hat{\alpha}(dx_{0})\int_{A}\varphi_{t}(x_{0},dx_{t})\int\,\nabla\hat{v}(x _{1})\,\varphi_{1-t}(x_{t},dx_{1}),\] while the right-hand side is equal to \[\int\,\hat{\alpha}(dx_{0})\int_{A}(\nabla\hat{v}\ast\gamma^{1-t})(x_{t})\, \varphi_{t}(x_{0},dx_{t}).\] Now we see that (2.8) follows from \[\int\nabla\hat{v}(x_{1})\,\varphi_{1-t}(x_{t},dx_{1})=\int\nabla\hat{v}(x_{1 })\,\gamma^{1-t}_{x_{t}}(dx_{1})=(\nabla\hat{v}\ast\gamma^{1-t})(x_{t}),\] where \(\gamma^{1-t}_{x_{t}}\) denotes the \(d\)-dimensional Gaussian distribution with barycenter \(x_{t}\) and covariance matrix \((1-t)I_{d}\). This completes the proof of (2.7). In particular, at times \(t=0\) and \(t=1\) we obtain from (2.7) that \(\hat{M}_{0}=(\nabla\hat{v}\ast\gamma)(B_{0})\) and \(\hat{M}_{1}=\nabla\hat{v}(B_{1})\), respectively. If \(\hat{M}\) is a Bass martingale from \(\mu=\mathrm{Law}(\hat{M}_{0})\) to \(\nu=\mathrm{Law}(\hat{M}_{1})\), this readily gives (2.6) Conversely, suppose that \(\mu,\nu,\hat{\alpha},\hat{v}\) satisfy the identities (2.6). Let \((B_{t})_{0\leqslant t\leqslant 1}\) be Brownian motion on \(\mathbb{R}^{d}\) with \(\mathrm{Law}(B_{0})=\hat{\alpha}\). We then define a process \((\hat{M}_{t})_{0\leqslant t\leqslant 1}\) by (1.2). In light of the previous argument, \(\hat{M}\) is characterized by (2.7). Since by assumption the identities (2.6) are satisfied, we see that \(\mathrm{Law}(\hat{M}_{0})=\mu\) and \(\mathrm{Law}(\hat{M}_{1})=\nu\). Thus \(\hat{M}\) is indeed a Bass martingale from \(\mu\) to \(\nu\). ## 3. A variational characterization of Bass measures Throughout this section we fix \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) with \(\mu\leq_{\mathrm{c}}\nu\) and provide the proof of Theorem 1.5. This is done in several steps. **Lemma 3.1**.: _We have the weak duality_ \[P(\mu,\nu)\leqslant\inf_{\alpha\in\mathcal{P}_{2}(\mathbb{R}^{d})}\mathcal{V} (\alpha). \tag{3.1}\] Proof.: Let \(\alpha\in\mathcal{P}_{2}(\mathbb{R}^{d})\) be arbitrary. By Brenier's theorem [43, Theorem 2.12] there is a convex function \(v\) such that \(\nabla v(\alpha*\gamma)=\nu\). Hence from the Kantorovich duality [44, Theorem 5.10] it follows that \[\mathrm{MCov}(\alpha*\gamma,\nu)=\int v\,d(\alpha*\gamma)+\int v^{*}\,d\nu= \int(v*\gamma)\,d\alpha+\int v^{*}\,d\nu.\] Since \(v*\gamma\) is convex, applying once more the Kantorovich duality yields \[\mathrm{MCov}(\alpha*\gamma,\nu) =\int v^{*}\,d\nu-\int(v*\gamma)^{*}\,d\mu+\int(v*\gamma)\,d\alpha +\int(v*\gamma)^{*}\,d\mu\] \[\geqslant\int v^{*}\,d\nu-\int(v*\gamma)^{*}\,d\mu+\mathrm{MCov}( \alpha,\mu).\] Finally, from Theorem 2.1 we deduce that \[\mathrm{MCov}(\alpha*\gamma,\nu) \geqslant\inf_{\psi\text{ convex}}\Big{(}\int\psi\,d\nu-\int( \psi^{*}*\gamma)^{*}\,d\mu\Big{)}+\mathrm{MCov}(\alpha,\mu)\] \[=P(\mu,\nu)+\mathrm{MCov}(\alpha,\mu),\] which gives the inequality (3.1). 
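The weak duality of Lemma 3.1 and the variational formula (1.6) can be probed numerically on empirical measures, since \(\operatorname{MCov}\) is obtained from an optimal matching through the relation \(\operatorname{MCov}(p_{1},p_{2})=\tfrac{1}{2}\big{(}\int|x|^{2}\,dp_{1}+\int|y|^{2}\,dp_{2}-\mathcal{W}_{2}^{2}(p_{1},p_{2})\big{)}\); compare (2.3). The following is a minimal sketch, assuming NumPy and SciPy are available; the function names, the sample size, and the one-dimensional Gaussian marginals are illustrative choices, not taken from the paper.

```python
# Numerical sketch (assumes NumPy and SciPy): evaluates the Bass functional
# V(alpha) = MCov(alpha * gamma, nu) - MCov(alpha, mu) on empirical measures.
# The 1-D Gaussian marginals and the scan over candidate measures are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment


def mcov(x, y):
    """Maximal covariance of two empirical measures with n atoms each:
    MCov = (E|X|^2 + E|Y|^2 - W_2^2) / 2, with W_2^2 from an optimal matching."""
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    i, j = linear_sum_assignment(cost)
    w2_sq = cost[i, j].mean()
    return 0.5 * ((x ** 2).sum(1).mean() + (y ** 2).sum(1).mean() - w2_sq)


def bass_functional(alpha, mu, nu, rng):
    """V(alpha); alpha * gamma is sampled by adding standard Gaussian noise."""
    alpha_gamma = alpha + rng.standard_normal(alpha.shape)
    return mcov(alpha_gamma, nu) - mcov(alpha, mu)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 400, 1
    mu = rng.standard_normal((n, d))                   # mu = N(0, 1)
    nu = np.sqrt(2.0) * rng.standard_normal((n, d))    # nu = N(0, 2), so mu <=_c nu
    # For these marginals the Bass measure is again standard Gaussian, so the scan
    # below should attain its minimum near s = 1, up to sampling/Monte Carlo error.
    for s in [0.05, 0.2, 0.4, 0.6, 0.8, 1.0, 1.3]:
        print(f"s = {s:4.2f}   V(alpha) = {bass_functional(s * mu, mu, nu, rng):.4f}")
```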
We remark that it is immaterial whether in (2.1) we optimize over convex functions \(\psi\) which are elements of \(L^{1}(\nu)\) or which are just \(\mu\)-a.s. finite, see [5, Section 4]. **Lemma 3.2**.: _Suppose that there exists a Bass martingale from \(\mu\) to \(\nu\) with Bass measure \(\hat{\alpha}\in\mathcal{P}_{2}(\mathbb{R}^{d})\). Then the right-hand side of (3.1) is attained by \(\hat{\alpha}\) and is equal to_ \[\mathcal{V}(\hat{\alpha})=\int\mathrm{MCov}(\hat{\pi}_{x},\gamma)\;\mu(dx), \tag{3.2}\] _where \(\hat{\pi}\in\mathsf{MT}(\mu,\nu)\) is the optimizer of (2.2)._ Proof.: By assumption there exists a Bass martingale from \(\mu\) to \(\nu\), with Bass measure \(\hat{\alpha}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) and associated convex function \(\hat{v}\) satisfying (recall Lemma 2.3) the identities (2.6). According to Lemma 2.2, we have that \(\hat{\alpha}=\nabla\hat{\varphi}(\mu)\) and \[\hat{\pi}_{x}=\nabla\hat{v}(\gamma_{\nabla\hat{\varphi}(x)}),\] for \(\mu\)-a.e. \(x\in\mathbb{R}^{d}\). Applying Brenier's theorem, we deduce that \[\int\mathrm{MCov}(\hat{\pi}_{x},\gamma)\;\mu(dx)=\int\int\big{(} \nabla\hat{v}\big{(}\nabla\hat{\varphi}(x)+z\big{)},z\big{)}\,\gamma(dz)\; \mu(dx)\] \[=\int\int\Big{(}\big{(}\nabla\hat{v}(\nabla\hat{\varphi}(x)+z), \nabla\hat{\varphi}(x)+z\big{)}-\big{(}\nabla\hat{v}(\nabla\hat{\varphi}(x)+z \big{)},\nabla\hat{\varphi}(x)\big{)}\Big{)}\,\gamma(dz)\;\mu(dx)\] \[=\int\int\big{(}\big{(}\nabla\hat{v}(a+z),a+z\big{)}-\big{(} \nabla\hat{v}(a+z),a\big{)}\big{)}\,\gamma(dz)\;\hat{\alpha}(da)\] \[=\int\big{(}\nabla\hat{v},\mathrm{Id}\big{)}\;d(\hat{\alpha}* \gamma)-\int\big{(}(\nabla\hat{v}*\gamma),\mathrm{Id}\big{)}\;d\hat{\alpha}\] \[=\mathrm{MCov}(\hat{\alpha}*\gamma,\nu)-\mathrm{MCov}(\hat{ \alpha},\mu)=\mathcal{V}(\hat{\alpha}),\] which shows (3.2). Together with the weak duality (3.1) of Lemma 3.1 above, and recalling from Subsection 2.2 that the right-hand side of (3.2) is equal to \(\tilde{P}(\mu,\nu)=P(\mu,\nu)\), we conclude the assertion of Lemma 3.2. **Lemma 3.3**.: _We have the duality result_ \[P(\mu,\nu)=\inf_{\alpha\in\mathcal{P}_{2}(\mathbb{R}^{d})}\mathcal{V}(\alpha). \tag{3.3}\] Proof.: For \(\varepsilon>0\) we define \(\mu^{\varepsilon}\coloneqq\mu*\gamma^{\varepsilon}\) and \(\nu^{\varepsilon}\coloneqq\nu*\gamma^{2\varepsilon}\). Then \(\mu^{\varepsilon}\leq_{\mathrm{c}}\nu^{\varepsilon}\) and the pair \((\mu^{\varepsilon},\nu^{\varepsilon})\) is irreducible. Hence by Theorem 1.3 there is a Bass martingale from \(\mu^{\varepsilon}\) to \(\nu^{\varepsilon}\), so that by Lemma 3.2 we have \[\sup_{\pi\in\mathrm{MT}(\mu^{\varepsilon},\nu^{\varepsilon})}\int\mathrm{ MCov}(\pi_{x},\gamma)\,\mu^{\varepsilon}(dx)=\inf_{\alpha\in\mathcal{P}_{2}( \mathbb{R}^{d})}\Big{(}\mathrm{MCov}(\alpha*\gamma,\nu^{\varepsilon})- \mathrm{MCov}(\alpha,\mu^{\varepsilon})\Big{)}. 
\tag{3.4}\] By weak optimal transport arguments (see [10, Theorem 2.3]) we know \[\limsup_{\varepsilon\to 0}\sup_{\pi\in\mathrm{MT}(\mu^{\varepsilon},\nu^{ \varepsilon})}\int\mathrm{MCov}(\pi_{x},\gamma)\,\mu^{\varepsilon}(dx)\leqslant \sup_{\pi\in\mathrm{MT}(\mu,\nu)}\int\mathrm{MCov}(\pi_{x},\gamma)\,\mu(dx).\] Therefore, if we can show that the right-hand side of (3.4) converges to the right-hand side of (3.3), we will obtain the inequality \[P(\mu,\nu)\geqslant\inf_{\alpha\in\mathcal{P}_{2}(\mathbb{R}^{d})}\mathcal{V} (\alpha)=\inf_{\alpha\in\mathcal{P}_{2}(\mathbb{R}^{d})}\Big{(}\mathrm{MCov}( \alpha*\gamma,\nu)-\mathrm{MCov}(\alpha,\mu)\Big{)},\] which, together with the weak duality of Lemma 3.1, establishes (3.3). But this follows easily from \[|\mathrm{MCov}(\alpha,\mu^{\varepsilon})-\mathrm{MCov}(\alpha,\mu)|\leqslant c _{1}\varepsilon+\tfrac{1}{2}|\mathcal{W}_{2}^{2}(\alpha,\mu^{\varepsilon})- \mathcal{W}_{2}^{2}(\alpha,\mu)|\leqslant c_{2}(\varepsilon+\varepsilon^{2})\] and a similar estimate for \(|\mathrm{MCov}(\alpha*\gamma,\nu^{\varepsilon})-\mathrm{MCov}(\alpha*\gamma, \nu)|\). **Lemma 3.4**.: _Suppose that the right-hand side of (3.3) is attained by \(\hat{\alpha}\in\mathcal{P}_{2}(\mathbb{R}^{d})\). Then there exists a Bass martingale from \(\mu\) to \(\nu\) with Bass measure \(\hat{\alpha}\)._ Proof.: By Brenier's theorem there is a convex function \(\hat{v}\) such that \(\nabla\hat{v}(\hat{\alpha}*\gamma)=\nu\). According to Lemma 2.3, for the existence of a Bass martingale from \(\mu\) to \(\nu\), it remains to show the first equality in (2.6), i.e., \[(\nabla\hat{v}*\gamma)(\hat{\alpha})=\mu. \tag{3.5}\] Let \(\hat{Z}\) and \(X\) be random variables with laws \(\hat{\alpha}\) and \(\mu\), respectively, such that \[\mathrm{MCov}(\hat{\alpha},\mu)=\mathbb{E}\big{[}\langle\hat{Z},X\rangle\big{]}. \tag{3.6}\] Denote by \(\hat{q}\,(dz,dx)\) the law of the coupling \((\hat{Z},X)\). Let \(\mathbf{w}\colon\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a smooth function with compact support and define probability measures \((\alpha_{u})_{u\in\mathbb{R}}\subseteq\mathcal{P}_{2}(\mathbb{R}^{d})\) by \[\int f\,d\alpha_{u}\coloneqq\int\int f\big{(}z+u\mathbf{w}(z,x)\big{)}\,q\,(dz,dx),\qquad f\in C_{b}(\mathbb{R}^{d}). \tag{3.7}\] We claim that \[\liminf_{u\to 0}\tfrac{1}{u}\Big{(}\mathrm{MCov}(\alpha_{u},\mu)-\mathrm{ MCov}(\hat{\alpha},\mu)\Big{)}\geqslant\mathbb{E}\big{[}\big{\langle}\mathbf{w}( \hat{Z},X),X\big{\rangle}\big{]} \tag{3.8}\] and \[\lim_{u\to 0}\tfrac{1}{u}\Big{(}\mathrm{MCov}(\alpha_{u}*\gamma,\nu)- \mathrm{MCov}(\hat{\alpha}*\gamma,\nu)\Big{)}=\mathbb{E}\big{[}\big{\langle}\bm {w}(\hat{Z},X),(\nabla\hat{v}*\gamma)(\hat{Z})\big{\rangle}\big{]}. \tag{3.9}\] Using the optimality of \(\hat{\alpha}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) for the right-hand side of (3.3) and admitting the two claims (3.8), (3.9), we deduce that \[0 \leqslant\liminf_{u\to 0}\tfrac{1}{u}\Big{(}\Big{(} \mathrm{MCov}(\alpha_{u}*\gamma,\nu)-\mathrm{MCov}(\hat{\alpha}*\gamma,\nu) \Big{)}-\Big{(}\mathrm{MCov}(\alpha_{u},\mu)-\mathrm{MCov}(\hat{\alpha},\mu) \Big{)}\] \[\leqslant\mathbb{E}\big{[}\big{\langle}\mathbf{w}(\hat{Z},X),(\nabla \hat{v}*\gamma)(\hat{Z})-X\big{\rangle}\big{]}.\] Since \(\mathbf{w}\) was arbitrary, it follows that the random variable \((\nabla\hat{v}*\gamma)(\hat{Z})\) has the same law as \(X\), which readily gives (3.5). We now turn to the proof of the claim (3.8). 
By the definition of \(\alpha_{u}\) in (3.7), the random variable \(Z_{u}\coloneqq\hat{Z}+uw(\hat{Z},X)\) has law \(\alpha_{u}\). Consequently, \[\operatorname{MCov}(\alpha_{u},\mu)\geqslant\mathbb{E}\big{[}\langle Z_{u},X \rangle\big{]}. \tag{3.10}\] Combining (3.6) and (3.10) yields (3.8). It remains to show the claim (3.9). By analogy with the proof of (3.8), we obtain the inequality "\(\geqslant\)" in (3.9). For the reverse inequality, we note that by the Kantorovich duality we have \[\operatorname{MCov}(\alpha_{u}*\gamma,\nu) =\inf_{v\text{\,convex}}\Big{(}\int v\,d(\alpha_{u}*\gamma)-\int v ^{*}\,d\nu\Big{)}\] \[\leqslant\int\hat{v}\,d(\alpha_{u}*\gamma)-\int\hat{v}^{*}\,d\nu\] \[=\int(\hat{v}*\gamma)\,d\alpha_{u}-\int\hat{v}^{*}\,d\nu\] and \[\operatorname{MCov}(\hat{\alpha}*\gamma,\nu)=\int\hat{v}\,d(\alpha*\gamma)+ \int\hat{v}^{*}\,d\nu=\int(\hat{v}*\gamma)\,d\hat{\alpha}+\int\hat{v}^{*}\,d\nu.\] Therefore \[\operatorname{MCov}(\alpha_{u}*\gamma,\nu)-\operatorname{MCov}(\hat{\alpha}* \gamma,\nu) \leqslant\int(\hat{v}*\gamma)\,d\alpha_{u}-\int(\hat{v}*\gamma)\,d \hat{\alpha}\] Using the convexity of the function \(\hat{v}*\gamma\), we deduce that \[\tfrac{1}{u}\Big{(}\operatorname{MCov}(\alpha_{u}*\gamma,\nu)-\operatorname{ MCov}(\hat{\alpha}*\gamma,\nu)\Big{)}\leqslant\mathbb{E}\big{[}\big{\langle} \boldsymbol{w}(\hat{Z},X),(\nabla\hat{v}*\gamma)(\hat{Z}+uw(\hat{Z},X))\big{\rangle} \big{]}.\] Now observe that the expectation on the right-hand side of the above inequality is equal to the expectation of the random variable \[Y_{u}\coloneqq\Big{\langle}\boldsymbol{w}(\hat{Z},X)\,,\nabla\hat{v}(\hat{Z}+ \Gamma)\exp\big{(}u\big{\langle}\Gamma,\boldsymbol{w}(\hat{Z},X)\big{\rangle} -\tfrac{u^{2}}{2}|\boldsymbol{w}(\hat{Z},X)|^{2}\big{)}\Big{\rangle},\] where \(\Gamma\) is a standard Gaussian random vector on \(\mathbb{R}^{d}\), independent of \(\hat{Z}\) as well as of \(X\). Clearly by continuity \[\lim_{u\to 0}Y_{u}=\big{\langle}\boldsymbol{w}(\hat{Z},X),\nabla\hat{v}(\hat{Z}+ \Gamma)\big{\rangle},\qquad\mathbb{P}\text{-a.s.}\] As \(\boldsymbol{w}\) is smooth with compact support, for \(\delta>0\) we can find constants \(c_{1},c_{2}\) such that \[\forall u\in[-\delta,\delta]:\quad|Y_{u}|\leqslant c_{1}\,|\nabla\hat{v}( \hat{Z}+\Gamma)|\,\mathrm{e}^{c_{2}|\Gamma|}.\] By the Cauchy-Schwarz inequality and since \(\nabla\hat{v}(\hat{\alpha}*\gamma)=\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), we have the bound \[\mathbb{E}\big{[}|\nabla\hat{v}(\hat{Z}+\Gamma)|\,\mathrm{e}^{|\Gamma|}\big{]} \leqslant\sqrt{\int|y|^{2}\,d\nu(y)}\sqrt{\mathbb{E}\big{[}\mathrm{e}^{2| \Gamma|}\big{]}}<+\infty.\] Therefore we can apply the dominated convergence theorem and conclude that \[\limsup_{u\to 0}\tfrac{1}{u}\Big{(}\operatorname{MCov}(\alpha_{u}*\gamma,\nu)- \operatorname{MCov}(\hat{\alpha}*\gamma,\nu)\Big{)}\leqslant\mathbb{E}\big{[} \big{\langle}\boldsymbol{w}(\hat{Z},X),(\nabla\hat{v}*\gamma)(\hat{Z})\big{\rangle} \big{]},\] which completes the proof of the claim (3.9). Proof of Theorem 1.5.: The assertion of the theorem follows from Lemmas 3.2 - 3.4. The reader has certainly noticed that the proof of Lemma 3.2 was given in an analytic style while the proof of Lemma 3.4 was given in a more probabilistic language. In the remainder of this section we give an alternative probabilistic proof of Lemma 3.2 and sketch how to translate the proof of Lemma 3.4 into a more analytic language. 
The following probabilistic proof of Lemma 3.2 does not require the duality results developed in [5], but only relies on the definition of Bass martingales. _Probabilistic proof of Lemma 3.2._ By assumption there exists a Bass martingale from \(\mu\) to \(\nu\), with Bass measure \(\hat{\alpha}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) and associated convex function \(\hat{v}\) satisfying (recall Lemma 2.3) the identities (2.6). Let \(\alpha\in\mathcal{P}_{2}(\mathbb{R}^{d})\) be arbitrary. We have to show that \[\mathcal{V}(\hat{\alpha}) =\operatorname{MCov}(\hat{\alpha}*\gamma,\nu)-\operatorname{MCov} (\hat{\alpha},\mu)\leqslant \tag{3.11}\] \[\leqslant\operatorname{MCov}(\alpha*\gamma,\nu)-\operatorname{ MCov}(\alpha,\mu)=\mathcal{V}(\alpha).\] Take a random variable \(\hat{Z}\) with law \(\hat{\alpha}\) and define \[X\coloneqq(\nabla\hat{v}*\gamma)(\hat{Z}). \tag{3.12}\] By Brenier's theorem the coupling \((\hat{Z},X)\) is optimal and according to (2.6) the random variable \(X\) has law \(\mu\). Now choose a random variable \(Z\) with law \(\alpha\) such that the coupling \((Z,X)\) is optimal with respect to the maximal covariance (equivalently, with respect to the quadratic Wasserstein distance). Clearly \[\operatorname{MCov}(\alpha,\mu)-\operatorname{MCov}(\hat{\alpha},\mu)=\mathbb{ E}\big{[}\langle Z-\hat{Z},X\rangle\big{]}. \tag{3.13}\] Take a standard Gaussian random vector \(\Gamma\) on \(\mathbb{R}^{d}\), independent of \(Z\) as well as of \(\hat{Z}\). The random variables \(\hat{Z}+\Gamma\) and \[Y\coloneqq\nabla\hat{v}(\hat{Z}+\Gamma) \tag{3.14}\] have laws \(\hat{\alpha}*\gamma\) and \(\nu\), respectively. As by Brenier's theorem the coupling \((\hat{Z}+\Gamma,Y)\) is optimal, we have \[\operatorname{MCov}(\hat{\alpha}*\gamma,\nu)=\mathbb{E}\big{[}\langle\hat{Z}+ \Gamma,Y\rangle\big{]}. \tag{3.15}\] Since the random variable \(Z+\Gamma\) has law \(\alpha*\gamma\) we conclude that \((Z+\Gamma,Y)\) is some coupling between \(\alpha*\gamma\) and \(\nu\), i.e., \[\operatorname{MCov}(\alpha*\gamma,\nu)\geqslant\mathbb{E}\big{[}\langle Z+ \Gamma,Y\rangle\big{]}. \tag{3.16}\] From (3.13) - (3.16) we obtain the inequality \[\operatorname{MCov}(\alpha*\gamma,\nu)-\operatorname{MCov}(\hat{\alpha}*\gamma,\nu)-\operatorname{MCov}(\alpha,\mu)+\operatorname{MCov}(\hat{\alpha},\mu) \geqslant\mathbb{E}\big{[}\langle Z-\hat{Z},Y-X\rangle\big{]}.\] Therefore, in order to establish the inequality (3.11), it remains to show that \[\mathbb{E}\big{[}\langle Z-\hat{Z},Y-X\rangle\big{]}=0. \tag{3.17}\] For that purpose, we condition \(Y-X\) on the random variables \(Z\) as well as \(\hat{Z}\), so that by (3.12) and (3.14) we obtain \[\mathbb{E}[Y-X\,|\,Z,\hat{Z}]=0,\] which implies (3.17). We finally give an alternative heuristic argument for Lemma 3.4, which is based on differentiating the maximal covariance along a continuity equation. Alternative heuristic proof of Lemma 3.4.: Suppose that the right-hand side of (3.3) is attained by \(\hat{\alpha}\in\mathcal{P}_{2}(\mathbb{R}^{d})\). We want to show that there exists a Bass martingale from \(\mu\) to \(\nu\) with Bass measure \(\hat{\alpha}\). The idea is to perturb \(\hat{\alpha}\) along a continuity equation \[\partial_{t}\alpha_{t}+\operatorname{div}(\mathbf{v}_{t}\alpha_{t})=0,\qquad t\in( -h,h),\] with \(h>0\), \(\alpha_{0}\coloneqq\hat{\alpha}\), and where \(\mathbf{v}_{t}\) is a velocity field. 
Observe that \[\partial_{t}|_{t=0}\,\mathcal{V}(\alpha_{t}) =\partial_{t}|_{t=0}\left(\operatorname{MCov}(\alpha_{t}*\gamma, \nu)-\operatorname{MCov}(\alpha_{t},\mu)\right)\] \[=\partial_{t}|_{t=0}\int\hat{v}\,d(\alpha_{t}*\gamma)-\partial_{t }|_{t=0}\int\hat{u}\,d\alpha_{t},\] where \(\nabla\hat{v}(\hat{\alpha}*\gamma)=\nu\) is optimal and likewise \(\nabla\hat{u}(\hat{\alpha})=\mu\) is optimal. By the continuity equation we obtain \[\partial_{t}|_{t=0}\int\hat{u}\,d\alpha_{t}=\int\langle\nabla\hat{u},\mathbf{v}_{0 }\rangle\,d\hat{\alpha}.\] With similar computations we have \[\partial_{t}|_{t=0}\int\,\hat{v}\,d(\alpha_{t}*\gamma)=\int\left\langle\nabla\hat{v }*\gamma,\mathbf{v}_{0}\right\rangle d\hat{\alpha}.\] As \(\mathbf{v}_{0}\) was arbitrary and \(\hat{\alpha}\) was optimal, we conclude that \[0=\int\,\left\langle\nabla\hat{v}*\gamma-\nabla\hat{u},\mathbf{v}_{0}\right\rangle d \hat{\alpha},\] so that \(\nabla\hat{u}\), the optimal map from \(\hat{\alpha}\) to \(\mu\), is \(\hat{\alpha}\)-a.s. equal to \(\nabla\hat{v}*\gamma\), where \(\nabla\hat{v}\) is the optimal map from \(\hat{\alpha}*\gamma\) to \(\nu\). Recalling (2.6) and Lemma 2.3, this is precisely the structure of the Bass martingale. ## 4. An infinitesimal version of Theorem 1.5 We provide the proof of Theorem 1.6, an infinitesimal version of Theorem 1.5. Proof of Theorem 1.6.: For a partition \(\Pi=\{t_{0},t_{1},\ldots,t_{n}\}\) of the interval \([0,1]\) with \[0=t_{0}<t_{1}<\ldots<t_{n}=1\] we denote by \(\Sigma^{\Pi}\) the collection of all progressively measurable and \(L^{2}\)-bounded processes \((\sigma_{t}^{\Pi})_{0\leqslant t\leqslant 1}\) such that the stochastic integral \[M_{t}^{\Pi}\coloneqq M_{0}+\int_{0}^{t}\sigma_{s}^{\Pi}\,dB_{s},\qquad 0 \leqslant t\leqslant 1\] defines an \(L^{2}\)-bounded martingale with \(\operatorname{Law}(M_{t_{k}}^{\Pi})=\mu_{t_{k}}\), for \(k=0,\ldots,n\). We define \[m^{\Pi}([t_{k-1},t_{k}])\coloneqq\sup_{\sigma^{\Pi}\in\Sigma^{\Pi}}\mathbb{E} \Big{[}\int_{t_{k-1}}^{t_{k}}\operatorname{tr}(\sigma_{s}^{\Pi})\,ds\Big{]}. \tag{4.1}\] By [4], we know that the optimizer of \[m^{\Pi}([0,1])=\sup_{\sigma^{\Pi}\in\Sigma^{\Pi}}\mathbb{E}\Big{[}\int_{0}^{1 }\operatorname{tr}(\sigma_{s}^{\Pi})\,ds\Big{]}\] is given, on each interval \([t_{k-1},t_{k}]\), by the stretched Brownian motion from \(\mu_{t_{k-1}}\) to \(\mu_{t_{k}}\). By Theorem 1.5 we have \[m^{\Pi}([t_{k-1},t_{k}])=\inf_{\alpha\in\mathcal{P}_{2}(\mathbb{R}^{d})}\Big{(} \operatorname{MCov}(\alpha*\gamma^{t_{k}-t_{k-1}},\mu_{t_{k}})-\operatorname{ MCov}(\alpha,\mu_{t_{k-1}})\Big{)}. \tag{4.2}\] For \(t_{k}\in\Pi\) and a refinement \(\Pi_{1}\) of \(\Pi\) we have \[m^{\Pi}([0,t_{k}])\geqslant m^{\Pi_{1}}([0,t_{k}]),\] as the process \((\sigma_{t}^{\Pi_{1}})_{0\leqslant t\leqslant 1}\) has to satisfy more requirements than the process \((\sigma_{t}^{\Pi})_{0\leqslant t\leqslant 1}\). We therefore may pass to a limit \(m\coloneqq\,\lim m^{\Pi}\) along the net of finite partitions \(\Pi\) of the interval \([0,1]\), which extends to a finite measure on \([0,1]\), still denoted by \(m\). Clearly the measure \(m\) is absolutely continuous with respect to Lebesgue measure on \([0,1]\) and we denote the corresponding density by \(g(t)\), for \(0\leqslant t\leqslant 1\). We claim that, for \(0\leqslant r\leqslant u\leqslant 1\), we have \[\mathbb{E}\Big{[}\int_{r}^{u}\operatorname{tr}(\sigma_{s})\,ds\Big{]} \leqslant m([r,u]). 
\tag{4.3}\] Indeed, otherwise we could find a partition \(\Pi\) with \(r,u\in\Pi\), such that \[\mathbb{E}\Big{[}\int_{r}^{u}\operatorname{tr}(\sigma_{s}^{\Pi})\,ds\Big{]} >m^{\Pi}([r,u]),\] which yields a contradiction to the definition of \(m^{\Pi}(\,\cdot\,)\) in (4.1). Since (4.3) holds for all intervals \([r,u]\subseteq[0,1]\), we deduce that \[\mathbb{E}\big{[}\operatorname{tr}(\sigma_{t})\big{]}\leqslant g(t), \tag{4.4}\] for Lebesgue-a.e. \(0\leqslant t\leqslant 1\). From (4.2) we conclude, for Lebesgue-a.e. \(0\leqslant t\leqslant 1\) and for each \(\alpha\in\mathcal{P}_{2}(\mathbb{R}^{d})\), the inequality \[g(t)\leqslant\liminf_{h\to 0}\tfrac{1}{h}\Big{(}\mathrm{MCov}(\alpha*\gamma^{h}, \mu_{t+h})-\mathrm{MCov}(\alpha,\mu_{t})\Big{)}.\] Together with (4.4), this finishes the proof of (1.7). Again we provide a more analytic argument for Theorem 1.6, at least on a formal level. Alternative heuristic proof of Theorem 1.6.: We will use the Kantorovich duality and the Fokker-Planck equations to get a hold of \(\tfrac{d}{dh}\mathrm{MCov}(\alpha*\gamma^{h},\mu_{t+h})\). By a change of variables we then get an equivalent expression which, when minimized, gives the left-hand side of (1.7). We suppose here that \(M\) is a strong solution of the stochastic differential equation \(dM_{u}=\sigma_{u}(M_{u})\,dB_{u}\), with \(\sigma\) as benevolent as needed, so that in particular \(\mu_{u}\) admits a density for each \(u\). We set \(\rho_{h}\coloneqq\alpha*\gamma^{h}\), \(\Sigma\coloneqq\sigma\sigma^{\prime}\), and notice that for fixed \(t\) we have \[\partial_{h}\rho_{h}(x) =\tfrac{1}{2}\Delta\rho_{h}(x),\qquad\rho_{0}=\alpha;\] \[\partial_{h}\mu_{t+h}(x) =\tfrac{1}{2}\sum_{i,k}\partial_{ik}^{2}\big{(}\Sigma_{ik}\mu_{t +h}(x)\big{)}.\] By the Kantorovich duality we have \[\mathrm{MCov}(\rho_{h},\mu_{t+h}) =\inf_{\phi\ \mathrm{convex}}\int\phi\,d\rho_{h}+\int\phi^{*}\,d\mu_{t +h}\] \[=\int\phi^{\mu_{t+h}}_{\rho_{h}}\,d\rho_{h}+\int\phi^{\rho_{h}}_{ \mu_{t+h}}\,d\mu_{t+h},\] where we denote by \(\phi^{q}_{p}(\,\cdot\,)\) the convex function, which is unique up to a constant, such that \(\nabla\phi^{q}_{p}(p)=q\). Using this, or more directly [44, Theorem 23.9], we have \[\frac{d}{dh}\,\mathrm{MCov}(\rho_{h},\mu_{t+h}) =\int\phi^{\mu_{t+h}}_{\rho_{h}}\,\partial_{h}\rho_{h}\,d\lambda+ \int\phi^{\rho_{h}}_{\mu_{t+h}}\,\partial_{h}\mu_{t+h}\,d\lambda\] \[=\int\phi^{\mu_{t+h}}_{\rho_{h}}\,\tfrac{1}{2}\Delta\rho_{h}\,d \lambda+\int\phi^{\rho_{h}}_{\mu_{t+h}}\,\tfrac{1}{2}\sum_{i,k}\partial_{ik}^ {2}\left(\Sigma_{ik}\mu_{t+h}\right)\,d\lambda\] \[=\tfrac{1}{2}\int\,\sum_{i,k}\partial_{i,k}^{2}\left(\phi^{\mu_{ t+h}}_{\rho_{h}}\right)\,I_{ik}\,d\rho_{h}+\tfrac{1}{2}\int\,\sum_{i,k} \partial_{i,k}^{2}(\phi^{\rho_{h}}_{\mu_{t+h}})\,\Sigma_{ik}\,d\mu_{t+h}\] \[=\tfrac{1}{2}\int\,\mathrm{tr}\big{(}D^{2}(\phi^{\mu_{t+h}}_{\rho _{h}})\big{)}\,\rho_{h}\,d\lambda+\tfrac{1}{2}\int\,\mathrm{tr}\big{(}D^{2}( \phi^{\rho_{h}}_{\mu_{t+h}})\Sigma\big{)}\,\mu_{t+h}\,d\lambda,\] where we denote by \(D\) and \(D^{2}\) the Jacobian and Hessian matrix, respectively. During this proof we will use the convention that if \(x\mapsto a(x)\in\mathbb{R}^{d}\) is an invertible vector-valued function, then \(a^{-1}(x)\) denotes the inverse function, whereas if \(x\mapsto A(x)\in\mathbb{R}^{d\times d}\) is a matrix-valued function, then \([A(x)]^{-1}\) denotes the matrix inverse of \(A(x)\). 
Now observe that \[D^{2}(\phi^{\mu_{t+h}}_{\rho_{h}})(x)=D(\nabla\phi^{\mu_{t+h}}_{\rho_{h}})(x)= D\big{(}(\nabla\phi^{\rho_{h}}_{\mu_{t+h}})^{-1}\big{)}(x)=[D\nabla\phi^{\rho_{h}}_{ \mu_{t+h}}\circ\nabla\phi^{\mu_{t+h}}_{\rho_{h}}(x)]^{-1},\] so that \[\int\mathrm{tr}\big{(}D^{2}(\phi^{\mu_{t+h}}_{\rho_{h}})(x)\big{)} \,\rho_{h}\,d\lambda =\int\mathrm{tr}\big{(}[D\nabla\phi^{\rho_{h}}_{\mu_{t+h}}\circ \nabla\phi^{\mu_{t+h}}_{\rho_{h}}(x)]^{-1}\big{)}\,\rho_{h}\,d\lambda\] \[=\int\mathrm{tr}\big{(}[D^{2}\phi^{\rho_{h}}_{\mu_{t+h}}(y)]^{- 1}\big{)}\,\mu_{t+h}\,d\lambda.\] Altogether we have \[\frac{d}{dh}\,\mathrm{MCov}(\rho_{h},\mu_{t+h})=\tfrac{1}{2}\int\,\left( \mathrm{tr}\big{(}[D^{2}\phi^{\rho_{h}}_{\mu_{t+h}}]^{-1}\big{)}+\mathrm{tr} \big{(}D^{2}(\phi^{\rho_{h}}_{\mu_{t+h}})\Sigma\big{)}\right)\,\mu_{t+h}\,d\lambda.\] Define now the functional on invertible, positive-semidefinite symmetric matrices \[A\mapsto J(A)\coloneqq\operatorname{tr}(A^{-1})+\operatorname{tr}(A\Sigma).\] We remark that \(J(A)\geqslant 2\operatorname{tr}(\Sigma^{1/2})\), since this is equivalent to the trivial statement \[|A^{-1/2}-A^{1/2}\Sigma^{1/2}|_{\operatorname{HS}}\geqslant 0.\] Hence in fact the minimum of \(J(\,\cdot\,)\) is attained at \(A=\Sigma^{-1/2}\). We conclude that \[\frac{d}{dh}\Big{|}_{h=0}\operatorname{MCov}(\rho_{h},\mu_{t+h})\geqslant\int \operatorname{tr}\bigl{(}\sigma_{t}(y)\bigr{)}\,\mu_{t}(dy)=\mathbb{E}\bigl{[} \operatorname{tr}\bigl{(}\sigma_{t}(M_{t})\bigr{)}\bigr{]},\] which completes the proof of Theorem 1.6. ## 5. Displacement convexity of the Bass functional We observe that the Bass functional \(\alpha\mapsto\mathcal{V}(\alpha)\) provides a novel example of a convex functional with respect to the almost-Riemannian structure of the quadratic Wasserstein space \(\mathcal{P}_{2}\). As mentioned in [43, Open Problem 5.17], there are only few known examples of so-called displacement convex functionals (see [43, Definition 5.10], [2, Definition 9.1.1], [37]), and it is desirable to find new ones. We shall state two versions of this result. The first one, Proposition 5.1, pertains to the case \(d=1\), while the second one, Proposition 5.2, holds for general \(d\in\mathbb{N}\). We also note that, contrary to the rest of this paper, we do not assume that \(\mu\preceq_{\mathrm{c}}\nu\). **Proposition 5.1**.: _Suppose \(d=1\). Let \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R})\). The Bass functional_ \[\mathcal{P}_{2}(\mathbb{R})\ni\alpha\longmapsto\mathcal{V}(\alpha)= \operatorname{MCov}(\alpha\ast\gamma,\nu)-\operatorname{MCov}(\alpha,\mu) \tag{5.1}\] _is displacement convex. Moreover, if a geodesic \((\alpha_{u})_{0\leqslant u\leqslant 1}\) in \(\mathcal{P}_{2}(\mathbb{R})\) is such that \(\alpha_{1}\) is not a translate of \(\alpha_{0}\) and if \(\nu\) is not a Dirac measure, the function \(u\mapsto\mathcal{V}(\alpha_{u})\) is strictly convex._ Proof.: We start by noting that the Bass functional \(\mathcal{V}(\,\cdot\,)\) of (5.1) can equivalently be defined in terms of the quadratic Wasserstein distance \(\mathcal{W}_{2}(\,\cdot\,\,,\,\cdot\,)\) of (2.3) rather than in terms of the maximal covariance \(\operatorname{MCov}(\,\cdot\,\,,\,\cdot\,)\) of (1.4). 
Indeed, we have the identity \[\mathcal{V}(\alpha)=\operatorname{MCov}(\alpha\ast\gamma,\nu)-\operatorname{ MCov}(\alpha,\mu)=\tfrac{1}{2}\mathcal{W}_{2}^{2}(\alpha,\mu)-\tfrac{1}{2} \mathcal{W}_{2}^{2}(\alpha\ast\gamma,\nu)+\text{const},\] where the constant \[\text{const}=\tfrac{d}{2}+\tfrac{1}{2}\int|y|^{2}\,d\nu(y)-\tfrac{1}{2}\int|x| ^{2}\,d\mu(x)\] does not depend on \(\alpha\). Therefore showing the (strict) displacement convexity of the Bass functional \(\mathcal{V}(\,\cdot\,)\) is equivalent to showing the (strict) displacement convexity of the functional \[\mathcal{U}(\alpha)\coloneqq\mathcal{W}_{2}^{2}(\alpha,\mu)-\mathcal{W}_{2}^{ 2}(\alpha\ast\gamma,\nu). \tag{5.2}\] Fix \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R})\) and let \((\alpha_{u})_{0\leqslant u\leqslant 1}\) be a geodesic in the quadratic Wasserstein space \(\mathcal{P}_{2}(\mathbb{R})\). Using the hypothesis \(d=1\) we can choose mutually comonotone random variables \(Z_{0}\), \(Z_{1}\) and \(X\) with laws \(\alpha_{0}\), \(\alpha_{1}\) and \(\mu\), respectively. As \((\alpha_{u})_{0\leqslant u\leqslant 1}\) is a geodesic, the random variable \(Z_{u}\coloneqq(1-u)Z_{0}+uZ_{1}\) has law \(\alpha_{u}\), for \(0\leqslant u\leqslant 1\). Also note that each \(Z_{u}\) is comonotone with \(X\). Let \(u_{0},u\in[0,1]\). As regards the first Wasserstein distance in (5.2), a straightforward calculation yields \[\mathcal{W}_{2}^{2}(\alpha_{u},\mu) -\mathcal{W}_{2}^{2}(\alpha_{u_{0}},\mu)= \tag{5.3}\] \[=\mathbb{E}[|Z_{u}-X|^{2}]-\mathbb{E}[|Z_{u_{0}}-X|^{2}]\] (5.4) \[=\mathbb{E}[|Z_{u}-Z_{u_{0}}|^{2}]-2\mathbb{E}[\langle Z_{u}-Z_{ u_{0}},X-Z_{u_{0}}\rangle]\] (5.5) \[=(u-u_{0})^{2}\mathbb{E}[|Z_{1}-Z_{0}|^{2}]-2(u-u_{0})\mathbb{E} [\langle Z_{1}-Z_{0},X-Z_{u_{0}}\rangle]. \tag{5.6}\] Passing to the second Wasserstein distance in (5.2), we take a standard Gaussian random variable \(\Gamma\) on \(\mathbb{R}\), independent of \(Z_{0}\) as well as of \(Z_{1}\). Next we choose a random variable \(Y_{u_{0}}\) such that \((Z_{u_{0}}+\Gamma,Y_{u_{0}})\) is an optimal coupling of \((\alpha_{u_{0}}*\gamma,\nu)\). As \((Z_{u}*\Gamma,Y_{u_{0}})\) is a (typically sub-optimal) coupling of \((\alpha_{u}*\gamma,\nu)\), we obtain the inequality \[\mathcal{W}_{2}^{2}(\alpha_{u}*\gamma,\nu)-\mathcal{W}_{2}^{2}( \alpha_{u_{0}}*\gamma,\nu)\leqslant \tag{5.7}\] \[\leqslant\mathbb{E}[|Z_{u}+\Gamma-Y_{u_{0}}|^{2}]-\mathbb{E}[|Z_{ u_{0}}+\Gamma-Y_{u_{0}}|^{2}]\] (5.8) \[=\mathbb{E}[|Z_{u}-Z_{u_{0}}|^{2}]-2\mathbb{E}[\{Z_{u}-Z_{u_{0}},Y _{u_{0}}-(Z_{u_{0}}+\Gamma)\}]\] (5.9) \[=(u-u_{0})^{2}\mathbb{E}[|Z_{1}-Z_{0}|^{2}]-2(u-u_{0})\mathbb{E}[ (Z_{1}-Z_{0},Y_{u_{0}}-(Z_{u_{0}}+\Gamma))]. \tag{5.10}\] Combining (5.3) - (5.10), we deduce that \[\mathcal{U}(\alpha_{u})-\mathcal{U}(\alpha_{u_{0}})= \tag{5.11}\] \[\quad=\Big{(}\mathcal{W}_{2}^{2}(\alpha_{u},\mu)-\mathcal{W}_{2}^ {2}(\alpha_{u}*\gamma,\nu)\Big{)}-\Big{(}\mathcal{W}_{2}^{2}(\alpha_{u_{0}}, \mu)-\mathcal{W}_{2}^{2}(\alpha_{u_{0}}*\gamma,\nu)\Big{)}\geqslant\] (5.12) \[\quad\geqslant 2(u-u_{0})\mathbb{E}[(Z_{1}-Z_{0},Y_{u_{0}}-X- \Gamma)]\] (5.13) \[=2(u-u_{0})\mathbb{E}[(Z_{1}-Z_{0},Y_{u_{0}}-X)], \tag{5.14}\] where the last equation follows from conditioning on \(Z_{0},Z_{1}\). 
The expression in (5.14) defines a linear function in \(u\), which lies below and touches the function \[u\longmapsto\mathcal{U}(\alpha_{u})-\mathcal{U}(\alpha_{u_{0}})=\] \[\qquad\qquad=\Big{(}\mathcal{W}_{2}^{2}(\alpha_{u},\mu)-\mathcal{ W}_{2}^{2}(\alpha_{u}*\gamma,\nu)\Big{)}-\Big{(}\mathcal{W}_{2}^{2}(\alpha_{u_{0}}, \mu)-\mathcal{W}_{2}^{2}(\alpha_{u_{0}}*\gamma,\nu)\Big{)}\] at the point \(u=u_{0}\). This readily implies the convexity of the function \[u\longmapsto\mathcal{U}(\alpha_{u})=\mathcal{W}_{2}^{2}(\alpha_{u},\mu)- \mathcal{W}_{2}^{2}(\alpha_{u}*\gamma,\nu).\] It remains to show the strict convexity assertion of Proposition 5.1. If \(\alpha_{1}\) is not a translate of \(\alpha_{0}\), then \(\alpha_{u}\) is not a translate of \(\alpha_{u_{0}}\) either, provided that \(u\neq u_{0}\). As \(Z_{u_{0}}+\Gamma\) is comonotone with \(Y_{u_{0}}\) and \(Y_{u_{0}}\) is assumed to be non-constant, we may find \(y_{0}\in\mathbb{R}\) and \(z_{0}\in\mathbb{R}\) such that \(\mathbb{P}[Y_{u_{0}}<y_{0}]\in(0,1)\) and \[\{Z_{u_{0}}+\Gamma<z_{0}\}=\{Y_{u_{0}}<y_{0}\}.\] If \(Z_{u}+\Gamma\) were also comonotone with \(Y_{u_{0}}\), we could find \(z\in\mathbb{R}\) such that \[\{Z_{u_{0}}+\Gamma<z_{0}\}=\{Y_{u_{0}}<y_{0}\}=\{Z_{u}+\Gamma<z\},\] where we have used that the law of \(Z_{u}+\Gamma\) is continuous. Conditioning on \(\Gamma=\zeta\) this implies that, for Lebesgue-a.e. \(\zeta\in\mathbb{R}\), \[\{Z_{u_{0}}<z_{0}-\zeta\}=\{Z_{u}<z-\zeta\},\] so that \(Z_{u_{0}}\) and \(Z_{u}\) are translates. This gives the desired contradiction, showing that there is a strict inequality in (5.7), (5.8) (thus also in (5.12), (5.13)), which implies the strict convexity assertion of Proposition 5.1. We now pass to the case of general \(d\in\mathbb{N}\). In Proposition 5.2 below we formulate a convexity property of the Bass functional \(\mathcal{V}(\,\cdot\,)\) pertaining to the notion of _generalized geodesics_ as analyzed in [2, Definition 9.2.2]. Recall that \((\alpha_{u})_{0\leqslant u\leqslant 1}\) is a generalized geodesic with base \(\mu\), joining \(\alpha_{0}\) to \(\alpha_{1}\), if there are random variables \(Z_{0}\), \(Z_{1}\) and \(X\) with laws \(\alpha_{0}\), \(\alpha_{1}\) and \(\mu\), respectively, such that \((Z_{0},X)\) and \((Z_{1},X)\) are optimal couplings and such that the random variable \(Z_{u}\coloneqq uZ_{1}+(1-u)Z_{0}\) has law \(\alpha_{u}\), for \(0\leqslant u\leqslant 1\). **Proposition 5.2**.: _Let \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\). The Bass functional_ \[\mathcal{P}_{2}(\mathbb{R}^{d})\ni\alpha\longmapsto\mathcal{V}(\alpha)=\mathrm{ MCov}(\alpha*\gamma,\nu)-\mathrm{MCov}(\alpha,\mu)\] _is convex along generalized geodesics \((\alpha_{u})_{0\leqslant u\leqslant 1}\) in \(\mathcal{P}_{2}(\mathbb{R}^{d})\) with base \(\mu\)._ We do not know whether the above assertion is also true along (non generalized) geodesics \((\alpha_{u})_{0\leqslant u\leqslant 1}\) in \(\mathcal{P}_{2}(\mathbb{R}^{d})\), when \(d>1\). Proof of Proposition 5.2.: We follow the lines of the proof of Proposition 5.1 and consider again the functional \[\mathcal{U}(\alpha)=\mathcal{W}_{2}^{2}(\alpha,\mu)-\mathcal{W}_{2}^{2}(\alpha \ast\gamma,\nu)\] as in (5.2). Let \((\alpha_{u})_{0\leqslant u\leqslant 1}\) be a generalized geodesic with base \(\mu\), joining \(\alpha_{0}\) to \(\alpha_{1}\). Take \(Z_{0},Z_{1},Z_{u}\), \(X\) as above such that \((Z_{0},X)\) and \((Z_{1},X)\) are optimal couplings and by definition \(Z_{u}\sim\alpha_{u}\). 
Note that \((Z_{u},X)\) is an optimal coupling of \((\alpha_{u},\mu)\) by [2, Lemma 9.2.1], for \(0\leqslant u\leqslant 1\). The equalities (5.3) - (5.6) and the inequality in (5.7) - (5.10) then carry over verbatim and we again arrive at (5.12) - (5.14), which shows the convexity of the function \([0,1]\ni u\mapsto\mathcal{U}(\alpha_{u})\).
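In the one-dimensional setting of Proposition 5.1 the quantities involved are explicit enough to be examined numerically: maximal covariances reduce to integrals of products of quantile functions, and Wasserstein geodesics to quantile interpolation. The following minimal sketch (a numerical illustration only; the measures are arbitrary choices and sorted Monte-Carlo samples stand in for quantile functions) evaluates \(u\mapsto\mathcal{V}(\alpha_{u})\) along such a geodesic, whose second differences are then nonnegative up to sampling error, in line with the convexity assertion.

```python
# Numerical illustration of Proposition 5.1 (d = 1), using sorted Monte-Carlo
# samples as stand-ins for quantile functions; all measures below are
# arbitrary choices made only for this sketch.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def mcov(x, y):
    # MCov of two samples: mean of the product under the comonotone (sorted)
    # coupling, which maximizes the covariance term E[XY] in dimension one.
    return float(np.mean(np.sort(x) * np.sort(y)))

mu = rng.normal(0.0, 1.0, N)             # mu
nu = rng.normal(1.0, 2.0, N)             # nu
z0 = np.sort(rng.exponential(1.0, N))    # alpha_0, sorted
z1 = np.sort(rng.uniform(-2.0, 2.0, N))  # alpha_1, sorted, comonotone with z0
gam = rng.normal(0.0, 1.0, N)            # standard Gaussian gamma, independent

def bass_functional(u):
    zu = (1.0 - u) * z0 + u * z1         # quantile geodesic alpha_u
    return mcov(zu + gam, nu) - mcov(zu, mu)

vals = np.array([bass_functional(u) for u in np.linspace(0.0, 1.0, 11)])
print(np.diff(vals, 2))                  # second differences: >= 0 up to noise
```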
2309.12488
Sharpness-Aware Minimization and the Edge of Stability
Recent experiments have shown that, often, when training a neural network with gradient descent (GD) with a step size $\eta$, the operator norm of the Hessian of the loss grows until it approximately reaches $2/\eta$, after which it fluctuates around this value. The quantity $2/\eta$ has been called the "edge of stability" based on consideration of a local quadratic approximation of the loss. We perform a similar calculation to arrive at an "edge of stability" for Sharpness-Aware Minimization (SAM), a variant of GD which has been shown to improve its generalization. Unlike the case for GD, the resulting SAM-edge depends on the norm of the gradient. Using three deep learning training tasks, we see empirically that SAM operates on the edge of stability identified by this analysis.
Philip M. Long, Peter L. Bartlett
2023-09-21T21:15:51Z
http://arxiv.org/abs/2309.12488v6
# Sharpness-Aware Minimization ###### Abstract Recent experiments have shown that, often, when training a neural network with gradient descent (GD) with a step size \(\eta\), the operator norm of the Hessian of the loss grows until it approximately reaches \(2/\eta\), after which it fluctuates around this value. The quantity \(2/\eta\) has been called the "edge of stability" based on consideration of a local quadratic approximation of the loss. We perform a similar calculation to arrive at an "edge of stability" for Sharpness-Aware Minimization (SAM), a variant of GD which has been shown to improve its generalization. Unlike the case for GD, the resulting SAM-edge depends on the norm of the gradient. Using three deep learning training tasks, we see empirically that SAM operates on the edge of stability identified by this analysis. ## 1 Introduction _Sharpness-aware Minimization_ (SAM) (Foret et al., 2020) is a new gradient-based neural network training algorithm that advanced the state-of-the-art test accuracy on a number of prominent benchmark datasets. As its name suggests, it explicitly seeks to find a solution that not only fits the training data, but that avoids "sharp" minima, for which nearby parameter vectors perform poorly. SAM is an incremental algorithm that updates its parameters using a gradient computed at a neighbor of the current solution. The neighbor is the point in parameter space found by taking a step of length \(\rho\) "uphill" in the gradient direction. The practical success of SAM has motivated theoretical research (Bartlett et al., 2022; Wen et al., 2023; Andriushchenko et al., 2023), including results highlighting senses in which SAM's update may be viewed, under certain conditions, as including a component that performs gradient descent on the operator norm of the Hessian (Bartlett et al., 2022; Wen et al., 2023). Meanwhile, Cohen et al. (2021), building on the work of Jastrzebski et al. (2020) and others, exposed a striking phenomenon regarding neural network training with the original gradient descent (GD) method: for many initialization schemes and learning rates \(\eta\), the operator norm of the Hessian eventually settles in the neighborhood of \(2/\eta\). This has been termed the "edge of stability", in part because a convex quadratic trained by gradient descent with a learning rate \(\eta\) will only converge if the operator norm of its Hessian (which is the same everywhere) is less than \(2/\eta\). This phenomenon also inspired substantial theoretical research (see Arora et al., 2022, Damian et al., 2022, Ma et al., 2022, Zhu et al., 2022, Ahn et al., 2022b, Chen and Bruna, 2022) - one result identified conditions under which, when training approaches the edge of stability, the dynamics includes a self-stabilization mechanism that tends to drive the operator norm of the Hessian back down (Damian et al., 2022). In this paper, we investigate whether SAM operates at the edge of stability. First, we perform a derivation, analogous to the one that identifies \(2/\eta\) as the edge of stability for GD, that yields a formula for the operator norm of the Hessian that may be viewed as the edge of stability for SAM. As expected, SAM's edge of stability depends on the radius \(\rho\) of its neighborhood. It also depends on the norm of the gradient of the training error at the current solution, unlike the case of GD. As the norm of the gradient gets smaller, the edge of stability for SAM also gets smaller. 
Next, we evaluate experimentally whether SAM operates at the edge of stability identified by our analysis. Our first experiments are with fully connected networks on MNIST. Here, it is feasible to experiment with a version of SAM that uses a batch gradient, albeit computed at the neighbor uphill of the current iterate at a distance \(\rho\). For many combinations of the step size \(\eta\) and the radius \(\rho\), the operator norm of the Hessian at SAM's iterates closely matches the value arising from our analysis. Next, we experiment with a convolutional neural network training on 1000 examples from CIFAR10. Here again, we see SAM operating on the edge of stability. Finally, we experiment with a standard Transformer architecture training a language model on tiny_shakespeare using the more practical version of SAM that uses stochastic gradients. Here, we also see substantial agreement with our theoretical analysis. In our experiments with SAM, its edge of stability is often _much_ smaller than \(2/\eta\), even early in training. Rather than first driving the training error to a very small value, and then drifting along a manifold of near-optimal solutions to wider minima, SAM's process drives solutions toward smooth regions of parameter space early in training, while the loss is still large. The derivation of SAM's edge of stability is in Section 2. The experiments are described in detail in Section 3. The results are in Section 4. Section 5 includes further description of related work. We conclude in Section 6. ## 2 Derivation The _Sharpness-Aware Minimization_ algorithm is defined by the update \[w_{t+1}=w_{t}-\eta\nabla\ell\left(w_{t}+\rho\frac{\nabla\ell(w_{t})}{\|\nabla \ell(w_{t})\|}\right). \tag{1}\] This is like gradient descent, except using a gradient evaluated at \(w_{t}+\rho\frac{\nabla\ell(w_{t})}{\|\nabla\ell(w_{t})\|}\) instead of \(w_{t}\). In this section, we calculate an "edge of stability" for SAM analogous to the \(2/\eta\) value for GD. Before analyzing SAM, however, let us review the standard analysis that identifies the edge of stability for GD, assuming for simplicity that the quadratic approximation around an iterate is exact. **Proposition 1**.: _For \(w_{t}\in\mathbb{R}^{d}\), \(\eta>0\), if_ * \(g=\nabla\ell(w_{t})\neq 0\)_,_ \(H=\nabla^{2}\ell(w_{t})\)_,_ \(w_{t+1}=w_{t}-\eta g\)_, and_ * _for all_ \(w\in\mathbb{R}^{d}\)_,_ \(\ell(w)=\ell(w_{t})+g^{T}(w-w_{t})+\frac{(w-w_{t})^{\top}H(w-w_{t})^{\top}}{2}\)_,_ _then_ * _if_ \(||H||_{op}<\frac{2}{\eta}\)_, then_ \(\ell(w_{t+1})<\ell(w_{t})\)_, and_ * _this condition on_ \(||H||_{op}\) _is the weakest possible of its type: if_ * \(g\) _is aligned with a principal eigenvector of_ \(H\) _whose eigenvalue is non-negative, then_ * \(\operatorname{sign}(\ell(w_{t+1})-\ell(w_{t}))=\operatorname{sign}\left(||H|| _{op}-\frac{2}{\eta}\right)\)_._ Proof.: Substituting \(w_{t+1}-w_{t}\) into the formula for \(\ell\), we have \[\ell(w_{t+1}) =\ell(w_{t})-\eta g^{\top}g+\frac{\eta^{2}g^{\top}Hg}{2}\] \[\leq\ell(w_{t})-\eta g^{\top}g+\frac{\eta^{2}g^{\top}||H||_{op}g} {2}\] \[=\ell(w_{t})-\eta\left(1-\frac{\eta||H||_{op}}{2}\right)||g||^{2}.\] If \(||H||_{op}<\frac{2}{\eta}\), since \(g\neq 0\), this implies \(\ell(w_{t+1})<\ell(w_{t})\). 
When \(g\) is aligned with a principal eigenvector of \(H\) whose eigenvalue is non-negative, we have \(Hg=||H||_{op}g\), which implies, as above, that \[\ell(w_{t+1})=\ell(w_{t})-\eta\left(1-\frac{\eta||H||_{op}}{2}\right)||g||^{2},\] which, again since \(g\neq 0\), implies \(\operatorname{sign}(\ell(w_{t+1})-\ell(w_{t}))=\operatorname{sign}\left(||H||_{op}-\frac{2}{\eta}\right)\). Even in the convex quadratic case, the dynamics of SAM are much more complex than GD (see Bartlett et al., 2022). However, if we bound \(||H||_{op}\) in terms of \(||g||\) as well as \(\eta\) and \(\rho\), an analogous result holds. **Proposition 2**.: _For \(w_{t}\in\mathbb{R}^{d}\), \(\eta>0\), \(\rho>0\), if_ * \(g=\nabla\ell(w_{t})\neq 0\)_,_ \(H=\nabla^{2}\ell(w_{t})\succeq 0\)_,_ \(w_{t+1}=w_{t}-\eta\nabla\ell\left(w_{t}+\rho\frac{\nabla\ell(w_{t})}{||\nabla\ell(w_{t})||}\right)\)_, and_ * _for all_ \(w\in\mathbb{R}^{d}\)_,_ \(\ell(w)=\ell(w_{t})+g^{T}(w-w_{t})+\frac{(w-w_{t})^{\top}H(w-w_{t})}{2}\)_,_ _then_ * _if_ \(||H||_{op}<\frac{||g||}{2\rho}\left(\sqrt{1+\frac{8\rho}{\eta||g||}}-1\right)\)_, then_ \(\ell(w_{t+1})<\ell(w_{t})\)_, and_ * _this condition on_ \(||H||_{op}\) _is the weakest possible of its type: if_ * \(g\) _is aligned with a principal eigenvector of_ \(H\)_, then_ * \(\operatorname{sign}(\ell(w_{t+1})-\ell(w_{t}))=\operatorname{sign}\left(||H||_{op}-\frac{||g||}{2\rho}\left(\sqrt{1+\frac{8\rho}{\eta||g||}}-1\right)\right)\)_._ Proposition 2 is an immediate consequence of the following stronger, but somewhat more technical, proposition. **Proposition 3**.: _For \(w_{t}\in\mathbb{R}^{d}\), \(\eta>0\), \(\rho>0\), if_ * \(g=\nabla\ell(w_{t})\neq 0\) _and_ \(H=\nabla^{2}\ell(w_{t})\) _has eigenvalues_ \(\lambda_{1},...,\lambda_{d}\) _and unit-length eigenvectors_ \(v_{1},...,v_{d}\)_,_ * \(w_{t+1}=w_{t}-\eta\nabla\ell\left(w_{t}+\rho\frac{\nabla\ell(w_{t})}{\|\nabla\ell(w_{t})\|}\right)\)_,_ * _for all_ \(w\in\mathbb{R}^{d}\)_,_ \(\ell(w)=\ell(w_{t})+g^{T}(w-w_{t})+\frac{(w-w_{t})^{\top}H(w-w_{t})}{2}\)_,_ _then_ * _if, for all_ \(i\)_,_ \[-\frac{||g||}{\rho}\leq\lambda_{i}\leq\frac{||g||}{2\rho}\left(\sqrt{1+\frac{8\rho}{\eta||g||}}-1\right),\] _and there is an_ \(i\) _such that_ \[g\cdot v_{i}\neq 0\text{ and }-\frac{||g||}{\rho}<\lambda_{i}<\frac{||g||}{2\rho}\left(\sqrt{1+\frac{8\rho}{\eta||g||}}-1\right),\] _then_ \(\ell(w_{t+1})<\ell(w_{t})\)_, and_ * _if_ \(g\) _is aligned with a principal eigenvector of_ \(H\) _whose eigenvalue is non-negative, then_ * \(\operatorname{sign}(\ell(w_{t+1})-\ell(w_{t}))=\operatorname{sign}\left(||H||_{op}-\frac{||g||}{2\rho}\left(\sqrt{1+\frac{8\rho}{\eta||g||}}-1\right)\right)\)_._ Proof.: Substituting \(w_{t+1}-w_{t}\) into the formula for \(\ell\), in part since \(H\) is symmetric, we have \[\ell(w_{t+1})=\ell(w_{t})-\eta g^{\top}\left(g+\rho H\frac{g}{||g||}\right)+\frac{\eta^{2}\left(g+\rho H\frac{g}{||g||}\right)^{\top}H\left(g+\rho H\frac{g}{||g||}\right)}{2}\] \[=\ell(w_{t})-\eta g^{\top}\left(I+\frac{\rho H}{||g||}-\eta\left(\frac{\left(I+\frac{\rho H}{||g||}\right)^{2}H}{2}\right)\right)g.\] Using the fact that, since \(H\) is symmetric, any matrix polynomial of \(H\) has the same eigenvectors as \(H\), we have \[\ell(w_{t+1})=\ell(w_{t})-\eta\sum_{i=1}^{d}(v_{i}\cdot g)^{2}\left(1+\frac{\rho\lambda_{i}}{||g||}-\eta\left(\frac{\left(1+\frac{\rho\lambda_{i}}{||g||}\right)^{2}\lambda_{i}}{2}\right)\right)\] \[=\ell(w_{t})-\eta\sum_{i=1}^{d}(v_{i}\cdot g)^{2}\left(1+\frac{\rho\lambda_{i}}{||g||}\right)\left(1-\frac{\eta\left(1+\frac{\rho\lambda_{i}}{||g||}\right)\lambda_{i}}{2}\right).
\tag{2}\] Recalling that each \(\lambda_{i}\geq-\frac{||g||}{\rho}\), let us focus on the last factor of one term in the sum of (2) for which \(\lambda_{i}>-\frac{||g||}{\rho}\) and \((v_{i}\cdot g)^{2}\neq 0\). We have \[1-\frac{\eta\left(1+\frac{\rho\lambda_{i}}{||g||}\right)\lambda_{ i}}{2}\geq 0\] \[\Leftrightarrow\eta\rho\lambda_{i}^{2}+\eta\lambda_{i}||g||-2||g ||\leq 0.\] The convex quadratic on the LHS has two solutions, one that is negative, and one that is positive: \[\frac{\pm\sqrt{\eta^{2}||g||^{2}+8\eta\rho||g||}-\eta||g||}{2\eta\rho}\] \[=\frac{||g||}{2\rho}\left(\pm\sqrt{1+\frac{8\rho}{\eta||g||}}-1 \right).\] Thus, given that \(\lambda_{i}>-\frac{||g||}{\rho}\), the \(i\)th term of the sum in (2) is positive iff \[-\frac{||g||}{2\rho}\left(\sqrt{1+\frac{8\rho}{\eta||g||}}+1\right)<\lambda_ {i}<\frac{||g||}{2\rho}\left(\sqrt{1+\frac{8\rho}{\eta||g||}}-1\right), \tag{3}\] for which \[-\frac{||g||}{\rho}<\lambda_{i}<\frac{||g||}{2\rho}\left(\sqrt{1+\frac{8\rho} {\eta||g||}}-1\right),\] suffices. Thus each term in the sum of (2) is non-negative, and at least one is positive, so \(\ell(w_{t+1})<\ell(w_{t})\). If \(g\) is aligned with a principal eigenvector of \(H\) whose eigenvalue is non-negative, assuming wlog that this principal eigenvector is \(v_{1}\), we have \((v_{1}\cdot g)^{2}>0\), and \((v_{i}\cdot g)^{2}=0\) for all \(i\neq 1\). In this case, all of the terms in the sum in (2) are zero except the first, thus \[\operatorname{sign}(\ell(w_{t+1})-\ell(w_{t})) =-\operatorname{sign}\left(1-\frac{\eta\left(1+\frac{\rho \lambda_{1}}{||g||}\right)\lambda_{1}}{2}\right)\] \[=\operatorname{sign}\left(\lambda_{1}-\frac{||g||}{2\rho}\left( \sqrt{1+\frac{8\rho}{\eta||g||}}-1\right)\right)\] \[=\operatorname{sign}\left(||H||_{op}-\frac{||g||}{2\rho}\left( \sqrt{1+\frac{8\rho}{\eta||g||}}-1\right)\right),\] where we have used the equivalent bounds on \(\lambda_{1}\) given by (3). We refer to the threshold \(\frac{||g||}{2\rho}\left(\sqrt{1+\frac{8\rho}{\eta||g||}}-1\right)\) identified in Proposition 2 as _SAM's edge of stability_, or the SAM-edge for short. The ratio \(\frac{\|H\|_{op}}{2/\eta}\) between the edge of stability for SAM, and the edge for GD, is \[\frac{||H||_{op}}{2/\eta}=\frac{\eta\|g\|}{4\rho}\left(\sqrt{1+\frac{8\rho}{\eta \|g\|}}-1\right).\] This ratio depends on \(\eta\), \(\rho\) and \(||g||\) through \(\eta\|g\|/(2\rho)\); let us refer to this intermediate quantity as \(\alpha\). Figure 1 shows the function \[\alpha\mapsto\frac{\alpha}{2}\left(\sqrt{1+\frac{4}{\alpha}}-1\right),\] that, at SAM's edge of stability, gives \(\|H\|_{op}/(2/\eta)\) as a function of \(\alpha=\eta\|g\|/(2\rho)\). Notice that as \(\alpha\to\infty\), this function approaches 1, and it approaches zero like \(\sqrt{\alpha}\); that is, \(\|H\|_{op}\to\sqrt{2/\eta}\sqrt{\|g\|/\rho}\). Proposition 3 focuses on the case where the largest eigenvalue is positive. This is motivated in part by the work of Ghorbani et al. (2019), who found that, often, after a small amount of training of a neural network, any negative eigenvalues in the Hessian are very small. ## 3 Methods We performed experiments in three settings. In each setting, we trained for a variety of combinations of hyperparameters, and tracked various quantities, including the operator norm of the Hessian, and the SAM edge. Code is available. ### Settings First, we trained a depth-four fully connected network, with 1000 nodes in each hidden layer, on MNIST using the quadratic loss with batch gradient descent. 
We trained for four hours of wallclock time on a V100 GPU. The weights were initialized using Glorot normal initialization. Prior to training, the data was centered. Next, we trained a CNN on CIFAR10 using the quadratic loss. To make batch gradients feasible, we only trained on the first 1000 examples. The CNN architecture was standard: there were two blocks comprised of a convolutional layer with a ReLU nonlinearity followed by layer normalization, then \(2\times 2\) max pooling with a \(2\times 2\) stride. In the first block the convolutional layer had 16 channels, and in the second block, it had 32 channels. Training was performed for 12 hours on a V100 GPU. Here again, the weights were initialized using Glorot normal initialization, and data was centered before training. For the final setting, we modified the sample implementation of Transformers distributed with the Haiku package (see Hennigan et al., 2023), training an autoregressive character language model using the tiny_shakespeare dataset, using minibatches of size 128. The operator norm of the Hessian, and its principal directions, were also estimated using minibatches. The architecture was as in the Haiku distribution, with 6 layers, 8 heads, a key size of 32, "model size" of 128, and sequence length of 64. Because it introduces noise, Dropout was removed. The last 10000 lines of tiny_shakespeare were set aside as a test set, and the remaining data was used for training. ### Hyperparameters We trained once for each combination of the following hyperparameters: * For MNIST, * learning rates \(\eta\): 0.03, 0.1, 0.3, * SAM offsets \(\rho\) (see (1)): 0.0, 0.1, 0.3, 1.0. * For CIFAR10, * learning rates: 0.0003, 0.001, 0.003, 0.01, * \(\rho\) values: 0.0, 0.1, 0.3, 1.0 * For tiny_shakespeare, * learning rates: 0.01, 0.02, 0.05, 0.1, 0.2, 0.5 * \(\rho\) values: 0.0, 0.1, 0.3, 1.0. Results were discarded whenever training diverged. ### Implementation We coded our experiments using Jax (Bradbury et al., 2018), along with Flax (Heek et al., 2023) (for the image classification experiments), and Haiku (Hennigan et al., 2020) (for the language model experiments). ### Unreported preliminary experiments During an exploration phase, we conducted a number of preliminary experiments, during which we identified new statistics to collect, what hyperparameter combinations to try, etc. (For example, we wanted to minimize the fraction of runs with learning rates too small to bring about the edge of stability, and those with learning rates so large that training diverged.) The results reported in this paper were one series of final runs for the last combinations of hyperparameters. ## 4 Results All of the results from every run that did not diverge may be found in a supplementary folder. (In all of the plots, the training time in seconds is plotted along the horizontal axis.) In this section, we go over some of the most noteworthy results. ### Mnist Figure 2 contains plots of the magnitudes of the top three eigenvalues of the Hessian, along with \(2/\eta\) and the SAM-edge, when an MLP was trained on MNIST using gradient descent. There is a plot for each learning rate \(\eta\). As in [Cohen et al., 2021], if the learning rate is large enough, the operator norm of the Hessian stabilizes near \(2/\eta\). We can think of GD as a special case of SAM with \(\rho=0\); the SAM-edge is of course \(2/\eta\) in that case. Figure 3 contains the analogous plots when \(\rho=0.1\). 
Despite the fact that gradients are taken from locations at a distance just \(0.1\) from each of the iterates, the cumulative effect results in solutions with Hessians an order of magnitude smaller than those seen with GD. Figure 4 contains the analogous plots, but without \(2/\eta\), and with the axis rescaled to zoom in on the SAM edge and the magnitudes of the principal eigenvalues of the Hessian. The operator norm closely tracks the SAM edge derived in Section 2. SAM operates at the edge of stability for a wider variety of learning rates than GD. We also see the SAM edge decreasing over time, as the gradients get smaller. The top three principal components are very close to one another. This is consistent with the view that SAM effectively performs gradient descent on the operator norm of the Hessian - if it did, a step would reduce the principal eigenvalue, while leaving the others at their old values, bringing the top eigenvalue closer to the others. In Figure 5, we plot the training losses, when \(\rho=0.0\) and \(\rho=0.1\). SAM achieves flatter minima with similar loss. We also see that SAM drives training toward smoother regions in parameter space while the training error is still fairly high. In Figure 6, we examine alignments between the gradients and the principal eigenvector of the Hessian, again where \(\rho=0.1\). We evaluate both the gradient at the iterate, and the gradient evaluated by SAM, at a distance \(\rho\) uphill. Since there are millions of parameters, random directions would have a tiny amount of alignment. We see a significant alignment between both gradients and the principal eigenvector of the Hessian, though the gradient used by SAM is aligned more closely. Recall that there are a number of eigenvectors whose eigenvalues are nearly equal to the largest value. Reducing their eigenvalues can also make progress toward ultimately reducing the operator norm of the Hessian. Figure 2: Magnitudes of the largest eigenvalues of the Hessian when an MLP is trained with GD on MNIST. Figure 3: Magnitudes of the largest eigenvalues of the Hessian when an MLP is trained with SAM on MNIST, with \(\rho=0.1\). Figure 4: Magnitudes of the largest eigenvalues of the Hessian when an MLP is trained with SAM on MNIST, with \(\rho=0.1\). Figure 5: Training loss with GD and SAM on MNIST. Figure 6: Alignments between gradients and the principal eigenvector of the Hessian with SAM on MNIST when \(\rho=0.1\). ### Cifar10 In this section, we report on experiments with convolutional neural networks trained on 1000 examples from CIFAR10. As before, we start with the case of GD in Figure 7. At the larger learning rates, training is reaching the edge of stability. Next, we plot the same quantities when the network is trained with SAM, with \(\rho=0.1\), in Figure 8. Here, the eigenvalues are multiple orders of magnitude smaller than \(2/\eta\). Next, in Figure 9 we no longer plot \(2/\eta\), and zoom in on the region where the SAM edge and the eigenvalues are. Here, as with MNIST, we once again see SAM operating at the edge of stability identified in Section 2, even at learning rates where GD did not. Figure 10 contains plots of the training loss on CIFAR10, for \(\rho=0.0\) and \(\rho=0.1\). In this task, SAM achieves wider minima without sacrificing training error. In fact, when \(\eta=0.001\), its training error is better. In Figure 11, we examine alignments between the gradients and the principal eigenvector of the Hessian in the case where \(\rho=0.1\) and a CNN is trained on CIFAR10. Again, we see significant alignment, especially at the higher learning rates. As in MNIST, we also see stronger alignment with the principal direction for the gradients evaluated at the uphill location used by SAM. Figure 7: Magnitudes of the largest eigenvalues of the Hessian when a CNN is trained with GD on 1000 examples from CIFAR10. Figure 8: Magnitudes of the largest eigenvalues of the Hessian when a CNN is trained with SAM, with \(\rho=0.1\), on CIFAR10. Figure 9: Magnitudes of the largest eigenvalues of the Hessian when a CNN is trained with SAM, with \(\rho=0.1\), on CIFAR10. Figure 10: Training loss with SGD and SAM on CIFAR10. Figure 11: Alignments between gradients and the principal eigenvector of the Hessian with SAM on CIFAR10. ### Language modeling Next, we report on experiments training a language model. As before, we start with SGD, here in Figure 12. Next, we plot the same quantities when the network is trained with SAM, with \(\rho=0.3\), in Figure 13. Here, the operator norm of the Hessian is significantly less than when SGD is used, and we begin to see evidence that training in SAM operates at the edge of stability analyzed in Section 2. In Figure 14, we zoom in on the lower part of the curve, and plot the operator norm of the Hessian, to examine the relationship between this quantity and the SAM edge in more detail. Figure 15 contains plots of the training loss, once again estimated per-minibatch. We included these mainly to motivate the combinations of hyperparameters where we examined other aspects of the dynamics of SAM. As expected, while SAM does take longer to achieve a certain loss, it ultimately achieves training error similar to SGD, but with less sharpness. Figure 16 contains plots of the alignment, once again estimated per-minibatch. For the large learning rates, late in training, despite the sampling noise arising from the use of minibatches, we see a systematic tendency for the SAM gradients to align more closely with the principal eigenvector of the Hessian than the gradients at the initial solution. However, for the smallest learning rates, the _opposite_ holds. Figure 12: Magnitudes of the largest eigenvalues of the Hessian when a language model is trained with SGD. Figure 13: Magnitudes of the largest eigenvalues of the Hessian when a language model is trained with SAM, with \(\rho=0.3\). Figure 14: Magnitudes of the largest eigenvalues of the Hessian when a language model is trained with SAM, with \(\rho=0.3\). Figure 15: Training loss in the language modeling experiments.
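All of the quantities tracked above are straightforward to compute with automatic differentiation. The following is a minimal sketch (in JAX, for a loss defined on a flat parameter vector; it is an illustration rather than the code used for these experiments, and the quadratic `loss` is only a placeholder) of one SAM update (1), the SAM-edge threshold \(\frac{||g||}{2\rho}\left(\sqrt{1+\frac{8\rho}{\eta||g||}}-1\right)\), and a power-iteration estimate of \(||H||_{op}\) from Hessian-vector products.

```python
# Minimal sketch (not the experiment code): one SAM step, the SAM-edge
# threshold, and a power-iteration estimate of ||H||_op for a scalar loss
# defined on a flat parameter vector w. The quadratic loss is a placeholder.
import jax
import jax.numpy as jnp

def loss(w):
    return 0.5 * jnp.sum(jnp.array([3.0, 1.0, 0.1]) * w ** 2)

def sam_step(w, eta, rho):
    g = jax.grad(loss)(w)
    w_uphill = w + rho * g / jnp.linalg.norm(g)   # step of length rho uphill
    return w - eta * jax.grad(loss)(w_uphill)     # descend using that gradient

def sam_edge(w, eta, rho):
    g_norm = jnp.linalg.norm(jax.grad(loss)(w))
    return g_norm / (2 * rho) * (jnp.sqrt(1 + 8 * rho / (eta * g_norm)) - 1)

def hessian_op_norm(w, iters=100):
    hvp = lambda v: jax.jvp(jax.grad(loss), (w,), (v,))[1]  # Hessian-vector product
    v = jax.random.normal(jax.random.PRNGKey(0), w.shape)
    for _ in range(iters):                                  # power iteration
        v = hvp(v)
        v = v / jnp.linalg.norm(v)
    return jnp.dot(v, hvp(v))

w, eta, rho = jnp.array([1.0, -2.0, 0.5]), 0.1, 0.1
for _ in range(200):
    w = sam_step(w, eta, rho)
print(hessian_op_norm(w), sam_edge(w, eta, rho))
```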
The edge-of-stability point identified here is not a consequence of that analysis. Among the varied results of Wen et al. (2023) is a theorem that may be paraphrased by saying that, for a smooth enough objective functions, in an overparameterized regime where there is a manifold of minimizers, once SAM's iterates are close to this manifold, its updates track the updates that would be obtained by performing gradient flow to minimize the operator norm of the Hessian among minimizers of the loss. Their main results use the assumptions that \(\eta\log(1/\rho)\) and \(\rho/\eta\) are sufficiently small. As was seen in (Cohen et al., 2021) and also here, the edge-of-stability phenomenon dissipates as \(\eta\) gets small. Andriushchenko et al. (2023) demonstrated empirically that networks trained by SAM tend to have features with lower rank, and illustrated how this can arise using a theoretical analysis of a two-layer network. Cohen et al. (2022) demonstrated that some adaptive gradient methods, such as Adam, operate Figure 16: Alignments between gradients and the principal direction of the Hessian in the language modeling experiments. at the edge of stability. A number of authors have provided insight by analyzing the dynamics of gradient descent under clean and simple conditions under which the edge of stability arises (see Zhu et al., 2022; Agarwala et al., 2023; Ahn et al., 2022; Chen and Bruna, 2022; Even et al., 2023). Properties of the loss landscape that are compatible with edge of stability training have also been described (and evaluated empirically) (Ma et al., 2022; Ahn et al., 2022). Arora et al. (2022) established conditions under which an algorithm like GD, but that normalizes the gradients so that they have unit length, operates at the edge of stability, and also analyzed an algorithm that takes gradients with respect to the square root of the loss. Some authors have studied an algorithm like SAM, but, instead of updating using the gradient from the neighbor of the current iterate that is a constant distance \(\rho\) uphill, instead uses a gradient from neighbor whose distance from the current iterate scales with the norm of the gradient at the iterate (Andriushchenko and Flammarion, 2022; Agarwala and Dauphin, 2023), what has been called "unnormalized SAM". Dai et al. (2023) made a case that the SAM's normalization is crucial, motivating research into the original algorithm. ## 6 Conclusion We have computed the critical value of operator norm of the Hessian corresponding to the edge of stability for SAM. This SAM-edge is a decreasing function of the norm of the gradient, so it tends to decrease as training progresses. For three deep learning training tasks, we have seen that the operator norm of the Hessian closely tracks this edge of stability, despite the noise introduced by estimating using minibatches in the tiny_shakespeare task. SAM interacts strongly with the edge-of-stability phenomenon to drive down the operator norm of the Hessian, while also driving down the training error. The mechanism through which this occurs remains a mystery, presenting a challenge for theory. The analyses of Bartlett et al. (2022) and Wen et al. (2023) both required \(\eta\) and \(\rho\) to be small, and analyzed the effect of the dynamics on the operator norm of the Hessian late in training, whereas we empirically see a strong effect even early in training. 
One especially interesting question is how the training error is reduced so rapidly despite the overshooting associated with edge-of-stability training. The experiments with language models showed that the edge-of-stability phenomenon can also be seen, to a limited extent, when training with SGD. A more thorough understanding of SAM and the edge of stability when training with SGD is another interesting and important subject for further research. (Wen et al. (2023) analyzed a variant of SAM that works using SGD one example at a time, and pointed out strong qualitative differences between the algorithm that works with batch gradients and this extreme version of SGD, suggesting that interesting and rich structure might be found in the behavior of SAM with minibatches of intermediate size.) In our experiments, there was a general tendency for the gradients used by SAM to be more aligned with the principal direction of the Hessian than gradients evaluated at the iterates. It is not clear why this is the case, and under what conditions it happens. The theoretical analysis by Bartlett et al. (2022) depended critically on the assumption that the update gradient was aligned with the principal eigenvector of the Hessian, which raises the possibility that the fact that the gradients used by SAM are aligned more closely with the principal direction of the Hessian is key to its success. However, it is not clear under what conditions, and why, this improved alignment is seen, and when it is helpful. There also was an intriguing exception when language models were trained with SGD using small step sizes that it would be interesting to further explore. ## Acknowledgements We thank Naman Agarwal and Hossein Mobahi for valuable conversations, and Naman Agarwal for his comments on an earlier version of this paper. PB gratefully acknowledges the support of the NSF through grants DMS-2023505 and DMS-2031883 and of Simons Foundation award #814639.
2309.13796
Using Z3 to Verify Inferences in Fragments of Linear Logic
Linear logic is a substructural logic proposed as a refinement of classical and intuitionistic logics, with applications in programming languages, game semantics, and quantum physics. We present a template for Gentzen-style linear logic sequents that supports verification of logic inference rules using automatic theorem proving. Specifically, we use the Z3 Theorem Prover [8] to check targeted inference rules based on a set of inference rules that are presumed to be valid. To demonstrate the approach, we apply it to validate several derived inference rules for two different fragments of linear logic: MLL+Mix (Multiplicative Linear Logic extended with a Mix rule) and MILL (Multiplicative Intuitionistic Linear Logic).
Alen Docef, Radu Negulescu, Mihai Prunescu
2023-09-25T01:13:36Z
http://arxiv.org/abs/2309.13796v1
# Using Z3 to Verify Inferences in Fragments of Linear Logic ###### Abstract Linear logic is a substructural logic proposed as a refinement of classical and intuitionistic logics, with applications in programming languages, game semantics, and quantum physics. We present a template for Gentzen-style linear logic sequents that supports verification of logic inference rules using automatic theorem proving. Specifically, we use the Z3 Theorem Prover [9] to check targeted inference rules based on a set of inference rules that are presumed to be valid. To demonstrate the approach, we apply it to validate several derived inference rules for two different fragments of linear logic: MLL+Mix (Multiplicative Linear Logic extended with a Mix rule) and MILL (Multiplicative Intuitionistic Linear Logic). **Keywords**: linear logic, MLL+Mix, MILL, Z3, inference rules **M.S.C. Classification**: 03B47, 03F52 ## 1 Introduction The Z3 Theorem Prover [9] is a satisfiability modulo theories (SMT) solver targeted at software verification and program analysis. Besides SMT, the symbolic reasoning engine of Z3 also uses automatic reasoning, incremental solving, model generation, and other artificial intelligence techniques to determine satisfiability of a set of rules in a theory and to produce models. Linear logic [14] is a substructural logic proposed as a refinement of classical and intuitionistic logics, with applications in programming languages, game semantics, and quantum physics. In [21], the author makes a functorial connection between arbitrary models of multiplicative linear logic and the category of presheaves over arbitrary rings. Other far-reaching considerations connected with category theory are made by the same author in [20]. A more accessible approach to this connection is presented in [27]. Connections with semantics for higher order quantum computing were studied in [23]. A usual interpretation of linear logic, already intended by Girard, is that formulas do not hold values as true and false, but contain information about the availability and use of given resources. In this context, [22] presents an overview of linear logic programming. The article [5] sketches a unified approach, a "Rosetta stone", based on categories as well, for interpreting linear logic in three seemingly unrelated domains: topology, quantum physics, and lambda calculus. Relations between linear logic and concurrency theory are an active area of research, for instance in [3]. General presentations of linear logic can be found in: [11], [10], [15], [28], [18]. Two important fragments of linear logic are multiplicative intuitionistic linear logic (MILL) and multiplicative linear logic with the Mix-rule (MLL+Mix). MILL is crystallized in [6], where its proof-theory is studied from a categorical theoretic point of view. A variant of MILL and its proof methods
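A common way to carry out such checks with an SMT solver is to assert the rules that are presumed valid together with the negation of the targeted rule and test the combination for unsatisfiability. The toy sketch below illustrates this pattern with the Z3 Python API; it uses ordinary propositional implication as a stand-in and is not the Gentzen-style sequent template for MLL+Mix or MILL developed in the paper.

```python
# Toy illustration of rule checking with Z3 (z3py): the presumed-valid rules
# are asserted together with the negation of the target, and unsatisfiability
# means the target follows. Propositional implication is only a stand-in here.
from z3 import Bools, Implies, Not, Solver, unsat

A, B, C = Bools("A B C")

presumed_rules = [Implies(A, B), Implies(B, C)]   # rules taken as valid
target_rule = Implies(A, C)                       # candidate derived rule

s = Solver()
s.add(*presumed_rules)
s.add(Not(target_rule))

if s.check() == unsat:
    print("target rule follows from the presumed rules")
else:
    print("not derivable; counterexample:", s.model())
```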
2309.14947
Genus 0 logarithmic and tropical fixed-domain counts for Hirzebruch surfaces
For a non-singular projective toric variety $X$, the virtual logarithmic Tevelev degrees are defined as the virtual degree of the morphism from the moduli stack of logarithmic stable maps $\overline{\mathcal{M}}_{\mathsf{\Gamma}}(X)$ to the product $\overline{\mathcal{M}}_{g,n} \times X^n$. In this paper, after proving the genus $0$ correspondence theorem in this setting, we use tropical methods to provide closed formulas for the case in which $X$ is a Hirzebruch surface. In order to do so, we explicitly list all the tropical curves contributing to the count.
Alessio Cela, Aitor Iribar Lopez
2023-09-26T14:02:47Z
http://arxiv.org/abs/2309.14947v1
# Genus \(0\) logarithmic and tropical fixed-domain counts for Hirzebruch surfaces ###### Abstract For a non-singular projective toric variety \(X\), the virtual logarithmic Tevelev degrees are defined as the virtual degree of the morphism from the moduli stack of logarithmic stable maps \(\overline{\mathcal{M}}_{\mathrm{f}}(X)\) to the product \(\overline{\mathcal{M}}_{g,n}\times X^{n}\). In this paper, after proving the genus \(0\) correspondence theorem in this setting, we use tropical methods to provide closed formulas for the case in which \(X\) is a Hirzebruch surface. In order to do so, we explicitly list all the tropical curves contributing to the count. ###### Contents * 1 Introduction * 1.1 Logarithmic curve counting with fixed domain * 1.2 Genus \(0\) Correspondence theorem * 1.3 Genus \(0\) counts for Hirzebruch surfaces * 1.3.1 Description of the curves enumerated in \(\operatorname{\mathsf{trop}}\operatorname{\mathsf{Tev}}_{\mathsf{f}}^{ \mathcal{H}_{*}}\) * 1.4 Comparison of virtual fundamental classes for maps to Hirzebruch surfaces * 1.5 Further directions * 2 The correspondence theorem * 3 Genus \(0\) tropical count for Hirzebruch surfaces * 3.1 Tropical curves and intersection theory * 3.2 Proof of Theorem 7 * 3.2.1 Case \(|\mu_{3}|+|\mu_{4}|\geq n-1\) * 3.2.2 Case \(|\mu_{3}|+|\mu_{4}|<n-1\) * 3.2.3 Exclusion of further contributions * 4 Absence of rational curves interpolating \(n\) points on Hirzebruch surfaces * 5 Proof of Theorem 13 * 5.0.1 Case \(a=2j\) * 5.0.2 Case \(a=2j+1\) ## 1 Introduction Let \(X\) be a non-singular projective variety defined over \(\mathbb{C}\) of dimension \(r\). Fix integers \(g\geq 0\) and \(n\geq 1\) such that \(2g-3+n>0\), ensuring that the moduli stack \(\overline{\mathcal{M}}_{g,n}\) of stable curve is well-defined. Fix also an effective curve class \(\beta\in H_{2}(X,\mathbb{Z})\). Curve counts on \(X\) are formulated in Gromov-Witten theory as intersection numbers on the moduli space of stable maps \(\overline{\mathcal{M}}_{g,n}(X,\beta)\) against \[[\overline{\mathcal{M}}_{g,n}(X,\beta)]^{\mathrm{vir}}\in A_{*}(\overline{ \mathcal{M}}_{g,n}(X,\beta))\] where \([\overline{\mathcal{M}}_{g,n}(X,\beta)]^{\mathrm{vir}}\) is the virtual fundamental class constructed in [1]. The Tevelev degrees of \(X\) are such counts where in additions the domain curve is fixed (and general) and \(n\) point insertions are imposed. More precisely, assume the dimensional constraint \[\mathrm{vdim}(\overline{\mathcal{M}}_{g,n}(X,\beta))=\dim(\overline{\mathcal{ M}}_{g,n}\times X^{n})\] holds and let \[\tau^{\prime}:\overline{\mathcal{M}}_{g,n}(X,\beta)\to\overline{\mathcal{M}}_{ g,n}\times X^{n}\] be the natural map obtained from the stabilized domain curve and the evaluation morphisms. **Definition 1**.: _[_1_, Definition 1.1]_ _The **Tevelev degree \(\mathsf{vTev}_{g,n,\beta}^{X}\in\mathbb{Q}\)** of \(X\) is defined by the equality_ \[\tau^{\prime}_{*}[\overline{\mathcal{M}}_{g,n}(X,\beta)]^{\mathrm{vir}}= \mathsf{vTev}_{g,n,\beta}^{X}[\overline{\mathcal{M}}_{g,n}\times X^{n}]\in A _{*}(\overline{\mathcal{M}}_{g,n}\times X^{n}).\] Fixed-domain curve counts for Grassmanians have a beautiful story at the intersection between algebraic geometry and physics. They are computed by the celebrated Vafa-Intriligator formula, conjectured by the physicists Vafa and Intriligator [14] and partially proved by Siebert-Tian [15] and by Bertram-Daskalopoulos-Wentworth in [1, 2], and fully proven by Marian-Oprea in [16] using Quot-schemes. 
The equivalence with the formulation in terms of stable maps was then proven by Marian-Oprea-Pandharipande in [17]. The systematic study of Tevelev degrees for general targets started with [11], motivated by work of Tevelev [18] on scattering amplitudes in mathematical physics. The paper [11] then stimulated a series of subsequent studies [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] In this paper, our aim is to extend the notion of Tevelev degrees to the situation where \(X\) is a toric variety and any tangency condition with the boundary \(\partial X\) is imposed. This is achieved using the moduli stack of logarithmic stable maps [10]. After Mikhalkin's breakthrough [19], a natural correspondence between algebraic curves and tropical curves is expected in certain nice situations. In the recent years, various versions of such correspondence have been proved [14, 15, 16, 17] and many tropical analogs of classical curve counting problems have been proved [1, 2]. Using [15, 2], we show that the correspondence theorem holds in our context when the genus is \(0\). Furthermore, we provide simple closed formulas for Hirzebruch surfaces using tropical methods. Our method can be applied to many other geometries. ### Logarithmic curve counting with fixed domain Assume further that \(X\) is a toric variety with fan \(\Sigma\). Fix integers \(g\geq 0\) and \(n,m\geq 1\) and contact order \(c\) along the toric buondary \(\partial X\) of \(X\) (see [10, Definition 3.1]). We package the discrete data \((g,n,m,c)\) in the symbol \(\mathsf{\Gamma}\), while still assuming the stability condition \(2g-2+n>0\). Let \(\overline{\mathcal{M}}_{\mathsf{\Gamma}}(X)\) be the moduli space of genus \(g\) and \(n+m\) marked logarithmic stable maps \([f:(C,p_{1},...,p_{n},q_{1},...,q_{m})\to X]\) having contact order \(c\) to the toric boundary divisor along the \(m\) marked points \(q_{1},...,q_{m}\). This space and its virtual fundamental class \([\overline{\mathcal{M}}_{\mathsf{f}}(X)]^{\mathrm{vir}}\) were constructed in [10] and shown to be proper in [1]. In this paper we deal with logarithmic fixed-domain curve count problems with point insertions at the markings \(p_{1},\ldots,p_{n}\). We set up the discrete data so that the problem has finitely many solutions. In order to make it precise we require some notation. **Notation 2**.: _Order the components \(D_{1},\ldots,D_{k}\) of \(\partial X\). Then, we can think of \(c\) as the following data:_ * _a function_ \(\varphi:\{1,...,m\}\to\{1,\ldots,k\}\) _encoding to which divisor the marking_ \(q_{i}\) _is sent for_ \(i=1,\ldots,m\)_;_ * \(k\) _vectors_ \(\mu_{i}\in\mathbb{N}_{\geq 0}^{m_{i}}\) _for_ \(i=1,\ldots,k\) _defined by_ \[\mu_{i,j}=\text{ multiplicity prescibed by }c\text{ of the }j\text{-th marked point }q_{k}\text{ mapping to }D_{i}.\] _We will denote by_ \(|\mu_{i}|\) _the length of_ \(\mu_{i}\) _for_ \(i=1,\ldots,k\)_._ Assume the dimensional constraint \[\mathrm{vdim}(\overline{\mathcal{M}}_{\mathsf{f}}(X))=\dim(\overline{ \mathcal{M}}_{g,n}\times X^{n})\] holds or equivalently that \[m=r(n+g-1) \tag{1}\] and let \[\tau:\overline{\mathcal{M}}_{\mathsf{f}}(X)\to\overline{\mathcal{M}}_{g,n} \times X^{n} \tag{2}\] be the canonical morphism obtained from the domain curve \(\pi:\overline{\mathcal{M}}_{g,n}(X,\beta)\to\overline{\mathcal{M}}_{g,n}\) and the evaluation maps \(\mathrm{ev}:\overline{\mathcal{M}}_{\mathsf{f}}(X)\to X^{n}\). 
**Definition 3**.: _The **virtual logarithmic Tevelev degree**\(\mathrm{v}\mathsf{Tev}_{\mathsf{f}}^{X}\in\mathbb{Q}\) of \(X\) is defined by the equality_ \[\tau_{*}[\overline{\mathcal{M}}_{\mathsf{f}}(X)]^{\mathrm{vir}}=\Bigg{(}\prod _{i=1}^{k}\prod_{u\geq 1}|\{v\ |\ \mu_{i,v}=u\}|!\Bigg{)}\,\mathrm{v}\mathsf{Tev}_{\mathsf{f}}^{X}[\overline{ \mathcal{M}}_{g,n}\times X^{n}]\in A^{0}(\overline{\mathcal{M}}_{g,n}\times X^ {n}).\] The factor \(\prod_{i=1}^{k}\prod_{u\geq 0}|\{v\ |\ \mu_{i,v}=u\}|!\) reflects the possible orderings of the markings \(q_{j}\). In Theorem 7 below, we compute all the genus \(0\) virtual Tevelev degrees for Hirzebruch surfaces using tropical methods. ### Genus \(0\) Correspondence theorem Suppose that \(g=0\) and let \(M^{\mathrm{trop}}(\mathbb{R}^{r},\mathsf{\Gamma})\) be the moduli space of labelled tropical rational \(n\) marked tropical curves \([h:\mathsf{C}\to\mathbb{R}^{r}]\) of degree \(\Delta\) prescribed by \(c\). By definition, \(\Delta\) is an ordered list of \(m\) vectors \(v_{i}\) in \(\mathbb{R}^{r}\) each parallel to one ray of the fan \(\Sigma\) and such that if \(c\) prescribes that \(q_{k}\) is the \(j\)-th marking mapped to \(D_{i}\) then the lattice length of \(v_{k}\) is \(\mu_{i,j}\). Our definitions of tropical curves, maps and their moduli spaces is that in [1, Definition 3.2 and 4.1]. Note that, by Equation (1), we have \[|\Delta|=m=r(n-1).\] Let \[\operatorname{trop}(\tau):M^{\operatorname{trop}}(\mathbb{R}^{r},\mathsf{\Gamma}) \to M_{0,n}^{\operatorname{trop}}\times(\mathbb{R}^{r})^{n} \tag{3}\] be the canonical morphism obtained from the domain curve and the evaluation map. The map \(\operatorname{trop}(\tau)\) is a morphism of equidimensional tropical fans with \(M_{0,n}^{\operatorname{trop}}\times(\mathbb{R}^{r})^{n}\)[1, Definition 2.8]. Since \(M_{0,n}^{\operatorname{trop}}\times(\mathbb{R}^{r})^{n}\) is irreducible, by [1, Corollary 2.26], we have a well-defined notion of degree. **Definition 4**.: _Define_ \[\operatorname{trop}\mathsf{Tev}_{\mathsf{\Gamma}}^{X}=\frac{\operatorname{ degree}(\operatorname{trop}(\tau))}{\prod_{i=1}^{k}\prod_{u\geq 1}|\{v\ |\ \mu_{i,v}=u\}|!}\] _to be the **tropical Teelev degree** of \(X\) w.r.t. \(\mathsf{\Gamma}\)._ After Mikhalkin's break-through [14], various correspondence theorems have been proved [13, 14, 15, 16]. In our context, we have the following: **Theorem 5**.: _Virtual logarithmic tevelev degrees and their corresponding tropical degrees coincide in genus \(0\), i.e._ \[\mathsf{v}\mathsf{Tev}_{\mathsf{\Gamma}}^{X}=\operatorname{trop}\mathsf{Tev} _{\mathsf{\Gamma}}^{X}.\] The proof is given in SS2 below. ### Genus \(0\) counts for Hirzebruch surfaces In the following we specialize to the case when \(X=\mathcal{H}_{a}\) is the Hirzebruch surface \(\mathbb{P}(\mathcal{O}\oplus\mathcal{O}(a))\) with \(a\geq 1\) and provide closed formulas for the tropical (and so the virtual logarithmic) Tevelev degrees of \(\mathcal{H}_{a}\) with any tangency conditions \(c\). 
**Notation 6**.: _The fan \(\Sigma\) of \(\mathcal{H}_{a}\) has four rays, with associated unit vectors_ \[n_{1}=(-1,a),\ n_{2}=(0,1),\ n_{3}=(1,0),\text{ and }n_{4}=(0,-1).\] _Denote by \(D_{1},D_{2},D_{3}\) and \(D_{4}\) the corresponding toric divisors._ **Theorem 7**.: _We have_ * _if either_ \(|\mu_{1}|>n-1\) _or_ \(|\mu_{3}|>n-1\)_, then_ \[\operatorname{trop}\mathsf{Tev}_{\mathsf{\Gamma}}^{\mathcal{H}_{a}}=0,\] * _otherwise_ \[\operatorname{trop}\mathsf{Tev}_{\mathsf{\Gamma}}^{\mathcal{H}_{a}}=\Bigg{(} \prod_{i=1}^{4}\frac{|\mu_{i}|!\prod_{j=1}^{|\mu_{i}|}\mu_{i,j}}{\prod_{u\geq 1 }|\{v\ |\ \mu_{i,v}=u\}|!}\Bigg{)}a^{n-1-|\mu_{2}|-|\mu_{4}|}\binom{n-1-|\mu_{4}|}{|\mu_{ 2}|}\Bigg{)}\] The proof of this theorem is given in SS3. **Remark 8**.: _The formula above gives zero whenever \(|\mu_{2}|>n-1-|\mu_{4}|\). In particular, suppose that \(\mu_{i,j}=1\) for all \(i,j\) and that \(a\geq 2\). Then_ \[|\mu_{1}|=|\mu_{3}|\text{ and }|\mu_{4}|=|\mu_{2}|+(a+1)|\mu_{1}|\] _so_ \[|\mu_{4}|+|\mu_{2}|>\frac{|\Delta|}{2}=n-1\] _and \(\mathsf{tropTev}_{\mathsf{f}}^{\mathcal{H}_{a}}=0\)._ A geometric interpretation of this fact is given in SS4. Suppose \(a=1\) and \(\mu_{2}=\emptyset\). Formally, \(\Sigma\) reduces to the fan of \(\mathbb{P}^{2}\) and we are counting curves in \(\mathbb{P}^{2}\). Then (the proof of) Theorem 7 also shows the following. **Theorem 9**.: _We have_ \[\mathsf{tropTev}_{\mathsf{f}}^{\mathbb{P}^{2}}=\prod_{i=1,3,4}\frac{|\mu_{i}|!\prod_{j=1}^{|\mu_{i}|}\mu_{i,j}}{\prod_{u\geq 1}|\{v\ |\ \mu_{i,v}=u\}|!}.\] #### 1.3.1 Description of the curves enumerated in \(\mathsf{tropTev}_{\mathsf{f}}^{\mathcal{H}_{a}}\) When \(r=2\), we can describe all the curves contributing to \(\mathsf{tropTev}_{\mathsf{f}}^{X}\). Fix general points \(x_{1},\ldots,x_{n}\) in \(\mathbb{R}^{2}\) and fix the stabilized domain curve \(\bar{\mathsf{C}}\) in \(M_{0,n}^{\mathrm{trop}}\) to have have all lengths equal to \(0\). Such a curve is not in the interior of a maximal cone of \(M_{0,n}^{\mathrm{trop}}\), but we are allowed to assume so by the intersection theoretic point view presented in SS3.1. **Proposition 10**.: _The curves \([h:\mathsf{C}\to\mathbb{R}^{2}]\) contributing to \(\mathsf{tropTev}_{\mathsf{f}}^{X}\) with point insertions \(x_{1},\ldots,x_{n}\) and stabilized domain curve \(\bar{\mathsf{C}}\) are all embeddings and the domain curve \(\mathsf{C}\) has one of the two shapes in Figure 1:_ 1. _in type_ \(A\) _there is a central vertex_ \(V\) _with the marking_ \(p_{1}\) _attached to it and_ \(n-1\) _leaves from_ \(V\) _consisting of two bounded edges, one marking and two ends,_ 2. _in type_ \(B\)_, there still is a central vertex_ \(V\) _and_ \(n\) _leaves from it of which exactly two consist of one bounded edge, one marking and one unbounded edge and the other_ \(n-2\) _leaves are as in type_ \(A\)_._ Figure 1: Shape of the domain curve The proof of this proposition is also given in SS3. For \(X=\mathcal{H}_{a}\), we will choose points in the following way: * if \(|\mu_{3}|+|\mu_{4}|\geq n-1\), the point \(x_{1}\) is in the origin \((0,0)\), there are \(n-1-|\mu_{4}|\) points in \(\{x>0,y>0\}\), \(|\mu_{3}|+|\mu_{4}|-(n-1)\) in \(\{ax+y>0,y<0\}\) and \(n-1-|\mu_{3}|\) in \(\{x<0,y<0\}\). Note that if \(|\mu_{3}|>n-1\) or \(|\mu_{4}|>n-1\) then Theorem 7 prescribes \(\mathsf{tropTev}_{\mathsf{f}}^{\mathcal{H}_{a}}=0\). 
* if instead \(|\mu_{3}|+|\mu_{4}|<n-1\), then again \(x_{1}\) is in the origin \((0,0)\), there are \(n-1-|\mu_{3}|-|\mu_{4}|\) points in \(\{x<0,ax+y>0\}\), \(|\mu_{3}|\) in \(\{x>0,y>0\}\) and \(|\mu_{4}|\) in \(\{x<0,y<0\}\). We will then prove that there are \[\prod_{i=1}^{4}|\mu_{i}|!\binom{n-1-|\mu_{4}|}{|\mu_{2}|}\] tropical curves as in Proposition 10 and moreover that each of such curves contributes with multiplicity \[a^{n-1-|\mu_{2}|-|\mu_{4}|}\prod_{j=1}^{|\mu_{i}|}\mu_{i,j}.\] to \(\mathsf{tropTev}_{\mathsf{f}}^{\mathcal{H}_{a}}\). **Example 11**.: _Suppose \(a=2\) and \(\mu_{1}=(1,2)\), \(\mu_{2}=(3)\), \(\mu_{3}=(1,1,1)\) and \(\mu_{4}=(4,4)\). So in this case \(|\mu_{3}|+|\mu_{4}|\geq n-1\). We list in Figure 2 the \(4\) contributing curves, all of which are of type A._ Figure 2: The \(4\) tropical curves contributing to \(\mathsf{tropTev}_{\mathsf{f}}^{\mathcal{H}_{a}}\) in Example 11 **Example 12**.: _Suppose \(a=1\) and \(\mu_{1}=(1,1,1)\), \(\mu_{2}=(1)\), \(\mu_{3}=(3)\) and \(\mu_{4}=(4)\). In this case \(|\mu_{3}|+|\mu_{4}|\geq n-1\). We list in Figure 3 below the \(2\) contributing curves: one of type \(A\) and one of time \(B\)._ ### Comparison of virtual fundamental classes for maps to Hirzebruch surfaces We can use fixed-domain curve counts to distinguish the virtual fundamental class of the moduli spaces of logarithmic and stable maps to Hirzebruch surfaces. More precisely, let \(X=\mathcal{H}_{a}\) be a Hirzebruch surface and \(\beta\in H_{2}(X,\mathbb{Z})\) be an effective curve class. Let \(c\) be defined by \[\mu_{i}=\underbrace{(1,\ldots,1)}_{\beta\cdot D_{i}\text{ times}}\] for \(i=1,2,3,4\) and let \(n\in\mathbb{N}\) be such that the dimensional constraint (1) holds. Consider the natural (proper) morphism \[\alpha:\overline{\mathcal{M}}_{\Gamma}(\mathcal{H}_{a})\to\overline{\mathcal{ M}}_{0,n}(\mathcal{H}_{a},\beta)\] of virtually equidimensional Deligne-Mumford stacks. **Theorem 13**.: _In the following cases:_ 1. \(a=2j\) _where_ \(j\in\mathbb{Z}_{\geq 1}\) _and_ \(\beta=d[(j+1)D_{1}+D_{2}]\) _for_ \(d>0\) _such that_ \(2d=n-1\)_; or_ 2. \(a=2j+1\) _where_ \(j\in\mathbb{Z}_{\geq 1}\) _and_ \(\beta=[j(d-k)+d]D_{1}+(d-k)D_{2}\) _for_ \(d\) _and_ \(k\) _integers such that_ \(0\leq k\leq d\)_,_ \(0\leq k\leq n-1-d\) _and_ \(3d-k=2(n-1)\)_;_ _we have_ \[\alpha_{*}[\overline{\mathcal{M}}_{\Gamma}(\mathcal{H}_{a})]^{\mathrm{vir}} \neq[\overline{\mathcal{M}}_{0,n}(\mathcal{H}_{a},\beta)]^{\mathrm{vir}}.\] This is achieved in SS5 by comparing the corresponding Tevelev degrees and using the results of [12]. As already observed in [10], Hirzebruch surfaces provide an excellent example for the fact that in general Gromov-Witten invariants might well count curves in the boundary components of the moduli spaces. The proof of theorem 7 and Theorem 13 show that logarithmic stable maps behave better from this point of view. ### Further directions Our approach for computing \(\mathsf{tropTev}_{\mathsf{f}}^{X}\) for Hirzebruch surfaces should generalize to other geometries and higher dimensional varieties. Higher dimensional generalizations of \(\mathcal{H}_{a}\) includes \(\mathbb{P}^{1}\)-bundles \(\mathbb{P}(\mathcal{O}_{\mathbb{P}^{r}}\oplus\mathcal{O}_{\mathbb{P}^{r}}(a))\) over \(\mathbb{P}^{r}\), for which we conjecture the following formula to hold. 
**Conjecture 14**.: _Let \(X=\mathbb{P}(\mathcal{O}_{\mathbb{P}^{r}}\oplus\mathcal{O}_{\mathbb{P}^{r}}(a))\) and let \(D_{1},\ldots,D_{r+1}\) be the fibers over the invariant hyperplanes \(\{x_{1}=0\},\ldots,\{x_{r+1}=0\}\), and let \(D_{r+2},D_{r+3}\) be the zero section and the infinity section, respectively. Then, when \(\mathsf{vTev}_{\mathsf{f}}^{X}\) is not \(0\),_ \[\mathsf{vTev}_{\mathsf{f}}^{X}=\mathsf{tropTev}_{\mathsf{f}}^{X}=\left(\prod_ {i=1}^{r+3}\frac{|\mu_{i}|!\prod_{j=0}^{|\mu_{ij}|}\mu_{i,j}}{\prod_{u\geq 1}| \{v\ |\ \mu_{i,v}=u\}|!}\right)a^{(n-1)-|\mu_{r+2}|-|\mu_{r+3}|}\binom{n-1-|\mu_{r+2}| }{|\mu_{r+3}|}.\] The Hirzebruch surface \(\mathcal{H}_{1}\) is isomorphic to the blow-up of \(\mathbb{P}^{2}\) at one point. In [11], the authors computed the geometric degrees with simple incidence conditions with the toric boundary for blowups of \(\mathbb{P}^{r}\) at up to \(r+1\) points. We also conjecture the following generalization of that formula ho hold. **Conjecture 15**.: _Let \(X\) be the blowup of \(\mathbb{P}^{r}\) at \(r\) of the torus fixed points, and let \(D_{1},\ldots,D_{r}\) be the exceptional divisors of \([0:1:\ldots:0],\ldots,[0:\ldots:1]\), and \(D_{r+1},\ldots,D_{2r+1}\) the strict transforms of the linear subspaces \(\{x_{1}=0\},\ldots,\{x_{r+1}=0\}\) of \(\mathbb{P}^{r}\). Then if \(\mathsf{vTev}_{\mathsf{f}}^{X}\) is nonzero,_ \[\mathsf{vTev}_{\mathsf{f}}^{X}=\mathsf{tropTev}_{\mathsf{f}}^{X}=\left(\prod_ {i=0}^{2r+1}\frac{|\mu_{i}|!\prod_{j=0}^{|\mu_{ij}|}\mu_{i,j}}{\prod_{u\geq 1}| \{v\ |\ \mu_{i,v}=u\}|!}\right)\prod_{i=1}^{r}\binom{n-1-|\mu_{i+r+1}|}{|\mu_{i}|}.\] Finally, we expect a more complicated formula could be obtained with our method for the blow-up of \(\mathbb{P}^{r}\) at the \(r+1\) torus fixed points (with any tangencies with the toric boundary). ### Acknowledgments This project began with the participation of the first author in the MSRI summer school titled "Tropical Geometry" at St. Mary's College in Moraga, California, in August 2022. The first author is deeply grateful to the organizers, Renzo Cavalieri, Hannah Markwig, and Dhruv Ranganathan, for teaching him tropical and logarithmic geometry. We would also like to thank these three researchers for their invaluable assistance with this project during their visit to ETH Zurich in the spring semester of 2023. Lastly, we thank Gavril Farkas, Carl Lian, Rahul Pandharipande, and Johannes Schmitt for several useful discussions regarding fixed domain curve counts. A.C. received support from SNF-200020-182181. A.I.L. was supported by ERC-2017-AdG-786580-MACI. The project received funding from the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme (grant agreement 786580). The correspondence theorem In this section, we assume familiarity with the intersection theory on balanced fans (see [11] for an introduction). The starting point to prove the correspondence theorem 5 are [10, 11]. **Lemma 16**.: _The natural maps \(\overline{\mathcal{M}}_{0,n}\to\prod_{i=4}^{n}\overline{\mathcal{M}}_{0,\{1,2,3,i\}}\) and \(M_{0,n}^{\mathrm{trop}}\to\prod_{i=4}^{n}M_{0,\{1,2,3,i\}}^{\mathrm{trop}}\) have degree \(1\)._ Proof.: The statement is clear for the first map and for the second follows from the first map having degree \(1\) and [11, Theorem 4.1]. 
The degree of the map \(2\) is then equal to the degree of the map \[\bigg{(}\prod_{i=4}^{n}\mathrm{ft}_{i}\bigg{)}\times\mathrm{ev}:\overline{ \mathcal{M}}_{\mathsf{f}}(X)\to\prod_{i=4}^{n}\overline{\mathcal{M}}_{0,\{1,2,3,i\}}\times X^{n} \tag{4}\] obtained by the forgetful morphisms \(\mathrm{ft}_{i}:\overline{\mathcal{M}}_{\mathsf{f}}(X)\to\overline{\mathcal{M }}_{0,\{1,2,3,i\}}\) for \(i=4,\ldots,n\) and the evaluation map. Similarly, the degree of the map \(3\) is equal to the degree of \[\bigg{(}\prod_{i=4}^{n}\mathrm{ft}_{i}\bigg{)}\times\mathrm{ev}:M^{\mathrm{ trop}}(\mathbb{R}^{r},\mathsf{\Gamma})\to\prod_{i=4}^{n}M_{0,\{1,2,3,i\}}^{ \mathrm{trop}}\times(\mathbb{R}^{r})^{n} \tag{5}\] The correspondence theorem 5 will now follow from [10, Theorem 5.1], a generalization of [11, Proposition 3.5] to higher dimensions and next two lemmas. **Lemma 17**.: _In genus \(0\) the moduli spaces \(\overline{\mathcal{M}}_{\mathsf{f}}(X)\) are irreducible, generically of expected dimension. Moreover,_ \[[\overline{\mathcal{M}}_{\mathsf{f}}(X)]=[\overline{\mathcal{M}}_{\mathsf{f} }(X)]^{\mathrm{vir}}\in A_{*}(\overline{\mathcal{M}}_{\mathsf{f}}(X)).\] Proof.: Let \(\mathfrak{M}_{0,n}\) be the moduli space of genus \(0\) and \(n\)-marked prestable curves, endowed with the logarithmic structure given by \(\partial\mathfrak{M}_{0,n+m}\). Let also \(\mathcal{L}og_{\mathfrak{M}_{0,n+m}}\) be the stack constructed in [10]. It has pure dimension \(-3+n+m\). Consider the natural (strict) logarithmic map \[\varphi:\overline{\mathcal{M}}_{\mathsf{f}}(X)\to\mathcal{L}og_{\mathfrak{M}_ {0,n+m}}\] The relative perfect obstruction theory (see [10, Section 5]) is \[E^{\bullet}=Rp_{*}(f^{*}T_{X}^{\mathrm{log}})^{\vee}\to L^{\bullet}_{ \overline{\mathcal{M}}_{\mathsf{f}}(X)/\mathcal{L}og_{\mathfrak{M}_{0,n+m}}}\] where \(p:\mathcal{C}\to\overline{\mathcal{M}}_{\mathsf{f}}(X)\) is the universal curve and \(f:\mathcal{C}\to X\) is the universal map. In our situation, we have \(T_{X}^{\mathrm{log}}=\mathcal{O}_{X}^{r}\) and therefore \(E^{\bullet}\cong\Omega_{\overline{\mathcal{M}}_{\mathsf{f}}(X)/\mathcal{L}og _{\mathfrak{M}_{0,n+m}}}\) is a vector bundle with fiber over \([f:C\to X]\) given by \((H^{0}(C,\mathcal{O})^{r})^{\vee}\). It follows that \(\varphi\) is smooth (and therefore log-smooth) and that \([\overline{\mathcal{M}}_{\mathsf{f}}(X)]=[\overline{\mathcal{M}}_{\mathsf{f} }(X)]^{\mathrm{vir}}\). The irreducibility statement is [12, Proposition 3.3.5]. **Remark 18**.: _The proof of [11, Proposition 3.5] straightforwardly generalizes to the higher-dimensional setting. In the proof of Theorem 5 below, we will use its higher-dimensional generalization._ Proof of Theorem 5.: By Lemma 17, \(\mathsf{vTev}_{\mathsf{f}}^{X}\) equals the _geometric_ numbers of logarithmic stable maps from a fixed general curve \((C,p_{1},...,p_{n})\) to \(X\) with contact orders prescribed by \(c\). By Lemma 16, [17, Theorem 5.1] and the higher-dimensional generalization of [1, Proposition 3.5], the number of such maps where in addition the domain curve is smooth and the image of the map does not meet any toric point of \(X\) is equal to the degree of \(\operatorname{trop}(\tau)\). In order to conclude it is then enough to exclude contributions in \(\mathsf{vTev}_{\mathsf{f}}^{X}\) from \(\partial\overline{\mathcal{M}}_{\mathsf{f}}(X)\) and from the locus \(B\) of maps containing toric points in their image. 
The boundary \(\partial\overline{\mathcal{M}}_{\mathsf{f}}(X)\) and \(B\) are both proper subsets of \(\overline{\mathcal{M}}_{\mathsf{f}}(X)\) [1, Proposition 3.3.3] so, by Lemma 17, they cannot dominate \(\overline{\mathcal{M}}_{0,n}\times X^{n}\) for dimensional reasons. ## 3 Genus \(0\) tropical count for Hirzebruch surfaces In this section we calculate \(\operatorname{trop}\mathsf{Tev}_{\mathsf{f}}^{X}\) for tropical maps to Hirzebruch surfaces. ### Tropical curves and intersection theory The moduli spaces \(M_{0,n}^{\operatorname{trop}}\) and \(M^{\operatorname{trop}}(\mathbb{R}^{n},\Gamma)\) are fans and therefore the machinery of tropical intersection theory developed in [10] can be applied to them. All the results in this section are valid for any toric surface, or in general a toric variety of any dimension after appropriate modifications, especially Corollary 24. **Notation 19**.: _For a morphism of fans, there is a notion of pullback of Cartier divisors. For a point \(\bar{x}=(\bar{x}_{1},\bar{x}_{2})\in\mathbb{R}^{2}\) and any affine cycle \(Z\) of \(M^{\operatorname{trop}}(\mathbb{R}^{n},\Gamma)\), we make the abbreviation_ \[\operatorname{ev}^{*}(\bar{x})\cdot Z=(\operatorname{pr}_{1}\circ\operatorname{ev})^{*}(\bar{x}_{1})\cdot(\operatorname{pr}_{2}\circ\operatorname{ev})^{*}(\bar{x}_{2})\cdot Z,\] _where \(\operatorname{pr}_{i}:\mathbb{R}^{2}\to\mathbb{R}\) are the two projections and \(\bar{x}_{i}\in\mathbb{R}\) is regarded as a Cartier divisor for \(i=1,2\)._ **Lemma 20**.: _The degree of \(\operatorname{trop}(\tau)\) is equal to the degree of the tropical \(0\)-cycle_ \[\prod_{i=4}^{n}\operatorname{ft}_{i}^{*}(0)\cdot\prod_{i=1}^{n}\operatorname{ev}_{i}^{*}(x_{i})\cdot M^{\operatorname{trop}}(\mathbb{R}^{2},\Gamma), \tag{6}\] _where \(\operatorname{ft}_{i}:M^{\operatorname{trop}}(\mathbb{R}^{2},\Gamma)\to M_{0,\{1,2,3,i\}}^{\operatorname{trop}}\), where \(0\) represents the unique \(4\)-valent curve in \(M_{0,4}^{\operatorname{trop}}\), and \(x_{i}\in\mathbb{R}^{2}\)._ Proof.: Let \(((\lambda_{1},\dots,\lambda_{n-3}),(x_{1},\dots,x_{n}))\in\left(M_{0,4}^{\operatorname{trop}}\right)^{n-3}\times(\mathbb{R}^{2})^{n}\) be a general point and let \(g=\bigg{(}\prod_{i=4}^{n}\operatorname{ft}_{i}\bigg{)}\times\operatorname{ev}\) be the morphism of tropical fans in (5). The equality \[\deg(g)=\deg\left(\prod_{i=4}^{n}\operatorname{ft}_{i}^{*}(\lambda_{i})\cdot\prod_{i=1}^{n}\operatorname{ev}_{i}^{*}(x_{i})\cdot M^{\operatorname{trop}}(\mathbb{R}^{2},\Gamma)\right)\] follows essentially from [11, Lemma 1.2.9], and the equality \[\deg\left(\prod_{i=4}^{n}\operatorname{ft}_{i}^{*}(\lambda_{i})\cdot\prod_{i=1}^{n}\operatorname{ev}_{i}^{*}(x_{i})\cdot M^{\operatorname{trop}}(\mathbb{R}^{2},\Gamma)\right)=\deg\left(\prod_{i=4}^{n}\operatorname{ft}_{i}^{*}(0)\cdot\prod_{i=1}^{n}\operatorname{ev}_{i}^{*}(x_{i})\cdot M^{\operatorname{trop}}(\mathbb{R}^{2},\Gamma)\right)\] follows from the fact that pullbacks, intersections and the degree are defined up to rational equivalence, and all points in \(M_{0,4}^{\mathrm{trop}}\) are rationally equivalent. **Lemma 21**.: _Consider the tropical affine cycle_ \[Z=\prod_{i=4}^{n}\mathrm{ft}_{i}^{*}(0)\cdot M^{\mathrm{trop}}(\mathbb{R}^{2},\Gamma)\] _Then the combinatorial type of any top-dimensional polyhedron of \(Z\) is such that one vertex \(V\) is \(n\)-valent, the rest of them are 3-valent, and each edge is contained in the unique path to a unique marking. 
Moreover, the weight of all these polyhedra is \(1\)._ Proof.: Let \([h:\mathsf{C}\to\mathbb{R}^{2}]\) be such a curve. Since for any \(i=4,\ldots,n\) we have \(\mathrm{ft}_{i}([h])=0\), for each \(i\) there must be a unique vertex \(V_{i}\) in \(\mathsf{C}\) and \(4\) different edges \(e_{1},e_{2},e_{3},e_{i}\) attached to \(V_{i}\) that are part of the unique path between \(V_{i}\) and \(p_{1}\), \(p_{2}\), \(p_{3}\) and \(p_{i}\), respectively. In particular, \(V_{i}\) must be contained in the unique path joining \(p_{1}\), \(p_{2}\), and in the unique path containing \(p_{1}\), \(p_{3}\). Therefore, there exists a vertex \(V\) such that \(V_{i}=V\) for all \(i\). By [1, Lemma 3.11], for any vertex \(W\) of \(\mathsf{C}\) \[\mathrm{val}(W)=3+|\{i\mid V_{i}=W\}|,\] And this implies the first assertion of the lemma. The multiplicities of such polyhedra are also computed in the same lemma [1, Lemma 3.11], and, in the case we are concerned with, they are all \(1\). This amounts to say that there is only one (see Figure 4 below) way of resolving the cross ratios \((p_{1},p_{2};p_{3},p_{i})\) for \(i=4,\ldots,n\). **Definition 22**.: _In the situation of Lemma 21, we will refer to the \(n\)-valent vertex \(V\) as the **central vertex** of \(\mathsf{C}\). Also, we will call the connected components of \(\mathsf{C}\smallsetminus\{V\}\)**leaves** of \(\mathsf{C}\)._ In light of the previous lemma, each leaf contains a unique marked point and all of its vertices are trivalent. **Corollary 23**.: _The contribution of a curve \([h:\mathsf{C}\to\mathbb{R}^{2}]\) to_ \[\prod_{i=1}^{n}\mathrm{ev}_{i}^{*}(x_{i})\cdot Z,\] _where the points \(p_{i}\) are in general position, is equal to the product of its local \(\mathrm{ev}\)-multiplicities at each vertex, as defined in [1, Definition 3.18]._ Figure 4: Resolving the cross-ratios in the proof of Lemma 21. Proof.: By the previous lemma the weights of \(Z\) are all \(1\), thus this immediately follows from [1, Proposition 3.16] and [1, Lemma 3.19]. **Corollary 24**.: _Let \([h:\mathsf{C}\to\mathbb{R}^{2}]\) be a curve that contributes to_ \[\prod_{i=1}^{n}\mathrm{ev}_{i}^{*}(x_{i})\cdot Z,\] _where the points are in general position. Then \(\mathsf{C}\) has one of the two shapes in Proposition 10._ Proof.: Denote by \(V\) the central vertex of \(\mathsf{C}\) and let \(\mathsf{L}\) be one of the leaves, which contains the marked point \(p_{i}\). Then: 1. if \(\mathsf{L}\) has no vertices, it has to be itself the marked point; 2. if \(\mathsf{L}\) has one vertex, it must consist of one bounded edge, one end and the marked point \(p_{i}\); 3. if \(\mathsf{L}\) has two vertices, it must consist of two bounded edges, one marking and two ends. Moreover, the vertex with no marking attached has to be between \(V\) and \(p_{i}\). Otherwise we could find a string (i.e. an embedding \(\mathbb{R}\to\mathsf{C}\) disjoint from any marked point), so we could deform the curve without changing its ev-multiplicity, and so it cannot contribute; 4. \(\mathsf{L}\) cannot have more than \(2\) vertices, or there would be again be a string in \(\mathsf{L}\) and the contribution would be \(0\). If \(L_{i}\) is the number of leaves with \(i\) vertices, counting the number of markings we obtain \(L_{0}+L_{1}+L_{2}=n\), whereas counting the number of ends, \(L_{1}+2L_{2}=|\Delta|=2(n-1)\). In particular, \(2L_{0}+L_{1}=2\), which leaves only two possibilities, that correspond to the description of \(\mathsf{A}\) and \(\mathsf{B}\). 
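The count at the end of this proof is elementary enough to check by machine. The following short Python sketch (an illustration added here, not part of the original argument) enumerates the non-negative solutions of \(L_{0}+L_{1}+L_{2}=n\) and \(L_{1}+2L_{2}=2(n-1)\) and confirms that only the two leaf profiles corresponding to types \(A\) and \(B\) occur.

```python
# Illustrative check of the leaf count in the proof of Corollary 24:
# L_i = number of leaves with i vertices; counting markings gives
# L0 + L1 + L2 = n, counting ends gives L1 + 2*L2 = 2*(n - 1).
# Only (1, 0, n-1) (type A) and (0, 2, n-2) (type B) survive.
def leaf_profiles(n):
    profiles = []
    for L0 in range(n + 1):
        for L1 in range(n + 1 - L0):
            L2 = n - L0 - L1
            if L1 + 2 * L2 == 2 * (n - 1):
                profiles.append((L0, L1, L2))
    return profiles

for n in range(3, 9):
    assert sorted(leaf_profiles(n)) == [(0, 2, n - 2), (1, 0, n - 1)]
```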
### Proof of Theorem 7 We now assume \(X=\mathcal{H}_{a}\) and \(\Sigma\) is the fan in \(6\). Corollary 24 motivates the following definition. Figure 5: The pictures for the situations (1), (2), (3) and (4) in the proof. **Definition 25**.: _For a curve \([h:\mathsf{C}\to\mathbb{R}^{2}]\) contributing to (6), let \(V\) be its central vertex and define:_ \[\begin{array}{ll}\alpha([h])=&\text{number of leaves from $V$ having ends whose primitive vectors are $n_{1}$ and $n_{2}$},\\ \beta([h])=&\text{number of leaves from $V$ having ends whose primitive vectors are $n_{1}$ and $n_{3}$},\\ \gamma([h])=&\text{number of leaves from $V$ having ends whose primitive vectors are $n_{2}$ and $n_{3}$},\\ \delta([h])=&\text{number of leaves from $V$ having ends whose primitive vectors are $n_{3}$ and $n_{4}$},\\ \chi([h])=&\text{number of leaves from $V$ having ends whose primitive vectors are $n_{4}$ and $n_{1}$},\\ \epsilon_{1}([h])=&\text{number of leaves from $V$ having one end whose primitive vector is $n_{1}$},\\ \epsilon_{2}([h])=&\text{number of leaves from $V$ having one end whose primitive vector is $n_{2}$},\\ \epsilon_{3}([h])=&\text{number of leaves from $V$ having one end whose primitive vector is $n_{3}$},\\ \epsilon_{4}([h])=&\text{number of leaves from $V$ having one end whose primitive vector is $n_{4}$}.\end{array}\] **Remark 26**.: _Note that these are the possibilities that can occur. More precisely, let_ \[\begin{array}{ll}\sigma_{1}=\{x<0,ax+y>0\},\\ \sigma_{2}=\{x>0,y>0\},\\ \sigma_{3}=\{x>0,y<0\},\\ \sigma_{4}=\{x<0,ax+y<0\}\end{array}\] _be the interiors of the maximal cones of the fan \(\Sigma\) in Notation 6. Then leaves counted in \(\alpha([h])\) corresponds to points \(x_{i}\) in \(\sigma_{1}\), leaves in \(\beta([h])\) corresponds to points \(x_{i}\) in \(\sigma_{1}\) or \(\sigma_{2}\), leaves in \(\gamma([h])\) corresponds to points in \(\sigma_{2}\), leaves in \(\delta([h])\) corresponds to points in \(\sigma_{3}\) and leaves in \(\chi([h])\) corresponds to points in \(\sigma_{4}\)._ **Lemma 27**.: _For a curve \([h:\mathsf{C}\to\mathbb{R}^{2}]\) contributing to (6), the vector_ \[(\alpha,\beta,\gamma,\delta,\chi,\epsilon_{1},\epsilon_{2},\epsilon_{3}, \epsilon_{4})=(\alpha([h]),\beta([h]),\gamma([h]),\delta([h]),\chi([h]), \epsilon_{1}([h]),\epsilon_{2}([h]),\epsilon_{3}([h]),\epsilon_{4}([h])).\] _satisfies the following system_ \[\left\{\begin{array}{ll}\alpha+\beta+\chi&=|\mu_{1}|-\epsilon_{1}\\ \alpha+\gamma&=|\mu_{2}|-\epsilon_{2}\\ \beta+\gamma+\delta&=|\mu_{3}|-\epsilon_{3}\\ \chi+\delta&=|\mu_{4}|-\epsilon_{4}\end{array}\right. \tag{7}\] Proof.: The proof is clear. In particular \[\alpha+\beta+\gamma+\delta+\chi=n-1-\delta_{B}^{\text{type of $[h]$}}\] where \(\delta_{B}^{\text{type of $[h]$}}\) is 1 when the type of \([h]\) is \(B\) and 0 when it is \(A\). Solving the system in \(\alpha\), we obtain \[\left\{\begin{array}{ll}\beta&=n-1-\delta_{B}^{\text{type of $[h]$}}-|\mu_{4}|-|\mu_{ 2}|+\epsilon_{2}+\epsilon_{4}\\ \gamma&=|\mu_{2}|-\epsilon_{2}-\alpha\\ \delta&=|\mu_{3}|+|\mu_{4}|-(n-1-\delta_{B}^{\text{type of $[h]$}})-\epsilon_{3}- \epsilon_{4}+\alpha\\ \chi&=n-1-\delta_{B}^{\text{type of $[h]$}}-|\mu_{3}|+\epsilon_{3}-\alpha \end{array}\right. \tag{8}\] **Remark 28**.: _If system (8) does not have an admissible solution, then \(\mathsf{tropTev}_{\mathsf{f}}^{\mathcal{H}_{a}}=0\). 
In particular, if_ \[|\mu_{2}|+|\mu_{4}|>n-1\] _then \(\beta<0\) (note that \(\epsilon_{2}([h])\) and \(\epsilon_{4}([h])\) cannot both be \(1\) for \([h]\) contributing to (6) being the \(x_{i}\) in general position) and \(\mathsf{tropTev}_{\mathsf{f}}^{\mathcal{H}_{a}}=0\). Similarly, if_ \[|\mu_{3}|>n-1\] _then \(\mathsf{tropTev}_{\mathsf{f}}^{\mathcal{H}_{a}}=0\). Finally, there is an isomorphism of \(\Sigma\) preserving \(n_{2}\) and \(n_{4}\) and switching \(n_{1}\) and \(n_{3}\), so also_ \[|\mu_{1}|>n-1\] _imlpies \(\mathsf{tropTev}_{\mathsf{f}}^{\mathcal{H}_{a}}=0\)_ The next lemma computes the multiplicities of any curve to 6 for Hirzebruch surfaces. **Lemma 29**.: _The contribution of a curve \([h:\mathsf{C}\to\mathbb{R}^{2}]\) to 6 is_ \[\left(\prod_{i=1}^{4}\prod_{j=1}^{|\mu_{ij}|}\mu_{i,j}\right)a^{n-1-|\mu_{2}| -|\mu_{4}|}\] Proof.: By Corollary 23, we need to compute the product of the ev-multiplicity at each vertex. For a leaf \(\mathsf{L}_{i}\) with two vertices, marking \(x_{i}\), and weighted vectors associated to its end \(\lambda_{i}\cdot n_{j_{i}}\), \(\eta_{i}\cdot n_{k_{i}}\), the multiplicity at the vertex of \(\mathsf{L}_{i}\) that is not adjacent to \(x_{i}\) is precisely \(\lambda_{i}\eta_{i}|\det(n_{j_{i}},n_{k_{i}})|\). Note that for \(j<k\), \[\det(n_{j},n_{k})=\left\{\begin{array}{ll}a&\mbox{ if }j=1,k=3\\ 0&\mbox{ if }j=2,k=4\\ 1&\mbox{ otherwise}\end{array}\right.\] * If the curve is of type \(A\), \(V\) will have local multiplicity 1 by [1, definition 3.18], and as \(i\) goes from 2 to \(n\), we obtain a factor of \(a\) for each leaf counted in \(\beta([h])=n-1-|\mu_{2}|-|\mu_{4}|\) (see Equation 8), and the numbers \(\lambda_{i}\) and \(\eta_{i}\) go through all the elements of each partition, so we obtain the number in the statement of the lemma. * If the curve is of type \(B\), we can assume without lost of generality that the leaves with one end are \(\mathsf{L}_{1}\), \(\mathsf{L}_{2}\), with markings \(x_{1}\), \(x_{2}\) and that the end of \(\mathsf{L}_{i}\) has weight \(\lambda_{i}\) and primitive vector \(n_{ji}\), for \(i=1,2\). Then the leaves \(\mathsf{L}_{1}\), \(\mathsf{L}_{2}\) are precisely the fixed components of \(V\), in the language of [1, Definition 3.18]. We divide into cases: * If \(\epsilon_{1}([h])+\epsilon_{3}([h])=1\), the ev-multiplicity of \(V\) is \(\lambda_{1}\lambda_{2}\) and the local multiplicity for the rest of the vertices from the leaves \(\mathsf{L}_{3},\ldots,\mathsf{L}_{n}\) are counted as in the case \(A\), so the multiplicity of the curve would be \[\left(\prod_{i=1}^{4}\prod_{j=1}^{|\mu_{ij}|}\mu_{i,j}\right)a^{\beta([h])}.\] Note that, by 8, in this case \(\beta([h])=n-1-|\mu_{4}|-|\mu 2|\) because \(\epsilon_{2}([h])+\epsilon_{4}([h])=1\). If \(\epsilon_{1}([h])+\epsilon_{3}([h])=2\), the ev-multiplicity of \(V\) is \(a\lambda_{1}\lambda_{2}\) and the rest of local multiplicities are computed in the same way giving the number \[\left(\prod_{i=1}^{4}\prod_{j=1}^{|\mu_{ij}|}\mu_{i,j}\right)a^{\beta([h])+1}.\] In this case, by looking at \(8\), \(\beta([h])=n-2-|\mu_{2}|-|\mu_{4}|\). Next, we explain which curves appear in \(6\). We distinguish two cases. #### 3.2.1 Case \(|\mu_{3}|+|\mu_{4}|\geq n-1\) In this case, the following \[\left\{\begin{array}{ll}\bar{\alpha}&=0,\\ \bar{\delta}_{B}^{\text{type of }h}&=0\\ \bar{\epsilon}_{i}&=0\text{ for }i=1,\ldots,4,\\ \bar{\beta}&=n-1-|\mu_{2}|-|\mu_{4}|\\ \bar{\gamma}&=|\mu_{2}|\\ \bar{\delta}&=|\mu_{3}|+|\mu_{4}|-(n-1)\\ \bar{\chi}&=n-1-|\mu_{3}|\end{array}\right. 
\tag{9}\] is a solution of the system (8). We put the points \(x_{1},\ldots,x_{n}\) in the plane \(\mathbb{R}^{2}\) in general position and in such a way that: \(x_{1}\) at \((0,0)\), and there are exactly \(\bar{\alpha}\) points in \(\sigma_{1}\), \(\bar{\beta}+\bar{\gamma}\) in \(\sigma_{2}\), \(\bar{\delta}\) in \(\{ax+y>0,y<0\}\subseteq\sigma_{3}\) and \(\bar{\chi}\) in \(\{x<0,y<0\}\subseteq\sigma_{4}\). It is then clear that we can form \[\prod_{i=1}^{4}|\mu_{i}|!\binom{\bar{\beta}+\bar{\gamma}}{\bar{\beta}}=\prod_{ i=1}^{4}|\mu_{i}|!\binom{n-1-|\mu_{4}|}{|\mu_{2}|}\] curves \([h:\mathsf{C}\to\mathbb{R}^{2}]\) contributing to the intersection (6) with domain \(\mathsf{C}\) of type \(A\) and \((\alpha([h]),\ldots\chi([h]))=(\bar{\alpha},\ldots,\bar{\chi})\) as in Remark 26. Here the factorial terms are counting the different ways of labelling the ends of \(\mathsf{C}\). We will prove in SS3.2.3 that these are all the contributing curves in this case. #### 3.2.2 Case \(|\mu_{3}|+|\mu_{4}|<n-1\) This case is more complicated. In this case, the following \[\left\{\begin{array}{ll}\bar{\alpha}&=n-1-|\mu_{3}|-|\mu_{4}|,\\ \bar{\delta}_{B}^{\text{type of }h}&=0\\ \bar{\epsilon}_{i}&=0\text{ for }i=1,\ldots,4,\\ \bar{\beta}&=n-1-|\mu_{4}|-|\mu_{2}|\\ \bar{\gamma}&=n-1-|\mu_{1}|\\ \bar{\delta}&=0\\ \bar{\chi}&=|\mu_{4}|\end{array}\right. \tag{10}\] is a solution of (8). As before, we put the points \(x_{1},\ldots,x_{n}\) in the plane \(\mathbb{R}^{2}\) in general position and in such a way that: \(x_{1}\) is at \((0,0)\), and there are exactly \(\bar{\alpha}\) points in \(\sigma_{1}\), \(\bar{\beta}+\bar{\gamma}\) in \(\sigma_{2}\), \(\bar{\delta}\) in \(\{ax+y>0,y<0\}\subseteq\sigma_{3}\) and \(\bar{\chi}\) in \(\{x<0,y<0\}\subseteq\sigma_{4}\). We can form \[\prod_{i=1}^{4}|\mu_{i}|!{\bar{\beta}+\bar{\gamma}\choose\bar{\beta}}=\prod_{i =1}^{4}|\mu_{i}|!{|\mu_{3}|\choose n-1-|\mu_{4}|-|\mu_{2}|} \tag{11}\] curves \([h:\mathsf{C}\to\mathbb{R}^{2}]\) contributing to the intersection (6) with domain \(\mathsf{C}\) of type \(A\) as in the previous case. However, there also are some curves \([h:\mathsf{C}\to\mathbb{R}^{2}]\) of type \(B\) which we now describe. The vertex \(V\) is mapped to \[h(V)\in\{x>0,ax+y=0\},\] and \[(\epsilon_{1}([h]),\epsilon_{2}([h]),\epsilon_{3}([h]),\epsilon_{4}([h]), \alpha([h]),\beta([h]),\gamma([h]),\delta([h]),\chi([h]))=(1,1,0,0,\bar{\alpha }-1,\bar{\beta},\bar{\gamma},\bar{\delta},\bar{\chi}).\] Note that the marking \(x_{1}\) is reached from \(V\) by a single bounded edge in the direction \(n_{1}\). There are \[\prod_{i=1}^{4}|\mu_{i}|!\sum_{k=1}^{\bar{\beta}}{\bar{\alpha}+k-1 \choose\bar{\alpha}-1}{\bar{\beta}+\bar{\gamma}-k\choose\bar{\beta}-k} \tag{12}\] \[= \prod_{i=1}^{4}|\mu_{i}|!\sum_{k=1}^{n-1-|\mu_{2}|-|\mu_{4}|}{n-2 -|\mu_{3}|-|\mu_{4}|+k\choose n-2-|\mu_{3}|-|\mu_{4}|}{|\mu_{3}|-k\choose n-1- |\mu_{2}|-|\mu_{4}|-k}\] of such curves. **Lemma 30**.: _The curves in (11) and (12) are a total of_ \[\prod_{i=1}^{4}|\mu_{i}|!{n-1-|\mu_{4}|\choose|\mu_{2}|}\] _curves._ Proof.: We will use the following two well-known combinatorial identities \[{x\choose y}=(-1)^{y}{y-x-1\choose y} \tag{13}\] and \[{x\choose y}=(-1)^{x-y}{-y-1\choose x-y} \tag{14}\] valid for \(x\in\mathbb{Z}_{>0}\) and \(y\in\mathbb{Z}_{\geq 0}\), and Vandermonde identity \[\sum_{k=0}^{N}{x\choose k}{y\choose N-k}={x+y\choose N} \tag{15}\] valid for \(N\in\mathbb{Z}_{\geq 0}\) and \(x,y\in\mathbb{C}\). 
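Before stating how these two families of curves combine, here is a quick numerical sanity check (our own illustration; the part counts \(|\mu_{i}|\) below are hypothetical and merely chosen to satisfy the constraints \(|\mu_{3}|+|\mu_{4}|<n-1\) of this subsection) that the type \(A\) count (11) and the type \(B\) count (12) add up to the closed form appearing in Lemma 30 below.

```python
from math import comb, factorial

# Illustrative check that (11) + (12) equals the closed form of Lemma 30 below.
# m_i stands for |mu_i| (the number of parts); |Delta| = 2(n - 1).
def counts(m1, m2, m3, m4):
    n = (m1 + m2 + m3 + m4) // 2 + 1
    fact = factorial(m1) * factorial(m2) * factorial(m3) * factorial(m4)
    alpha = n - 1 - m3 - m4                      # \bar{alpha} from (10)
    beta = n - 1 - m2 - m4                       # \bar{beta}  from (10)
    type_A = comb(m3, beta)                      # binomial in (11), since beta + gamma = m3
    type_B = sum(comb(alpha - 1 + k, alpha - 1) * comb(m3 - k, beta - k)
                 for k in range(1, beta + 1))    # sum in (12)
    return fact * (type_A + type_B), fact * comb(n - 1 - m4, m2)

for m in [(4, 2, 2, 2), (5, 1, 3, 1), (6, 2, 2, 2)]:
    total, closed_form = counts(*m)
    assert total == closed_form                  # Lemma 30
```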
We start with noticing that the \(k=0\) term in the sum (12) is exactly the binomial coefficient in (11). Therefore, the sum of the two contributions is (up to taking the product with \(\prod_{i=1}^{4}|\mu_{i}|!\)) \[\sum_{k=0}^{n-1-|\mu_{2}|-|\mu_{4}|}\binom{n-2-|\mu_{3}|-|\mu_{4}|+ k}{n-2-|\mu_{3}|-|\mu_{4}|}\binom{|\mu_{3}|-k}{n-1-|\mu_{1}|}\] \[= (-1)^{n-1-|\mu_{3}|-|\mu_{4}|}\sum_{k=0}^{n-1-|\mu_{2}|-|\mu_{4}| }\binom{|\mu_{3}|+|\mu_{4}|-(n-1)}{k}\binom{|\mu_{1}|-n}{|\mu_{1}|+|\mu_{3}|-( n-1)-k}\] \[= (-1)^{n-1-|\mu_{3}|-|\mu_{4}|}\sum_{k=-\infty}^{\infty}\binom{| \mu_{3}|+|\mu_{4}|-(n-1)}{k}\binom{|\mu_{1}|-n}{|\mu_{1}|+|\mu_{3}|-(n-1)-k}\] \[= (-1)^{n-1-|\mu_{3}|-|\mu_{4}|}\binom{-|\mu_{2}|-1}{|\mu_{1}|+| \mu_{3}|-(n-1)}\] \[= \binom{n-1-|\mu_{4}|}{n-1-|\mu_{4}|-|\mu_{2}|}\] where in the first equality we used (13), in the second equality the fact that \(|\mu_{1}|+|\mu_{3}|-(n-1)-k<0\) for \(k>n-1-|\mu_{2}|-|\mu_{4}|\), in the third equality (15) and in the last equality (14). This concludes the proof. We will prove in SS3.2.3 that these are all the contributing curves in this case. #### 3.2.3 Exclusion of further contributions We have to prove that the curves listed above are all the curves contributing to the intersection 6. We start with adopting a unifying perspective. **Remark 31**.: _The solutions (9) and (10) are determined as the unique solution of the system (8) where we set \(\delta_{B}^{\text{type of }[h]}=0\), \(\epsilon_{i}=0\) and ask for \(\alpha\) to be the minimum possible._ Call \((\bar{\alpha},\bar{\beta},\bar{\gamma},\bar{\delta},\bar{\chi})\) such solution (so either as in (9) or as in (10)) **Remark 32**.: _Suppose \((\alpha,\beta,\gamma,\delta,\chi)\) is a solution of (8), with \(\delta_{B}^{\text{type of }[h]}=1\) and \(\epsilon_{i}=1\) for exactly two indices \(i\) and \(\epsilon_{i}=0\) otherwise. Then \(\alpha\geq\bar{\alpha}\) unless_ \[(\epsilon_{1},\epsilon_{2},\epsilon_{3},\epsilon_{4},\alpha,\beta,\gamma, \delta,\chi)=(1,1,0,0,\bar{\alpha}+1,\bar{\beta},\bar{\gamma},\bar{\delta}, \bar{\chi}).\] Proof.: Suppose for example \(\epsilon_{1}=\epsilon_{2}=0\) and \(\epsilon_{3}=\epsilon_{4}=1\). Then \((\alpha,\beta,\gamma+1,\delta,\chi)\) is a solution of the system (8) with \(\delta_{B}^{\text{type of }[h]}=0\) and \(\epsilon_{i}=0\) for all \(i\). Therefore \(\alpha\geq\bar{\alpha}\). The other cases are treated similarly. The case \(\epsilon_{2}=\epsilon_{4}=1\) and \(\epsilon_{1}=\epsilon_{3}=0\) is not possible because the points \(x_{i}\) are in general positions and in particular there are no two of them in the same line \(x=\text{constant}\). We first exclude other contributions \([h:\mathsf{C}\to\mathbb{R}^{2}]\) with \(\mathsf{C}\) of type \(A\). For such \([h]\) we would have: * if \(h(V)\in\sigma_{1}\), then \(\alpha([h])\leq\bar{\alpha}\) and \(\delta([h])>\bar{\delta}\), * if \(h(V)\in\sigma_{2}\), then \(\gamma([h])>\bar{\gamma}\), * if \(h(V)\in\sigma_{3}\cap\{ax+y>0\}\), then \(\gamma([h])>\bar{\gamma}\), * if \(h(V)\in\{ax+y<0\}\), then \(\alpha([h])+\beta([h])+\gamma([h])>\bar{\alpha}+\bar{\beta}+\bar{\gamma}\), each of which is in contradiction with system (8) and the choice of \(\bar{\alpha}\). Assume now we are not in the exceptional situation of Remark 32. Under this assumption, we now prove that there are no more contributions of type B. 
For such a \([h:\mathsf{C}\to\mathbb{R}^{2}]\) we would have one of the following contradictions: * if \(h(V)\in\{y>0\}\cup\{x>0,y=0\}\), then \(\alpha([h])+\beta([h])+\gamma([h])\leq\bar{\alpha}+\bar{\beta}+\bar{\gamma}-1\) and therefore by (8) it must be \(\epsilon_{4}([h])=0\), which would then imply \(\alpha([h])+\beta([h])+\gamma([h])=\bar{\alpha}+\bar{\beta}+\bar{\gamma}-2\) in contradiction with (8), * if \(h(V)\in\{x>0,ax+y>0\}\), then \(\chi([h])>\bar{\chi}\) in contradiction with (8) and our assumption on \(\bar{\alpha}\), * if \(h(V)\in\{x>0,ax+y=0\}\), then \(\epsilon_{1}([h])=1\) and \(\epsilon_{4}([h])=0\) for the choice of the points \(x_{i}\). If \(\epsilon_{3}=0\), then system (8) prescribes \(\chi([h])<\bar{\chi}\) but \(\chi([h])\geq\bar{\chi}\), * if \(h(V)\in\{x\geq 0,ax+y<0\}\) we have \(\alpha([h])+\beta([h])+\gamma([h])\geq\bar{\alpha}+\bar{\beta}+\bar{\gamma}\) and so, by (8), this is an equality and \(\epsilon_{4}([h])=1\). However, \(\epsilon_{4}([h])=0\) by the position of the points \(x_{i}\), * if \(h(V)\in\{x<0,y<0\}\) then \(\delta([h])+\chi([h])<\bar{\delta}+\bar{\chi}\) and so by (8), \(\epsilon_{4}([h])=1\). It follows that \(\epsilon_{2}([h])=0\) and so \(\delta([h])+\chi([h])\leq\bar{\delta}+\bar{\chi}-2\), which is in contradiction with (8), * if \(h(V)\in\{x<0,y=0\}\) then \(\epsilon_{3}([h])=1\). If \(\epsilon_{4}([h])=0\), then \(\alpha([h])<\bar{\alpha}\) which is not possible for the choice of \(\bar{\alpha}\) and our assumptions; if instead \(\epsilon_{4}([h])=1\), then \(\chi([h])<\bar{\chi}\) and so, by (8), \(\alpha([h])>\bar{\alpha}\) which is in contradiction with the fact that there are no points \(x_{i}\) which are reachable from \(h(V)\) via leaves counted in \(\delta([h])\) and are not reachable with the same type of leaves from \((0,0)\). Finally, we deal with curves of type B in the exceptional case of Remark 32. In this case: * \(h(V)\) cannot be on \(\{ax+y\leq 0\,x\geq 0\}\smallsetminus\{(0,0)\}\) otherwise \(\epsilon_{1}([h])=0\), * \(h(V)\in\{x<0,ax+y>0\}\), then \(\alpha([h])\leq\bar{\alpha}-2\) which is not the case, * \(h(V)\) cannot be on \(\{y>0,x=0\}\) otherwise \(\epsilon_{2}([h])=0\), * if \(h(V)\in\{x>0,ax+y>0\}\), then \(\chi([h])>\bar{\chi}\) which is also not the case, * if \(h(V)\in\{x>0,ax+y=0\}\) then the curves of type B in SS3.2.2 appear, * if \(h(V)\in\{x>0,ax+y<0\}\) then \(\alpha([h])+\beta([h])+\gamma([h])\geq\bar{\alpha}+\bar{\beta}+\bar{\gamma}\) which is not happening. This proves that the curves listed in SS3.2.1 and SS3.2.2 are all the curves contributing to the intersection (6) and concludes the proof of Theorem 7. Absence of rational curves interpolating \(n\) points on Hirzebruch surfaces Let \(a\geq 2\) and assume that \(\mu_{i,j}=1\) for all \(i,j\). Let also \(\beta\in H_{2}(X,\mathbb{Z})\) be the unique curve class with \(\beta.D_{i}=|\mu_{i}|\) for \(i=1,2,3,4\). As in Remark 8, we have \[\mathsf{v}\mathsf{Tev}_{\mathsf{f}}^{\mathcal{H}_{a}}=0.\] By Lemma 17, this is equivalent to say that the map (2) is not dominant. We aim to now explain this phenomenon geometrically. By the proof of Theorem 5, it is enough to show that the restriction \[\overset{\circ}{\tau}:\mathcal{M}_{\mathsf{f}}(\mathcal{H}_{a})\to\mathcal{M} _{g,n}\times\mathcal{H}_{a}\] of (2) is not dominant. By contradiction, suppose that \(\overset{\circ}{\tau}\) is dominant. 
Let \(\rho:\mathcal{H}_{a}=\mathbb{P}(\mathcal{O}\oplus\mathcal{O}(a))\to\mathbb{P}^ {1}\) be the projective bundle map and let \[0\to\mathcal{O}_{\mathcal{H}_{a}}(-1)\to\rho^{*}(\mathcal{O}_{\mathbb{P}^{1} }\oplus\mathcal{O}_{\mathbb{P}^{1}}(a))\to Q\to 0 \tag{16}\] be the associated universal exact sequence. **Lemma 33**.: _We have \(2|\mu_{1}|\geq n-1\)._ Proof.: Composing maps in \(\mathcal{M}_{\mathsf{f}}(\mathcal{H}_{a})\) with \(\rho\), we obtain maps \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree \(|\mu_{1}|\). Let \(\mathcal{M}_{\mathsf{f}^{\prime}}(\mathbb{P}^{1})\) be the moduli space of logarithmic stable maps \((\mathbb{P}^{1},p_{1},...,p_{n},q_{1},...,q_{2d})\to\mathbb{P}^{1}\) with prescribed intersection multiplicities with \(\partial\mathbb{P}^{1}\) all \(1\). By Lemma 17, this space is irreducible. It follows that the natural map \(\mathcal{M}_{\mathsf{f}^{\prime}}(\mathbb{P}^{1})\to\mathcal{M}_{0,n}\times( \mathbb{P}^{1})^{n}\) is also dominant and in particular that \[2|\mu_{1}|+n-2=\dim(\mathcal{M}_{\mathsf{f}^{\prime}}(\mathbb{P}^{1}))\geq \dim(\mathcal{M}_{0,n}\times(\mathbb{P}^{1})^{n})=2n-3\] The conclusion follows. Given a map \(f:\mathbb{P}^{1}\to\mathcal{H}_{a}\), we can can pull back to \(\mathbb{P}^{1}\) the exact sequence (16), obtaining \[\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}(a|\mu_{1}|) \overset{v}{\to}\mathcal{O}_{\mathbb{P}^{1}}(|\mu_{2}|+a|\mu_{1}|)\to 0\] such that \[f(p)=\ker(v).\] We will think of \(v\) as the data of two sections \(s_{1}\in H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(|\mu_{2}|))\) and \(v_{2}\in H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(|\mu_{2}|+a|\mu_{1 }|)\). The fact that \(\overset{\circ}{\tau}\) is dominant amounts to say that for general points \[\mathsf{L}_{i}\in\mathbb{P}(\mathcal{O}_{\mathbb{P}^{1}}(|\mu_{2}|)\oplus \mathcal{O}_{\mathbb{P}^{1}}(|\mu_{2}|+a|\mu_{1}|))\cong\mathcal{H}_{a}\text{ and }p_{1},\ldots,p_{n}\in\mathbb{P}^{1}\] there are sections \(v_{1}\in H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(|\mu_{2}|))\) and \(v_{2}\in H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(|\mu_{2}|+a|\mu_{1 }|)\) such that \[\langle v_{1}(p_{i}),v_{2}(p_{i})\rangle=\mathsf{L}_{i}\] for each \(i=1,\ldots,n\). Since the codimension of \[\oplus_{i=1}^{n}\mathsf{L}_{i}\subseteq\oplus_{i=1}^{n}\mathcal{O}_{\mathbb{ P}^{1}}(|\mu_{2}|)|_{p_{i}}\oplus\mathcal{O}_{\mathbb{P}^{1}}(|\mu_{2}|+a|\mu_{1 }|)|_{p_{i}}\] is \(n\), the rank of the evaluation map \[\Psi:H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(|\mu_{2}|))\oplus H^{0}( \mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(|\mu_{2}|+a|\mu_{1}|))\to\bigoplus_{ i=1}^{n}\mathcal{O}_{\mathbb{P}^{1}}(|\mu_{2}|)|_{p_{i}}\oplus\mathcal{O}_{\mathbb{P}^{1}} (|\mu_{2}|+a|\mu_{1}|)|_{p_{i}}\] must be at least \(n+1\). Also, by Lemma 33, it is at most \[2|\mu_{2}|+a|\mu_{1}|+2=2(n-1)-2|\mu_{1}|+2\leq n+1.\] Therefore, \(2|\mu_{1}|=n-1\) and \[n-1=|\mu_{2}|+|\mu_{4}|\geq|\mu_{4}|=a|\mu_{1}|+|\mu_{2}|>n-1\] for \(a\geq 2\). This yields a contradiction. ## 5 Proof of Theorem 13 It is well-known that for any \(a\in\mathbb{N}\) the Hirzebruch surfaces \(\mathcal{H}_{a}\) and \(\mathcal{H}_{a+2}\) are deformation equivalent. We will use this fact to compute certain virtual Tevelev degrees of \(\mathcal{H}_{a}\) by reducing to \(\mathcal{H}_{0}=\mathbb{P}^{1}\times\mathbb{P}^{1}\) or \(\mathcal{H}_{1}=\mathrm{Bl}_{p}\mathbb{P}^{2}\). Theorem 13 will then follow by comparing the result with Theorem 7 above. Assume \(a\geq 2\). 
**Lemma 34**.: _Let \(0<2j\leq a\). Then there exists a smooth family \(\pi:\mathcal{U}\to\mathbb{A}^{1}\) such that:_ 1. \(\pi^{-1}(0)\) _is isomorphic to_ \(\mathcal{H}_{a}\)_;_ 2. \(\mathcal{U}\smallsetminus\pi^{-1}(0)\) _is isomorphic to_ \(\mathcal{H}_{(a-2j)}\times(\mathbb{A}^{1}\smallsetminus\{0\})\) _over_ \(\mathbb{A}^{1}\smallsetminus\{0\}\)_;_ 3. \(\pi\) _admits a section._ Proof.: This is probably a well-known construction. Let \[\pi:\mathcal{U}=\{([x_{0},x_{1}],[y_{0},y_{1},y_{2}],s)\ |\ x_{0}^{a}y_{1}-x_{1}^{a}y_{0}+ sx_{0}^{j}x_{1}^{j+1}y_{2}=0\}\to\mathbb{A}^{1}\] where \(\mathcal{U}\subseteq\mathbb{P}^{1}\times\mathbb{P}^{2}\times\mathbb{A}^{1}\) and \(\pi\) is the projection onto \(\mathbb{A}^{1}\). We claim that \(\pi\) is as stated in the lemma. In order to provide the required isomorphisms we will use the following description [13, 14]: \[\mathcal{H}_{a}=(\mathbb{C}^{2}\smallsetminus\{0\})\times(\mathbb{C}^{2} \smallsetminus\{0\})/\sim\] where the equivalence relation \(\sim\) is given by the \((\mathbb{C}^{*})^{2}\)-action is given by \[(\lambda,\eta).(l_{0},l_{1},t_{0},t_{1})=(\lambda l_{0},\lambda l_{1},\lambda^ {a}\eta t_{0},\eta t_{1}).\] Then \[\mathcal{H}_{a} \to\pi^{-1}(0)\] \[[l_{0},l_{1},t_{0},t_{1}] \mapsto([l_{0},l_{1}],[t_{1}l_{0}^{a},t_{1}l_{1}^{a},t_{0}])\] and \[\mathcal{H}_{(a-2m)}\times(\mathbb{A}^{1}\smallsetminus\{0\}) \to\mathcal{U}\smallsetminus\pi^{-1}(0)\] \[([l_{0},l_{1},t_{0},t_{1}],s) \mapsto([l_{0},l_{1}],[sl_{0}^{j}t_{0},sl_{1}^{j+1}t_{1},l_{1}^{a -j-1}t_{0}-l_{0}^{a-j}t_{1}],s)\] are isomorphisms and \[s :\mathbb{A}^{1}\to\mathcal{U}\] \[s \mapsto ([0,1],[0,s,1])\] is a section. Write \(a=2j\) or \(a=2j+1\) for \(j\in\mathbb{Z}_{\geq 1}\) depending on the parity of \(a\) and let \[\pi:\mathcal{U}\to\mathbb{A}^{1}\] be a smooth family as in Lemma 34. **Remark 35**.: _Given a line bundle \(L\) on \(\mathcal{U}\smallsetminus\pi^{-1}(0)\), we can always extend \(L\) to a line bundle on the all \(\mathcal{U}\) (which is possible being \(\mathcal{U}\) smooth). Even if there are many extensions of \(L\), for each \(s\in\mathbb{A}^{1}\) the restriction \(L_{s}\) of \(L\) to \(\pi^{-1}(s)\) is independent of the extension._ We treat the two case in Theorem 13 separately. #### 5.0.1 Case \(a=2j\) Let \(L\) be the pullback of \(\mathcal{O}(1)\boxtimes\mathcal{O}(1)\) from \(\mathcal{H}_{0}\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) on \(\mathcal{U}\smallsetminus\pi^{-1}(0)\cong\mathcal{H}_{0}\times(\mathbb{A}^{1} \smallsetminus\{0\})\). **Lemma 36**.: _We have_ \[c_{1}(L_{0})=(1+j)D_{1}+D_{2}\in A_{*}(\mathcal{H}_{a})\] _where \(D_{1}\) and \(D_{2}\) are the toric divisors of \(\mathcal{H}_{a}\) as in Notation 6_ Proof.: Clearly \[c_{1}(L_{0})^{2}=c_{1}(L_{1})^{2}=2. \tag{17}\] Also since the normal bundle to each fiber of \(\pi\) is trivial, we have \[c_{1}(L_{0}).c_{1}(\mathcal{T}_{\pi^{-1}(0)})=c_{1}(L_{1}).c_{1}(\mathcal{T}_{ \pi^{-1}(1)})=4 \tag{18}\] These two equations completely determine \(c_{1}(L_{0})\). Namely, \(D_{1}\) and \(D_{2}\) form a basis of \(A_{1}(\mathcal{H}_{a})\) and writing \(c_{1}(L_{1})=xD_{1}+yD_{2}\), Equations 17 and 18 yields \[\begin{cases}2=2xy-y^{2}a,\\ 4=2x+(2+a)y-2ya\end{cases}\] whose integral solution is \((x,y)=(1+j,1)\). Fix \(d>0\) such that \(2d=n-1\) and let \(\beta=d[(1+j)D_{1}+D_{2}]\). 
Since Gromov-Witten invariants are deformation invariant, we get \[1=\mathsf{v}\mathsf{Tev}_{0,n,(d,d)}^{\mathbb{P}^{1}\times\mathbb{P}^{1}}=\mathsf{v}\mathsf{Tev}_{0,n,\beta}^{\mathcal{H}_{a}}\] The first equality follows from [1, Example 2.2 and Proposition 2.3]. Note that the existence of a section of \(\pi\) guarantees that on each fiber the point class can be realized as the restriction of a class from \(\mathcal{U}\). To conclude, if \(\mathsf{\Gamma}\) and \(\alpha\) are as in Theorem 13, we have \[\mathsf{v}\mathsf{Tev}_{\mathsf{\Gamma}}^{\mathcal{H}_{a}}=0\neq 1=\mathsf{v}\mathsf{Tev}_{0,n,\beta}^{\mathcal{H}_{a}}\] and thus \(\alpha_{*}[\overline{\mathcal{M}}_{\mathsf{\Gamma}}(X)]^{\mathrm{vir}}\neq[\overline{\mathcal{M}}_{0,n}(X,\beta)]^{\mathrm{vir}}\) as desired. #### 5.0.2 Case \(a=2j+1\) This case is very similar, but uses [13, Theorem 14] instead of [1]. Let \(L_{\mathsf{H}}\) (resp. \(L_{\mathsf{E}}\)) be the pullback of \(\mathcal{O}(\mathsf{H})\) (resp. \(\mathcal{O}(\mathsf{E})\)) from \(\mathcal{H}_{1}\cong\mathrm{Bl}_{p}\mathbb{P}^{2}\) on \(\mathcal{U}\smallsetminus\pi^{-1}(0)\cong\mathcal{H}_{1}\times(\mathbb{A}^{1}\smallsetminus\{0\})\). Here \(\mathsf{H}\) (resp. \(\mathsf{E}\)) is the hyperplane class (resp. the class of the exceptional divisor) on \(\mathrm{Bl}_{p}\mathbb{P}^{2}\). **Lemma 37**.: _We have_ \[c_{1}((L_{\mathsf{H}})_{0})=(1+j)D_{1}+D_{2}\] _and_ \[c_{1}((L_{\mathsf{E}})_{0})=jD_{1}+D_{2}\] Proof.: Proceed as in Lemma 36. Fix \(d\) and \(k\) integers such that \(0\leq k\leq d\), \(0\leq k\leq n-1-d\) and \(3d-k=2(n-1)\). Call \(\beta=[j(d-k)+d]D_{1}+(d-k)D_{2}\). Then \[\binom{n-1-d}{k}=\mathsf{v}\mathsf{Tev}_{0,n,d\mathsf{H}-k\mathsf{E}}^{\mathrm{Bl}_{p}\mathbb{P}^{2}}=\mathsf{v}\mathsf{Tev}_{0,n,\beta}^{\mathcal{H}_{a}}\] where the first equality follows from [13, Theorem 14]. If \(\mathsf{\Gamma}\) and \(\alpha\) are as in Theorem 13, we have \[\mathsf{v}\mathsf{Tev}_{\mathsf{\Gamma}}^{\mathcal{H}_{a}}=0\neq\binom{n-1-d}{k}=\mathsf{v}\mathsf{Tev}_{0,n,\beta}^{\mathcal{H}_{a}}\] and thus \(\alpha_{*}[\overline{\mathcal{M}}_{\mathsf{\Gamma}}(X)]^{\mathrm{vir}}\neq[\overline{\mathcal{M}}_{0,n}(X,\beta)]^{\mathrm{vir}}\). This concludes the proof of Theorem 13.
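As a small supplementary check (ours, not part of the paper), the integral solution claimed in the proof of Lemma 36 can be confirmed mechanically; the brute-force search box of size \(20\) below is an arbitrary choice.

```python
# Illustrative brute-force check of the computation in the proof of Lemma 36:
# for a = 2j, the system  2 = 2xy - a*y^2  and  4 = 2x + (2+a)y - 2ay
# has (x, y) = (1 + j, 1) as its only integral solution (searched in a box).
for j in range(1, 6):
    a = 2 * j
    solutions = [(x, y)
                 for x in range(-20, 21) for y in range(-20, 21)
                 if 2 * x * y - a * y * y == 2
                 and 2 * x + (2 + a) * y - 2 * a * y == 4]
    assert solutions == [(1 + j, 1)]
```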
2309.15481
A remark on Penney's algorithm
Based on the well-known algorithm of W. Penney we determine the set of lengths of the canonical representation of integers with respect to the trinomial X^2m + 2X^m + 2.
Horst Brunotte
2023-09-27T08:23:35Z
http://arxiv.org/abs/2309.15481v1
# A remark on Penney's algorithm ###### Abstract. Based on the well-known algorithm of W. Penney [7] we determine the set of lengths of the canonical representation of integers with respect to the trinomial \(X^{2m}+2X^{m}+2\). Key words and phrases:integer representation, canonical representation, radix representation 2020 Mathematics Subject Classification: 11A63, 11A67, 11C08, 11R11 \({\mathbb{N}}\) is the set of positive rational integers and \({\mathbb{N}}_{0}={\mathbb{N}}\cup\{0\}\). W. Penney [7] used \(-1+i\) as a basis of representing complex numbers1 Footnote 1: footnotemark: \[a+bi\qquad\qquad(a,b\in\{k/2^{n}\;:\;k\in{\mathbb{Z}},\;n\in{\mathbb{N}}_{0}\})\] by writing \(a,b\) as \[\sum_{j=0}^{k}c_{j}\big{(}(-1+i)^{4}\big{)}^{j}\qquad\qquad(c_{0}\ldots,c_{k} \in\{0,1,2,3\}).\] Based on his algorithm we here consider the so-called canonical representation of integers with respect to the minimal polynomial of \(-1+i\) and straightforwardly extend this method to particular integral trinomials of even degrees. Canonical number systems are natural generalizations of radix representations of ordinary integers to algebraic integers. Given a monic integer polynomial \(p\) with \(|p(0)|>1\) we say that \(z\in{\mathbb{Z}}\) admits a \(p\)-canonical representation if there exist a positive integer \(\ell\) and \[u_{0},\ldots,u_{\ell-1}\in D_{p}:=\{0,\ldots,|p(0)|-1\}\subset{\mathbb{N}}_{0}\] such that \[z\equiv\sum_{j=0}^{\ell-1}u_{j}X^{j}\pmod{p}\,,\] and we denote by \(Z_{p}\) the set of \(p\)-canonically representable integers. It is well-known that this representation (if it exists) is unique (e.g., see [6]). If \(\ell\) is minimal with the property above then \[\ell_{p}(z):=\ell\] is called the length of the \(p\)-canonical representation of \(z\), and we shortly write \[z=(u_{\ell-1}\cdots u_{0})_{p}\,.\] For a given integer \(c\) with \[c>|p(0)|\qquad\text{ and }\qquad|p(0)|,\ldots,c-1\in Z_{p} \tag{1}\] we define the function \[\lambda_{p,-c}:{\mathbb{Z}}\to{\mathbb{N}}\] as follows. For \(z\in{\mathbb{Z}}\) we determine its \(-c\)-representation \[z=(v_{k}\cdots v_{0})_{-c}\qquad\qquad(v_{0},\ldots,v_{k}\in\{0,1,\ldots,c-1\})\] and set \[\lambda_{p,-c}(z):=\ell_{p}(v_{k})\,.\] Note that the right hand side is well-defined because we actually have \[\{0,1,\ldots,c-1\}\subseteq Z_{p}\,.\] Using this function we can state our main result. **Theorem 1**.: _Let_ \[p:=X^{2}+2X+2\,.\] 1. \(\ell_{p}(z)=4\big{(}\ell_{-4}(z)-1\big{)}+\lambda_{p,-4}(z)\) \((z\in\mathbb{Z})\) _and we have_ \[\ell_{p}(\mathbb{Z})=\{a(n)\;:\;n\in\mathbb{N}\}\,,\] _where the sequence_ \(a(n)_{n\in\mathbb{N}}\) _specifies the non-negative integers congruent to_ \(0\) _or_ \(1\) _modulo_ \(4\) _and is given by_ \[a(n)=a(n-1)+(-1)^{n}+2\qquad\qquad(n\in\mathbb{N})\] _with_ \(a(0)=0\) _(see_ _[_8_, A042948]__)._ 3. _For_ \(n,m\in\mathbb{N}\) _we have_ \[\ell_{p}(n)\neq\ell_{p}(-m)\,.\] 4. _Fix_ \(\ell\in\mathbb{N}\)_. If_ \(\ell\) _is odd and_ \(n\in\mathbb{N}\) _maximal with the property_ \[\ell_{-4}(n)=\ell\] _then we have_ \[\lambda_{p,-4}(n)=4,\quad\lambda_{p,-4}(n+1)=1,\quad\ell_{p}(n)\equiv 0\pmod{4}\] _and_ \[\ell_{p}(n+1)=\ell_{p}(n)+5\,.\] _If_ \(\ell\) _is even and_ \(n\in\mathbb{N}\) _least with the property_ \[\ell_{-4}(-n)=\ell\] _then we analogously have_ \[\lambda_{p,-4}(-n)=4,\quad\lambda_{p,-4}(-(n+1))=1,\quad\ell_{p}(-n)\equiv 0 \pmod{4}\] _and_ \[\ell_{p}(-(n+1))=\ell_{p}(-n)+5\,.\] 5. 
_The subsequence of_ \(\ell_{p}(\mathbb{Z})\) _which describes the lengths of the_ \(p\)_-representations of the non-negative (negative, respectively) integers is given by the sequences of consecutive pairs_ \[\big{(}a(4n-3),\,a(4n-2)\big{)}_{n\in\mathbb{N}}\] _and_ \[\big{(}a(4n-1),\,a(4n)\big{)}_{n\in\mathbb{N}},\] _respectively._ 6. _If_ \(n<m\) _are positive integers then we have_ \[\ell_{p}(m)=\ell_{p}(n)\qquad\text{or}\qquad\ell_{p}(m)\geq\ell_{p}(n)+3\] _and_ \[\ell_{p}(-m)=\ell_{p}(-n)\qquad\text{or}\qquad\ell_{p}(-m)\geq\ell_{p}(-n)+3\,.\] 7. \(-2\leq\lambda_{p,-4}(x)+\lambda_{p,-4}(y)-\lambda_{p,-4}(xy)\leq 7\qquad\qquad(x,y \in\mathbb{Z}),\) _and in both cases equality is possible._ 8. _For_ \(x,y\in\mathbb{Z}\) _we have_ \[\ell_{p}(x+y)\leq\ell_{p}(x)+\ell_{p}(y)+2\] _and_ \[\ell_{p}(xy)\leq\ell_{p}(x)+\ell_{p}(y)+10\,.\] 9. _Let_ \(z=(\delta_{\ell_{p}(z)-1}\cdots\delta_{0})_{p}\in\mathbb{Z}\) _and define_ \[s_{k+1}(z)=\frac{1}{2}\big{(}s_{k-1}(z)+s_{k-2}(z)\big{)}\qquad\qquad(k\geq 2)\] _with_ \[s_{0}(z)=z,\;s_{1}(z)=0\quad\text{ and }\quad s_{2}(z)=\frac{1}{2}z\,.\] _Then there exists some_ \(K\in\mathbb{N}\) _such that_ \(s_{K}(z)\) _is an even integer,_ \[s_{k}(z)=s_{K}(z)\qquad\qquad(k\geq K)\] _and the sum of digits of_ \(z\) _is_ \[\sum_{i=0}^{\ell_{p}(z)-1}\delta_{i}=z-\frac{5}{2}s_{K}(z)\,.\] Our result above immediately delivers analogous statements for trinomials of higher degrees. **Corollary 2**.: _For_ \[P=X^{2m}+2X^{m}+2\in\mathbb{Z}[X]\qquad\qquad(m\in\mathbb{N})\] _we have_ \[\ell_{P}(\mathbb{Z})=\left\{m\big{(}a(n)-1\big{)}+1\;:\;n\in\mathbb{N}\right\},\] _where the sequence \(a(n)_{n\in\mathbb{N}}\) is given in Theorem 1._ Proof.: In view of \[P(X)=p(X^{m})\] with \(p\) as in Theorem 1 we have \[\ell_{P}(z)=m(\ell_{p}(z)-1)+1\qquad\qquad(z\in\mathbb{Z})\] by Proposition 8 below, and then our claim drops out from Theorem 1. A simple example illustrates this result. **Example 3**.: _Setting \(m=2\) in Corollary 2 we obtain_ \[\ell_{X^{4}+2X^{2}+2}(\mathbb{Z})=\left\{2a(n)-1\;:\;n\in\mathbb{N}\right\},\] _and we have_ \[\ell_{X^{4}+2X^{2}+2}(\mathbb{Z})=\left\{b(n)\;:\;n\in\mathbb{N}_{0}\right\},\] _where the sequence \(b(n)_{n\in\mathbb{N}}\) lists the positive integers congruent to \(1\) or \(7\) modulo \(8\) (see [8, A047522]) and is given by_ \[b(n)=\sqrt{8(c(n+1)-1)+1}\qquad\qquad\qquad(n\in\mathbb{N}_{0})\] _with \(c(n)_{n\in\mathbb{N}}\) presented in [8, A014494]. Exploiting the remarks on this sequence we can write_ \[c(n)=\frac{1}{2}\Big{(}4n^{2}+(-1)^{n}(2n-1)-4n+3\Big{)}\qquad\qquad(n\in \mathbb{N}).\] Let us now prepare the proof of our theorem by several auxiliary results. **Lemma 4**.: _Let \(n,d\in\mathbb{N}\), \(D\) a subset of a ring, \(0\in D\) and \(f_{0},\ldots,f_{n}\in D[X]\). If \(f_{n}\neq 0\) and2_ Footnote 2: We use the convention \(\deg(0)=-\infty\). \[\deg(f_{j})<d\qquad\qquad(j=0,\ldots,n)\] _then we have_ \[f:=\sum_{j=0}^{n}f_{j}X^{jd}\in D[X]\qquad\text{and}\qquad\deg(f)=nd+\deg(f_{ n})\,.\] Proof.: This can easily be checked. In the following we denote by \(\Omega_{f}\) the set of roots of the polynomial \(f\in\mathbb{C}[X]\). **Lemma 5**.: _Let \(p\in\mathbb{Z}[X]\) be monic with \(|p(0)|>1\)._ * _Let_ \(q\in\mathbb{Z}\setminus\{0\},qp(0)\in Z_{p}\) _and_ \(\ell=\ell_{p}(z)\)_. Then there exist_ \(v_{1},\ldots,v_{\ell-1}\in D_{p}\) _such that_ \[qp(0)+r=(v_{\ell-1}\cdots v_{1}r)_{p}\qquad\qquad(r\in D_{p}).\] * _Let_ \(z\in\mathbb{Z}\) _and assume that all roots of_ \(p\) _are simple. 
If_ \(g\in D_{p}[X]\) _with_ (2) \[g(\rho)=z\qquad\qquad(\rho\in\Omega_{p})\] _then_ \(g\) _is the_ \(p\)_-canonical representative of_ \(z\)_._ Proof.: This is immediately verified. Now we can present our main tool based on the arguments of [7]. **Lemma 6**.: _Let \(c\in\mathbb{N}\) and \(p\) be a monic integer polynomial with only simple roots and_ \[|p(0)|>1\quad\text{ and }\quad q|p(0)|\in Z_{p}\qquad(q\in\mathbb{N},\;q\leq(c-1)/ |p(0)|)\,.\] _Further suppose that there is some \(d\in\mathbb{N}\) such that \(p\) divides \(X^{d}+c\) and_ \[d>\deg(p)\qquad\text{ and }\qquad d\geq\max\left\{\ell_{p}(i)\;:\;i\in\{0,1, \ldots,c-1\}\right\}.\] _Then every \(z\in\mathbb{Z}\) is \(p\)-canonically representable, its \(p\)-representative can easily be deduced from its \(-c\)-representative and we have_ \[\ell_{p}(z)=d\big{(}\ell_{-c}(z)-1\big{)}+\lambda_{p,-c}(z).\] Proof.: Obviously, we have \[\rho^{d}=-c\qquad\qquad(\rho\in\Omega_{p}), \tag{3}\] and since \(d\) exceeds \(\deg(p)\) we have \[c>|p(0)|\,.\] By our prerequisites and Lemma 5 we convince ourselves that \[D:=\{0,1,\ldots,c-1\}\subseteq Z_{p}\,,\] thus for each \(i\in D\) there exist \[u_{d-1}^{(i)},\ldots,u_{0}^{(i)}\in D_{p}\] (possibly with some leading \(0\)'s) such that for \[h_{i}:=\sum_{j=0}^{d-1}u_{j}^{(i)}X^{j}\in D_{p}[X]\] we have \[\deg(h_{i})<d\,,\quad h_{i}\equiv i\pmod{p}\qquad\text{ and }\qquad h_{i}(\rho)=i \qquad\qquad(\rho\in\Omega_{p}). \tag{4}\] For \(z\in D_{p}\) we have \[\ell_{p}(z)=\lambda_{p,-c}(z)=1\] and our claim is trivial. Now let \(z\in\mathbb{Z}\setminus D_{p},\;\ell:=\ell_{-c}(z)\) and \[z=(v_{\ell-1}\cdots v_{0})_{-c}\qquad\qquad(v_{0},\ldots,v_{\ell-1}\in\{0,1, \ldots,c-1\}),\] thus \[v_{\ell-1}\neq 0,\;g:=\sum_{i=0}^{\ell-1}v_{i}X^{i}\in D[X],\;\deg(g)=\ell-1, \;z\equiv g\pmod{(X+c)},\;\lambda_{p,-c}(z)=\ell_{p}(v_{\ell-1})\,. \tag{5}\] Exploiting Lemma 4 we have \[h_{v_{\ell-1}}\neq 0,\qquad G:=\sum_{i=0}^{\ell-1}h_{v_{i}}X^{id}\in D_{p}[X] \quad\text{and}\quad\deg(G)=\deg(h_{v_{\ell-1}})+(\ell-1)d\,. \tag{6}\] Using (3) and (4) we have \[G(\rho)=\sum_{i=0}^{\ell-1}h_{v_{i}}(\rho)(\rho^{d})^{i}=\sum_{i=0}^{\ell-1}v_ {i}(-c)^{i}=g(-c)=z\qquad\qquad(\rho\in\Omega_{p})\,,\] hence \[G\equiv z\pmod{p}\] by Lemma 5. Thus \(G\) is the \(p\)-canonical representative of \(z\) and applying (6) and (5) we find \[\ell_{p}(z) = \deg(G)+1=\deg(h_{v_{\ell-1}})+(\ell-1)d+1=\ell_{p}(v_{\ell-1})-1 +(\ell-1)d+1\] \[= \lambda_{p,-c}(z)+d(\ell-1)\,.\] For the sake of completeness we collect some well-known facts of the canonical representation of integers with a negative integer base. **Proposition 7**.: _Let \(b,n\in\mathbb{N}\) with \(b>1\)._ * \(\ell_{-b}(n)\) _is odd and_ \(\ell_{-b}(-n)\) _is even._ 2. \(\ell_{-b}(\mathbb{Z})=\mathbb{N}\)__ 3. \(\ell_{-b}\) _is increasing on_ \(\mathbb{N}_{0}\) _and decreasing on_ \(-\mathbb{N}\)_._ 3. \(\ell_{-b}(n+1)=\ell_{-b}(n)\) _or_ \(\ell_{-b}(n+1)=\ell_{-b}(n)+2\)__ 4. \(\ell_{-b}(-(n+1))=\ell_{-b}(-n)\) _or_ \(\ell_{-b}(-(n+1))=\ell_{-b}(-n)+2\)__ 5. _For_ \(k\in\mathbb{N}_{0}\) _the largest positive integer of_ \(-b\)_-length_ \(2k+1\) _is_ \[n=((b-1)0\ldots 0(b-1))_{-b}=\frac{b^{2(k+1)}-1}{b+1}\,,\] _and_ \[n+1=\underbrace{(1(b-1)0\cdots 0(b-1)0)}_{2(k+1)+1}=\frac{b}{b+1}(b^{2k+1}+1)\] _is the least positive integer of_ \(-b\)_-length_ \(2k+3\)_. 
For_ \(k\in\mathbb{N}\) _the least negative integer of_ \(-b\)_-length_ \(2k\) _is_ \[-n=((b-1)0\ldots(b-1)0)_{-b}=-\frac{b}{b+1}(b^{2k}-1)\,,\] _and_ \[-(n+1)=\underbrace{(1(b-1)0\cdots(b-1)0(b-1))}_{2(k+1)}=-\frac{1}{b+1}(b^{2k+ 1}+1)\] _is the largest negative integer of_ \(-b\)_-length_ \(2(k+1)\)_._ 4. _For_ \(x,y\in\mathbb{Z}\) _we have_ \[\ell_{-b}(x+y)\leq\max\left\{\ell_{-b}(x),\ell_{-b}(y)\right\}+1\,,\] _and there exists_ \(e\in\{-3,-1,1\}\) _such that_ \[\ell_{-b}(xy)=\ell_{-b}(x)+\ell_{-b}(y)+e\,.\] Proof.: (i) E.g. see [3, Proposition 3.1]. (ii), (vi) Obvious. (iii) E.g. see [2, Lemma 5.5]. (iv), (v) Clear by (i) and (iii). (vi) This can straightforwardly be verified. (vii) We set \(k:=\ell(x)\) and \(m:=\ell(y)\) and assume \(x\leq y\). In case \(x>0\) we have \(k\leq m\) and [2, Proposition 5.3] yields \[x\leq\frac{b^{k+1}-1}{b+1}\qquad\text{ and }\qquad y\leq\frac{b^{m+1}-1}{b+1}\,,\] thus \[x+y\leq\frac{b^{m+2}-1}{b+1}\,,\] and then \[\ell(x+y)\leq m+1\,.\] Now we consider the case \(x<0\). If \(x+y\geq 0\) we see \[0\leq x+y<y\] and (ii) yields \[\ell(x+y)\leq\ell(y)\,,\] and our claim drops out. Finally, we consider \(x+y<0\). If \(y>0\) then we have \[x<x+y<0\] and (ii) implies \[\ell(x+y)\leq\ell(x)\,,\] and similarly the case \(y<0\) is settled. For the second claim see [2, Proposition 5.3]. After these preparations we are now in a position to prove our main result. For convenience we write \[\lambda:=\lambda_{p,-4}.\] (i) We immediately check \[(X^{2}-2X+2)\cdot p=X^{4}+4\] and \[0=(0)_{p},\ 1=(1)_{p},\ 2=(1100)_{p},\ 3=(1101)_{p}\,, \tag{7}\] thus \[\lambda(\mathbb{Z})=\{1,4\}. \tag{8}\] In view of \[2\in Z_{p}\qquad\text{ and }\qquad 4\geq\ell_{p}(j)\qquad(j=0,\ldots,3),\] an application of Lemma 6 with \[c=d=4\] yields our claim. (ii) By (i) and the definition of the sequence \((a_{n})\) we have \[\ell_{p}(\mathbb{Z})\subseteq\{a(n)\ :\ n\in\mathbb{N}\}\,. \tag{9}\] To show equality we convince ourselves that the sequence \((a(n))_{n\in\mathbb{N}}\) can also be written in the form \[8k+1,8k+4,8k+5,8k+8,\ldots\qquad\qquad(k=0,1,2,3,\ldots).\] Fix \(k\in\mathbb{N}_{0}\). First, choose \(n\in\mathbb{N}\) minimal with \[\ell_{-4}(n)=2k+1\,, \tag{10}\] thus by Proposition 7 and (7) \[\lambda(n)=\ell_{p}(1)=1\] and then by (i) and (10) \[\ell_{p}(n)=4(2k+1-1)+1=8k+1\,.\] Second, choose \(n\in\mathbb{N}\) maximal with (10), thus analogously as before \[\lambda(n)=\ell_{p}(3)=4\] and further \[\ell_{p}(n)=4(2k+1-1)+4=8k+4\,.\] Third, we choose \(n\in\mathbb{N}\) minimal with \[\ell_{-4}(-n)=2(k+1)\,, \tag{11}\] thus by Proposition 7 \[\lambda(-n)=\ell_{p}(1)=1\] and then \[\ell_{p}(-n)=4(2(k+1)-1)+1=8k+5\,.\] Fourth, we choose \(n\in\mathbb{N}\) maximal with (11) and deduce \[\ell_{p}(-n)=8k+8\,.\] Thus, equality in (9) is clear. (iii) The assumption of equality yields \[4(\ell_{-4}(n)-\ell_{-4}(-m))=\lambda(-m)-\lambda(n)\in\{-3,0,3\}\] by (i). But in view of (ii) this is impossible because Proposition 7 yields \[\ell_{-4}(n)\neq\ell_{-4}(-m)\,.\] (iv) Clear by (i) and Proposition 7. (v) Similarly as in the proof of (ii) we immediately verify that \[a(4n-3)\quad\text{ and }\quad a(4n-2)\] are the consecutive \(p\)-lengths of positive integers (with difference \(3\)), and the elements \[a(4n-1)\quad\text{ and }\quad a(4n)\] are the consecutive \(p\)-lengths of negative integers (also with difference \(3\)). Each pair of consecutive \(p\)-lengths of positive integers is followed by a pair of consecutive \(p\)-lengths of negative integers. 
(vi) This can immediately be checked by the definition of the sequence \(a(n)\). (vii) The first claim is trivial. To show possible equality, we may consider \[4=(130)_{-4},\;5=(131)_{-4},\;20=(230)_{-4}\,,\] \[2=(2)_{-4},\;410=(22222)_{-4},\;820=(1303030)_{-4}\] \[\lambda(1)=\ell_{p}(1)=1,\;\lambda(2)=\ell_{p}(2)=4\,,\] hence \[\lambda(20)=\lambda(4)+\lambda(5)+2\qquad\text{ and }\qquad\lambda(820)= \lambda(2)+\lambda(410)-7\,.\] (viii) We may assume \[\ell_{-4}(x)\leq\ell_{-4}(y)\,,\] and exploiting (i) and Proposition 7 we obtain \[\ell_{p}(x+y) = 4(\ell_{-4}(x+y)-1)+\lambda(x+y)\leq 4\max\left\{\ell_{-4}(x), \ell_{-4}(y)\right\}+4\leq 4\ell_{-4}(y)+4\] \[= 4(\ell_{-4}(y)-1)+\lambda(y)-\lambda(y)+4=\ell_{p}(y)+\ell_{p}(x )-4(\ell_{-4}(x)-1)-\lambda(x)-\lambda(y)+4\] \[= \ell_{p}(x)+\ell_{p}(y)-4\ell_{-4}(x)-\lambda(x)-\lambda(y)+8\leq \ell_{p}(x)+\ell_{p}(y)-4-2+8\] \[= \ell_{p}(x)+\ell_{p}(y)+2\,.\] Analogously we establish the second claim and we leave the details to the reader. (ix) Clear by [4, Proposition 2.2 and Corollary 2.3] with \(a=1\). The proof of our theorem is now completed, and we finish by rounding off the proof of Corollary 2 with a brief glance at the representation of integers by non-primitive polynomials3. Footnote 3: According to [5] we say that a polynomial is primitive if it is not of the form \(g(X^{k})\) for some \(k>1\). **Proposition 8**.: _Let \(p\in\mathbb{Z}[X]\) be monic, \(k>1\) and \(P:=p(X^{k})\)._ * \(D_{P}=D_{p}\)__ * _If_ \(g\in D_{p}[X]\) _canonically represents_ \(z\in\mathbb{Z}\) _modulo_ \(p\) _then_ \(g(X^{k})\) _canonically represents_ \(z\) _modulo_ \(P\)_. In particular, we have_ \(Z_{p}\subseteq Z_{P}\)_, and for_ \(z\in Z_{p}\) _we have_ \[\ell_{P}(z)=k(\ell_{p}(z)-1)+1\,,\] _thus_ \[\ell_{P}(z)\equiv 1\pmod{k}\,.\] _Explicitly,_ \(z=(u_{l-1}u_{l-2}\cdots u_{0})_{p}\) _implies_ \[z=(u_{l-1}wu_{l-2}w\cdots wu_{0})_{P}\] _with_ \[w:=\underbrace{0\cdots 0}_{k-1}\,.\] Proof.: (i) Obvious. (ii) Let \(t\in\mathbb{Z}[X]\) and \(n\in\mathbb{N}_{0}\) with \[pt=\sum_{i=0}^{n}u_{i}X^{i}-z\qquad\qquad(u_{0},\ldots,u_{n}\in D_{p}),\] hence \[P(X)t(X^{k})=p(X^{k})t(X^{k})=(pt)(X^{k})=\sum_{i=0}^{n}u_{i}X^{ki}-z,\] and this implies our first claim. Clearly, we have \[\ell_{p}(z)=n+1\qquad\text{ and }\qquad\ell_{P}(z)=kn+1,\] and this implies our second assertion. **Remark 9**.: 1. _The applicability of our main tool is very restricted. For instance, consider_ \[p=X^{2}+4X+8\in\mathbb{Z}[X]\] _and_ \[(X^{2}-4X+8)\cdot p=X^{4}+64\,.\] _We easily verify_4__ Footnote 4: For instance, use the algorithm given in [6]. \(8=(1340)_{p},16=(1200)_{p},24=(2540)_{p},32=(2400)_{p},40=(3740)_{p},48=(3600)_ {p},56=(1470140)_{p}\,,\) _thus in view of_ \[\ell_{p}(8q)=4\quad(q=1,\dots,6)\qquad\text{ and }\qquad\ell_{p}(8\cdot 7)=7\] _Lemma 6 cannot be applied here._ 2. _Other aspects of modified canonical number systems in_ \(\mathbb{Z}[i]\) _are thoroughly studied in_ _[_1_]__._
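To make the construction concrete, the following Python sketch (an illustration added here, not part of the paper) implements the recipe from the proof of Lemma 6 for \(p=X^{2}+2X+2\) and \(c=d=4\): write \(z\) in base \(-4\) and replace every digit by its \(p\)-representation from (7), padding all but the leading block with zeros to length \(4\). It also verifies the resulting digit string modulo \(p\) and the length formula of Theorem 1 (i).

```python
# Minimal sketch of the substitution procedure from the proof of Lemma 6,
# specialised to p = X^2 + 2X + 2 and c = d = 4 (so rho^4 = -4 for roots rho of p).
H = {0: "0", 1: "1", 2: "1100", 3: "1101"}   # p-representations of 0..3, cf. (7)

def digits_base_minus4(z):
    """Digits of z in base -4, least significant first (0 -> [0])."""
    if z == 0:
        return [0]
    digits = []
    while z != 0:
        r = z % 4                    # digit in {0, 1, 2, 3}
        digits.append(r)
        z = (z - r) // -4
    return digits

def p_representation(z):
    """p-canonical digit string of z, most significant digit first."""
    ds = digits_base_minus4(z)
    blocks = [H[d].rjust(4, "0") for d in ds]
    blocks[-1] = H[ds[-1]]           # the leading digit keeps its natural length
    return "".join(reversed(blocks))

def reduces_to(rep, z):
    """Check sum_j u_j X^j == z modulo X^2 + 2X + 2, using X^2 = -2X - 2."""
    a, b = 0, 0                      # running value a*X + b
    for ch in rep:
        a, b = b - 2 * a, -2 * a + int(ch)
    return (a, b) == (0, z)

for z in range(-200, 201):
    rep = p_representation(z)
    ds = digits_base_minus4(z)
    assert reduces_to(rep, z)
    assert len(rep) == 4 * (len(ds) - 1) + len(H[ds[-1]])   # Theorem 1 (i)
```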
2309.11197
The Languini Kitchen: Enabling Language Modelling Research at Different Scales of Compute
The Languini Kitchen serves as both a research collective and codebase designed to empower researchers with limited computational resources to contribute meaningfully to the field of language modelling. We introduce an experimental protocol that enables model comparisons based on equivalent compute, measured in accelerator hours. The number of tokens on which a model is trained is defined by the model's throughput and the chosen compute class. Notably, this approach avoids constraints on critical hyperparameters which affect total parameters or floating-point operations. For evaluation, we pre-process an existing large, diverse, and high-quality dataset of books that surpasses existing academic benchmarks in quality, diversity, and document length. On it, we compare methods based on their empirical scaling trends which are estimated through experiments at various levels of compute. This work also provides two baseline models: a feed-forward model derived from the GPT-2 architecture and a recurrent model in the form of a novel LSTM with ten-fold throughput. While the GPT baseline achieves better perplexity throughout all our levels of compute, our LSTM baseline exhibits a predictable and more favourable scaling law. This is due to the improved throughput and the need for fewer training tokens to achieve the same decrease in test perplexity. Extrapolating the scaling laws of both models results in an intersection at roughly 50,000 accelerator hours. We hope this work can serve as the foundation for meaningful and reproducible language modelling research.
Aleksandar Stanić, Dylan Ashley, Oleg Serikov, Louis Kirsch, Francesco Faccio, Jürgen Schmidhuber, Thomas Hofmann, Imanol Schlag
2023-09-20T10:31:17Z
http://arxiv.org/abs/2309.11197v1
# The Languini Kitchen: Enabling Language Modelling Research at Different Scales of Compute ###### Abstract The Languini Kitchen serves as both a research collective and codebase designed to empower researchers with limited computational resources to contribute meaningfully to the field of language modelling1. We introduce an experimental protocol that enables model comparisons based on equivalent compute, measured in accelerator hours. The number of tokens on which a model is trained is defined by the model's throughput and the chosen compute class. Notably, this approach avoids constraints on critical hyperparameters which affect total parameters or floating-point operations. For evaluation, we pre-process an existing large, diverse, and high-quality dataset of books that surpasses existing academic benchmarks in quality, diversity, and document length. On it, we compare methods based on their empirical scaling trends which are estimated through experiments at various levels of compute. This work also provides two baseline models: a feed-forward model derived from the GPT-2 architecture and a recurrent model in the form of a novel LSTM with ten-fold throughput. While the GPT baseline achieves better perplexity throughout all our levels of compute, our LSTM baseline exhibits a predictable and more favourable scaling law. This is due to the improved throughput and the need for fewer training tokens to achieve the same decrease in test perplexity. Extrapolating the scaling laws of both models results in an intersection at roughly 50,000 accelerator hours. We hope this work can serve as the foundation for meaningful and reproducible language modelling research. Footnote 1: See languini-kitchen.github.io ###### Contents * 1 Introduction * 2 Background: Language Modelling * 2.1 Why Scalability Matters * 2.2 Existing Benchmarks * 3 The Languini Books Benchmark * 3.1 Comparison based on Compute Class * 3.2 The Dataset * 3.2.1 Evaluation and Test Sets * 4 The Baselines * 4.1 Tokenisation Analysis * 4.1.1 Analysing SentencePiece Vocabularies * 4.1.2 Performance Comparison of Different Vocabulary Sizes * 4.2 The Feed-Forward Baseline * 4.2.1 Evaluation * 4.2.2 Results * 4.3 The Recurrent Baseline * 4.3.1 The Model * 4.3.2 Results * 5 The Languini Codebase * 6 Open Research Questions * 7 Conclusion * A OOD Scale Plots ## 1 Introduction Language modelling, a critical aspect of natural language processing (NLP), involves predicting the probability distribution over a sequence of words in a language. Its importance underpins a variety of NLP tasks such as machine translation (Vaswani et al., 2017), text generation (Brown et al., 2020), and question answering (Devlin et al., 2019). Presently, language modelling research primarily emphasises finetuning large pre-trained models (Ding et al., 2023; Zhang et al., 2023) as well as techniques for prompting (Liu et al., 2023) and programming with large language models (Schlag et al., 2023; Dohan et al., 2022) which have greatly improved performance across a variety of NLP tasks. However, this focus has inadvertently hampered the development of novel language modelling methodologies that require the model to be trained from scratch. The prevailing sentiment of "bigger equals better" can overshadow the potential benefits of alternative architectures and innovative methodologies, which may offer unique advantages.
Transformers, the backbone of this trend, have proven their efficacy by setting the standard across a broad spectrum of tasks (Vaswani et al., 2017). Interestingly, recent work shows how the Transformer can be derived from Fast Weight Programmers from the '90s (Schmidhuber, 1991; Katharopoulos et al., 2020; Schlag et al., 2021). However, Transformers are not without limitations. They exhibit issues such as quadratic computational complexity with respect to the sequence length, difficulty in capturing relevant tokens from large contexts (Tworkowski et al., 2023), and limitations due to the finite nature of its context (Dong et al., 2023). Furthermore, transformers have a large inference cost, which can pose a significant challenge when deploying models in resource-constrained environments (Chitty-Venkata et al., 2023; Bondarenko et al., 2021). These limitations underscore the need for continued refinement and innovation. Additionally, recent work argues that published modifications to the vanilla Transformer architecture did not meaningfully improve performance on a question-answering task (Narang et al., 2021). After extensive empirical evaluation, the authors of that study conjecture that various improvements do not transfer across implementation and tasks -- an issue also present in other machine learning areas such as e.g. recommendation systems (Ferrari Dacrema et al., 2019), optimisation (Sivaprasad et al., 2020; Choi et al., 2020), or generative adversarial networks (Lucic et al., 2018). To address these challenges, we introduce the _Languini Kitchen_, a novel benchmark, codebase, and research collective. Languini Kitchen, or just Languini, aims to create an environment that enables researchers, in particular those with limited computational resources, to make meaningful contributions to language modelling research. This is achieved through an experimental protocol that constrains experiments to various scales of compute and through a public code repository that enables reproducible experiments. Languini is a blend of the words _language_ and _linguine_ where the latter is a type of pasta similar to spaghetti which ironically stands for the research nature of the code in the Languini code repository. Recent work showed that the scaling laws are not universal across various model architectures (Tay et al., 2023). Furthermore, their results indicate that the vanilla transformer still comes out on top in a direct comparison with eleven other recently published models. To enable progress, Languini focuses on fair comparisons and reproducible results on complex and general benchmark. Different models are compared based on their performance trend as compute increases (Kaplan et al., 2020; Hoffmann et al., 2022) and the resulting scale plots will hopefully serve as a platform for identifying promising models or techniques that warrant further scale-up. For evaluation, we use a filtered version of the books3 subset from The Pile (Gao et al., 2020) and the BigScience ROOTS corpus (Laurencon et al., 2022) which has been used previously as training data for various large language models (e.g. see Scao et al. (2022); Dey et al. (2023); Biderman et al. (2023); Touvron et al. (2023a;b)). After rigorous filtering, our version of the dataset consists of approximately 85GB of high-quality monolingual text from 158,577 published books which span a large variety of modern topics and stories that significantly surpass the complexity and size of previous academic benchmarks. 
Models which are trained on the Languini Books benchmark are compared at different compute scales based on their perplexity on held-out data. This includes out of distribution splits with books on certain topics (such as learning a new language) which are excluded from the training data in order to evaluate the model's predictive ability across several books as context. Languini's open-source codebase provides a range of functions, from facilitating the model development process to logging mechanisms. The project is inspired by Scenic, a lightweight library that facilitates rapid prototyping of novel vision models (Dehghani et al., 2022). Similar to Scenic, the Languini codebase aims to keep the core functionality simple and prevents any dependencies between projects. Furthermore, researchers are encouraged to incorporate their projects into the Languini codebase, hopefully fostering a continually increasing collection of previous work to ease the comparison to new methods. In this work, we introduce the two initial Languini models: a feed-forward, GPT-based, decoder-only Transformer model (Section 4.2) and a recurrent quasi-LSTM (Section 4.3). For each model and each compute class, we empirically find the best hyperparameter configuration which results in scaling plots that allow the comparison of each model's scaling law. In summary, our research contributions are the following: * An experimental protocol for the comparison of language modelling research under different scales of compute. * A high-quality filtering of the books3 datasets for language modelling research with out of distribution splits for evaluating long-range dependencies. * A scaling law comparison between a GPT-based model and a quasi-LSTM model where the quasi-LSTM's scaling law is superior. * A codebase for researchers to simplify development and enable fair and meaningful comparison with scalability in mind. * An empirical analysis of byte-pair encoding tokenisation. ## 2 Background: Language Modelling Language modelling is a central task in NLP where raw text is typically segmented into a sequence of words or subwords using a tokeniser, which operates based on a predefined vocabulary (Mikolov et al., 2010; Al-Rfou et al., 2019). These segmented units are commonly referred to as tokens. With this tokenised representation in place, the goal of a language model becomes the prediction of a subsequent token given its preceding sequence of tokens. This objective can be formally defined as maximising the probability of a sequence of tokens \(w_{1},w_{2},...,w_{N}\): \[p(w_{1},w_{2},...,w_{N})=\prod_{t=1}^{N}p(w_{t}|w_{0},...,w_{t-1}) \tag{1}\] where \(p(w_{t}|w_{0},...,w_{t-1})\) is the probability of token \(w_{t}\) given the sequence of previous tokens \(w_{0},...,w_{t-1}\). The performance of a language model can be evaluated using the total cross-entropy loss, which for a given dataset is defined as: \[\mathcal{L}=-\sum_{t=1}^{N}\log p(w_{t}|w_{0},...,w_{t-1}) \tag{2}\] The cross-entropy measures the negative log-likelihood of the observed data under the model. Lower loss indicates a better model, but comparing raw loss values can be unintuitive. Therefore, the loss is often transformed into perplexity, a more interpretable measure, defined as: PPL \[=\exp\left(-\frac{1}{N}\sum_{t=1}^{N}\log p(w_{t}|w_{0},w_{1},..., w_{t-1})\right)\] (3) \[=\exp\left(\frac{\mathcal{L}}{N}\right)\] (4) where the cross entropy, or average loss, is equal to \(\frac{\mathcal{L}}{N}\), and \(N\) is the number of tokens in the sequence. 
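The relationship between summed cross-entropy, average loss, and perplexity in Eqs. 2-4 can be made concrete with a few lines of code. The byte-normalised variant introduced in the following paragraphs (dividing the total loss by the number of decoded bytes instead of the number of tokens) is included for completeness. This is an illustrative sketch, not code from the Languini repository.

```python
import math

def perplexity(token_log_probs):
    """Perplexity from per-token log-probabilities log p(w_t | w_<t), cf. Eqs. 2-4."""
    total_loss = -sum(token_log_probs)            # cross-entropy L
    return math.exp(total_loss / len(token_log_probs))

def normalised_perplexity(token_log_probs, num_bytes):
    """Byte-normalised perplexity: divide the total loss by the number of
    bytes of the decoded text instead of the number of tokens."""
    total_loss = -sum(token_log_probs)
    return math.exp(total_loss / num_bytes)

# A uniform model over a vocabulary of M tokens has perplexity M (derived below):
M, N = 16_384, 1_000
uniform = [math.log(1.0 / M)] * N
assert round(perplexity(uniform)) == M
```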
Consider a model predicting from vocabulary of \(M\) tokens that will predict \(p(w_{t}|w_{0},w_{1},...,w_{t-1})=\frac{1}{M}\) for any \(t\). Such a uniform model would have the following perplexity: PPL \[=\exp\left(-\frac{1}{N}\sum_{t=1}^{N}\log p(w_{t}|w_{0},w_{1},..., w_{t-1})\right)\] (5) \[=\exp\left(-\frac{1}{N}\sum_{t=1}^{N}\log\frac{1}{M}\right)\] (6) \[=\exp\left(-\log\frac{1}{M}\right)\] (7) \[=\exp\left(\log(M)\right)\] (8) \[=M\] (9) Thus, by exponentiating the average loss during training, perplexity can be interpreted as the effective vocabulary size of a uniform model. While perplexity is a standard measure in language modelling, it has limitations. The perplexity measures of two methods are only directly comparable when the same tokenisation is used. This is because its value is influenced by the granularity of tokenisation which depends on the tokenisation algorithm and vocabulary size. A larger vocabulary increases the difficulty of each individual prediction (higher loss) but may reduce the number of predictions in total (lower \(N\)). Previous work has shown that this is not an equal trade-off as increasing the vocabulary has diminishing returns (see appendix in Hutchins et al. (2022)). Consequently, models trained with different tokenisers can produce perplexity values that are not directly comparable. To alleviate this, we introduce the measure of normalised perplexity. This measure adjusts for differences in tokenisation granularity by dividing the cross-entropy with the total number of bytes of the decoded text \(B\), rather than the number of tokens \(N\): \[\text{normalised PPL}=\exp\left(\frac{\mathcal{L}}{B}\right) \tag{10}\] Normalised perplexity makes it possible to compare the same model trained on different tokenisers, as it provides a standardised measure that adjusts for the variability introduced by the choice of tokenisation. Naturally, if different models are compared using different tokenisation algorithms, it remains open if the relative difference is due to the choice of model or choice of tokenisation. Nevertheless, this measure ensures a more equitable comparison between methods and contributes to a more nuanced understanding of their relative performance. Furthermore, normalised perplexity is dataset independent, allowing also for a relative comparison of perplexity across different problems such as modelling natural language or modelling code. ### Why Scalability Matters The scalability of a language model refers to its ability to improve performance as more computational resources are invested, usually by training larger models on more training data. Scalability is a critical aspect to consider when evaluating the potential of an architecture because it indicates how well the model can leverage additional resources. Scaled-up language models, i.e. large language models (LLMs; see Zhao et al. (2023a) for a recent review), have demonstrated to be broad few-shot learners (Brown et al., 2020; Schulman et al., 2022; Chowdhery et al., 2022; OpenAI, 2023). LLMs excel on numerous tasks without or with little need for task-specific finetuning. They achieve excellent results on question answering, text summarisation, translation, and other NLP tasks (OpenAI, 2023), but are also increasingly applied to other modalities such as images (Saharia et al., 2022; Alayrac et al., 2022), audio Ghosal et al. (2023), and reinforcement learning settings Driess et al. (2023). 
However, raw LLMs do not align well with human values and additional work is necessary to transform a raw LLM into a robust, helpful, and harmless conversational agent (Bai et al., 2022a;b). While the performance of LLMs on various downstream tasks can provide valuable insights, relying on downstream performance as the main measure for comparison presents several challenges. First, downstream performance keeps improving due to the development of new finetuning and prompting strategies (Hu et al., 2021; Liu et al., 2023). Thus, any fixed prompting strategy will quickly be outdated. Second, many evaluation datasets for LLMs are too difficult for models that were trained at smaller scales. Third, evaluating such datasets adds a considerable amount of complexity to the evaluation process. Lastly, downstream performance has been found to correlate strongly with pretraining perplexity (Raffel et al., 2020). For these reasons, in this work, we only focus on the perplexity on held-out data. ### Existing Benchmarks Language modelling has a rich history with a variety of benchmarks for model evaluation. Some notable examples include Penn Treebank (PTB, Mikolov et al. (2010)), WikiText-2 (WT2, Merity et al. (2017)), WikiText-103 (Merity et al., 2017), enwik8 and enwik9 (Mahoney, 2011), and Project Gutenberg (PG19, Rae et al. (2020)). These datasets represent a broad range of sizes and complexity. The PTB and WT2 are tiny corpora with a limited vocabulary and little variety. The enwik8 and enwik9 datasets are used for evaluating the performance of compression algorithms. They consist of the first \(10^{8}\) and \(10^{9}\) bytes of an English Wikipedia XML dump from 2006. With just 1 GB of text, models often train multiple epochs on these datasets and are prone to overfit on the training data. WikiText-103 was created in 2016 and contains about 100M tokens from a fixed vocabulary of 103k different words resulting in about 515 MBs of data. It consists of preprocessed Wikipedia articles and has often been used as an academic language modelling benchmark since then. The issue with Wikipedia articles is their relatively short size. The average length of a Wikipedia article is about 3,600 words (approximately 4,300 tokens), limiting the length of long-term dependencies. The most recent and largest dataset is PG19. PG19 consists of 28,752 books with an average length of 69k tokens resulting in about 10 GB of data or about 2B training tokens when using a subword vocabulary of 32k. The PG19 dataset is large enough to train models with billions of parameters. However, all books were published over 100 years ago and thus don't reflect today's English language or diversity of topics. Besides, on previous benchmarks models were often compared simply based on the average loss or perplexity on held-out data. While such comparisons offer insights, the best models are often also the most compute-intensive ones (Brown et al., 2020). With the rise of well-funded industry labs, it has become increasingly difficult for academic labs to do research at that scale. E.g., all publications which advance the state of the art on PG19 are from Google or Google Deepmind with model sizes of up to 1.3B parameters (Hutchins et al., 2022). Training such models requires dedicated servers with multiple state of the art accelerators training for several days just to reproduce the results. Recent work presents the idea of _cramming_ experiments into a single day and a single consumer GPU (Geiping & Goldstein, 2023).
In Section 3, we will also advocate for a shift away from unconstrained perplexity comparisons. While experiments offer valuable insights, they do not adequately account for the scalability factor, a key element in training large language models. The Languini benchmark, in an effort to demonstrate scalability, compares models based on different amounts of accelerator hours, resulting in a scaling plot or scaling law (Kaplan et al., 2020; Hoffmann et al., 2022). This approach seeks to provide a fair and meaningful comparison of language modelling research at varying compute scales, thereby promoting inclusivity for research groups with limited funding resources. ## 3 The Languini Books Benchmark The Languini Books benchmark represents a notable shift from previous language modelling benchmarks. It emphasizes reproducibility, scalability, and a comparison based on accelerator hours. By focusing on these aspects, Languini fosters a direct and effective comparison of different language models based on their performance at different scales of computational resources, aligning closely with the practical reality of training and evaluating such models. ### Comparison based on Compute Class A critical component of the Languini benchmark involves the concept of a _compute class_. This measure represents the number of accelerator hours (both parallel and sequential) spent during the training of the model. It diverges from the convention of comparing models based on their number of parameters or the total number of floating point operations (FLOPs). The number of parameters or total FLOPs are hardware-agnostic metrics. However, these measures fall short of capturing the actual computational efficiency of the evaluated algorithms. Two models with an identical number of parameters or FLOPs can exhibit vastly different performances due to the model's underlying design and its ability to exploit the hardware's capabilities. In particular, these hardware-agnostic metrics fail to account for the parallelisability of a model. As advancements in semiconductor technology, particularly in parallel computing and high-performance microarchitecture, continue to reshape the industry, models that scale well with an increased number of parallel processors can vastly outperform others given the same amount of total FLOPs. On the Languini benchmark, the evaluation requires the measure of normalised perplexity (see Section 2) at different levels of accelerator hours spent. With accelerator hours increasing exponentially, this data serves to estimate the scaling law, helping researchers understand and extrapolate the trajectory of model performance as computational resources are scaled up further. In practice, the number of accelerator hours used in this paper is _not_ the actual training time but is calculated before training based on a specific model's throughput (tokens per second) w.r.t. specific hardware. This increases flexibility as it allows the model to be trained on any hardware as long as the throughput is apriori measured w.r.t. the same reference hardware. The Languini codebase provides a script to measure the throughput of any PyTorch language model. Currently, this reference hardware is the Nvidia RTX 3090, chosen for its prevalence and accessibility in academic organisations and the compute classes considered in this work are 6, 12, 24, 48, and 96 hours. We use the following software versions PyTorch 2.0.0, Triton 2.0.0, Nvidia driver 535, and CUDA version 12.2. 
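The bookkeeping behind a compute class can be sketched in a few lines; the worked example in the following paragraph goes through the same calculation in prose. The throughput figures in the usage example are taken from Table 5 and Table 6 further below; the function and variable names are illustrative and this is not the Languini throughput script itself.

```python
def token_budget(tokens_per_second, compute_hours):
    """Total number of training tokens for a compute class: T = 3600 * v * h."""
    return int(3600 * tokens_per_second * compute_hours)

def train_steps(total_tokens, batch_size, seq_len, grad_accum_steps=1):
    """Number of optimiser steps that fit into the token budget."""
    tokens_per_step = batch_size * seq_len * grad_accum_steps
    return total_tokens // tokens_per_step

def convert_hours(total_tokens, tokens_per_second_on_other_accelerator):
    """Hours needed on another accelerator to consume the same token budget."""
    return total_tokens / (3600 * tokens_per_second_on_other_accelerator)

# Example: GPT small at 55,416 tokens/s on the RTX 3090, 6h compute class.
T = token_budget(55_416, 6)                       # ~1.2B tokens
steps = train_steps(T, batch_size=128, seq_len=512)
hours_a100 = convert_hours(T, 135_107)            # ~2.5h on an A100-80GB
```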
Consider an example where we have a specific model architecture with a given hyperparameter configuration (or _config_ in short). We first evaluate its training throughput \(v\) (number of tokens per second) on our reference hardware using an untrained instance with the throughput script provided by the Languini codebase. The throughput script uses the profiler of the DeepSpeed library (Rasley et al., 2020) to measure the time it takes to perform a forward pass, a backward pass, and a weight update for any PyTorch model. For a specific compute class given by \(h\) accelerator hours, we can calculate the total number of tokens \(T\) that we can process in that time: \(T=3600vh\). Given the total number of tokens \(T\), we calculate the number of steps by dividing it by the number of tokens per batch which is batch size \(\times\) sequence length \(\times\) number of gradient accumulation steps. Note that because we measured the throughput before training we do not actually need to train our model on our reference hardware. We can train on any other or even multiple accelerators as long as we use the same model config used to measure throughput. As we train the model we log the loss or normalised perplexity at certain training step intervals. To transform the learning curves from loss-over-steps into loss-over-accelerator-time we simply multiply the current step number by the number of tokens per step and divide by the throughput of that model configuration on the reference hardware. This can be done during or after training. Furthermore, it is also possible to approximately convert the compute class of \(n\) hours on accelerator \(A\) into \(k\) hours on accelerator \(B\) through the total number of tokens \(T\). This is because given a model \(M\) we can measure the throughput on the accelerators \(A\) and \(B\) and calculate the respective accelerator hours needed to consume the same number of tokens. E.g. training a specific GPT model for \(T\) tokens takes 6h on an RTX 3090 but training the same config on an A100 takes 2.5h. We find that this factor is roughly constant throughout various scales of GPT models. Hence, future work may eventually move on to better hardware without the need to retrain all previous models. In Table 6 we included the accelerator hours for other common deep learning hardware that was available to us at the time. A limitation of this method is that certain models might perform better on new hardware. As a result, the performance ratio between model X and model Y on hardware A might differ when tested on newer hardware B. Given the common use of GPUs to train models this is effectively already the case Hooker (2021). The use of a reference accelerator is mainly to enable effective compute constraints. Future researchers may decide to use different hardware for their evaluation. But for a fair comparison, previous work would have to be also evaluated on that reference hardware. ### The Dataset The Languini codebase is designed to support various datasets. In this work, we introduce the first dataset dubbed _Languini Books_. Languini Books is a filtered version from the popular books3 dataset, a subset of The Pile (Gao et al., 2020) which has been used as training data for various LLMs (e.g. Scao et al. (2022); Dey et al. (2023); Biderman et al. (2023); Touvron et al. (2023a,b)). The books3 dataset comprises a large collection of published books, encompassing approximately 101 GB of data. 
We remove all books which are shorter than roughly 50 KB as they mostly consist of boilerplate text and little to no content. We also remove all non-English books as there are too few for any reasonable multilingual language modelling. To do so, we repeatedly sampled 200 bytes of text from each book and classify the language using langdetect (Joulin et al., 2016, 2016) until we either sampled 50 times or one language has achieved above 90% presence. We then remove all books where English is not the most common language and with more than 5 non-English samples. The only exception here are books used for the language learning data split which we elaborate further in Section 3.2.1. We tokenise all remaining books using a 32k SentencePiece model using BPE that was trained on the data of WikiText-103 (Merity et al., 2017). Through manual inspection, we find that books with relatively low average bytes per token are often undesired books with large amounts of numerical values (e.g. food calorie tables, price guides), non-latex mathematics, books with little natural text (e.g. a collection of artworks with titles, dates, author names, and auction prices, but obviously without images), or books with otherwise extensive unusual formatting (e.g. large number of lines for the reader to write down their own business plan). Upon manual inspection, we decided to remove all books with less than 3.2 average bytes per token. Lastly, we train a Gensim Doc2Vec model (Rehurek and Sojka, 2011) to encode each book as a vector representation. We then use the cosine similarity measure to find exact and near duplicates. Previous work showed that even simple deduplication methods can speed up training significantly (Tirumala et al., 2023). After extensive manual inspection, we decided to remove any books that have a cosine similarity of 0.87 or higher. This captures various duplicates and near duplicates such as new editions or books which have been published again with slight differences (such as differences due to catering to British and American markets). This step resulted in the removal of 5,514 or 3.36% of books. The final dataset consists of 84.5 GB of text data across 158,577 books with a total of 23.9B tokens given the WikiText-trained vocabulary. Each book has on average 559 KB of text or about 150k tokens, and a median of 476 KB of text or 128k tokens. We plot a T-SNE projection of the vector representations of the languini books in Figure 2 to visualise the diversity of the data. Furthermore, we distribute a list of filenames and a script with which the processed data can be extracted from the official books3 dataset. #### 3.2.1 Evaluation and Test Sets From the Languini Books data, we remove various books for evaluation purposes. This includes a standard i.i.d. test set with 80 books sampled at random. Furthermore, we create several out of distribution test sets to measure a model's ability to capture long dependencies and learn during inference through e.g. in-context learning (Dong et al., 2022; Kirsch et al., 2022), dynamic evaluation Krause et al. (2018), or meta-learning (Irie et al., 2022; Kirsch and Schmidhuber, 2021). We split these test sets into the following categories: French Language Learning, Discworld, Java, Statistics, and Woodworking. The size of each of these sets is shown in Table 1. French Language LearningThis dataset tests a model's ability to generalize to an unseen language under a curriculum. 
If the model is able to generalize online well, it should perform increasingly well on each sample in this dataset as it progresses through them. This ordered dataset consists of 17 French learning books with English text followed by 17 pure French books. Each subset is roughly ordered according to the perceived difficulty it would pose to a language model trained only on English text. As most books with \begin{table} \begin{tabular}{c c c c c c} Split & Topic & Books & Bytes & Tokens & Bytes per Token \\ \hline langlearn & French Language Learning & 34 & 16,571,748 & 6,582,737 & 2.52 \\ discworld & Discworld Series & 45 & 24,095,020 & 6,944,831 & 3.47 \\ java & Java Programming & 109 & 108,747,871 & 30,818,604 & 3.53 \\ stats & Statistics & 43 & 30,266,165 & 8,283,405 & 3.65 \\ wood & Woodworking & 19 & 7,846,725 & 2,146,089 & 3.67 \\ \end{tabular} \end{table} Table 1: Size and topics of the books from every out of distribution split. Tokens and bytes per token are measured using WikiText tokenisation from Section 4.1.2. Figure 1: Distribution of book lengths (in bytes) of the Languini Books dataset. non-English content were removed in the early preprocessing of the data, this dataset was constructed from a subset of the removed books. The French learning books were further selected for having a good balance of French and English tokens as well as having the word "French" in their title. The pure French books were arbitrarily taken from books that exclusively contained French tokens. Additional curation of the dataset was done heuristically. The dataset was ordered heuristically using titles and token counts as guides. DiscworldThe Discworld dataset consists of the available novels of Terry Pratchett set in the Discworld fantasy universe. There are 45 books in this dataset, with only 6 books missing from the main series. Books are ordered chronologically with the 35 available books in the main series first, then the 4 Science of Discworld books, and finally, the remaining 6 found books ordered arbitrarily. As the books have recurring characters, places, or themes, a language model able to generalize online well should perform increasingly well on each sample of the dataset and should do markedly worse on this dataset than it would on many other books in the principle datasets. As early processing of the dataset filtered similar books out of the dataset already, this dataset was constructed by searching for relevant keywords in the filenames of the books (i.e., "pratchett", "discworld", "diskworld", "josh kidby", or "paul kidby"). JavaThis Java dataset focuses on the Java programming language. In total, there are 109 books in this dataset covering a wide variety of applications. There is no specific ordering for this dataset; however, a language model able to generalize online well should perform increasingly well, in expectation, as it processes subsequent samples of this dataset. This dataset was created by searching through all the books that contain both the string "java" and the string "public static" (the most common type declaration in java) anywhere in their full text. Through an analysis of the titles of the found books, books deemed to not be regarding Java or a library using Java were removed from all datasets. Figure 2: T-SNE plot of the learned vector representation for each book in the Languini Books dataset. Under a small Doc2Vec model, the books cluster into semantic concepts. 
The plot gives an impression of the large variety of books due to the large number of small clusters. StatisticsThis dataset tests a model's ability to understand statistics while having little to no previous learning on the subject. It consists of 44 books. As with the Java dataset, there is no specific ordering to this dataset, but, in expectation, a model able to generalize online well should perform increasingly well as it processes subsequent samples of this dataset. This dataset was created by searching through the titles of books to find all that contained either the word "probability" or "statistics". An introductory textbook on statistics should most likely contain either of these terms. Subsequent filtering was done by hand. WoodworkingThis is a simple dataset consisting of only books related to woodworking projects. There are a total of 19 books in this dataset. There is no specific ordering to this dataset, but, as with the other datasets, a model able to generalize well online should, in expectation, perform increasingly well as it processes subsequent samples of this dataset. This dataset was created by searching for all book titles containing "woodworking" or some variation of it. Project books with woodworking will mostly contain this word. Subsequent filtering was done by hand. Note that some of the books in this dataset use images to convey some information which is not available in the dataset. ## 4 The Baselines In language modelling research, setting appropriate baselines is fundamental. Baselines offer a standard point of reference, facilitating the comparative analysis of new models or methodologies. For this study, we have selected two distinct architectures. The first is the widely-used GPT model, a highly parallelisable feed-forward architecture, for which we will conduct an in-depth performance analysis. The second is a recurrent architecture derived from the LSTM (Hochreiter & Schmidhuber, 1997). This section aims to detail these baseline models and their results. But before that, we will discuss tokenisation in Section 4.1. ### Tokenisation Analysis Tokenisation is a fundamental step in language modelling, aiming to convert input text into a sequence of tokens that a model can process. A common standard for tokenisation is currently the use of SentencePiece models (Kudo & Richardson, 2018) with Byte-Pair Encoding (BPE, Gage (1994); Sennrich et al. (2015)). These tokenisation models find the most frequent pairs of bytes in a text and repeatedly merge them to form tokens. Including all possible bytes as part of the initial vocabulary allows the tokeniser to encode any text and thus handle even words that have not been seen during training. Existing LLMs utilise vocabularies of varying sizes. These range from thousands of tokens to over 200,000 tokens. Larger vocabularies are often used for large multi-lingual models such as GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2022), or Megatron-LM (Shoeybi et al., 2019). In this subsection, we delve into some of the challenges and intricacies of using a BPE tokeniser and analyse the performance implications of different vocabulary sizes. #### 4.1.1 Analysing SentencePiece Vocabularies In this subsection we present our analysis of various SentencePiece models which were trained on the training split of the Languini Books data. We analyse vocabularies from 2,048 to 131,072 unique tokens. The analysis reveals various shortcomings of the produced byte-pair encoding vocabularies which may be addressed in future work. 
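Findings of the kind reported below can be reproduced by decoding every entry of a trained vocabulary and counting collisions. The following is a minimal sketch assuming the standard `sentencepiece` Python API and a hypothetical model file name; it is not the exact analysis script used here.

```python
from collections import Counter
import sentencepiece as spm

# "languini_32k.model" is a hypothetical file name for a trained SentencePiece model.
sp = spm.SentencePieceProcessor(model_file="languini_32k.model")
pieces = [sp.id_to_piece(i) for i in range(sp.get_piece_size())]

# Exact duplicates: distinct token ids whose surface strings are identical.
surface = Counter(p.replace("\u2581", " ") for p in pieces)  # '\u2581' marks a word boundary
exact_duplicates = sum(count - 1 for count in surface.values() if count > 1)

# Near duplicates: identical once case, whitespace, and punctuation are ignored.
def normalise(piece: str) -> str:
    return "".join(ch for ch in piece.lower() if ch.isalnum())

normalised = Counter(normalise(p) for p in pieces if normalise(p))
near_duplicates = sum(count - 1 for count in normalised.values() if count > 1)

print(exact_duplicates, near_duplicates)
```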
SentencePiece constructs duplicate tokens that are never used.In our experiments, we observed that SentencePiece tokeniser generates a number of tokens that have identical string decoding. For example, one such token is "b\(\backslash\)xef\(\backslash\)xbf\(\backslash\)xbd" which is the UTF-8 encoding for the special character U+FFFD also known as the "replacement character". There are 128 different tokens that all decode to this character. Apart from this special character, there are 80 other tokens which we found to have duplicates mapping into the same string representation. These tokens typically represent single characters, such as lower- and upper-case letters, punctuation characters and special characters such as "#", "/", "\(\_\)" etc. The two duplicate token groups are grouped together in terms of the order in the vocabulary. Interestingly, one group is at the "beginning" of the vocabulary, with token values below 200, whereas the other group is at the "end" of the vocabulary. Only one of these groups of tokens is actually used by the tokeniser to tokenise the text, ensuring that the tokenisation encoding-decoding process is deterministic. However, this phenomenon is wasteful, as in total there are 207 duplicate tokens that would ideally be used to encode other (sub-)words. We observed the same duplicate tokens across all vocabulary sizes we investigated and also across all data sets we used to extract these vocabularies. SentencePiece constructs near duplicates which make up 24.9% of the vocabulary.SentencePiece by default encodes semantically identical words into different tokens, for example, we observed that "the", "The", " the", " The", "THE" and " THE" all get assigned to different tokens. See Table 2 for further examples. Making all token string representations lowercase and removing whitespace or punctuation marks we found that there are 8,160 duplicate tokens in a vocabulary of 32,768. This does not include further similarities such as the genus, numerus, and kasus of different words. It is likely counterproductive to have separate tokens for such semantically similar strings because improving the token representation of one token does not translate to improvements in other tokens. A BPE vocabulary constructed from randomly sampled Languini Books contains 63% tokens which are identical with a vocabulary constructed from English Wikipedia.When creating the Languini Books dataset (by filtering the books3 dataset) we constructed a BPE vocabulary of size 32,768 from the WikiText-103 dataset (Merity et al., 2017). We then constructed a BPE vocabulary of the same size by training on the Languini Books dataset. Due to the different nature of text contained in books compared to Wikipedia articles, we expected these vocabularies to have large differences, and the one trained on the Languini Books dataset to offer advantages for language modelling on this dataset. To our surprise, we found that the two vocabularies share 20,771 or 63% of all tokens. The frequency of the tokens follows approximately a Zipfian distribution.Natural language has the property that the frequency of a word is approximately proportional to one over its rank when ordered (Zipf, 2013). We find that BPE vocabs closely follow this distribution. In Figure 3, we compare the frequency of tokens over their rank in the training data for vocabulary sizes ranging from 2,048 to 131,072. 
In this log-log plot, we normalised the rank of the token to the range \([0,100]\) which means that a point on the x-axis should be interpreted as a percentage of all tokens since smaller vocabularies are rescaled to fit into the same plot with larger vocabularies. The sharp drop to the right is due to rare and unused tokens as discussed above. While it is probably advantageous that the learned vocabulary follows the same distribution as natural language text, this may also be a disadvantage because the number of training steps that are performed on a specific token is directly proportional to its frequency. We further note that a larger vocabulary follows the same trend and only reduces the frequency of each token. Thus, a large vocabulary may be undesirable because it "allocates" significantly fewer training steps for a significant number of tokens. Unigram tokenisation results in an almost identical frequency distribution.In Figure 4 we compare vocabularies of size 32,768 encoded with either the BPE or unigram algorithm. The data suggests that unigram tokenisation results in a similar distribution except for the last 10% of the tokens where it \begin{table} \begin{tabular}{c c c c c} subword & +space & +case & +space, +case & +caps & +space, +caps \\ \hline “of” & “ of” & “Of” & “Of” & “OF” & - \\ “ent” & “ ent” & “Ent” & “Ent” & “ENT” & - \\ “was” & “ was” & “ Was” & “ Was” & - & “ WAS” \\ “not” & “ not” & “Not” & “ Not” & “NOT”, & - \\ “from” & “ from” & “From” & “ From” & “FROM” & - \\ “house” & “ house” & - & “ House” & - & - \\ “love” & “ love” & “ Love” & “ Love” & - & - \\ “chapter” & “ chapter” & “Chapter” & “Chapter” & “CHAPTER” & “ CHAPTER” \\ \end{tabular} \end{table} Table 2: Further examples of vocabulary entries explained by simple transformations. appears to be more equally distributed compared to BPE. When trained, the language modelling performance between a unigram and BPE tokenisation with a vocabulary of 32,768 unique tokens resulted in no significant performance difference. The vocabulary distribution is uniform across the dataset.In Figure 5, we compare the frequency distribution of 5 randomly sampled books with the distribution across 200 books. We find that the order of the tokens changes but the overall distribution is approximately identical. Figure 4: Comparison of the sorted token frequency for a vocabulary due to byte-pair and unigram tokenisation. Figure 3: Token frequency sorted and scaled such that its rank (x-axis) lies within 0 and 100. The dashed line is the ideal Zipfian distribution scaled to fit the vocabulary with 16,384 unique tokens. #### 4.1.2 Performance Comparison of Different Vocabulary Sizes Language models face the trade-off between tokenisation granularity and model complexity. A smaller vocabulary typically implies a higher granularity of tokenisation which in turn increases the total number of tokens to be processed. Conversely, a larger vocabulary results in a shorter training sequence as each token tends to represent more bytes. A larger vocabulary is particularly useful for feed-forward models due to their finite context. A large vocabulary captures more bytes of raw text per tokens, thus the feed-forward model can condition on more text while processing the same number of tokens. However, larger vocabularies increase the number of parameters and reduce the throughput. For the purpose of Languini, the ideal vocabulary size strikes a balance between computational efficiency and model perplexity. 
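The bytes-per-token statistic that quantifies this granularity (reported later in Table 3) can be estimated for any tokeniser with a short loop. The sketch below assumes a SentencePiece model file and an iterable of raw book texts; it is illustrative rather than code from the Languini repository.

```python
import sentencepiece as spm

def bytes_per_token(model_file, texts):
    """Average number of UTF-8 bytes covered by one token of the given vocabulary."""
    sp = spm.SentencePieceProcessor(model_file=model_file)
    total_bytes, total_tokens = 0, 0
    for text in texts:
        total_bytes += len(text.encode("utf-8"))
        total_tokens += len(sp.encode(text))
    return total_bytes / total_tokens

# Example usage with a hypothetical 16k vocabulary and a sample of books:
# print(bytes_per_token("languini_16k.model", sample_of_books))
```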
In this section, we empirically explore the effects of different vocabulary sizes on normalised perplexity. To study the impact of vocabulary size, we kept the model architecture (up to the embedding) and optimisation fixed and only changed the granularity of tokenisation. We trained a GPT tiny model with a batch size of 160 for 138k steps across seven different vocabulary sizes from 2,048 to 131,072. We present our results in Figure 6. The findings from our comparison highlight a clear trade-off between vocabulary size and computational efficiency. Increasing the size of the vocabulary improves performance but has diminishing gains and significantly slows down the model (see Table 3). Our results indicate that 16k and 32k vocabularies strike a good balance. They offer improved performance without an excessive increase in computational costs. Since most models in Languini will be on the smaller side, we decided on using a 16k vocabulary for the remaining experiments. ### The Feed-Forward Baseline With the decoder-only Transformer, the feed-forward approach has emerged as the new paradigm in language modelling. Its advantage is the processing and prediction of all sequence elements in a parallel manner. However, despite the theoretical ability to process longer sequences, the decoder-only Transformer model struggles to generalise to sequences that are significantly longer than what it has been trained with (Press et al., 2022), although recent work has made progress in this regard (Luo et al., 2022). Figure 5: Comparison of the frequency distribution of 5 randomly sampled books against the distribution over 200 randomly chosen books. The ability of the autoregressive decoder-only Transformer model to scale well with parallel compute has allowed such models to be scaled to hundreds of billions of parameters (Brown et al., 2020). Our implementation is based on the official TensorFlow implementation of GPT-2 (Radford et al., 2019; Karpathy, 2023). Within our implementation, we provide configurations, i.e., hyperparameter settings, for different model sizes. Our configurations follow Radford et al. (2019) with sizes ranging from 110M to 1.5B parameters. To widen the scale of models to the lower end, we have included two additional sizes, namely, mini and tiny. All models are trained with a BPE vocabulary of 16,384 unique tokens as described in Section 4.1. The decoder-only transformer architecture consists of multiple transformer layers. Each layer consists of two sublayers: the self-attention and the position-wise multi-layer perceptron (MLP). The self-attention mechanism allows the model to weigh the significance of the preceding tokens in the input sequence relative to the current token. Using multi-head attention, it can capture various types of relationships in the data. The \begin{table} \begin{tabular}{c c c|c c c c} GPT Model & gigaFLOPs & Params & \(d_{\text{model}}\) & \(n_{\text{layers}}\) & \(n_{\text{heads}}\) & \(d_{\text{head}}\) \\ \hline mini & 20.4 & 27.6M & 512 & 4 & 8 & 32 \\ tiny & 45.1 & 53.9M & 768 & 4 & 12 & 64 \\ small & 109.6 & 110.6M & 768 & 12 & 12 & 64 \\ medium & 352.4 & 336.4M & 1024 & 12 & 16 & 64 \\ large & 760.5 & 731.1M & 1536 & 24 & 16 & 96 \\ XL & 1,555.2 & 1,478.2M & 2048 & 24 & 24 & 128 \\ \end{tabular} \end{table} Table 4: Overview of our GPT model sizes, flops, parameters, and differing hyperparameters. Sizes are chosen such that they are comparable in size and flops with the GPT models.
FLOPs are the total number of floating point operations during the forward pass with a batch size of 1 as measured by the DeepSpeed flops profiler. \begin{table} \begin{tabular}{c c c} vocabulary size & bytes per token & tokens per second \\ \hline 2,048 & 2.84 & 183,065 \\ 4,096 & 3.21 & 174,568 \\ 8,192 & 3.54 & 163,448 \\ 16,384 & 3.81 & 141,434 \\ 32,768 & 4.02 & 112,855 \\ 65,536 & 4.17 & 78,778 \\ 131,072 & 4.25 & 45,742 \\ \end{tabular} \end{table} Table 3: Best throughput of a GPT tiny model with different vocabulary sizes. Figure 6: Fast evaluation of normalised perplexity on held-out data for a GPT tiny model trained with a batch size of 160 for 138k steps across seven different SentencePiece BPE vocabulary sizes. Larger vocabularies require more memory and FLOPs to run which leads to different amounts of accelerator time. position-wise MLP sublayer is applied next. It consists of 2 linear maps with the ReLU non-linearity applied in between. Typically, the middle activations of the MLP are four times the size of the hidden representation. Both sublayers are connected residually (Srivastava et al., 2015; Hochreiter, 1991) and are preceded by layer normalisation (Ba et al., 2016). We add a learned position embedding to each token before the first layer. Our implementation leverages PyTorch's support for mixed precision training (Huang et al., 2020). In this mode, suitable operations (e.g. matrix multiplication) are executed with float16, leveraging tensor cores, while parameters, for which higher precision is crucial, are kept in float32. The conversions are done automatically. This not only enhances training speed but also ensures stability. Additionally, we leverage the compile functionality of PyTorch 2.0.0 and Triton 2.0.0 (Wu, 2023) to further improve a model's throughput. All models are trained with the Adam optimiser and with a cosine learning rate decay schedule which decays the starting learning rate of 0.0006 by a factor of 100 over the total number of training steps. It's worth mentioning that our experiments did not incorporate the recently published flash attention mechanism (Dao et al., 2022), despite its growing popularity. In our experiments, the use of flash attention resulted in significant speedups but coincided with unexpected and large gradient spikes when used on our reference hardware (Nvidia's RTX 3090), hindering the model's ability to recover and continue training. In contrast, our native PyTorch implementation displayed stability throughout the entire training process of all experiments. The issue seemingly lies with certain hardware requirements of flash attention (Dao, 2023). We leave it for future work to provide an in-depth experimental evaluation of flash attention within the constraints of the Languini Books benchmark. To calculate the affordable number of training steps of every GPT model for various compute classes, we measured their throughput w.r.t. our reference hardware. For future work, we also documented the throughput of all our GPT models on other accelerators here (see Figure 7 and Table 5) and in the GPT project folder of the GitHub repository. #### 4.2.1 Evaluation We evaluate the GPT models using a fast and a slow evaluation. To track performance (average loss, normalised perplexity, or others) during training, we evaluate the models on the held-out data for just 500 batches, using a batch size of 16, the default sequence length of 512 tokens, and, most crucially, by averaging the loss across all predictions.
This is a low estimate of the actual performance of the model because the context for every prediction varies from 1 to 512 tokens and predictions with less context perform significantly worse (Kaplan et al., 2020). Ideally, we would measure performance with a batch size of 1 such that each token has the largest possible context. However, due to the nature of our implementation and the use of learned position embeddings, this would require us to recompute all previous tokens for every prediction. Figure 7: Maximum throughput (tokens per second) achieved on various accelerators for all GPT model sizes. We choose a middle ground where we evaluate over the last 128 tokens using a batch size of 1. We refer to this evaluation as the slow evaluation because it requires the processing of four times as many batches. In practice, we found that the normalised perplexity of our models does not significantly improve when evaluating only over the last 64 or the last 32 tokens of the sequence, while such evaluations are exponentially more expensive. Thus, the slow evaluation of all our GPT models only evaluates the last 128 predictions of the sequence, which results in a context length between 384 and 512 tokens. Consequently, during evaluation, we slide our batches in the temporal direction not by the sequence length of 512 tokens but by 128 tokens, which requires us to process four times as many batches (a short sketch of this sliding-window procedure is included further below). #### 4.2.2 Results Given a specific model size, its ideal throughput, and compute class, we calculate the total number of tokens we can process in that time. In Figure 8, we present the fast evaluation results (i.e. loss averaged over all tokens of a batch instead of only over the last \(n\) predictions) for the trade-off between batch size and number of training steps for all model sizes. Note that larger models have slower throughputs which means that they process fewer tokens given the same compute class. Our results indicate that the compute-optimal batch size for all GPT models lies roughly between 256 and 512 elements. All models are trained with a sequence length of 512, which puts the optimal number of tokens per batch roughly between 131k and 262k. As seen in Figure 8, the compute-optimal batch size seems to increase slightly as the same size model is trained for longer. This may indicate that the compute-optimal batch size is not constant throughout training but increases as previous work suggests (Smith et al., 2017; McCandlish et al., 2018; Shallue et al., 2018; Zhang et al., 2019). This insight is also supported by the common strategy of increasing the batch size towards the end of a long training run when training large language models. We leave it for future work to propose strategies for adaptive batch sizes and how they relate to the common practice of decaying the learning rate. In Table 6 we summarise the normalised perplexity results for each compute class by evaluating the last 128 tokens of each batch as elaborated in Section 4.2.1 (unlike the results in Figures 6 and 8 which plot the fast evaluation). Note that the hours on our reference hardware can be converted into hours on any other hardware through the total number of tokens (see Section 3.1 for further details). Using the normalised perplexity at different scales, we create a scale plot over accelerator seconds and FLOPs in Figure 9. As expected, perplexity over accelerator time closely follows a power law. We also evaluate the best GPT models for each scale on the out of distribution splits.
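As a reference for the evaluation protocol above, the sliding-window slow evaluation of Section 4.2.1 can be sketched as follows. Here, `model` is a stand-in for any autoregressive language model returning per-position logits; the batching details of the Languini codebase are omitted, so this is an illustrative sketch rather than the actual evaluation code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def slow_eval(model, token_ids, seq_len=512, stride=128):
    """Average loss over predictions with at least seq_len - stride tokens of context."""
    total_loss, total_preds = 0.0, 0
    for start in range(0, token_ids.numel() - seq_len, stride):
        window = token_ids[start : start + seq_len + 1]
        inputs, targets = window[:-1], window[1:]
        logits = model(inputs.unsqueeze(0))                # [1, seq_len, vocab]
        loss = F.cross_entropy(
            logits[0, -stride:], targets[-stride:], reduction="sum"
        )                                                  # score only the last 128 predictions
        total_loss += loss.item()
        total_preds += stride
    return total_loss / total_preds  # exponentiate (per token or per byte) for perplexity
```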
The results are listed in Table 11 below together with the evaluation of the qLSTM model of Section 4.3. ### The Recurrent Baseline Recurrent Neural Networks (RNNs), particularly the Long Short-Term Memory (LSTM; Hochreiter & Schmidhuber (1997)), have been a cornerstone in the development and evolution of deep learning for sequential data. RNNs are universal function approximators, i.e. they can be seen as general computers with finite state that can process arbitrary long inputs with constant memory and compute which is linearly proportional to the length of the sequence (Siegelmann & Sontag, 1991; Siegelmann, 1996; Schmidhuber, \begin{table} \begin{tabular}{c c c c c} GPT Model & RTX 3090 & P100-16GB & V100-32GB & A100-80GB \\ \hline mini & 238,719 (97) & 40,292 (56) & 194,130 (320) & 562,594 (320) \\ tiny & 141,864 (76) & 21,452 (40) & 108,221 (104) & 343,182 (272) \\ small & 55,416 (34) & 8,310 (16) & 43,839 (50) & 135,107 (128) \\ medium & 16,618 (10) & 2,503 (3) & 14,387 (17) & 47,851 (53) \\ large & 7,058 (4) & OOM & 7,296 (9) & 27,713 (36) \\ XL & OOM & OOM & OOM & 14,559 (20) \\ \end{tabular} \end{table} Table 5: Overview of the best tokens per second measures and their respective batch sizes in brackets for different GPT model sizes on several accelerators. OOM stands for out of memory. 1990). In contrast, the transformer model from Section 4.2, while also Turing complete (Perez et al., 2019), requires a quadratic increase in compute and memory, and, in practice, cannot effectively process arbitrary long sequences. Furthermore, recurrence can be an advantageous bias for sequence models which enables them to generalise to out of distribution settings in more systematic ways (e.g. see Anil et al. (2022)). On language modelling benchmarks, however, transformer models have outperformed RNNs. The inherent sequential computation in RNNs limits their parallel processing capabilities, something easily achieved by attention models. Unlike RNNs which compute a vector representation step by step, attention models do so by attending to the entire sequence at once. This makes it more resource-intensive but allows for time-based Figure 8: Fast evaluation of normalised perplexity on held-out data for GPT models of different sizes trained in different compute classes with different trade-offs between batch size and number of training steps. All models seem to have a similar compute-optimal batch size despite large differences in model size. parallel processing. Consequently, attention models can fully leverage the parallel processing capabilities of today's accelerators. There has been previous work on enhancing the parallel processing abilities of RNNs. One intriguing direction is the quasi-RNN (qRNN, Bradbury et al. (2016)). Typically, a qRNN has a recurrent function that can also be processed in parallel. To make this possible, the recurrent weight matrix is often removed. In this section, we introduce the quasi-LSTM (qLSTM), a qRNN which achieves significantly higher throughputs while staying as true to the original LSTM as possible. While the presented qLSTM still lags behind our GPT baseline in throughput, we find that its compute-optimal batch size is significantly smaller, and it also achieves a larger gain in perplexity while processing the same number of tokens. In comparison with the GPT baseline, the data efficiency counterbalances its relatively reduced throughput, allowing for a measurable performance on the Languini Books benchmark. 
#### 4.3.1 The Model

The qLSTM is a variation of the LSTM which is why we will first describe the LSTM model. Our LSTM model uses the same architecture as the Transformer baseline from Section 4.2 where the only difference is the multi-head attention sublayer which we replace with a multi-head LSTM cell. Analogous to the multi-head attention, the multi-head LSTM cell splits the LSTM cell into multiple heads which perform the same operation in a lower dimensional space. The following equations describe the classic LSTM cell adapted for one head: \[\mathbf{f}_{t}=\sigma(\mathbf{W}_{f}\mathbf{x}_{t}+\mathbf{U}_{f}\mathbf{h}_{t-1}+\mathbf{b}_{f}) \tag{11}\] \[\mathbf{i}_{t}=\sigma(\mathbf{W}_{i}\mathbf{x}_{t}+\mathbf{U}_{i}\mathbf{h}_{t-1}+\mathbf{b}_{i}) \tag{12}\] \[\mathbf{z}_{t}=\phi(\mathbf{W}_{z}\mathbf{x}_{t}+\mathbf{U}_{z}\mathbf{h}_{t-1}+\mathbf{b}_{z}) \tag{13}\] \[\mathbf{o}_{t}=\sigma(\mathbf{W}_{o}\mathbf{x}_{t}+\mathbf{U}_{o}\mathbf{h}_{t-1}+\mathbf{b}_{o}) \tag{14}\] \[\mathbf{c}_{t}=\mathbf{c}_{t-1}\odot\mathbf{f}_{t}+\mathbf{i}_{t}\odot\mathbf{z}_{t} \tag{15}\] \[\mathbf{h}_{t}=\mathbf{o}_{t}\odot\phi(\mathbf{c}_{t}) \tag{16}\] \[\mathbf{x}^{\prime}_{t}=\mathbf{W}_{h}\mathbf{h}_{t} \tag{17}\] where \(\mathbf{W}\in\mathbb{R}^{d_{\text{head}}\times d_{\text{head}}}\) are the feed-forward weight matrices and \(\mathbf{U}\in\mathbb{R}^{d_{\text{head}}\times d_{\text{head}}}\) are the recurrent weight matrices of this head, \(\mathbf{b}\in\mathbb{R}^{d_{\text{head}}}\) are bias vectors, \(\mathbf{x}_{t}\in\mathbb{R}^{d_{\text{head}}}\) is the hidden representation of step \(t\) of the current layer, \(\mathbf{h}_{t-1}\in\mathbb{R}^{d_{\text{head}}}\) is the state representation of this head for step \(t-1\), \(\sigma\) is the sigmoid function, \(\phi\) is the tanh function, \(\odot\) is the Hadamard product or element-wise multiplication of two tensors, \(\mathbf{c}_{t}\in\mathbb{R}^{d_{\text{head}}}\) is the cell state of this head, \(\mathbf{W}_{h}\in\mathbb{R}^{d_{\text{model}}\times d_{\text{head}}}\) is the projection back to the embedding size \(d_{\text{model}}\), and \(\mathbf{x}_{t}^{\prime}\) is the output of the LSTM sublayer.

\begin{table} \begin{tabular}{c c|c c c c|c c} compute & normalised & \multirow{2}{*}{config} & \multirow{2}{*}{batch size} & total & total & \({}^{*}\)theoretical & \({}^{*}\)theoretical \\ class & perplexity & & & train tokens & exaFLOPs & A100 hours & V100 hours \\ \hline 6h & 2.262 & small & 128 & 1.2B & 0.769 & 2.46 & 7.58 \\ 12h & 2.197 & small & 128 & 2.4B & 1.538 & 4.92 & 15.17 \\ 24h & 2.146 & small & 256 & 4.8B & 3.075 & 9.84 & 30.34 \\ 48h & 2.087 & medium & 256 & 2.9B & 5.930 & 16.67 & 55.44 \\ 96h & 2.032 & medium & 256 & 5.7B & 11.859 & 33.34 & 110.89 \\ \end{tabular} \end{table} Table 6: Average normalised perplexity results evaluated with a context of 384 or more of the overall best GPT runs of each compute class (based on the slow eval). The hours in each compute class are w.r.t. the ideal RTX 3090 throughput as measured by the Languini throughput script. From the hours and throughput, we calculate the total number of tokens to be processed during training. The total number of floating point operations is calculated by (total train tokens/sequence length) \(\times\) forward flops of the model \(\times\) 3. As in previous work, we estimate the FLOP count of the backward pass to be double the forward pass (Kaplan et al., 2020). \({}^{*}\)Given the total number of tokens we can compute the ideal accelerator hours for other hardware based on the throughput of the same model config.
Figure 9: Scale plot of the best GPT configs evaluated on the test split. Fast eval considers predictions over all tokens in the sequence. Slow eval evaluates only on the last 128 predictions which increases the minimum context per token from 1 to 384. Top: normalised perplexity over accelerator seconds. Vertical dashed lines are the different compute classes starting from 6h. Bottom: normalised perplexity over giga FLOPs. Like Kaplan et al. (2020), we estimate the FLOPs of the backward pass to be two times the forward pass.

Despite the increased parallel structure of the multi-head LSTM, each head performs multiple small matrix multiplications for every step in the sequence. With large backpropagation spans, such as the 512 steps we do in all our experiments, this results in significant sequential computation and drops in the utilisation of the accelerator hardware. By dropping the recurrent weights \(\mathbf{U}\) and the dependency of the gates on the previous state \(\mathbf{h}_{t-1}\) we further increase the parallelisability and arrive at our multi-head qLSTM formulation: \[\mathbf{f}_{t}=\sigma(\mathbf{W}_{f}\mathbf{x}_{t}+\mathbf{b}_{f}) \tag{18}\] \[\mathbf{i}_{t}=\sigma(\mathbf{W}_{i}\mathbf{x}_{t}+\mathbf{b}_{i}) \tag{19}\] \[\mathbf{z}_{t}=\phi(\mathbf{W}_{z}\mathbf{x}_{t}+\mathbf{b}_{z}) \tag{20}\] \[\mathbf{o}_{t}=\sigma(\mathbf{W}_{o}\mathbf{x}_{t}+\mathbf{b}_{o}) \tag{21}\] \[\mathbf{c}_{t}=\mathbf{c}_{t-1}\odot\mathbf{f}_{t}+\mathbf{i}_{t}\odot\mathbf{z}_{t} \tag{22}\] \[\mathbf{h}_{t}=\mathbf{o}_{t}\odot\phi(\mathbf{c}_{t}) \tag{23}\] \[\mathbf{x}_{t}^{\prime}=\mathbf{W}_{h}\mathbf{h}_{t} \tag{24}\]

Note that the only sequential operation that remains is an element-wise linear map: \[\mathbf{c}_{t}=\mathbf{c}_{t-1}\odot\mathbf{f}_{t}+\mathbf{u}_{t} \tag{25}\] where we summarised \(\mathbf{i}_{t}\odot\mathbf{z}_{t}\) into the update vector \(\mathbf{u}_{t}\in\mathbb{R}^{d_{\text{head}}}\).

A parallel implementation of recurrence. The sequential part of the qLSTM in Eq. 25 can be expanded over 4 steps of the sequence as follows. \[\mathbf{c}_{t}=\mathbf{c}_{t-1}\odot\mathbf{f}_{t}+\mathbf{u}_{t} \tag{26}\] \[\mathbf{c}_{t}=(\mathbf{c}_{t-2}\odot\mathbf{f}_{t-1}+\mathbf{u}_{t-1})\odot\mathbf{f}_{t}+\mathbf{u}_{t} \tag{27}\] \[\mathbf{c}_{t}=((\mathbf{c}_{t-3}\odot\mathbf{f}_{t-2}+\mathbf{u}_{t-2})\odot\mathbf{f}_{t-1}+\mathbf{u}_{t-1})\odot\mathbf{f}_{t}+\mathbf{u}_{t} \tag{28}\] \[\mathbf{c}_{t}=(((\mathbf{c}_{t-4}\odot\mathbf{f}_{t-3}+\mathbf{u}_{t-3})\odot\mathbf{f}_{t-2}+\mathbf{u}_{t-2})\odot\mathbf{f}_{t-1}+\mathbf{u}_{t-1})\odot\mathbf{f}_{t}+\mathbf{u}_{t} \tag{29}\] \[\begin{split}\mathbf{c}_{t}=&\;\mathbf{c}_{t-4}\odot\mathbf{f}_{t-3}\odot\mathbf{f}_{t-2}\odot\mathbf{f}_{t-1}\odot\mathbf{f}_{t}\\ &+\mathbf{u}_{t-3}\odot\mathbf{f}_{t-2}\odot\mathbf{f}_{t-1}\odot\mathbf{f}_{t}\\ &+\mathbf{u}_{t-2}\odot\mathbf{f}_{t-1}\odot\mathbf{f}_{t}\\ &+\mathbf{u}_{t-1}\odot\mathbf{f}_{t}\\ &+\mathbf{u}_{t}\end{split} \tag{30}\]

We can rewrite Eq. 30 as
\[[\mathbf{c}_{t}]_{j}=\sum_{i}\begin{bmatrix}\mathbf{c}_{t-4}\\ \mathbf{u}_{t-3}\\ \mathbf{u}_{t-2}\\ \mathbf{u}_{t-1}\\ \mathbf{u}_{t}\end{bmatrix}_{i,j}\begin{bmatrix}\mathbf{f}_{t-3}\odot\mathbf{f}_{t-2}\odot\mathbf{f}_{t-1}\odot\mathbf{f}_{t}\\ \mathbf{f}_{t-2}\odot\mathbf{f}_{t-1}\odot\mathbf{f}_{t}\\ \mathbf{f}_{t-1}\odot\mathbf{f}_{t}\\ \mathbf{f}_{t}\\ 1\end{bmatrix}_{i,j} \tag{31}\]

Or more generally, we can describe a tensor \(\mathbf{\mathsf{F}}\in\mathbb{R}^{4\times 5\times d_{\text{head}}}\) which consists of the following matrix of vectors \[\mathbf{\mathsf{F}}=\begin{bmatrix}\mathbf{f}_{t-3}&1&0&0&0\\ \mathbf{f}_{t-3}\odot\mathbf{f}_{t-2}&\mathbf{f}_{t-2}&1&0&0\\ \mathbf{f}_{t-3}\odot\mathbf{f}_{t-2}\odot\mathbf{f}_{t-1}&\mathbf{f}_{t-2}\odot\mathbf{f}_{t-1}&\mathbf{f}_{t-1}&1&0\\ \mathbf{f}_{t-3}\odot\mathbf{f}_{t-2}\odot\mathbf{f}_{t-1}\odot\mathbf{f}_{t}&\mathbf{f}_{t-2}\odot\mathbf{f}_{t-1}\odot\mathbf{f}_{t}&\mathbf{f}_{t-1}\odot\mathbf{f}_{t}&\mathbf{f}_{t}&1\end{bmatrix} \tag{33}\] such that all cell states of the block can be computed in parallel from the stacked vectors of Eq. 31 (the general form referred to as Eq. 34 in Table 7).

Running several experiments at different scales with models that achieve 100-fold less throughput is unfeasible. For this reason, we limit our experimental evaluation to the fastest LSTM variant, the Multi-Head qLSTM model with a block length of 8 or 16. For easier comparison, we define the qLSTM model sizes in Table 8 to be within roughly 10% of the parameter and/or total FLOP count of the respective GPT model sizes from Table 4.

\begin{table} \begin{tabular}{c c c c c c} Model & \(n_{\text{heads}}\) & \(d_{\text{head}}\) & block length & tokens per second & implementation \\ \hline LSTM small & 1 & 768 & - & 1,462 & minimal for-loop \\ LSTM small & 12 & 64 & - & 1,464 & minimal for-loop \\ qLSTM small & 1 & 768 & - & 4,499 & minimal for-loop \\ qLSTM small & 12 & 64 & - & 4,494 & minimal for-loop \\ \hline qLSTM small & 1 & 768 & 1 & 2,235 & block-parallel \\ qLSTM small & 12 & 64 & 1 & 2,352 & block-parallel \\ qLSTM small & 12 & 64 & 16 & 11,143 & block-parallel \\ qLSTM small & 12 & 64 & 32 & 8,638 & block-parallel \\ \end{tabular} \end{table} Table 7: The best throughputs achieved on the RTX 3090 for the LSTM and quasi-LSTM with different implementations, block lengths and number of heads. The minimal for-loop implementation computes Eq. 25 in sequence whereas the block-parallel implementation uses Eq. 34 to compute all tokens within a block in parallel. Increasing the block length results in higher throughput and higher hardware utilisation but is less memory efficient.

\begin{table} \begin{tabular}{c c c|c c c c c} qLSTM Model & gigaFLOPs & Params & \(d_{\text{model}}\) & \(n_{\text{layers}}\) & \(n_{\text{heads}}\) & \(d_{\text{head}}\) & block length \\ \hline mini & 19.9 & 27.8M & 512 & 4 & 8 & 32 & 16 \\ tiny & 44.3 & 55.9M & 768 & 4 & 12 & 64 & 8 \\ small & 107.3 & 117.3M & 768 & 12 & 12 & 64 & 16 \\ medium & 352.7 & 361.1M & 1024 & 12 & 16 & 64 & 16 \\ large & 780.4 & 787.0M & 1536 & 24 & 16 & 96 & 16 \\ XL & 1,633.6 & 1,628.2M & 2048 & 24 & 24 & 128 & 16 \\ \end{tabular} \end{table} Table 8: Overview of our qLSTM model sizes, flops, parameters, and differing hyperparameters. FLOPs are the total number of floating point operations during the forward pass with a batch size of 1 as measured by the DeepSpeed flops profiler.
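As an illustration of the block-parallel evaluation of Eq. 25 via the expansion in Eqs. 26-31, below is a minimal NumPy sketch. The naming is our own and, instead of materialising the tensor \(\mathbf{\mathsf{F}}\) of Eq. 33 as the block-parallel implementation of Table 7 does, it uses an equivalent cumulative-product formulation, which is safe here because the sigmoid gates are strictly positive and block lengths are short (8-16).

```python
import numpy as np

def qlstm_block(c_prev, F, U):
    """Evaluate c_t = c_{t-1} * f_t + u_t (Eq. 25) for a whole block at once.
    c_prev : (d,)   cell state entering the block
    F, U   : (B, d) forget gates f_t and update vectors u_t = i_t * z_t
    Returns C : (B, d), the cell states of all B steps of the block."""
    B, _ = F.shape
    P = np.cumprod(F, axis=0)                      # P[k] = f_1 * ... * f_{k+1}
    C = c_prev[None, :] * P                        # contribution of the incoming cell state
    # contribution of u_i to step k (i <= k) is u_i * f_{i+1} * ... * f_k = u_i * P[k] / P[i]
    ratio = P[None, :, :] / P[:, None, :]          # ratio[i, k] = P[k] / P[i]
    mask = np.triu(np.ones((B, B)))[:, :, None]    # keep only i <= k
    C += (U[:, None, :] * ratio * mask).sum(axis=0)
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B, d = 16, 8
    F = 1.0 / (1.0 + np.exp(-rng.normal(size=(B, d))))   # sigmoid forget gates
    U = rng.normal(size=(B, d))
    c = rng.normal(size=d)
    # sequential reference, i.e. the "minimal for-loop" of Table 7
    ref, cc = [], c.copy()
    for t in range(B):
        cc = cc * F[t] + U[t]
        ref.append(cc.copy())
    assert np.allclose(qlstm_block(c, F, U), np.array(ref))
```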
#### 4.3.2 Results

We present the best normalised perplexity scores for the compute classes from 6h to 96h in Table 10. In Figure 10 (bottom) we compare the total FLOPs of the best qLSTM and GPT models. We find that the qLSTM models counter-balance their 5-6 times slower throughput with a faster convergence which requires fewer tokens than the GPT models to achieve a similar perplexity. As a result, our qLSTM model beats the GPT baseline after roughly 2 exaFLOPs. As indicated in the previous section, our qLSTM implementation does not make ideal use of the accelerator hardware. In Figure 10 (top) we compare our best qLSTM models with our best GPT models on the basis of accelerator hours and we find that the qLSTM lags behind on all scales up to 96h. However, we observe that the qLSTM achieves a steeper scaling law than the GPT models, indicating a cross-over point at roughly 50,000 accelerator hours.

We compare the qLSTM with the GPT model on our out of distribution splits in Table 11 (and Figure 11 in the appendix). On the langlearn, discworld, and wood splits the models tend to have higher normalised perplexity. On the java and stats splits this is less the case. In fact, both 96h models have lower normalised perplexity on java and stats books than on the regular test split.

## 5 The Languini Codebase

The Languini Kitchen codebase is fundamentally a research-focused codebase, created with the intent of being easy to use, comprehensible, and sufficiently performant for our benchmark. One of our primary objectives is to provide researchers with an environment that enables them to draw meaningful and equitable comparisons with prior works. Furthermore, Languini also serves as a platform for identifying promising methods that have the potential to be scaled up. The codebase supports data parallel experiments but not model parallel. Model parallelism is necessary to scale up experiments to very large models, typically with billions of parameters or more, by distributing the model across multiple devices. However, few researchers will have access to such expansive computational resources, and such experiments are thus outside the motivation of Languini. Models trained in Languini ought to fit within the GPU memory of the chosen reference hardware.

\begin{table} \begin{tabular}{c c|c c c c} compute class & normalised perplexity & config & batch size & total train tokens & total exaFLOPs \\ \hline 6h & 2.518 & mini & 80 & 2.05B & 0.239 \\ 12h & 2.463 & tiny & 80 & 1.60B & 0.414 \\ 24h & 2.361 & small & 84 & 0.96B & 0.605 \\ 48h & 2.280 & small & 80 & 1.93B & 1.211 \\ 96h & 2.215 & small & 160 & 3.85B & 2.421 \\ \end{tabular} \end{table} Table 10: Normalised perplexity values of the best qLSTM runs for each compute class. The number of tokens and exaFLOPs are computed as in Table 6.

\begin{table} \begin{tabular}{c c c c} qLSTM Model & RTX 3090 & V100-32GB & A100-80GB \\ \hline mini & 94,781 (82) & 82,318 (112) & 186,145 (284) \\ tiny & 36,930 (60) & 33,533 (84) & 62,031 (213) \\ small & 11,143 (17) & 9,060 (22) & 23,433 (63) \\ medium & 2,509 (5) & 1,929 (7) & 6,731 (22) \\ large & 1,006 (2) & 773 (3) & 4,064 (13) \\ XL & OOM & OOM & 1,720 (5) \\ \end{tabular} \end{table} Table 9: Overview of the best tokens per second measures and their respective batch sizes in brackets for different qLSTM model sizes on several accelerators. OOM stands for out of memory.

The Languini codebase is inspired by Scenic, a lightweight library for the development of vision models (Dehghani et al., 2022). It similarly provides various model-agnostic features, ranging from logging and data loading to training and evaluation functionalities.
Figure 10: Scale plot of the best qLSTM configs and the best results of the GPT models using the slow evaluation over the last 128 predictions per batch on the test split. Top: normalised perplexity over accelerator seconds. Vertical dashed lines are the different compute classes starting from 6h. Bottom: normalised perplexity over total FLOPs. Like previous work, we estimate the FLOPs of the backward pass to be two times the forward pass (Kaplan et al., 2020).

In order to maintain clarity and avoid complexity, experiments and the code will be placed in distinct and isolated project folders. Every research endeavour will have its exclusive project directory, complete with its own necessary library code. This preserves simplicity and ensures that each project remains independent of subsequent advancements. For this reason, Languini will prevent interdependencies between projects. Once a project concludes, its respective folder ought to remain unchanged to guarantee reproducibility. Although this approach may lead to some code redundancy, we believe this is a justifiable trade-off for a research-based codebase to prevent the core from deteriorating.

To be listed in the Languini leaderboard, researchers are expected to provide not just the model code, but also configurations for all compute classes. Furthermore, every project has to provide scripts to download training logs and final model checkpoints from an archival hoster. We recommend utilising Zenodo.org, a reputable open repository developed by CERN under the European OpenAIRE program, for this purpose (European Organization For Nuclear Research & OpenAIRE, 2013).

The Languini Kitchen codebase is licensed under Apache 2.0, granting researchers the freedom to employ the code as they deem fit. Nonetheless, we urge researchers to contribute their most noteworthy results as a dedicated project folder, accompanied by instructions and a reference to their published work. This will further facilitate reproducibility and allow peers to draw comparisons with ease.

## 6 Open Research Questions

The field of language modelling has never been more exciting. With a benchmark where virtually all models are underfitting, it shifts the focus away from ad-hoc regularisation techniques to innovations that will hopefully be directly applicable at scale. In this section, we highlight just a few of the interesting directions that future work may explore.

Better tokenisation. Our exploration of common BPE tokenisation vocabularies, as detailed in Section 4.1, has brought several intriguing findings to light. Notably, many tokens can be derived using elementary symmetries. We also observed that the size of the vocabulary can substantially influence performance. These discoveries underscore the potential for innovative tokenisation methods. While recent studies underscore the benefits of byte-level models, they remain inferior to BPE tokenisers in compute-constrained experiments.

Implementational efficiency. Recent work, such as flash attention, has highlighted the inefficiencies inherent in the native implementation of resource-intensive aspects of the model. Enhanced compilers, libraries, or a more in-depth understanding of a low-level implementation could boost the throughput of a model without necessitating conceptual changes. An example of such is Rockmate (Zhao et al., 2023b) which is a tool to make models more memory efficient at the cost of re-computing certain activations.
Optimisation improvements. While our experiments simply utilise Adam, there have been various advancements in the optimisation of language models. However, a recent study indicates that some perceived advantages diminish when experiments account for data or compute disparities (Kaddour et al., 2023). The Languini Books benchmark, being more expansive and akin to large-scale data than prior academic benchmarks, coupled with the existing model implementation within the Languini codebase, can facilitate a better assessment of novel optimisation techniques.

\begin{table} \begin{tabular}{c c|c c c c c} compute & \multirow{2}{*}{model} & \multicolumn{5}{c}{normalised perplexity on \(\mathrm{ood\ splits}\)} \\ class & & langlearn & discworld & java & stats & wood \\ \hline 6h & GPT & 4.042 & 2.393 & 2.264 & 2.131 & 2.370 \\ & qLSTM & 5.168 & 2.670 & 2.772 & 2.531 & 2.719 \\ 12h & GPT & 3.744 & 2.326 & 2.165 & 2.062 & 2.293 \\ & qLSTM & 4.865 & 2.619 & 2.736 & 2.466 & 2.639 \\ 24h & GPT & 3.521 & 2.273 & 2.104 & 2.011 & 2.232 \\ & qLSTM & 4.525 & 2.526 & 2.588 & 2.354 & 2.511 \\ 48h & GPT & 3.330 & 2.217 & 2.036 & 1.948 & 2.153 \\ & qLSTM & 4.158 & 2.440 & 2.464 & 2.252 & 2.424 \\ 96h & GPT & 3.135 & 2.158 & 1.977 & 1.898 & 2.088 \\ & qLSTM & 3.834 & 2.343 & 2.324 & 2.176 & 2.325 \\ \end{tabular} \end{table} Table 11: Evaluation of the best GPT and qLSTM models on all out of distribution splits.

Introduction of new models. Languini provides a feed-forward and a recurrent baseline. Each approach has its unique strengths and limitations. Over the past few years, several models have been published which declare their supremacy over the decoder-only transformer model in some way, but few have demonstrated their scalability. Examples of such are the following: a Linear Transformer (Schmidhuber, 1991; Katharopoulos et al., 2020; Schlag et al., 2021) called TransNormer (Qin et al., 2023), a block-recurrent Transformer (Hutchins et al., 2022), a novel parallelisable RNN called RWKV (Peng et al., 2023), or a state-space model for language modelling called H3 (Fu et al., 2023). Unfortunately, each one of them has been trained and evaluated on different data and hardware making a direct comparison impossible. The Languini Books benchmark, however, could serve as a platform for such models to demonstrate their benefits in a fair and reproducible way with scalability in mind.

Advancement in theory. The Languini Books benchmark boasts a significant enough scale to empirically demonstrate model-specific scaling laws. Furthermore, our preliminary results indicate that the compute-optimal batch size is also model-specific and depends weakly on the size of the model but more work is required to establish a principled approach that scales.

Enhanced generalisation. The Languini Books dataset incorporates several out of distribution splits. These splits mirror the divergence between the data on which the language model was trained and the context wherein it is deployed. The splits we introduced emphasize vast volumes of unique context that were removed from the training corpus, necessitating models to adapt and learn on the fly. Given the limited context of current models, this may demand novel strategies, possibly via efficient online learning algorithms or novel and dynamic architectures equipped with the capacity to meta-learn.
## 7 Conclusion In this work, we introduced the Languini Kitchen, a research collective and codebase designed to democratize language modelling research by facilitating meaningful contributions across varying scales of computational resources. We presented an experimental protocol that emphasizes the use of accelerator hours as a more informative and equitable metric for comparison, addressing limitations inherent in the conventional measures of the number of parameters or FLOPs. Utilising a filtered version of the books3 dataset, we demonstrated the utility of our approach in offering a fair and meaningful platform for comparing language models. We provided two baseline models, a feed-forward model based on the GPT-2 architecture and a recurrent model based on the new LSTM variation designed for larger throughput. Our empirical analysis revealed that while the GPT-2-based model performs strongly in absolute terms, the quasi-LSTM exhibits superior scaling laws, converging more efficiently with fewer tokens. As a future direction, the scalability of our quasi-LSTM model offers intriguing possibilities for optimization and performance improvement. Furthermore, the Languini Kitchen's codebase is open for community contributions, encouraging ongoing research and development aimed at improving the performance of language models and identifying new candidates to be scaled up. By setting new standards for fair comparison and offering tools for practical implementation, we believe that the Languini Kitchen lays the foundation for advancing the state of the art in language modelling research. #### Broader Impact Statement The Languini Kitchen aims to democratize access to state-of-the-art language modelling research by creating an equitable framework for comparing performance across different scales of computational resources. In doing so, it opens up opportunities for researchers and institutions with limited computational capabilities to contribute meaningfully to the field. This democratization can lead to increased diversity in research perspectives, potentially yielding innovative solutions to existing problems and fostering greater inclusivity in the field of machine learning. Lastly, it's worth considering that any advancements in language modelling, including those made more accessible by the Languini Kitchen, come with ethical implications related to data privacy, algorithmic bias, and the potential misuse of generated text. As the Languini Kitchen makes it easier to develop more capable language models, it also magnifies the importance of ensuring that these technologies are developed and deployed responsibly. #### Author Contributions * **Aleksandar Stanic:** Ran experiments and contributed to the codebase, manuscript, and discussions. * **Dylan Ashley:** Contributed to the dataset and discussions. * **Louis Kirsch:** Contributed to the manuscript and discussions. * **Oleg Serikov:** Contributed in running experiments. * **Francesco Faccio:** Contributed to the presentation of the project and the manuscript. * **Jurgen Schmidhuber:** Advised the project. * **Thomas Hofmann:** Advised the project. * **Imanol Schlag:** Initiated and led the project. Ran experiments and contributed to the codebase, dataset, and manuscript. #### Acknowledgments We extend our sincere thanks to Bobby He and Sotiris Anagnostidis for their valuable feedback on the initial draft of this manuscript and to Vincent Herrmann for helping to setup the website. 
This work was partially funded by ERC Advanced grant no: 742870 to J. Schmidhuber.
2306.17363
Quantum optimization algorithm based on multistep quantum computation
We present a quantum algorithm for finding the minimum of a function based on multistep quantum computation and apply it for optimization problems with continuous variables, in which the variables of the problem are discretized to form the state space of the problem. Usually the cost for solving the problem increases dramatically with the size of the problem. In this algorithm, the dimension of the search space of the problem can be reduced exponentially step by step. We construct a sequence of Hamiltonians such that the search space of a Hamiltonian is nested in that of the previous one. By applying a multistep quantum computation process, the optimal vector is finally located in a small state space and can be determined efficiently. One of the most difficult problems in optimization is that a trial vector is trapped in a deep local minimum while the global minimum is missed, this problem can be alleviated in our algorithm and the runtime is proportional to the number of the steps of the algorithm, provided certain conditions are satisfied. We have tested the algorithm for some continuous test functions.
Hefeng Wang, Hua Xiang
2023-06-30T01:58:23Z
http://arxiv.org/abs/2306.17363v1
# Quantum optimization algorithm based on multistep quantum computation ###### Abstract We present a quantum algorithm for finding the minimum of a function based on multistep quantum computation and apply it for optimization problems with continuous variables, in which the variables of the problem are discretized to form the state space of the problem. Usually the cost for solving the problem increases dramatically with the size of the problem. In this algorithm, the dimension of the search space of the problem can be reduced exponentially step by step. We construct a sequence of Hamiltonians such that the search space of a Hamiltonian is nested in that of the previous one. By applying a multistep quantum computation process, the optimal vector is finally located in a small state space and can be determined efficiently. One of the most difficult problems in optimization is that a trial vector is trapped in a deep local minimum while the global minimum is missed, this problem can be alleviated in our algorithm and the runtime is proportional to the number of the steps of the algorithm, provided certain conditions are satisfied. We have tested the algorithm for some continuous test functions. Introduction Optimization problem is one of the most important problems in science and engineering. It includes a wide class of problems ranging from molecular modeling, quantum mechanical calculations, machine learning, to combinatorial optimization. These problems can be classified into different categories, e.g., continuous or discrete optimization, constrained or unconstrained optimization, convex or nonconvex optimization, differentiable or nondifferentiable optimization, deterministic or stochastic optimization [1; 2; 3; 4; 5], etc. There is no universal optimization algorithm. Most classical optimization algorithms start with a trial vector that is varied by using different techniques to find the optimum of an objective function. The cost of the algorithms can become very expensive due to the increase of the dimension of the state space of the problem, which is known as "the curse of dimension". Another problem that often happens for optimization algorithms is that the trial vector is trapped in a deep local minimum, while missing the global minimum of the objective function. Optimization has also been studied in the framework of quantum computation. Adiabatic quantum computing (AQC) is designed for solving combinatorial optimization problems [6], in which starting with the ground state of a simple initial Hamiltonian, the system is evolved adiabatically to a final Hamiltonian whose ground state encodes the solution to the optimization problem. Despite the theoretical guarantee of the adiabatic theorem, the condition of adiabaticity in AQC is difficult to maintain in practice, since the allowed rate of evolution is determined by the minimum energy gap between the ground and the first excited states of the adiabatic evolution Hamiltonian, which is not known _a priori_. Quantum annealing is a heuristic quantum optimization algorithm [7; 8; 9; 10; 11] that can be viewed as a relaxation of AQC, where the conditions of adiabaticity are not met and the evolution time from an initial Hamiltonian to the final Hamiltonian is determined heuristically. Whether or not quantum annealing can provide quantum speed-up over classical heuristic algorithms is still not clear. 
Variational quantum algorithms such as quantum approximate optimization algorithm (QAOA) [12] are hybrid quantum-classical algorithms designed for near-term noisy intermediate-scale quantum computers [13] without performance guarantees. It is known that in the infinite depth limit, the QAOA recovers adiabatic evolution and would converge to the optimal solution. The gradient decent methods are used for optimization problems with continuous variables. The methods find local minima of a smooth function by moving along the direction of the steepest descent. Quantum algorithm provides an efficient way in calculating numerical gradients [14], and has been used in iterative algorithms for polynomial optimization [15]. Optimization algorithms based on gradient decent require that the objective function to be smooth, and they have the problem of being trapped in a local minimum and missing the global minimum. Besides, as the dimensionality of the problem increases, the search of the phase space becomes more and more complicated, and the complexity of the algorithm increases. Another approach for continuous optimization is by using Grover's search algorithm [16]. Continuous optimization problems can be discretized and mapped to a search problem, thereby solved by using Grover's algorithm. The Grover adaptive search algorithms iteratively apply Grover search to find the optimum value of an objective function [17; 18; 19; 20; 21; 22; 23], and can achieve quadratic speedup over classical search algorithms. However, these brute force methods are prohibitively expensive due to the large search space of the problems. In a recent work [24], we proposed an efficient quantum algorithm for solving a search problem with nested structure through multistep quantum computation. The problem can be decomposed and the search space of the problem can be reduced in a polynomial rate. The runtime of the algorithm is proportional to the number of steps of the algorithm. In this work, we generalize this algorithm for optimization problems with continuous variables. The nested structured search problem [24] is a search problem that contains \(N\) items with one target item, and can be decomposed by using \(m\) [\(O(\log N)\)] oracles to construct \(m\) Hamiltonians, respectively, as \[H_{P_{i}}=-\sum_{\eta_{i}\in\Pi_{i}}|\eta_{i}\rangle\langle\eta_{i}|,\quad i =1,\ldots,m \tag{1}\] and \[H_{P_{m}}=H_{m}=H_{P}=-|\eta\rangle\langle\eta|, \tag{2}\] where the set \(\Pi_{i}\) contains \(N_{i}\) marked items in the \(N\) items and \(|\eta_{i}\rangle\) are the marked states associated with the marked items, and \(|\eta\rangle\) is the target state that defines the problem Hamiltonian of the search problem. These sets are nested as \(\Pi_{1}\supset\cdots\supset\Pi_{m-1}\supset\Pi_{m}\) with sizes \(N_{1}\), \(\cdots\), \(N_{m-1}\), \(N_{m}=1\), respectively. The ratio \(N_{i-1}/N_{i}\) are polynomial large, and \(N_{0}=N\). The goal is to find the the target state \(|\eta\rangle\) that is associated with the target item in the set \(\Pi_{m}\). Our algorithm solves the nested structured search problem by finding the ground state of the problem Hamiltonian \(H_{P}\) via a multistep quantum computation process, which is realized through quantum resonant transition (QRT) [25; 26]. In this algorithm, a probe qubit is coupled to an \(n\)-qubit register \(R\) that represents the problem. 
We construct a sequence of intermediate Hamiltonians to form a Hamiltonian evolution path to the problem Hamiltonian as \[H_{i}=\frac{N_{i}}{N}H_{0}+\left(1-\frac{N_{i}}{N}\right)H_{P_{i}},\quad i=0,1, \ldots,m-1, \tag{3}\] where \(H_{0}=-|\psi_{0}\rangle\langle\psi_{0}|\) and \(|\psi_{0}\rangle=\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}|j\rangle\). Then we start from the ground state of the initial Hamiltonian, and evolve it through the ground states of the intermediate Hamiltonians sequentially through QRT to reach the ground state of the problem Hamiltonian. The ground state of an intermediate Hamiltonian is protected in an entangled state of the probe qubit and the register \(R\), such that it can be used repeatedly without making copies. Therefore the algorithm circumvents the restriction of the no-cloning theorem [27; 28] and realizes the multistep quantum computation. The algorithm can be run efficiently provided that: \((i)\) the energy gap between the ground and the first excited states of each Hamiltonian \(H_{i}\) and, \((ii)\) the overlaps between ground states of any two adjacent Hamiltonians are not exponentially small. For the nested structured search problem, the conditions of the algorithm are satisfied since the ratio \(N_{i-1}/N_{i}\) are polynomial large, therefore it can be solved efficiently, and the conditions for efficiently running our algorithm are not equivalent to those of the AQC algorithms [24]. In this algorithm, by using the Hamiltonians \(H_{P_{i}}\) sequentially in each step, the dimension of the search space of the problem is reduced in a polynomial rate, the solution state to the problem Hamiltonian is obtained step by step. The idea of reducing the search space in a polynomial rate step by step in our algorithm has a classical analogue as follows: suppose there are 80 balls, all of them have equal weights except one that is lighter than the others. How to find the lighter ball? If we randomly pick up a ball and compare its weight with the other balls, this will take about 40 trials on average. If we have a balance, then how many times do we have to use the balance to find the lighter ball? According to information theory, the number of times the balance has to be used is \(\log 80/\log 3\approx 4\). The procedure is as follows: we divide all the 80 balls into 3 groups, each group has 27, 27 and 26 balls, respectively; then pick up the two groups that both have 27 balls, and use the balance to determine if they have equal weights. If the answer is positive, pick the group with 26 balls and divide it into 3 groups again: \(9,9,8\); otherwise, take the group that is lighter and divide it into three new groups: \(9,9,9\). This process can be repeated until the lighter ball is found. In this example, we can see that the problem is divided into a series of nested sub-problems and the size of the search space is reduced in a rate about \(1/3\) by using a balance. The target ball is found through an iterative procedure and the cost is reduced exponentially. By using a different oracle in each step, the QRT procedure in our algorithm emulates the usage of the balance in solving the nested structured search problem. The procedure for solving the nested structured search problem can be applied for optimization problems that are transformed to finding the ground state of a problem Hamiltonian in quantum computation. Here, we propose a quantum algorithm based on multistep quantum computation for optimization problems with continuous variables. 
We first discretize the variables of the objective function to construct the state space of the problem. Then we construct a sequence of intermediate Hamiltonians to reach the problem Hamiltonian by decomposing the problem using a set of threshold values, and apply a multistep quantum computation process to reduce the search space of the problem step by step. The solution vector to the optimization problem is narrowed in a small state space and can be determined efficiently through measurements. If the search spaces of the Hamiltonians are reduced in a polynomial rate by using an appropriate set of threshold values, then the optimum of the function can be obtained efficiently. Meanwhile if the global minimum of the optimization problem is in the state space of the problem, then it can be obtained efficiently. The problem in many optimization algorithms where the trial vector is trapped in a deep local minimum and missing the global minimum can be avoided in our algorithm, provided the above conditions are satisfied. In quantum computing, the dimension of the Hilbert space of the qubits increases exponentially with the number of qubits, it is more efficient to represent a large state space on a quantum computer than on a classical computer, therefore increasing the probability of finding the global minimum of the problem. This paper is organized as follows: in Sec. II, we describe the quantum algorithm for optimization problems with continuous variables based on multistep quantum computation; in Sec. III, we apply the algorithm for some test optimization problems, and we close with a discussion. Quantum optimization algorithm based on multistep quantum computation Let \(S\) be the domain of \(\mathbf{x}\), an optimization problem can be formulated as a minimization problem: \[\text{minimize }F(\mathbf{x})\text{: subject to }\mathbf{x}\in S, \tag{4}\] where \(F\) is a real-valued objective function and \(\mathbf{x}\) is the vector of the variables. Here we focus on optimization problems with continuous variables, which can be described as follows: for a real-valued function of \(r\) variables, \(F\left(x_{1},x_{2},\cdots,x_{r}\right)\), find a vector of the variables such that the function has the minimum value. In the following, we present a quantum optimization algorithm based on multistep quantum computation for this problem. We discretize the continuous variables in the function domain into intervals of same length for all the variables, and map the problem on a quantum computer. For simplicity, suppose each variable is discretized into \(l\) elements in its definition domain, the dimension of the state space of the function is \(l^{r}\). We prepare \(r\) quantum registers and each register contains \(\lceil\log_{2}l\rceil\) qubits that represents the elements of the variable. Therefore \(n=r\lceil\log_{2}l\rceil\) qubits form the register \(R\) that represents the problem with state space of size \(N=2^{n}\) on a quantum computer. A vector of the discretized variables \(x_{1}^{(i)}\), \(x_{2}^{(j)}\), \(\ldots\), \(x_{r}^{(k)}\) is represented by state \(|ij\cdots k\rangle\), where \(x_{s}^{(j)}\) represents the \(j\)th element of the variable \(x_{s}\). The states \(|i\rangle\), \(|j\rangle\), \(\cdots\), \(|k\rangle\) are binary representation of the elements \(x_{1}^{(i)}\), \(x_{2}^{(j)}\), \(\ldots\), \(x_{r}^{(k)}\) on the quantum registers. 
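As a purely classical illustration of this discretization and register indexing (the grid, the example objective, and all names below are our own choices), the composite index of a grid point is obtained by concatenating the per-variable bit strings:

```python
import numpy as np

l = 8                                   # grid points per variable
r = 2                                   # number of variables
bits = int(np.ceil(np.log2(l)))         # qubits per variable register
n_qubits = r * bits                     # total register size n = r * ceil(log2 l)
x_grid = np.linspace(-1.0, 1.0, l)      # discretized domain of each variable

def to_index(i, j):
    """Concatenate the per-variable bit strings into the composite index J."""
    return (i << bits) | j

def from_index(J):
    return (J >> bits) & (2**bits - 1), J & (2**bits - 1)

# Tabulate F_J for an example objective F(x1, x2) = x1**2 + x2**2.
F = np.full(2**n_qubits, np.inf)        # unused bit patterns (if l < 2**bits) stay "inf"
for i in range(l):
    for j in range(l):
        F[to_index(i, j)] = x_grid[i]**2 + x_grid[j]**2

J_min = int(np.argmin(F))
print(n_qubits, J_min, from_index(J_min), F[J_min])
```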
These vectors form the computational basis states (CBS) of \(r\) quantum registers of dimension \(N\) as \(|J\rangle=|i\rangle|j\rangle\ldots|k\rangle\), \(J=0\), \(1\), \(\ldots\), \(N-1\), and the corresponding function value is \(F\left(x_{1}^{(i)},x_{2}^{(j)},\ldots,x_{r}^{(k)}\right)=F\left(J\right)=F_{J}\). The task is to find the vector \(|Q\rangle=|q_{1}\rangle|q_{2}\rangle\ldots|q_{r}\rangle\) such that \(F\left(x_{1}^{(q_{1})},x_{2}^{(q_{2})},\ldots,x_{r}^{(q_{r})}\right)\) is the minimum of the function \(F\). By using an oracle \(O_{F}\) where \(O_{F}|J\rangle|0\rangle=|J\rangle|F\left(J\right)\rangle\), the Hamiltonian of the optimization problem can be constructed as \[H_{F}|J\rangle=F_{J}|J\rangle, \tag{5}\] where \(F_{J}\) are eigenvalues of \(H_{F}\) with corresponding eigenstates \(|J\rangle\). The problem of finding the minimum of the function \(F\) is transformed to finding the ground state of the Hamiltonian \(H_{F}\) and its corresponding eigenvalue. We apply a multistep quantum computation process for solving this problem. We first estimate the range of the function value as \([F_{\min},\,F_{\max}]\), and prepare a set of threshold values {\(d_{1}\), \(d_{2}\), \(\ldots\), \(d_{m}\)}, and \(F_{\max}>d_{1}>d_{2}>\ldots>d_{m}>F_{\min}\). Then we construct \(m\) Hamiltonians as: \[H_{P_{i}}|J\rangle=h_{J}|J\rangle,\quad i=1,\ldots,m \tag{6}\] where \[h_{J}=\begin{cases}-1,\,\text{if}\,F_{J}\leq d_{i},\\ \,0,\,\text{if}\,F_{J}>d_{i}\,,\end{cases} \tag{7}\] and \(H_{P_{m}}=H_{m}=H_{P}\). This can be achieved by using an oracle that recognizes whether \(F_{J}\) is larger or less than a threshold value \(d_{i}\). It is a comparison logic circuitry and can be implemented efficiently on a quantum computer [29, p.264][17, 20, 30]. The CBS associated with integers that are less than or equal to \(d_{i}\) form a set \(A_{i}\) with size \(N_{i}\). They have the nested structure as \(A_{m}\subset A_{m-1}\subset\cdots\subset A_{1}\). The ground state of the problem Hamiltonian \(H_{P}\) contains CBS in \(A_{m}\) with eigenvalues that are below the threshold value \(d_{m}\). We construct a sequence of Hamiltonians that form a Hamiltonian evolution path to the problem Hamiltonian \(H_{P}\) as \[H_{i}=\frac{M_{i}}{N}H_{0}+\left(1-\frac{M_{i}}{N}\right)H_{P_{i}},\quad i=0, 1,\ldots,m-1, \tag{8}\] where \(H_{0}=-|\psi_{0}\rangle\langle\psi_{0}|\) and \(|\psi_{0}\rangle=\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}|j\rangle\), and \(M_{i}\) is an approximate estimation of \(N_{i}\). We have demonstrated that as \(M_{i}=N_{i}\), the conditions for efficiently running the algorithm are satisfied provided that the ratio \(N_{i-1}/N_{i}\) are polynomial large [24]. The parameters \(M_{i}\) can be estimated efficiently by using the Monte Carlo sampling method [31], and we can adjust the threshold values such that the ratio \(N_{i-1}/N_{i}\) are polynomial large. Detailed analysis of the effect of \(M_{i}\) on the efficiency of the algorithm is presented in the appendix. The ground state of \(H_{P}\) can be obtained through the following multistep quantum computation process based on QRT in \(m\) steps. We use the \(i\)th step of the algorithm to illustrate the procedures. In the \(i\)th step, given the Hamiltonian \(H_{i-1}\), its ground state eigenvalue \(E_{0}^{(i-1)}\) and the ground state \(|\varphi_{0}^{(i-1)}\rangle\) obtained from the previous step, we are to prepare the ground state \(|\varphi_{0}^{(i)}\rangle\) of \(H_{i}\) by using the QRT method. 
The algorithm requires \((n+1)\) qubits with a probe qubit coupling to the \(n\)-qubit register \(R\). The algorithm Hamiltonian of the \(i\)th step is constructed as \[H^{(i)}=-\frac{1}{2}\omega\sigma_{z}\otimes I_{N}+H_{R}^{(i)}+c\sigma_{x} \otimes I_{N}, \tag{9}\] where \[H_{R}^{(i)}=\alpha_{i}|1\rangle\langle 1|\otimes H_{i-1}+|0\rangle\langle 0|\otimes H _{i},\,i=1,2,\cdots,m, \tag{10}\] \(I_{N}\) is the \(N\)-dimensional identity operator, and \(\sigma_{x}\) and \(\sigma_{z}\) are the Pauli matrices. The first term in Eq. (9) is the Hamiltonian of the probe qubit, the second term contains the Hamiltonian of the register \(R\) and describes the interaction between the probe qubit and \(R\), and the third term is a perturbation with \(c\ll 1\). The parameter \(\alpha_{i}\) is used to re-scale the energy levels of \(H_{i-1}\), and the ground state energy of \(\alpha_{i}H_{i-1}\) is used as a reference energy level to the ground state eigenvalue \(E_{0}^{(i)}\) of \(H_{i}\). The initial state of the \((n+1)\) qubits is set as \(|1\rangle|\varphi_{0}^{(i-1)}\rangle\), which is an eigenstate of \(H_{R}^{(i)}\) with eigenvalue \(\alpha_{i}E_{0}^{(i-1)}\). First we obtain the eigenvalue \(E_{0}^{(i)}\) of \(H_{i}\) by using the QRT method through varying the frequency of the probe qubit as shown in Ref. [24]. Then we set \(\alpha_{i}=\left(E_{0}^{(i)}-\omega\right)/E_{0}^{(i-1)}\), such that the condition of \(E_{0}^{(i)}-\alpha_{i}E_{0}^{(i-1)}=\omega\) for resonant transition between the probe qubit and the transition between states \(|\varphi_{0}^{(i-1)}\rangle\) and \(|\varphi_{0}^{(i)}\rangle\) is satisfied. When obtaining the eigenvalue \(E_{0}^{(i)}\) of \(H_{i}\), we can also obtain the overlap \(g_{0}^{(i)}=\langle\varphi_{0}^{(i-1)}|\varphi_{0}^{(i)}\rangle\) between the ground states of \(H_{i-1}\) and \(H_{i}\) through the Rabi's formula [32]. Then we can set the optimal runtime \(t_{i}=\pi/(2cg_{0}^{(i)})\) at which the probability for the system to be evolved to the state \(|0\rangle|\varphi_{0}^{(i)}\rangle\) reaches its maximum. The procedures for obtaining the ground state of \(H_{i}\) are summarized as follows: \((i)\) Initialize the probe qubit to its excited state \(|1\rangle\) and the register \(R\) in state \(|\varphi_{0}^{(i-1)}\rangle\); \((ii)\) Implement the unitary evolution operator \(U(t_{i})=\exp\left(-iH^{(i)}t_{i}\right)\); \((iii)\) Read out the state of the probe qubit. The system is approximately in state \(\sqrt{1-p_{0}^{(i)}}|1\rangle|\varphi_{0}^{(i-1)}\rangle+\sqrt{p_{0}^{(i)}}|0 \rangle|\varphi_{0}^{(i)}\rangle\) as the resonant transition occurs, where \(p_{0}^{(i)}=\sin^{2}\left(ct_{i}g_{0}^{(i)}\right)\) is the decay probability of the probe qubit of the \(i\)th step. The state \(|\varphi_{0}^{(i-1)}\rangle\) from the previous step is protected in this entangled state. By performing a measurement on the probe qubit, if the probe decays to its ground state \(|0\rangle\), it indicates that the resonant transition occurs and the system evolves from the state \(|1\rangle|\varphi_{0}^{(i-1)}\rangle\) to the state \(|0\rangle|\varphi_{0}^{(i)}\rangle\); otherwise if the probe qubit stays in state \(|1\rangle\), it means that the register \(R\) remains in state \(|\varphi_{0}^{(i-1)}\rangle\), then we repeat procedures \(ii)\)-\(iii)\) until the probe qubit decays to its ground state \(|0\rangle\). Therefore we can obtain the ground state \(|\varphi_{0}^{(i)}\rangle\) of \(H_{i}\) deterministically. 
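The following is a small dense-matrix simulation of one such QRT step, Eqs. (9)-(10). All sizes, couplings and names are our own toy choices, and the relation \(p_{0}^{(i)}=\sin^{2}(ct_{i}g_{0}^{(i)})\) holds only approximately for small \(c\):

```python
import numpy as np
from scipy.linalg import expm

# Toy simulation of one QRT step (Eqs. 9-10); all sizes and parameter values are our own choices.
N = 8
psi0 = np.ones(N) / np.sqrt(N)
H0 = -np.outer(psi0, psi0)
H_prev = (4 / N) * H0 - (1 - 4 / N) * np.diag((np.arange(N) < 4).astype(float))  # H_{i-1}, 4 marked states
H_cur = (2 / N) * H0 - (1 - 2 / N) * np.diag((np.arange(N) < 2).astype(float))   # H_i, 2 marked states

E_prev, V_prev = np.linalg.eigh(H_prev)
E_cur, V_cur = np.linalg.eigh(H_cur)
g0 = abs(V_prev[:, 0] @ V_cur[:, 0])               # overlap of the two ground states

omega, c = 0.5, 0.01                               # probe frequency and weak coupling
alpha = (E_cur[0] - omega) / E_prev[0]             # rescaling so that E_0^(i) - alpha*E_0^(i-1) = omega

sz = np.diag([1.0, -1.0]); sx = np.array([[0.0, 1.0], [1.0, 0.0]])
P0 = np.diag([1.0, 0.0]); P1 = np.diag([0.0, 1.0])  # |0><0| and |1><1| of the probe qubit
I_N = np.eye(N)
H = (-0.5 * omega * np.kron(sz, I_N) + np.kron(P1, alpha * H_prev)
     + np.kron(P0, H_cur) + c * np.kron(sx, I_N))   # Eq. (9) with H_R^(i) from Eq. (10)

psi_in = np.kron([0.0, 1.0], V_prev[:, 0])          # probe excited, register in |phi_0^(i-1)>
t = np.pi / (2 * c * g0)                            # optimal runtime of this step
psi_t = expm(-1j * H * t) @ psi_in
p_decay = np.linalg.norm(psi_t[:N]) ** 2            # probability of finding the probe in |0>
print(g0, p_decay)                                  # at resonance p_decay is close to sin^2(c*g0*t) = 1
```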
By protecting the state \(|\varphi_{0}^{(i-1)}\rangle\) through entanglement, the state can be used repeatedly without copying it, such that the algorithm realizes multistep quantum computation. The runtime of the algorithm is proportional to the number of steps of the algorithm, and the success probability of the algorithm is polynomial large by setting the coupling coefficient appropriately [24]. After running the algorithm for \(m\) steps, we obtain the ground state of the problem Hamiltonian \(H_{P}\), which is a superposition state of a few CBS with eigenvalues below the threshold value \(d_{m}\). Then we can perform measurement on the state and find the CBS that has the minimum function value, therefore solving the optimization problem. We can run the algorithm for a few rounds by discretizing the variables in the neighborhood of the optimized vector to improve the precision of the solution to the optimization problem. The algorithm can be run efficiently if both the energy gap between the ground and the first excited states of each Hamiltonian \(H_{i}\) and the overlap between ground states of any two adjacent Hamiltonians \(g_{0}^{(i)}\) are not exponentially small. By solving the eigen-problem of the Hamiltonian \(H_{i}\), these conditions can be satisfied if the ratio \(N_{i-1}/N_{i}\) are polynomial large, and the parameters \(M_{i}\) are set such that: the point \(\left(N_{i}/N,M_{i}/N\right)\) is far away from the neighborhood of the point \(\left(0,1/2\right)\), and \(2M_{i}\left(N-N_{i}\right)<N^{2}\). Detailed analysis is shown in the appendix. ## III Application of the algorithm for some test functions We now apply the quantum optimization algorithm described above for some test functions of optimization problem: the Damavandi function, the Griewank function and the Price function. ### The Damavandi function The two dimensional Damavandi function is defined as \[f_{\text{Damavandi}}\left(x_{1},x_{2}\right)=\left[1-\left|\frac{\sin\left[\pi \left(x_{1}-2\right)\right]\sin\left[\pi\left(x_{2}-2\right)\right]}{\pi^{2} \left(x_{1}-2\right)\left(x_{2}-2\right)}\right|^{5}\right]\left[2+\left(x_{1 }-7\right)^{2}+2\left(x_{2}-7\right)^{2}\right], \tag{11}\] and the graph of this function is shown in Fig. 1. It has a very sharp global minimum of zero at \(\left\{x_{1}=2,\,x_{2}=2\right\}\). For classical optimization algorithms based on the gradients methods, it is very easy for a trial vector to be trapped in the bowl-like local minimum, while missing the global minimum. The overall success probability of current global optimization algorithms for finding the global minimum of this function is about \(0.25\%\)[33]. To apply our algorithm for this optimization problem, the two variables of the Damavandi function are discretized into 281 elements evenly with an interval of 0.05 in the range \([0,14]\). The dimension of the state space of the function is 78961. By discretizing the value of the function in an interval of 0.01 in the range of \([0,149]\), and counting the number of states in each interval, we can obtain the distribution of states of the function in each energy interval as shown in Fig. 2. The largest degeneracy is about 56, which is a small number compare to the dimension of the state space. We construct a set of threshold values as \(\{d_{1}=70\), \(d_{2}=30\), \(d_{3}=15\), \(d_{4}=7\), \(d_{5}=4\), \(d_{6}=3\), \(d_{7}=2.5\), \(d_{8}=2.2\), \(d_{9}=2.1\), \(d_{10}=2.02\), \(d_{11}=2.0\), \(d_{12}=1.0\}\), and run the algorithm. 
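The sizes \(N_{i}\) of the nested threshold sets on such a grid can also be tabulated classically. The sketch below (our own code, whose grid conventions may differ slightly from those used for the numbers quoted next) evaluates the discretized Damavandi function of Eq. (11) and counts the grid points at or below each threshold \(d_{i}\):

```python
import numpy as np

def damavandi(x1, x2):
    """Two-dimensional Damavandi function (Eq. 11); np.sinc handles the x = 2 limit."""
    num = np.abs(np.sinc(x1 - 2.0) * np.sinc(x2 - 2.0)) ** 5    # sinc(z) = sin(pi z)/(pi z)
    return (1.0 - num) * (2.0 + (x1 - 7.0) ** 2 + 2.0 * (x2 - 7.0) ** 2)

# Discretize both variables into 281 points with spacing 0.05 on [0, 14]; 281**2 = 78961 grid points.
grid = np.linspace(0.0, 14.0, 281)
X1, X2 = np.meshgrid(grid, grid, indexing="ij")
F = damavandi(X1, X2).ravel()                                   # the values F_J over the grid

thresholds = [70, 30, 15, 7, 4, 3, 2.5, 2.2, 2.1, 2.02, 2.0, 1.0]
sizes = [int((F <= d).sum()) for d in thresholds]               # N_i = |A_i| for each threshold d_i
print(len(F), sizes)
print("global minimum on the grid:", F.min(), np.unravel_index(F.argmin(), X1.shape))
```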
The dimensions of the corresponding state spaces in each step of the algorithm are reduced to \(\{56634\), \(24939\), \(11573\), \(4452\), \(1772\), \(892\), \(448\), \(178\), \(94\), \(20\), \(5\), \(1\}\), respectively. The dimension of the state space is reduced smoothly with reduction rates of \(\{0.717\), \(0.440\), \(0.464\), \(0.385\), \(0.398\), \(0.503\), \(0.502\), \(0.397\), \(0.528\), \(0.213\), \(0.250\), \(0.200\}\) in each step of the algorithm, respectively. The parameter \(M_{i}\) can be estimated through Monte Carlo sampling; if it is set approximately to \(N_{i}\) as above, the conditions of the algorithm can be satisfied. The ratio \(M_{i}/N\) that is closest to \(1/2\) is \(0.316\). We can see that the dimension of the state space is reduced to a few CBS after a number of steps. Therefore the final state that encodes the solution to the optimization problem can be read out and checked efficiently to find the global optimum of the function.

Figure 1: (Color online) The graph of the Damavandi function.

### The Griewank function

The Griewank function has the form \[f_{\text{Griewank}}(x_{1},\cdots,x_{n})=\frac{1}{4000}\sum_{k=1}^{n}x_{k}^{2}-\prod_{k=1}^{n}\cos\left(\frac{x_{k}}{\sqrt{k}}\right)+1. \tag{12}\] Fig. 3 shows the second-order Griewank function with two variables; we can see that the function has many local minima. For classical optimization algorithms, it is very easy for a trial vector to be trapped in one of the local minima, while missing the global minimum of the function. This situation can be avoided in our algorithm. We discretize the two variables of the Griewank function into 801 elements evenly with an interval of 0.1 in the range \([-40,40]\). The dimension of the state space of the function is 641601. By discretizing the function value in intervals of 0.0001, the distribution of states in each energy interval of the function is shown in Fig. 4. The largest degeneracy is 32. The threshold value set is constructed as \(\{d_{1}=1.0\), \(d_{2}=0.6\), \(d_{3}=0.4\), \(d_{4}=0.3\), \(d_{5}=0.2\), \(d_{6}=0.1\), \(d_{7}=0.06\), \(d_{8}=0.04\), \(d_{9}=0.02\), \(d_{10}=0.01\), \(d_{11}=0.005\), \(d_{12}=0.002\}\). The sizes of the corresponding state spaces for each step of the algorithm are {197363, 76951, 34453, 18937, 8283, 2033, 723, 319, 77, 23, 5, 1}, respectively. The dimension of the state space of the problem is reduced smoothly in each step of the algorithm at rates of {0.31, 0.39, 0.45, 0.55, 0.44, 0.25, 0.36, 0.44, 0.24, 0.30, 0.22, 0.20}, respectively. After running the algorithm for a number of steps, the state space of the problem is reduced to a very small space and can be read out to calculate the corresponding function value and find the global minimum.

### The Price function

The Price01 function can be written in the form \[f_{\rm Price}\left(x_{1},x_{2}\right)=\left(|x_{1}|-5\right)^{2}+\left(|x_{2}|-5\right)^{2}, \tag{13}\] with four minima as shown in Fig. 5. Our algorithm can be applied to obtain the four vectors corresponding to the minimum of the function in a degenerate state. The two variables of the Price function are discretized into 201 elements evenly with an interval of 0.1 in the range \([-10,10]\). The dimension of the state space of the problem is 40401. A set of threshold values is constructed as \(\{d_{1}=20\), \(d_{2}=10\), \(d_{3}=5\), \(d_{4}=2\), \(d_{5}=1\), \(d_{6}=0.5\), \(d_{7}=0.2\), \(d_{8}=0.1\), \(d_{9}=0.05\), \(d_{10}=0.02\), \(d_{11}=0.01\}\).
The dimensions of the corresponding state spaces in each round of the algorithm are reduced to \(\{25108\), \(12532\), \(6260\), \(2484\), \(1220\), \(596\), \(244\), \(116\), \(52\), \(20\), \(4\}\), respectively. The corresponding reduction rates in each round of the algorithm are \(\{0.62\), \(0.50\), \(0.50\), \(0.40\), \(0.49\), \(0.49\), \(0.41\), \(0.48\), \(0.45\), \(0.38\), \(0.20\}\). The final state is in an equal superposition of the four global minima of the function and can be obtained by readout of the state of the circuit.

Figure 2: Distribution of states by discretizing the value of the Damavandi function in intervals of 0.01.

## IV Discussion

In this work, we present a quantum optimization algorithm for solving optimization problems with continuous variables based on multistep quantum computation. The state space of the problem is constructed by discretizing the variables of the objective function. By applying a multistep quantum computation process, the search space of the problem can be reduced step by step. We construct a sequence of Hamiltonians based on a set of threshold values, such that the search spaces corresponding to the Hamiltonians form a nested structure. If the dimension of the search spaces is reduced sequentially at a polynomial rate, then the algorithm can be run efficiently. The reduction rate can be adjusted by setting the threshold values appropriately. The final state obtained by the algorithm is a superposition of a few CBS (or a single CBS) and the minimum of the function can be determined efficiently by measuring the state and evaluating the corresponding function value.

Figure 3: (Color online) The second-order Griewank function.

One of the most difficult problems for optimization algorithms is that a trial vector is trapped in a deep local minimum, while missing the global minimum. In our algorithm, we locate the global minimum of the problem by using a number of threshold values, and obtain the corresponding state vector through a multistep quantum computation process by narrowing the search space of the problem step by step. The global minimum can be obtained if it is in the state space of the problem and the conditions of the algorithm are satisfied. One advantage of quantum computing is that an exponential number of CBS can be stored in a polynomial number of qubits. Therefore we can construct a large state space of the problem by using a small number of qubits, thus increasing the probability of finding the global minimum of the objective function. The precision of the algorithm can be improved by running the algorithm for a few rounds in the neighborhood of the minimum being found.

Figure 4: Distribution of states by discretizing the value of the second-order Griewank function in intervals of 0.0001.

###### Acknowledgements.

We thank A. Miranowicz and F. Nori for helpful discussions. This work was supported by National Key Research and Development Program of China (2021YFA1000600), the Fundamental Research Funds for the Central Universities (Grant No. 11913291000022), and the Natural Science Fundamental Research Program of Shaanxi Province of China under grant No. 2022JM-021.

Figure 5: (Color online) The Price01 function.

## Appendix A Solving the eigen-problem of the intermediate Hamiltonians

In the following, we solve the eigen-problem of the intermediate Hamiltonian to calculate the energy gap between the ground and the first excited states of the Hamiltonian, and the overlap between the ground states of two adjacent Hamiltonians.
In the quantum optimization algorithm, we construct a sequence of intermediate Hamiltonians to form a Hamiltonian evolution path to the problem Hamiltonian as \[H_{i}=\frac{M_{i}}{N}H_{0}+\left(1-\frac{M_{i}}{N}\right)H_{P_{i}},\quad i=1,2,\cdots,m \tag{A1}\] where \[H_{0}=-|\psi_{0}\rangle\langle\psi_{0}|, \tag{A2}\] with \(|\psi_{0}\rangle=\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}|j\rangle\), and \[H_{P_{i}}=-\sum_{q_{i}\in A_{i}}|q_{i}\rangle\langle q_{i}|, \tag{A3}\] where \(A_{1}\supset\cdots\supset A_{m-1}\supset A_{m}\) are determined by Eqs. (6) and (7), and the sizes of the sets \(A_{1}\), \(\cdots\), \(A_{m}\) are \(N_{1}\), \(\cdots\), \(N_{m}\), respectively, with \(N_{1}>\cdots>N_{m}\). Let \(H_{m}=H_{P}=H_{P_{m}}\) and the set \(A_{m}\) contains the target states \(|q\rangle\) with size \(N_{m}\). We construct a Hamiltonian evolution path \(H_{0}\to H_{1}\rightarrow\cdots\to H_{m}=H_{P}\) and start from the ground state \(|\varphi_{0}^{(0)}\rangle\) of \(H_{0}\), evolve it through the ground states of the intermediate Hamiltonians sequentially via quantum resonant transition (QRT), and finally reach the ground state \(|\varphi_{0}^{(m)}\rangle\) of \(H_{P}\) in \(m\) steps. The algorithm can be run efficiently provided that \((i)\) the energy gap between the ground and the first excited states of each Hamiltonian and \((ii)\) the overlaps between the ground states of any two adjacent Hamiltonians are not exponentially small.

In the following we solve the eigen-problem of the Hamiltonian \(H_{i}\). Let \[|\psi_{0}\rangle=\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}|j\rangle=\frac{1}{\sqrt{N}}\sum_{q_{i}\in A_{i}}|q_{i}\rangle+\sqrt{\frac{N-N_{i}}{N}}|q_{i}^{\perp}\rangle, \tag{A4}\] where \[|q_{i}^{\perp}\rangle=\frac{1}{\sqrt{N-N_{i}}}\sum_{k\notin A_{i}}|k\rangle. \tag{A5}\]
\tag{14}\] Then in the basis \(\left(\left\{|q_{i}\rangle\right\}_{q_{i}\in A_{i}},\,|q_{i}^{\perp}\rangle\right)\), we have \[H_{0}=-|\psi_{0}\rangle\langle\psi_{0}|=-\left(\begin{array}{cccc}\frac{1}{N}&\cdots&\frac{1}{N}&\frac{\sqrt{N-N_{i}}}{N}\\ \vdots&\ddots&\vdots&\vdots\\ \frac{1}{N}&\cdots&\frac{1}{N}&\frac{\sqrt{N-N_{i}}}{N}\\ \frac{\sqrt{N-N_{i}}}{N}&\cdots&\frac{\sqrt{N-N_{i}}}{N}&\frac{N-N_{i}}{N}\end{array}\right), \tag{15}\] and \[H_{P_{i}}=-\sum_{q_{i}\in A_{i}}|q_{i}\rangle\langle q_{i}|=-\left(\begin{array}{cccc}1&\cdots&0&0\\ \vdots&\ddots&\vdots&\vdots\\ 0&\cdots&1&0\\ 0&\cdots&0&0\end{array}\right). \tag{16}\] Then \[H_{i}=\frac{M_{i}}{N}H_{0}+\left(1-\frac{M_{i}}{N}\right)H_{P_{i}}=-\left(\begin{array}{ccccc}\frac{M_{i}}{N^{2}}+1-\frac{M_{i}}{N}&\frac{M_{i}}{N^{2}}&\cdots&\frac{M_{i}}{N^{2}}&\frac{M_{i}\sqrt{N-N_{i}}}{N^{2}}\\ \frac{M_{i}}{N^{2}}&\frac{M_{i}}{N^{2}}+1-\frac{M_{i}}{N}&\cdots&\vdots&\vdots\\ \vdots&\vdots&\ddots&&\\ \frac{M_{i}}{N^{2}}&\frac{M_{i}}{N^{2}}&\cdots&\frac{M_{i}}{N^{2}}+1-\frac{M_{i}}{N}&\frac{M_{i}\sqrt{N-N_{i}}}{N^{2}}\\ \frac{M_{i}\sqrt{N-N_{i}}}{N^{2}}&\frac{M_{i}\sqrt{N-N_{i}}}{N^{2}}&\cdots&\frac{M_{i}\sqrt{N-N_{i}}}{N^{2}}&\frac{M_{i}\left(N-N_{i}\right)}{N^{2}}\end{array}\right). \tag{17}\] Let \(n=N_{i}+1\), \[{\bf e}=\left(\begin{array}{cccc}1&\cdots&1&1\end{array}\right)_{1\times n}^{T},\qquad{\bf e}_{n}=\left(\begin{array}{cccc}0&\cdots&0&1\end{array}\right)_{1\times n}^{T}, \tag{18}\] then \(H_{0}\) can be rewritten as \[H_{0}=-\left(\frac{1}{\sqrt{N}}{\bf e}+\frac{\sqrt{N-N_{i}}-1}{\sqrt{N}}{\bf e}_{n}\right)\left(\frac{1}{\sqrt{N}}{\bf e}+\frac{\sqrt{N-N_{i}}-1}{\sqrt{N}}{\bf e}_{n}\right)^{T}=-\frac{1}{N}{\bf e}{\bf e}^{T}-\frac{\sqrt{N-N_{i}}-1}{N}\left({\bf e}{\bf e}_{n}^{T}+{\bf e}_{n}{\bf e}^{T}\right)-\frac{\left(\sqrt{N-N_{i}}-1\right)^{2}}{N}{\bf e}_{n}{\bf e}_{n}^{T}, \tag{19}\] and \[H_{P_{i}}=-\left(I_{n}-{\bf e}_{n}{\bf e}_{n}^{T}\right), \tag{20}\] where \(I_{n}\) is the \((N_{i}+1)\)-dimensional identity operator. Thus \[H_{i}=\frac{M_{i}}{N}H_{0}+\left(1-\frac{M_{i}}{N}\right)H_{P_{i}}=-\frac{M_{i}}{N}\left[\frac{1}{N}{\bf e}{\bf e}^{T}+\frac{\sqrt{N-N_{i}}-1}{N}\left({\bf e}{\bf e}_{n}^{T}+{\bf e}_{n}{\bf e}^{T}\right)+\frac{\left(\sqrt{N-N_{i}}-1\right)^{2}}{N}{\bf e}_{n}{\bf e}_{n}^{T}\right]-\left(1-\frac{M_{i}}{N}\right)\left(I_{n}-{\bf e}_{n}{\bf e}_{n}^{T}\right)=-\frac{M_{i}}{N^{2}}{\bf e}{\bf e}^{T}-\frac{M_{i}\left(\sqrt{N-N_{i}}-1\right)}{N^{2}}\left({\bf e}{\bf e}_{n}^{T}+{\bf e}_{n}{\bf e}^{T}\right)+\left[\left(1-\frac{M_{i}}{N}\right)-\frac{M_{i}\left(\sqrt{N-N_{i}}-1\right)^{2}}{N^{2}}\right]{\bf e}_{n}{\bf e}_{n}^{T}-\left(1-\frac{M_{i}}{N}\right)I_{n}\equiv\alpha{\bf e}{\bf e}^{T}+\beta\left({\bf e}{\bf e}_{n}^{T}+{\bf e}_{n}{\bf e}^{T}\right)+\left(\gamma-2\beta\right){\bf e}_{n}{\bf e}_{n}^{T}-\left(1-\frac{M_{i}}{N}\right)I_{n}, \tag{21}\] where \(\alpha=-\frac{M_{i}}{N^{2}}\), \(\beta=-\frac{M_{i}\left(\sqrt{N-N_{i}}-1\right)}{N^{2}}\), and \(\gamma=2\beta+1-\frac{M_{i}}{N}-\frac{M_{i}\left(\sqrt{N-N_{i}}-1\right)^{2}}{N^{2}}\).
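As a quick sanity check of this low-rank rewriting, the following minimal numerical sketch (not part of the original derivation; the sizes \(N\), \(N_{i}\), \(M_{i}\) are arbitrary illustrative choices) builds the restricted block of \(H_{i}\) directly from its definition and compares it with the expression in terms of \(\alpha\), \(\beta\) and \(\gamma\):

```
import numpy as np

# Illustrative sizes (not from the paper): N basis states, N_i marked states, M_i estimate.
N, Ni, Mi = 100, 7, 40
n = Ni + 1

# Restriction of H_i to the basis ({|q_i>}_{q_i in A_i}, |q_i^perp>).
v = np.full(n, 1.0 / np.sqrt(N))          # components of |psi_0> in this basis ...
v[-1] = np.sqrt(N - Ni) / np.sqrt(N)      # ... with the collective |q_i^perp> component last
H0 = -np.outer(v, v)
HP = -np.diag(np.r_[np.ones(Ni), 0.0])
Hi = (Mi / N) * H0 + (1.0 - Mi / N) * HP

# The same block written with the coefficients alpha, beta, gamma defined above.
e = np.ones(n)
en = np.zeros(n); en[-1] = 1.0
alpha = -Mi / N**2
beta = -Mi * (np.sqrt(N - Ni) - 1.0) / N**2
gamma = 2 * beta + 1.0 - Mi / N - Mi * (np.sqrt(N - Ni) - 1.0)**2 / N**2
Hi_lowrank = (alpha * np.outer(e, e)
              + beta * (np.outer(e, en) + np.outer(en, e))
              + (gamma - 2 * beta) * np.outer(en, en)
              - (1.0 - Mi / N) * np.eye(n))

print(np.allclose(Hi, Hi_lowrank))        # True: both expressions agree
```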
Please note that, with a slight abuse of notation, in the following we will reuse the notations \({\bf e}\), \({\bf e}_{n}\), \(\alpha\), \(\beta\) and \(\gamma\); their dimensions and values can be determined easily from the context. Define \(\tilde{\bf e}\) to be an \(N_{i}\times 1\) vector of all ones, and we can rewrite \(H_{i}\) as \[H_{i}=\alpha{\bf e}{\bf e}^{T}+\left(\begin{array}{cc}{\bf 0}_{N_{i}}&\beta\tilde{\bf e}\\ \beta\tilde{\bf e}^{T}&\gamma\end{array}\right)-\left(1-\frac{M_{i}}{N}\right)I_{n}\equiv\alpha{\bf e}{\bf e}^{T}+G-\left(1-\frac{M_{i}}{N}\right)I_{n}. \tag{22}\]

\((i)\) Define the vector space \(V=\mathrm{span}\{{\bf e},{\bf e}_{n}\}\) of dimension 2. Then from Eq. (22), for all \(x\in V^{\perp}\) we have \(H_{i}x=-\left(1-\frac{M_{i}}{N}\right)x\); therefore the eigenvalue is \(-\left(1-\frac{M_{i}}{N}\right)\), corresponding to \(N_{i}-1\) eigenvectors.

\((ii)\) The vector space \(V\) can be spanned by the two column vectors of \[W=\left[\frac{1}{\sqrt{N_{i}}}\left(\begin{array}{c}1\\ \vdots\\ 1\\ 0\end{array}\right),\left(\begin{array}{c}0\\ \vdots\\ 0\\ 1\end{array}\right)\right]. \tag{23}\] It is easy to check that \[W^{T}GW=\left(\begin{array}{cc}0&\beta\sqrt{N_{i}}\\ \beta\sqrt{N_{i}}&\gamma\end{array}\right), \tag{24}\] and \[W^{T}{\bf e}=\left(\begin{array}{c}\sqrt{N_{i}}\\ 1\end{array}\right). \tag{25}\] Then we can verify that \[W^{T}H_{i}W=\alpha W^{T}{\bf e}{\bf e}^{T}W+W^{T}GW-\left(1-\frac{M_{i}}{N}\right)I_{2}=\left(\begin{array}{cc}\alpha N_{i}&(\alpha+\beta)\sqrt{N_{i}}\\ (\alpha+\beta)\sqrt{N_{i}}&\alpha+\gamma\end{array}\right)-\left(1-\frac{M_{i}}{N}\right)I_{2}. \tag{26}\] The two remaining eigenvalues of \(H_{i}\), in particular the ground-state and first-excited-state energies relevant to the algorithm, are obtained by diagonalizing this \(2\times 2\) matrix. Besides these \(N_{i}+1\) eigenvalues, there are also \(N-(N_{i}+1)\) degenerate eigenstates with eigenvalue \(0\); they are orthogonal to both the vector space \(V=\mathrm{span}\{{\bf e},{\bf e}_{n}\}\) and the vector space \(V^{\perp}\) of dimension \(N_{i}-1\).

In the following, we evaluate the energy gap between the ground and the first excited states of the intermediate Hamiltonians, and the overlap between the ground states of two adjacent intermediate Hamiltonians, to figure out how to satisfy the conditions of the algorithm.

\((i)\) Estimation of the energy gap between the ground and the first excited states of the intermediate Hamiltonians. Define \(\frac{N_{i}}{N}=a\) and \(\frac{M_{i}}{N}=b\); the energy gap is \(\Delta E=\sqrt{1-4b+4ab+4b^{2}-4ab^{2}}\). In Fig. 6, we show \(\Delta E\) as a function of \(a\) and \(b\). By solving the equation \(\frac{\partial\left(\Delta E\right)}{\partial b}=0\), we have \(b=1/2\); for a given \(a\), the minimum of the energy gap \(\Delta E\) is attained at \(b=1/2\). The minimum of \(\Delta E\) is \(0\) at \(a=0\) and \(b=1/2\). In Fig.
7, we show the energy gap as a function of \(b\) for \(a=0,0.01,0.05\), respectively. In the algorithm we have to set \(M_{i}\) appropriately such that the point \((a,b)\) stays far away from the neighborhood of the point \((0,1/2)\). In our algorithm, \(M_{i}\) is an approximate estimate of \(N_{i}\) obtained by Monte Carlo sampling. Let \(b=a+\delta\), where \(\delta\) is a small number; the energy gap can then be expanded as \[\Delta E=\sqrt{1-4a+8a^{2}-4a^{3}}-\frac{2\left(1-a\right)\left(1-2a\right)}{\sqrt{1-4a+8a^{2}-4a^{3}}}\delta+\frac{2a\left(1-a\right)}{\left(1-4a+8a^{2}-4a^{3}\right)^{3/2}}\delta^{2}+O\left(\delta^{3}\right). \tag{27}\] The first term has a minimum of \(\sqrt{11}/3\sqrt{3}\approx 0.638\) at \(a=1/3\).

\((ii)\) Evaluation of the overlap between the ground states of two adjacent Hamiltonians. Let \(\mathbf{e}=\left(1,\cdots,1\right)^{\mathrm{T}}\) and \(\mathbf{0}=\left(0,\cdots,0\right)^{\mathrm{T}}\) be \(N_{i}\times 1\) vectors, respectively. The ground state of \(H_{i}\) is \(\left|V_{-}^{(i)}\right\rangle=x_{1}^{(i)}\left(\mathbf{e},0\right)^{\mathrm{T}}+x_{2}^{(i)}\left(\mathbf{0},1\right)^{\mathrm{T}}\), where \(\left[x_{1}^{(i)}\right]^{2}+\left[x_{2}^{(i)}\right]^{2}=1\). The components \(x_{1}^{(i)}\) and \(x_{2}^{(i)}\) are of the following form: \[x_{1}^{(i)}=\frac{1}{A}\left[\frac{1+\sqrt{1-4b-4ab^{2}+4b(a+b)}}{2b\sqrt{a\left(1-a\right)}}-\sqrt{\frac{1}{a}-1}\right], \tag{28}\] and \(x_{2}^{(i)}=\frac{1}{A}\), where \(A=\sqrt{1+\left[\frac{1+\sqrt{1-4b-4ab^{2}+4b(a+b)}}{2b\sqrt{a(1-a)}}-\sqrt{\frac{1}{a}-1}\right]^{2}}\). In the basis \(\left(\left\{\left|q_{i}\right\rangle\right\}_{q_{i}\in A_{i}},\,\left|q_{i}^{\perp}\right\rangle\right)\), where \(\left|q_{i}^{\perp}\right\rangle=\frac{1}{\sqrt{N-N_{i}}}\sum_{k\notin A_{i}}\left|k\right\rangle\), the state \(\left|V_{-}^{(i)}\right\rangle\) can be written as \[|V_{-}^{(i)}\rangle=x_{1}^{(i)}\frac{1}{\sqrt{N_{i}}}\sum_{q_{i}\in A_{i}}|q_{i}\rangle+x_{2}^{(i)}\frac{1}{\sqrt{N-N_{i}}}\sum_{k\notin A_{i}}|k\rangle. \tag{29}\] Correspondingly, the state \(|V_{-}^{(i-1)}\rangle\) can be written as \[|V_{-}^{(i-1)}\rangle=x_{1}^{(i-1)}\frac{1}{\sqrt{N_{i-1}}}\sum_{q_{i-1}\in A_{i-1}}|q_{i-1}\rangle+x_{2}^{(i-1)}\frac{1}{\sqrt{N-N_{i-1}}}\sum_{k\notin A_{i-1}}|k\rangle=x_{1}^{(i-1)}\frac{1}{\sqrt{N_{i-1}}}\left(\sum_{k\in A_{i-1}\backslash A_{i}}|k\rangle+\sum_{k\in A_{i}}|k\rangle\right)+x_{2}^{(i-1)}\frac{1}{\sqrt{N-N_{i-1}}}\sum_{k\notin A_{i-1}}|k\rangle. \tag{30}\] Thus the overlap between the ground states \(|V_{-}^{(i-1)}\rangle\) and \(|V_{-}^{(i)}\rangle\) is \[g_{0}^{(i)}=\langle V_{-}^{(i-1)}|V_{-}^{(i)}\rangle=\sqrt{\frac{N_{i}}{N_{i-1}}}x_{1}^{(i-1)*}x_{1}^{(i)}+\frac{N_{i-1}-N_{i}}{\sqrt{N_{i-1}\left(N-N_{i}\right)}}x_{1}^{(i-1)*}x_{2}^{(i)}+\sqrt{\frac{N-N_{i-1}}{N-N_{i}}}x_{2}^{(i-1)*}x_{2}^{(i)}. \tag{31}\]

Figure 6: (Color online) Energy gap between the ground and the first excited states of the Hamiltonian \(H_{i}\) as a function of \(a\) and \(b\).

By setting \(b=a+\delta\), the ratio \(x_{1}^{(i)}/x_{2}^{(i)}\) can be expanded as \[x_{1}^{(i)}/x_{2}^{(i)}=\frac{-2a+2a^{2}+1+\sqrt{1-4a+8a^{2}-4a^{3}}}{2a\sqrt{a\left(1-a\right)}}+\frac{-1+2a-2a^{2}-\sqrt{1-4a+8a^{2}-4a^{3}}}{2a^{2}\sqrt{a\left(1-a\right)}\sqrt{1-4a+8a^{2}-4a^{3}}}\delta+O\left(\delta^{2}\right). \tag{32}\] The first term of the above expansion is shown in Fig.
8, which has a minimum of \(2.21\) at \(a=\left(3-\sqrt{3}\right)/2\). To ensure that the overlap \(g_{0}^{(i)}\) is not exponentially small, we require that \(x_{1}^{(i)}>x_{2}^{(i)}\), so that the overlap \(g_{0}^{(i)}\) is guaranteed to be polynomially large whenever the ratio \(N_{i-1}/N_{i}\) is polynomially large. This can be achieved by setting \(M_{i}\), or equivalently \(b\), appropriately. By solving the inequality \(x_{1}^{(i)}>x_{2}^{(i)}\), we obtain \(b<\frac{1}{2(1-a)}\), which can also be written as \(2M_{i}\left(N-N_{i}\right)<N^{2}\). Both \(M_{i}\) and \(N_{i}\) decrease as the algorithm proceeds, so the condition is easily satisfied in the last few steps of the algorithm. In the first few steps, \(N_{i}\) can be estimated approximately by Monte Carlo sampling, and \(M_{i}\) can then be set accordingly to satisfy the condition. Summarizing the above calculation results, we find that the parameters \(M_{i}\) should be set such that the point \((a,b)\) is far away from the neighborhood of the point \((0,1/2)\) and \(2M_{i}\left(N-N_{i}\right)<N^{2}\).

Figure 7: (Color online) Energy gap between the ground and the first excited states of the Hamiltonian \(H_{i}\) as a function of \(b\) for \(a=0,0.01,0.05\), respectively.
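The closed-form gap above can also be checked numerically. The following minimal sketch (not part of the original paper; the sizes are arbitrary illustrative choices) builds \(H_{i}\) for a small instance, projects it onto the two-dimensional symmetric subspace spanned by the uniform superpositions over \(A_{i}\) and over its complement, and compares the resulting gap with \(\Delta E=\sqrt{1-4b+4ab+4b^{2}-4ab^{2}}\):

```
import numpy as np

def intermediate_hamiltonian(N, marked, M):
    # H_i = (M/N) H_0 + (1 - M/N) H_P with H_0 = -|psi_0><psi_0| and
    # H_P = -sum_{q in A_i} |q><q|, cf. Eqs. (10)-(12) of this appendix.
    psi0 = np.full(N, 1.0 / np.sqrt(N))
    H0 = -np.outer(psi0, psi0)
    HP = np.zeros((N, N))
    HP[marked, marked] = -1.0
    return (M / N) * H0 + (1.0 - M / N) * HP

N, Ni, Mi = 256, 16, 100                  # illustrative sizes, a = Ni/N, b = Mi/N
marked = np.arange(Ni)                    # which states form A_i is irrelevant by symmetry
H = intermediate_hamiltonian(N, marked, Mi)

# Two-dimensional invariant subspace: uniform state over A_i and |q_i^perp>.
s = np.zeros(N); s[marked] = 1.0 / np.sqrt(Ni)
qperp = np.zeros(N); qperp[Ni:] = 1.0 / np.sqrt(N - Ni)
B = np.stack([s, qperp], axis=1)          # N x 2 basis of the subspace
gap_numeric = np.diff(np.linalg.eigvalsh(B.T @ H @ B))[0]

a, b = Ni / N, Mi / N
gap_formula = np.sqrt(1 - 4*b + 4*a*b + 4*b**2 - 4*a*b**2)
print(gap_numeric, gap_formula)           # the two values coincide
```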
2309.03593
Quantum Graph-State Synthesis with SAT
In quantum computing and quantum information processing, graph states are a specific type of quantum states which are commonly used in quantum networking and quantum error correction. A recurring problem is finding a transformation from a given source graph state to a desired target graph state using only local operations. Recently it has been shown that deciding transformability is already NP-hard. In this paper, we present a CNF encoding for both local and non-local graph state operations, corresponding to one- and two-qubit Clifford gates and single-qubit Pauli measurements. We use this encoding in a bounded-model-checking set-up to synthesize the desired transformation. For a completeness threshold, we provide an upper bound on the length of the transformation if it exists. We evaluate the approach in two settings: the first is the synthesis of the ubiquitous GHZ state from a random graph state where we can vary the number of qubits, while the second is based on a proposed 14 node quantum network. We find that the approach is able to synthesize transformations for graphs up to 17 qubits in under 30 minutes.
Sebastiaan Brand, Tim Coopmans, Alfons Laarman
2023-09-07T09:35:31Z
http://arxiv.org/abs/2309.03593v1
# Quantum Graph-State Synthesis with SAT ###### Abstract In quantum computing and quantum information processing, graph states are a specific type of quantum states which are commonly used in quantum networking and quantum error correction. A recurring problem is finding a transformation from a given source graph state to a desired target graph state using only local operations. Recently it has been shown that deciding transformability is already NP-hard. In this paper, we present a CNF encoding for both local and non-local graph state operations, corresponding to one- and two-qubit Clifford gates and single-qubit Pauli measurements. We use this encoding in a bounded-model-checking set-up to synthesize the desired transformation. For a completeness threshold, we provide an upper bound on the length of the transformation if it exists. We evaluate the approach in two settings: the first is the synthesis of the ubiquitous GHZ state from a random graph state where we can vary the number of qubits, while the second is based on a proposed 14 node quantum network. We find that the approach is able to synthesize transformations for graphs up to 17 qubits in under 30 minutes. Quantum computing, graph states, bounded model checking + Footnote †: 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 40 International (CC BY 4.0). ## 1 Introduction The creation, manipulation and transmission of quantum information brings into reach applications which are unfeasible or even impossible using classical computers, such as provably-secure communication [1, 2], more accurate clock synchronization [3], and chemistry applications [4]. Various questions regarding simulation, modeling and design of quantum computers and networks can be phrased using graph states, a subset of all possible states of a register of quantum bits (qubits) which can be described using graphs [5]. Additionally, graph states are crucial to a universal model of quantum computation called measurement-based quantum computing [6]. Furthermore, when augmented with a finite set of quantum operations called Clifford gates and single-qubit Pauli measurements, the graph-state formalism gives rise to efficient classical simulation of a large class of quantum circuits [7] and forms the basis for many quantum error correction schemes [8], a prerequisite for scaling up quantum computing with imperfect devices, as well as many quantum-networking applications [9, 5]. These applications have a focus on _local_ quantum operations, i.e. on a single or few spatially-close qubits, for reasons regarding experimental implementation with imperfect devices. Given this wide applicability, graph-state transformations have been extensively studied from the theory standpoint for various sets of allowed local quantum operations [10, 11, 12, 13, 5, 14]. In this work, we specifically consider the following problem: given a source graph state, synthesize a desired target graph state using single-qubit Clifford gates and single-qubit Pauli measurement. This problem was shown before [15] to be equivalent to transforming the associated graphs under two graph operations: an edge-toggling operation called _local complementation_ (LC), corresponding to single-qubit Cliffords, and _vertex deletion_ (VD), corresponding to measurements. The decision problem (can a source graph be transformed to a target graph under LC+VD?) 
has been shown to be NP-complete [16], even when restricting the target graph to a practically-relevant scenario [17]. Although there exists an algorithm [15] (based on techniques from [18, 19]) which is fixed-parameter tractable (FPT) in the rank-width \(r\) of the graph, the authors of the algorithm themselves remark it is not useful in practice due to a giant FPT-prefactor equalling ten times repeated exponentiation with base \(2\) (i.e. \(2^{2\cdots 2^{r}}\)) [16]. We tackle the problem of graph-state synthesis under LC+VD with bounded model checking (BMC) [20, 21]. To this end we present a Boolean encoding for graph states and the operations on them, and provide a completeness threshold for this problem. We also give an encoding for two-qubit graph operations, which together with single-qubit operations enable all possible Clifford operations [14]. This approach can be applied to arbitrary graphs, in contrast to special cases for which poly-time algorithms have been found [16, 22] or unsatisfiability can be determined analytically [23, 24]. We evaluate this approach in two settings of particular interest [16, 24]: first, we synthesize the ubiquitous Greenberger-Horne-Zeilinger (GHZ) state [25] from random graphs with varying number of qubits. Next, we target a 14 node quantum network proposal [26]. Within 30 minutes BMC finds transformations for graphs up to 17 nodes (qubits). In comparison, for transformations under single-qubit Clifford operations without measurements (a setting where deciding reachability is in P [27, 28] and counting reachable graphs is #P-complete [29]), various properties of equivalence classes have been explored up to 12 qubits [30, 31, 32]. Aside from graph problems which have been tackled with SAT-based methods [33, 34, 35, 36, 37, 38, 39, 40], SAT has also been used on problems in quantum computing. For example, synthesizing optimal Clifford circuits without measurements (closely related to graph-state synthesis under LC + flipping arbitrary edges, but without VD) has been tackled with BMC [41]. Without the optimality constraint (i.e. shortest circuit) this problem is in P [42], while the complexity with the optimality constraint is unknown. SAT-based techniques have also been applied to quantum circuit equivalence checking for a limited selection of circuits [43]. BMC specifically has been applied to Clifford circuit (without measurements) equivalence checking [44] (a problem that is also in P [45]), and SMT and planning based approaches have been used to map logical quantum circuits to physical quantum-chips [46, 47]. Unlike much previous work we include measurements, which for our problem raises the complexity from P to NP-complete. ## 2 Preliminaries and problem definition ### Quantum computing We very briefly introduce quantum bits (qubits) and how to act on them with quantum gates and measurements (see [48] for a complete introduction). The state \(|\psi\rangle\) of a single qubit is a complex 2-vector of unit norm, equalling the _computational-basis states_\(\left|0\right\rangle=\left(1\ \ 0\right)^{\intercal}\) or \(\left|1\right\rangle=\left(0\ \ 1\right)^{\intercal}\) or any linear combination of those, i.e. in general a single-qubit state is \(\left|\psi\right\rangle=\alpha_{0}\left|0\right\rangle+\alpha_{1}\left|1 \right\rangle=\left(\alpha_{0}\ \ \alpha_{1}\right)^{\intercal}\) for complex numbers \(\alpha_{0},\alpha_{1}\) satisfying \(|\alpha_{0}|^{2}+|\alpha_{1}|^{2}=1\) (here, \(\intercal\) denotes vector transposition). 
A general \(n\)-qubit quantum state is represented as a complex vector of length \(2^{n}\) with norm 1, e.g. \(\left(\frac{1}{\sqrt{2}}\ \ \frac{i}{\sqrt{2}}\right)^{\intercal}\) and \(\left(\frac{2}{\sqrt{13}}\ \ 0\ \ 0\ \ -\frac{3}{\sqrt{13}}\right)^{\intercal}\) are quantum states. The joint state of two separate quantum registers in states \(\left|\phi\right\rangle,\left|\psi\right\rangle\) is \(\left|\phi\right\rangle\otimes\left|\psi\right\rangle\), where \(\otimes\) denotes the tensor product: given \(r_{V}\times c_{V}\) matrix \(V\) and \(r_{W}\times c_{W}\) matrix \(W\), the \(r_{V}r_{W}\times c_{V}c_{W}\) matrix \(V\otimes W\) is \[V\otimes W=\begin{pmatrix}V_{00}W&V_{01}W&\ldots&V_{0c_{V}}W\\ \vdots&\vdots&\ddots&\\ V_{r_{V}0}W&V_{r_{V}1}W&\ldots&V_{r_{V}c_{V}}W\end{pmatrix}.\] Given a bipartition \(A\cup B=\left\{1,2,\ldots,n\right\}\), an \(n\)-qubit state \(\left|\psi\right\rangle\) is called _separable over_\(A,B\) if we can write \(\left|\psi\right\rangle=\left|\varphi\right\rangle_{A}\otimes\left|\phi\right\rangle _{B}\). It is _entangled_ otherwise, a feature that has no classical analogue and is a prerequisite to many applications with a quantum advantage. For example \(\left(\frac{1}{\sqrt{2}}\ \ 0\ \ \frac{1}{\sqrt{2}}\ \ 0\right)^{\intercal}= \left(\frac{1}{\sqrt{2}}\ \ \frac{1}{\sqrt{2}}\right)^{\intercal}\otimes\left(1\ \ 0\right)^{\intercal}\) is not entangled, but \(\left(\frac{1}{\sqrt{2}}\ \ 0\ \ 0\ \ \frac{1}{\sqrt{2}}\right)^{\intercal}\) is. A quantum gate (always reversible) on \(n\) qubits is given by a \(2^{n}\times 2^{n}\) unitary matrix and the output state can be found by matrix-vector multiplication, for example \(H\) (see right) which maps input \(\begin{pmatrix}1\\ 0\end{pmatrix}\) to output \(\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\cdot\begin{pmatrix}1\\ 0\end{pmatrix}=\begin{pmatrix}\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{pmatrix}\). An example universal gate set is shown on the right. The tensor product is used to apply gates in parallel to separate registers, e.g. \(I\otimes H\otimes I\) is a 3-qubit gate performing a \(H\) on the second qubit and \(I\) on the first and third. The result of a two-qubit gate (e.g. controlled-\(Z\) (\(CZ\)), which maps e.g. \(\left(\frac{1}{\sqrt{2}}\ \ 0\ \ 0\ \ \frac{1}{\sqrt{2}}\right)^{\intercal}\) to \(\left(\frac{1}{\sqrt{2}}\ \ 0\ \ 0\ \ -\frac{1}{\sqrt{2}}\right)^{\intercal}\) ) between two non-adjacent qubits can be computed by swapping qubits: e.g. for qubits \(q_{0},q_{1},q_{2}\), \(CZ(q_{0},q_{2})=\text{SWAP}(q_{1},q_{2})\,CZ(q_{0},q_{1})\text{SWAP}(q_{1},q_{2})\), where \(\text{SWAP}(q_{1},q_{2})\) replaces \(\left|a\right\rangle\otimes\left|b\right\rangle\otimes\left|c\right\rangle \rightarrow\left|a\right\rangle\otimes\left|c\right\rangle\otimes\left|b\right\rangle\) for \(a\), \(b\), \(c\in\left\{0,1\right\}\). The gates \(H,T^{2}\) together generate (under matrix multiplication and tensoring with \(I\)) the group of _single-qubit Clifford gates_, and \(H,T^{2},\,CZ\) together generate all Clifford gates. A computational-basis measurement is a non-reversible operation which projects a single qubit state \(\alpha_{0}\left|0\right\rangle+\alpha_{1}\left|1\right\rangle\) to one of \(\left|0\right\rangle,\left|1\right\rangle\) with probability \(|\alpha_{0}|^{2}\) or \(|\alpha_{1}|^{2}\). 
For example, measuring a qubit \(\left|\psi\right\rangle=\sqrt{\nicefrac{{1}}{{3}}}\left|0\right\rangle+\sqrt{ \nicefrac{{2}}{{3}}}\left|1\right\rangle\) yields the state \(\left|0\right\rangle\) with probability \(1/3\) and the state \(\left|1\right\rangle\) with probability \(2/3\). Any \(n\)-qubit state \(\left|\psi\right\rangle\) can be written as \(\left|\psi\right\rangle=\alpha\left|0\right\rangle\otimes\left|\psi_{0}\right\rangle +\beta\left|1\right\rangle\otimes\left|\psi_{1}\right\rangle\) where \(\left|\alpha\right|^{2}\) (\(\left|\beta\right|^{2}\)) is the probability of finding the first qubit in the \(\left|0\right\rangle\) (\(\left|1\right\rangle\)) state after measuring it (for expressing measurement on the other qubits, swap qubits first). A Pauli measurement equals a computational-basis measurement preceded by a single-qubit Clifford gate. Sequences of quantum operations are typically visualized in a quantum circuit (see Fig. 1). ### Graph states and graph-state reachability Graph states are a subset of all quantum states. An \(n\)-qubit graph state \(\left|G\right\rangle\) is represented by an undirected simple graph \(G=(V,E)\) with \(\left|V\right|=n\) vertices and no self-loops (where \(V\) is the vertex set and \(E\subseteq V\times V\) the edge set), constructed as starting from the state \(H^{\otimes n}\ket{0}^{\otimes n}\), followed by a \(\mathit{CZ}\) gate on each pair of qubits \((u,v)\in E\). An example is given in Fig. 2. From here on we say 'graph' to mean 'undirected simple graph without self-loops'. Intuitively, the graph \(G\) captures information about the entanglement between the qubits, where two qubits are entangled if they are (directly or indirectly) connected in the graph. The two graph transformations corresponding to single-qubit quantum operations are: * _Local complementation_\(LC_{k}\) on vertex \(k\in V\) transforms \(G=(V,E)\) into \(LC_{k}(G)=(V,E^{\prime})\) where \(E^{\prime}\) is obtained from \(E\) by flipping the edges in the neighborhood of \(k\), i.e. for all \(u,v\in\mathcal{N}_{k}\), if \((u,v)\in E\) then \((u,v)\not\in E^{\prime}\) and if \((u,v)\not\in E\) then \((u,v)\in E^{\prime}\). Here, the neighborhood \(\mathcal{N}_{k}\) is the set of all vertices adjacent to \(k\), i.e. \(\mathcal{N}_{k}=\{v\mid(k,v)\in E\}\). For any graphs \(G\) and \(G^{\prime}\), \(\ket{G^{\prime}}\) is reachable from \(\ket{G}\) using only single-qubit Clifford operations if and only if \(G^{\prime}\) is reachable from \(G\) using local complementations. More specifically, the graph state \(\ket{LC_{k}(G)}\) equals the resulting quantum state when applying a certain sequence of single-qubit Clifford operations to \(\ket{G}\) (see [10] for details). * _Vertex deletion_\(\mathit{VD}_{k}\) of vertex \(k\in V\) transforms \(G=(V,E)\) to \(\mathit{VD}_{k}(G)=(V,E^{\prime})\) with \(E^{\prime}=E\setminus\{(v,k)\mid v\in V\}\), i.e. \(k\) becomes _isolated_ (all edges adjacent to \(k\) are removed). Vertex deletion of vertex \(k\) implements measurement on qubit \(k\): for each graph \(G\), the graph state \(\ket{\mathit{VD}_{k}(G)}\) is single-qubit Clifford equivalent to \(\ket{G}\) at which a computational-basis measurement has been performed on qubit \(k\)[12]. 
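To make these two graph operations concrete, here is a small self-contained Python sketch (illustrative only, not taken from the paper) that applies local complementation and vertex deletion to a boolean adjacency matrix:

```
import numpy as np

def local_complementation(adj, k):
    # Toggle every edge between two neighbours of k (adj is a symmetric
    # boolean adjacency matrix without self-loops).
    adj = adj.copy()
    nbrs = np.flatnonzero(adj[k])
    for i, u in enumerate(nbrs):
        for v in nbrs[i + 1:]:
            adj[u, v] = adj[v, u] = not adj[u, v]
    return adj

def vertex_deletion(adj, k):
    # Isolate vertex k, i.e. remove every edge incident to it.
    adj = adj.copy()
    adj[k, :] = adj[:, k] = False
    return adj

# Example: in the triangle 0-1-2, local complementation on vertex 0 removes the edge (1, 2).
A = np.zeros((3, 3), dtype=bool)
for u, v in [(0, 1), (0, 2), (1, 2)]:
    A[u, v] = A[v, u] = True
print(local_complementation(A, 0).astype(int))
print(vertex_deletion(A, 0).astype(int))
```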
And although not the primary focus of this work, we can also consider two-qubit operations: * Given a subset of pairs of nodes \(D\subseteq V\times V\) (for convenience \(u<v\) for \((u,v)\in D\)), \(G\) can be transformed into \(G^{\prime}\) by _edge flips_ among \(D\) and local complementations on vertices in \(V\) if and only if \(|G\rangle\) can be transformed into \(|G^{\prime}\rangle\) using two-qubit Clifford operations on the qubit pairs in \(D\) and single-qubit Cliffords on qubits in \(V\) [14-Th.1].

Figure 1: An example 2-qubit quantum circuit. Operations are applied from left to right. The controlled-\(Z\) (\(\mathit{CZ}\)) gate is visualized as \(\underline{\uparrow}\). As is common, we write \(\ket{01}\) as shorthand for \(\ket{0}\otimes\ket{1}\), etc. Measuring both qubits at the end gives \(\ket{00}\) or \(\ket{11}\) with equal probability.

Figure 2: The circuit in (a) generates the state \(\ket{G_{2}}\), corresponding to the graph in (c). Examples of local complementation and vertex deletion are shown in (d) and (e).

Rather than generating a graph state from scratch using \(\mathit{CZ}\) gates (as in Figs. 1(a) to 1(c)), a problem of interest for e.g. quantum networking is to obtain a particular graph from an existing graph _using only single-qubit operations_ (LC+VD, and \(D=\emptyset\)). Below is a practical example. **Example 2.1**.: _Alice is part of a 6-node quantum network and wants to run a quantum secret sharing scheme [9] between herself and three other parties, each having one qubit. For this she needs a 4-qubit Greenberger-Horne-Zeilinger (GHZ) state [25], given by \(G_{\text{GHZ}}\) on the right. However, generating \(|G_{\text{GHZ}}\rangle\) using \(\mathit{CZ}\)-gates (Figs. 1(a) to 1(c)) requires generating entanglement [49, 50-Fig.1(d)], which is a time-consuming probabilistic process [51]. At some point in time the network is in a state \(|G_{s}\rangle\). Because single-qubit operations (LC+VD on the graph) are much easier to perform than entanglement generation, Alice wants to know whether a given \(G_{s}\) can be transformed into \(G_{\text{GHZ}}\) using only LC+VD._ This motivates the problem we will study in this work, posed before in [16] for single-qubit operations (LC+VD and \(D=\emptyset\)) and in [14] for multi-qubit operations (LC+VD and \(D\neq\emptyset\)). **Definition 2.1** (Graph-state synthesis).: Given source and target graphs \(G_{s}=(V,E_{s})\) and \(G_{t}=(V,E_{t})\), find (if it exists) a sequence of local complementations and vertex deletions on any \(v\in V\) (and also edge flips on \((u,v)\in D\) for some given \(D\subseteq V\times V\) in case multi-qubit Clifford operations are allowed on \(D\)) which transforms \(G_{s}\) into \(G_{t}\). We remark that if \(D=V\times V\), any graph can be trivially synthesized because an edge may be added or removed between any pair of nodes (Figs. 1(a) to 1(c)). We also remark that we are not necessarily interested in the shortest sequence of graph transformations, as any sequence of LC+VD translates into at most one single-qubit Clifford and one measurement per qubit. ## 3 SAT encoding As seen in the previous section, quantum operations on graph states can be expressed through graph transformations. In this section, we give Boolean encodings for these operations, as well as an encoding for the transition relation as a whole. In Appendix A we detail how these Boolean expressions are written in conjunctive normal form (CNF). 
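As a point of reference for the encoding that follows, the sketch below (not from the paper) implements Definition 2.1 for the single-qubit case \(D=\emptyset\) by a naive breadth-first search over all LC+VD sequences; it only works for very small graphs, which is precisely why a SAT-based formulation is attractive:

```
def lc(edges, k, n):
    # Local complementation: toggle all edges inside the neighbourhood of k.
    nbrs = [v for v in range(n) if frozenset((k, v)) in edges]
    out = set(edges)
    for i in range(len(nbrs)):
        for j in range(i + 1, len(nbrs)):
            out ^= {frozenset((nbrs[i], nbrs[j]))}
    return frozenset(out)

def vd(edges, k):
    # Vertex deletion: drop every edge incident to k.
    return frozenset(e for e in edges if k not in e)

def synthesize(source, target, n):
    # Breadth-first search over LC+VD sequences (Definition 2.1 with D empty).
    # Returns a list of (operation, vertex) pairs or None; exponential in the worst case.
    start = frozenset(frozenset(e) for e in source)
    goal = frozenset(frozenset(e) for e in target)
    frontier, seen = [(start, [])], {start}
    while frontier:
        nxt = []
        for g, ops in frontier:
            if g == goal:
                return ops
            for k in range(n):
                for name, g2 in (("LC", lc(g, k, n)), ("VD", vd(g, k))):
                    if g2 not in seen:
                        seen.add(g2)
                        nxt.append((g2, ops + [(name, k)]))
        frontier = nxt
    return None

# Example: the complete graph K4 becomes the star (GHZ) graph centred at vertex 0
# after a single local complementation on vertex 0.
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
star = [(0, 1), (0, 2), (0, 3)]
print(synthesize(K4, star, 4))            # [('LC', 0)]
```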
The encoding of a single transformation step from graph \(G\) to \(G^{\prime}\) uses variables \(\vec{x}\) for \(G\) and \(\vec{x}^{\prime}\) for \(G^{\prime}\). We encode a graph as follows. **Definition 3.1** (Graph encoding).: An undirected graph \(G\) of \(n\) vertices is encoded as a conjunction over \(n(n-1)/2\) literals \(x_{uv}\) (\(\neg x_{uv}\)), for \((u,v)\in\mathbb{U}=\{(u,v)\in V\times V\mid u<v\}\), indicating there is (not) an edge between nodes \(u\) and \(v\). ### Encoding of graph transformations The Boolean encoding for deleting a vertex \(k\), denoted \(\mathsf{VD}_{k}\), is given in Eq. (1). All edges \((u,v)\) connected to \(k\) are set to false (\(\neg x^{\prime}_{uv}\)) while all others remain unchanged (\(x^{\prime}_{uv}\leftrightarrow x_{uv}\)). \[\mathsf{VD}_{k}=\bigwedge_{(u,v)\in\mathbb{U}}\begin{cases}\neg x^{\prime}_{uv }&\text{ if }u=k\text{ or }v=k\\ x^{\prime}_{uv}\leftrightarrow x_{uv}&\text{ otherwise.}\end{cases} \tag{1}\] The encoding for performing a local complementation on vertex \(k\), denoted \(\mathsf{LC}_{k}\), is given in Eq. (2) and can be read as follows: if vertices \(u,v\) are in the neighborhood of \(k\) (\(x_{uk}\wedge x_{vk}\)) then the value of the edge \((u,v)\) is flipped (\(x^{\prime}_{uv}\leftrightarrow\neg(1\oplus\neg x_{uv})\). \[\mathsf{LC}_{k}=\bigwedge_{(u,v)\in\mathbb{U}}\begin{cases}x^{\prime}_{uv} \leftrightarrow\neg((x_{uk}\wedge x_{vk})\oplus\neg x_{uv})&\text{ if }u \neq k\text{ and }v\neq k\\ x^{\prime}_{uv}\leftrightarrow x_{uv}&\text{ otherwise.}\end{cases} \tag{2}\] To encode edge flips on a selection of edges \(D\) (Def. 2.1), we take \(D\) to be an indexed set \(D=\{(u_{1},v_{1}),(u_{2},v_{2}),\dots\}\) with \(u_{i}<v_{i}\). Given this indexed set, the constraint in Eq. (3) encodes an edge flip of \((u_{i},v_{i})\). \[\mathsf{EF}_{i}=\bigwedge_{(u,v)\in\mathbb{U}}\begin{cases}x^{\prime}_{uv} \oplus x_{uv}&\text{ if }u=u_{i}\text{ and }v=v_{i}\\ x^{\prime}_{uv}\leftrightarrow x_{uv}&\text{ otherwise}\end{cases} \tag{3}\] In order to combine the transition relations \(\mathsf{LC}_{k}\), \(\mathsf{VD}_{k}\), and \(\mathsf{EF}_{i}\) into a single CNF formula we use a construction similar to the BMC encoding of different concurrent threads in [52]: we add \(\lceil\log_{2}(\max(|V|,|D|)+1)\rceil\) variables \(\vec{y}\) for the binary encoding of \(k\in V\) or \(i\in\{1,\dots,|D|\}\), and two variables \(\vec{z}\) to indicate whether a given operation is a local complementation (\(\vec{z}=0\)), a vertex deletion (\(\vec{z}=1\)), or an edge flip (\(\vec{z}=2\)). For example the constraint \(\vec{y}=3\wedge\vec{z}=1\) represents vertex deletion of node \(3\). Using these additional variables, we encode all local complementations, vertex deletions, and edge flips as in Eqs. (4) to (6). 
\[R_{\mathsf{LC}}(\vec{x},\vec{x}^{\prime})=\bigwedge_{k\in V}\left[(\vec{y}=k\wedge\vec{z}=0)\rightarrow\mathsf{LC}_{k}(\vec{x},\vec{x}^{\prime})\right] \tag{4}\] \[R_{\mathsf{VD}}(\vec{x},\vec{x}^{\prime})=\bigwedge_{k\in V}\left[(\vec{y}=k\wedge\vec{z}=1)\rightarrow\mathsf{VD}_{k}(\vec{x},\vec{x}^{\prime})\right] \tag{5}\] \[R_{\mathsf{EF}}(\vec{x},\vec{x}^{\prime})=\bigwedge_{i\in\{1,\dots,|D|\}}\left[(\vec{y}=i\wedge\vec{z}=2)\rightarrow\mathsf{EF}_{i}(\vec{x},\vec{x}^{\prime})\right] \tag{6}\] Additionally we add an identity transition \(R_{\mathsf{Id}}(\vec{x},\vec{x}^{\prime})=(\vec{z}=3)\rightarrow\mathsf{Id}(\vec{x},\vec{x}^{\prime})\) to ensure that if a transformation of length \(d\) exists, a transformation of length \(d^{\prime}\geq d\) also exists (to avoid searching over all \(d\)), and we appropriately constrain the unused values of \(\vec{y}\) and \(\vec{z}\) by adding \(C=(\vec{y}\,<\,|V|\,\lor\,z\,=\,2)\wedge(\vec{y}\,<\,|D|\lor z\neq 2)\). Finally, we obtain the global transition relation in Eq. (7). When converted to CNF this formula has \(m+n(n-1)\) variables and \(\leq 3.5n^{3}+2mn^{2}+0.5n^{2}+0.5|D|n^{2}\) clauses, where \(n=|V|\) and \(m=\lceil\log_{2}(\max(|V|,|D|)+1)\rceil\). \[R_{\text{global}}(\vec{x},\vec{x}^{\prime})=R_{\mathsf{LC}}\wedge R_{\mathsf{VD}}\wedge R_{\mathsf{EF}}\wedge R_{\mathsf{Id}}\wedge C \tag{7}\] We use the transition relation specified in Eq. (7) in a bounded-model-checking set-up, i.e. we create Eq. (8) below, where \(S(\vec{x}_{1})\) encodes a source graph \(G_{s}\), \(T(\vec{x}_{d})\) a target graph \(G_{t}\), and \(d\) is the search depth. \[S(\vec{x}_{1})\wedge\bigwedge_{i=1}^{d-1}R_{\text{global}}(\vec{x}_{i},\vec{x}_{i+1})\wedge T(\vec{x}_{d}) \tag{8}\] The formula is satisfiable if and only if a sequence of operations of at most \(d\) steps exists which transforms \(G_{s}\) into \(G_{t}\). In Section 3.2, we prove an upper bound on the required depth \(d\).

### Completeness threshold To provide a completeness threshold for graph-state synthesis under LC+VD, we use the following observations to bound the search depth. 1. If \(G_{s}\) can be transformed to \(G_{s}^{\prime}\) under LC, a transformation exists of at most \(M\) local complementations, where \(M=3(|V|-s)/2\) with \(s=|V|(\text{mod }2)\)[27, 28, 29, 30, 31, 32, 33, 34]. 2. If \(G_{s}\) can be transformed into \(G_{t}\) under LC+VD, then vertex deletion needs to be performed on exactly the \(\Delta\) vertices which are isolated in \(G_{t}\).1 Footnote 1: Without loss of generality, we assume \(G_{s}\) has no isolated vertices. If \(G_{s}\) has isolated vertices which are not isolated in \(G_{t}\), then \(G_{t}\) is trivially unreachable under LC+VD. 3. For \(k\in V\), \(LC_{k}\) after \(\mathit{VD}_{k}\) leaves the graph unchanged, i.e. \(LC_{k}(\mathit{VD}_{k}(G))=\mathit{VD}_{k}(G)\). 4. For \(j,k\in V\) and \(j\neq k\), \(LC_{j}\) and \(\mathit{VD}_{k}\) commute, i.e. \(LC_{j}(\mathit{VD}_{k}(G))=\mathit{VD}_{k}(LC_{j}(G))\). From points 3 and 4, it follows that all vertex deletions (measurements) can be postponed until after the local complementations (single-qubit Clifford gates). We then get that if \(G_{s}\) can be transformed into \(G_{t}\) under LC+VD, it can be transformed by a circuit of the form given in Fig. 3, taking at most \(M\) local complementations and \(\Delta\) vertex deletions. 
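To illustrate how the constraints of Section 3.1 become clauses, the following sketch (illustrative only, and not the compact encoding described in Appendix A of the paper) emits DIMACS-style integer clauses for Eqs. (1) and (2); for the local complementation it uses the logically equivalent form \(x^{\prime}_{uv}\leftrightarrow(x_{uk}\wedge x_{vk})\oplus x_{uv}\) and a naive truth-table expansion:

```
from itertools import combinations

def make_vars(n):
    # One DIMACS variable per unordered pair (u, v), u < v, for the current graph x
    # and for the next graph x' (primed variables are numbered after all x variables).
    pairs = list(combinations(range(n), 2))
    x = {p: i + 1 for i, p in enumerate(pairs)}
    xp = {p: i + 1 + len(pairs) for i, p in enumerate(pairs)}
    return pairs, x, xp

def key(u, v):
    return (u, v) if u < v else (v, u)

def vd_clauses(k, pairs, x, xp):
    # Eq. (1): edges touching k become false, all other edges are copied.
    cls = []
    for (u, v) in pairs:
        if u == k or v == k:
            cls.append([-xp[(u, v)]])
        else:
            cls.append([-xp[(u, v)], x[(u, v)]])
            cls.append([xp[(u, v)], -x[(u, v)]])
    return cls

def lc_clauses(k, pairs, x, xp):
    # Eq. (2): x'_uv <-> (x_uk AND x_vk) XOR x_uv for u, v != k, else x'_uv <-> x_uv.
    cls = []
    for (u, v) in pairs:
        if u == k or v == k:
            cls.append([-xp[(u, v)], x[(u, v)]])
            cls.append([xp[(u, v)], -x[(u, v)]])
            continue
        a, b, c = x[key(u, k)], x[key(v, k)], x[(u, v)]
        for va in (0, 1):
            for vb in (0, 1):
                for vc in (0, 1):
                    out = (va & vb) ^ vc
                    cls.append([a if va == 0 else -a,
                                b if vb == 0 else -b,
                                c if vc == 0 else -c,
                                xp[(u, v)] if out else -xp[(u, v)]])
    return cls

pairs, x, xp = make_vars(4)
print(len(vd_clauses(0, pairs, x, xp)), len(lc_clauses(0, pairs, x, xp)))  # 9 and 30 clauses
```

A full BMC query as in Eq. (8) is then obtained by instantiating fresh copies of these variables for every step and conjoining the clauses with the constraints for the source and target graphs.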
## 4 Empirical evaluation We evaluate our approach in two settings: synthesizing a GHZ state from random graphs for an increasing number of qubits, and synthesis of graphs based on a proposal of a 14-node quantum network in the Netherlands [26]. For all experiments, we perform binary search over \(d\) up to the completeness threshold specified in Section 3.2. In our current setup, the solver is restarted for every different \(d\). Experiments2 were run on Ubuntu 18 with an AMD Ryzen 7 5800x CPU. Two different SAT solvers, Glucose 4 [53] and Kissat [54], have been used. Footnote 2: Reproducible experiments are available online at [https://github.com/sebastiaanbrand/graph-state-synthesis](https://github.com/sebastiaanbrand/graph-state-synthesis). We first evaluate our approach in a setting where the target states are 4-qubit GHZ states (see Example 2.1), matching the target states in the empirical evaluation in [16]. GHZ states are used in a large number of applications such as quantum secret sharing [9] (see also Example 2.1), anonymous transfer [55] and conference key agreement [56]. The polynomial time algorithm presented in [16] can only be applied when the source graph has special properties (specifically it needs to have rank-width 1). To evaluate our method we replace the restricted random graphs used in [16] with more general Erdős-Rényi random graphs, which have also been used in other work concerning graph-state synthesis [57, 58]. Results are shown in Fig. 4. With a timeout of 30 minutes Kissat can synthesize transformations for graphs up to 17 qubits. Determining unreachability, which we do using the completeness threshold, can be done up to 8 qubits by Glucose within this timeout.

Figure 3: Graph-state transformation circuit under LC+VD.

Figure 4: The total SAT solver time for BMC with binary search over the depth up to the completeness threshold (see Section 3.2). For each number of qubits we run on three Erdős-Rényi random graphs with \(p=0.8\), with a 4-qubit GHZ state as target, with only LC+VD on the left, and LC+VD+EF on a random set \(D\) with \(|D|=\frac{1}{2}|V|\) on the right. 4(b) shows the difference between the solvers for the data points from both the left and right plot in 4(a). Open symbols indicate timeouts. Solid spheres indicate unreachability at the depth of the completeness threshold. The largest solved instance is for 17 qubits at \(d=16\), which has a formula with \(\sim\)2400 variables and \(\sim\)300,000 clauses (see above Eq. (7) for \(d=1\)).

Next, we evaluate our approach on the specific quantum network architecture proposed in [26-Fig.3] (visualized in Fig. 5a). As source states, we consider graphs with nodes from this network, and random edges as follows: \((u,v)\in E\) with probability \(p^{d}\), where \(d\) is the distance (number of hops + 1) between the nodes, motivated by the fact that generating entanglement over larger distances is harder [51]. The target state is a GHZ state between the main network nodes (squares in Figure 5a). Fig. 5b shows the results for varying \(p\). A higher \(p\) corresponds to a larger amount of entanglement in the network.

Figure 5: The 14-node quantum network proposed in [26-Fig.3], and the SAT solver time to synthesize a transformation into a GHZ state for different amounts of entanglement (\(p\)) in the network. Open circles indicate timeouts. Solid spheres indicate unreachability at the depth of the completeness threshold.
We observe that, for a fixed number of nodes, the time it takes to synthesize a transformation increases with the density of the source graph. ## Acknowledgments This work was supported by the NEASQC project, funded by the European Union's Horizon 2020, Grant Agreement No. 951821, and by the Dutch National Growth Fund, as part of the Quantum Delta NL programme.
2309.13780
Modern Software Development for JUNO offline software
The Jiangmen Underground Neutrino Observatory (JUNO), under construction in South China, primarily aims to determine the neutrino mass hierarchy and to precise measure the neutrino oscillation parameters. The data-taking is expected to start in 2024 and the detector plans to run for more than 20 years. The development of the JUNO offline software (JUNOSW) started in 2012, and it is quite challenging to maintain the JUNOSW for such a long time. In the last ten years, tools such as Subversion, Trac, and CMT had been adopted for software development. However, new stringent requirements came out, such as how to reduce the building time for the whole project, how to deploy offline algorithms to an online environment, and how to improve the code quality with code review and continuous integration. To meet the further requirements of software development, modern development tools are evaluated for JUNOSW, such as Git, GitLab, CMake, Docker, and Kubernetes. This contribution will present the software development system based on these modern tools for JUNOSW and the functionalities achieved: CMake macros are developed to simplify the build instructions for users; CMake generator expressions are used to control the build flags for the online and offline environments; a tool named git-junoenv is developed to help users partially checkout and build the software; a script is used to build and deploy the software on the CVMFS server; a Docker image with CVMFS client installed is created for continuous integration; a GitLab agent is set up to manage GitLab runners in Kubernetes with all the configurations in a GitLab repository.
Tao Lin
2023-09-25T00:13:47Z
http://arxiv.org/abs/2309.13780v1
# Modern Software Development for JUNO offline software ###### Abstract The Jiangmen Underground Neutrino Observatory (JUNO), under construction in South China, primarily aims to determine the neutrino mass hierarchy and to precise measure the neutrino oscillation parameters. The data-taking is expected to start in 2024 and the detector plans to run for more than 20 years. The development of the JUNO offline software (JUNOSW) started in 2012, and it is quite challenging to maintain the JUNOSW for such a long time. In the last ten years, tools such as Subversion, Trac, and CMT had been adopted for software development. However, new stringent requirements came out, such as how to reduce the building time for the whole project, how to deploy offline algorithms to an online environment, and how to improve the code quality with code review and continuous integration. To meet the further requirements of software development, modern development tools are evaluated for JUNOSW, such as Git, GitLab, CMake, Docker, and Kubernetes. This contribution will present the software development system based on these modern tools for JUNOSW and the functionalities achieved: CMake macros are developed to simplify the build instructions for users; CMake generator expressions are used to control the build flags for the online and offline environments; a tool named git-junoenv is developed to help users partially checkout and build the software; a script is used to build and deploy the software on the CVMFS server; a Docker image with CVMFS client installed is created for continuous integration; a GitLab agent is set up to manage GitLab runners in Kubernetes with all the configurations in a GitLab repository. ## 1 Introduction to JUNO experiment The Jiangmen Underground Neutrino Observatory (JUNO) experiment [1] has a rich physics program, including the determination of the neutrino mass ordering, precise measurement of neutrino oscillation parameters, detecting neutrinos from reactor, atmosphere, solar, supernova burst, etc [2, 3]. JUNO is under construction in southern China in a underground laboratory, with 700 m overburden (1800 m.w.e.). It is expected to start data-taking in 2024, running for more than 20 years. As shown in Figure 1, the JUNO detector consists of a central detector, a water Cherenkov detector, and a top tracker. The innermost part is the central detector with an acrylic spherical vessel filled with 20 kton liquid scintillator (LS), equipped with 17,612 20-inch photomultiplier tubes (LPMT) and 25,600 3-inch photomultiplier tubes (SPMT). The central detector is submerged in a water pool, equipped with 2,400 LPMTs, which is the water Cherenkov detector to detect cosmic ray muons. On the top of the water pool, the top tracker is also used to measure the muons. Further details can be found elsewhere [2, 3]. JUNOSW is the offline software for data processing, which is one of the crucial parts of the JUNO experiment [4]. It consists of the physics generators, detector simulation, electronics simulation, waveform reconstruction, and event reconstruction. The software is developed based on an underlying framework called SNiPER [5], whose concept is very similar to the Gaudi framework [6], which includes event loop, algorithm, service and tool. There is an event loop while the framework is executed. For each event, an algorithm is invoked by the framework to perform a dedicated task. A service provides some common functionalities, which could be invoked by the algorithms. 
A tool is a piece of code in an algorithm, which improves the code modularity of the algorithm by defining interfaces. All these components are implemented in the C++ language and then configured in Python. The development of JUNOSW began in 2012, using the Subversion (SVN), Trac and CMT [7] tools. SVN is used for version control and a dedicated web service called Trac is deployed to host the source code. Following the rules of SVN, an SVN repository is created for JUNOSW, with the three directories "trunk", "branches" and "tags". The "trunk" branch is used for software development. Developers check out this branch and commit their changes back to it. The other branches are stored under the "branches" directory and are only used for the preparation of software releases. All the releases are stored under the "tags" directory. The tool Trac provides a web interface for developers to browse code and submit issues. The tool CMT is used for building the project. The project is organized in packages. Developers can check out dedicated packages with SVN and build them with CMT. The number of packages has increased to more than 200 over the past ten years of development, posing several challenges for both software development and deployment. * There is a performance issue when building software with CMT. Even though CMT can build a package in parallel, it cannot build different packages at the same time. Building the entire project on a blade server with 28 CPU cores takes about half an hour. This causes the developers to wait for a long time if the project is built from scratch. * Software development lacks code review. When a developer commits changes to the SVN repository, the other developers only receive notifications about the change from a mailing list. Especially when developers add new packages, binary data is sometimes committed as well. The history of the SVN repository on the server cannot be rewritten, which caused the repository to become large. * There is a maintenance issue for the continuous integration, which is based on Bitten [8]. Bitten is built on Trac, which only supports Python 2. It consists of a master and several slaves, which need separate deployment. XML-based configuration files need to be set up in the master. When a new commit is pushed to the master, a build task will be created and dispatched to a slave according to the configurations. The slave invokes the commands encoded in the XML when it receives a message from the master.

Figure 1: Schematic view of the JUNO detector.

## 3 Adopting modern software development practices for JUNOSW According to the best practices on software development and deployment from the HSF (HEP Software Foundation) [9], modern software tools are adopted by JUNOSW. As shown in Figure 2, the development and deployment tools are all migrated to modern ones, including CMake, Git/GitLab, Docker and Kubernetes. Meanwhile, some high-level scripts are still developed to help users and developers. The migration consists of three stages. * In the first stage, the CMT-based configurations are migrated to CMake-based ones. Several CMake macros are developed to help users compile libraries and create setup scripts. With the help of these CMake macros, developers only need to define the name of a package, the dependent targets, and the additional environment variables. * During the second stage, the repository is migrated from SVN to a Git repository. 
To reduce the size of the repository, the original SVN repository is split into two: one for the source code; and another for the data. Then the latest snapshot of the source code is imported into a new Git repository. The data is put into another Git repository based on Git-LFS (Git Large File Storage). The original histories are also imported into a dedicated Git repository for archival purposes. * In the third stage, the monolithic project is split into multiple projects. The installation script is moved to a dedicated repository called junoenv. The common packages are moved into a new repository called CommonSW. The CMake macros are modified to handle the dependencies between different projects automatically. In order to support the partial checkout and partial build, a git sub-command named git-junoenv is also developed. Figure 2: Overview of tools for software development and deployment in JUNOSW. ## 4 Software development ### Migration to CMake CMake macros and functions have been developed to put all the common functionalities in the same place. As there are some existing conventions defined in CMT, some of them are used in the CMake macros. According to the instructions in Modern CMake, the following rules have been used in the software development with CMake: * The source code, build directory and installation directory of a project are separated. Even though CMT adopts a similar rule, CMT puts all the build directories under the package directories. When moving to CMake, they are all separated to keep the source code clean. An example is the source code generation for the event data model. When using CMT, the files are generated in the source code directory, which causes the check-in by mistakes sometimes. After moving to CMake, these files are generated under build directories. * A project is organized into packages. A package consists of a header directory for public interfaces, a source directory for the private headers and detailed implementation, a python directory for exporting the library in Python, a share directory for the regularly used scripts, and a test directory for the testing scripts. * A CMake target is used when building a package. It is used to represent a shared library, a module library or an executable. Its dependencies on the other different packages are described by the other CMake targets. The CMake target properties are used to control how a target is built instead of using global settings. * CMake generator expression is used to control the flags instead of using the if statement in CMake. This is useful when the same package is built for online and offline environments. In this case, the linking libraries could be different. By using the generator expression, the libraries could be enabled or disabled by checking an option defined in CMake. * Macros PKG and EDM are developed to build a regular package and a package containing event data model respectively. The macro EDM generates C++ source code and ROOT dictionaries at the CMake configuration stage, and builds a shared library at build stage. The macro PKG creates a shared library by default. If a module library needs to be created, then an option MODULE is needed. If there is no library or executable created, a custom target will be created to install the python and share directories. * Environment variables of packages are collected in the macros PKG and EDM. 
By adding package names to a global property in the project, all the information of a package can be accessed, including the environment variables. A CMake script is used to create both bash and tcsh scripts before installation. All the environment variables are added at the end of the scripts. * When a project is installed, a CMake config will be created automatically, including all the targets within the same namespace. Another project needs to use the CMake config file to locate the project and load the exported targets. * The CMake commands and options are put in the build.sh script. Below is an example of CMakeLists.txt for the package Geometry:

```
PKG(Geometry
    DEPENDS
        Identifier
        $<$<NOT:$<BOOL:${BUILD_ONLINE}>>:Parameter>
        Boost::filesystem Boost::system
        Boost::python Python::Python
        ROOT::Geom
    SETENV
        JUNO_GEOMETRY_PATH="$ENV{JUNOTOP}/data/Detector/Geometry"
)
```

In this example, the PKG macro declares the package name. As there are no explicit files to be compiled, all the files under the source code directory will be used. As there is no option MODULE, a shared library will be created. When compiling and building this package, it will depend on several libraries, which are defined after the DEPENDS option. As mentioned before, the target names are used. Both Identifier and Parameter are from JUNOSW, while the others are from external libraries. The target Parameter is not used if the software is built for the online environment. Below is another example of the build script, which consists of three steps:

```
function run-build() {
    local installdir=$(install-dir)
    local blddir=$(build-dir)
    check-build-dir
    check-install-dir
    pushd $blddir

    cmake .. $(check-var-enabled graphviz) \
             $(check-var-enabled withoec) \
             $(check-var-enabled online) \
             $(check-var-enabled PerformanceCheck) \
             -DCMAKE_CXX_STANDARD=17 \
             -DCMAKE_BUILD_TYPE=$(cmake-build-type) \
             -DCMAKE_INSTALL_PREFIX=$installdir \
        || error: "ERROR Found during cmake stage."
    local njobs=-j$(nproc)
    cmake --build . $njobs || error: "ERROR Found during make stage."
    cmake --install . || error: "ERROR Found during make install stage."

    popd
}
```

### Partial checkout and build using git-junoenv As Git is already widely used in the HEP community, the migration to Git is not so difficult. After the migration was done, users requested support for partial checkout and build, which is common when using CMT. In order to support partial checkout, git sparse checkout is used. For the partial build, the CMakeLists.txt is set up with customized build targets. In order to support the customized build targets, users are allowed to provide their own file, named CMakeLists.user.txt. This file can be edited by users, or controlled by the git-junoenv tool. When using git-junoenv to check out packages partially, these package names will be registered into CMakeLists.user.txt automatically. This can also be used to build packages for the online environment. Below is an example:

```
if(BUILD_ONLINE)
    message(STATUS "Using online OEC packages lists")
    include(${CMAKE_SOURCE_DIR}/CMakeLists.online.txt)
elseif(EXISTS "${CMAKE_SOURCE_DIR}/CMakeLists.user.txt")
    message(STATUS "Using user customized packages lists")
    find_package(junosw)
    include("${CMAKE_SOURCE_DIR}/CMakeLists.user.txt")
else()
    message(STATUS "Using default packages lists")
    include("${CMAKE_SOURCE_DIR}/CMakeLists.default.txt")
endif()
```

After both partial checkout and build are working, a shell script called git-junoenv is created. 
By prefixing the command name with git, the script becomes a sub-command of git. Below is an example of its usage:

```
$ git junoenv init-project junosw && cd junosw   # get the junosw project without packages
$ git junoenv list-pkgs                          # list all the available packages
$ git junoenv add-pkg Reconstruction/OMILREC     # add a package
```

When users need to develop with JUNOSW, the first step is using init-project to clone the code from the official repository. In order to hide all the packages, both the sparse and no-checkout options are used during the git clone. After cloning, the script checks out the CMake-related code and initializes the user's CMake file. Users can then list all the available packages in the project; the command git ls-files is used to list all the directories containing a CMakeLists.txt. Finally, users can use add-pkg to check out and enable a package.

## 5 Software deployment

### junoenv: the installation script

The installation script junoenv is inspired by the ENV project [10], which collects all the necessary installation scripts in the same repository. There are more than 50 scripts for building external libraries, and about 30 libraries are deployed in the official release. Modularized bash functions are used to describe the metadata of the external libraries. When installing a package, junoenv loads the metadata of the package and drives its installation. Five steps are defined during the installation:

* get: download the source code by cURL, wget or git;
* conf: configure the package;
* make: build the package;
* install: install the package;
* setup: create setup scripts for both bash and tcsh.

Five corresponding common bash functions are in charge of these steps. If additional configuration is needed, a package can define its own functions to override the defaults. The following is an example:

```
function juno-ext-libs-cmake-conf- {
    local msg="=== $FUNCNAME: "
    # begin to configure
    echo $msg ./bootstrap --prefix=$(juno-ext-libs-cmake-install-dir)
    ./bootstrap --prefix=$(juno-ext-libs-cmake-install-dir)
}
function juno-ext-libs-cmake-conf {
    juno-ext-libs-PKG-conf cmake
}
```

This is used to configure the package CMake. As CMake does not use a configure script by default, an additional function suffixed with a dash is defined to override the default behavior. The invoking procedure is: junoenv invokes the conf function of CMake, named juno-ext-libs-cmake-conf; this function invokes the common function juno-ext-libs-PKG-conf; the common function then invokes the overridden function.

Reproducibility is important during deployment. For a dedicated release of JUNOSW, the versions of all external libraries need to be recorded, so a shell script is used to collect them. When deploying a release, the corresponding script is loaded. The shell script itself is created by invoking the vlist command, which prints all the installed packages and their versions. Below is an example of this script:

```
function juno-ext-libs-git-version- { echo 2.37.3; }
function juno-ext-libs-cmake-version- { echo 3.24.1; }
function juno-ext-libs-python-version- { echo 3.9.14; }
function juno-ext-libs-python-setuptools-version- { echo 58.1.0; }
function juno-ext-libs-python-pip-version- { echo 22.2; }
```

All the software is deployed into CVMFS. When the software is deployed, the installation prefix can differ from the one used when building the software. For most packages, this is not an issue.
However, there are still several packages that hardcode the paths, such as physics generators. Inspired by the package manager spack [11], the build and deployment paths are set to the same length, and the build path is replaced by the deployment path with the tool sed when the software is deployed into CVMFS.

### Docker images for continuous integration

Continuous integration (CI) is an important part of software development. If building JUNOSW starts from the external libraries, it takes quite a long time, so the external libraries should be pre-installed to reduce the CI running time. Two types of Docker images are explored:

* A lightweight image with the CVMFS client installed. The image is based on CentOS 7, and its compressed size is 372.87 MB. This is useful for both CI and developers with good network connections. The Docker container has to run in privileged mode so that the CVMFS client can work correctly.
* A full image with all the external libraries installed. There are two flavors: one based on CentOS 7 with a size of 17.29 GB, and another based on Ubuntu 22.04 with a size of 20.37 GB. No special privilege is needed to run the container.

The lightweight image is chosen for the GitLab CI. The CVMFS client needs to be available before setting up the JUNOSW software environment. Below is the YAML configuration file:

```
variables:
  JUNOTOP: /cvmfs/juno.ihep.ac.cn/centos7_amd64_gcc1120/Pre-Release/J23.1.x

default:          # Set the docker image
  image: mirguest/juno-cvmfs

stages:           # List of stages for jobs, and their order of execution
  - build
  - test

build-job-gcc:    # This job runs in the build stage, which runs first.
  stage: build
  script:
    - sudo mount -t cvmfs juno.ihep.ac.cn /cvmfs/juno.ihep.ac.cn
    - source $JUNOTOP/setup.sh
    - ./build.sh
```

The benefit is that the Docker image can be reused when the external libraries are upgraded: in the above example, only JUNOTOP needs to be updated in the YAML file.

### GitLab runners in Kubernetes cluster

GitLab runners are in charge of the execution of the CI jobs. They need to be deployed and associated with the GitLab projects. As there are multiple projects, GitLab group runners are set up in a self-hosted Kubernetes cluster. GitLab agents are used to connect GitLab and Kubernetes, and the GitLab runners are then managed by the agents. All the configurations are managed in GitLab repositories. The GitLab agent is set up as below:

```
gitops:
  manifest_projects:
    - id: JUNO/offline/gitlab-agent
      default_namespace: junooffline
      paths:
        - glob: 'manifests/*.{yaml,json}'
        - glob: '/**/*.{yaml,json}'

ci_access:
  groups:
    - id: JUNO/offline
```

The GitLab runners are then installed with the cluster management project, which is a Git repository. The configuration of the GitLab runner is enabled first, and then the runners are set up automatically.

```
repositories:
  - name: gitlab
    url: https://charts.gitlab.io

releases:
  - name: runner
    namespace: gitlab-managed-apps
    chart: gitlab/gitlab-runner
    version: 0.44.0
    installed: true
    values:
      - values.yaml.gotmpl
```

## 6 Conclusions

The tools used for JUNO software development have been migrated from CMT and SVN to CMake, Git/GitLab, Docker and Kubernetes. Additional scripts have also been developed to help the users. The migration was completed in late 2022. More than 160 members are registered in the JUNO GitLab.
In the past nine months, more than 300 Merge Requests have been merged into the official JUNOSW repository, and more than 2,400 pipelines have been executed with a success ratio of 92.04%. ## Acknowledgments This work is supported by National Natural Science Foundation of China (12375195, 12025502, 11805223), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA10010900), and Youth Innovation Promotion Association, CAS.
2309.05684
Particle dispersion and clustering in surface ocean turbulence with ageostrophic dynamics
Upper-ocean turbulent flows at horizontal length scales smaller than the deformation radius depart from geostrophic equilibrium and develop important vertical velocities, which are key to marine ecology and climatic processes. Due to their small size and fast temporal evolution, these fine scales are difficult to measure during oceanographic campaigns. Instruments such as Lagrangian drifters have provided another way to characterize these scales through the analysis of pair-dispersion evolution, and have pointed out striking particle convergence events. By means of numerical simulations, we investigate such processes in a model of surface-ocean turbulence that includes ageostrophic motions. This model originates from a Rossby-number expansion of the primitive equations and reduces to the surface quasi-geostrophic model, a paradigm of submesoscale dynamics, in the limit of vanishing Rossby number. We focus on the effect of the ageostrophic dynamics on the pair-dispersion and clustering properties of Lagrangian tracer particles at the ocean surface. Our results indicate that while over long times the pair separation process is barely affected by the ageostrophic component of the velocity field, the latter is responsible for the formation of temporary particle aggregates, and the intensity of this phenomenon increases with the Rossby number. We further show that Lagrangian tracers preferentially accumulate in cyclonic frontal regions, which is in agreement with observations and other more realistic modeling studies. These findings appear interesting to improve the understanding of the turbulent transport by ocean fine scales, and in light of upcoming, new high-resolution satellite data of surface velocity fields.
Michael Maalouly, Guillaume Lapeyre, Bastien Cozian, Gilmar Mompean, Stefano Berti
2023-09-11T09:21:30Z
http://arxiv.org/abs/2309.05684v2
# Particle dispersion and clustering in surface ocean turbulence with ageostrophic dynamics ###### Abstract Upper-ocean turbulent flows at horizontal length scales smaller than the deformation radius depart from geostrophic equilibrium and develop important vertical velocities, which are key to marine ecology and climatic processes. Due to their small size and fast temporal evolution, these fine scales are difficult to measure during oceanographic campaigns. Instruments such as Lagrangian drifters have provided another way to characterize these scales through the analysis of pair-dispersion evolution, and have pointed out striking particle convergence events. By means of numerical simulations, we investigate such processes in a model of surface-ocean turbulence that includes ageostrophic motions. This model originates from a Rossby-number expansion of the primitive equations and reduces to the surface quasi-geostrophic model, a paradigm of submesoscale dynamics, in the limit of vanishing Rossby number. We focus on the effect of the ageostrophic dynamics on the pair-dispersion and clustering properties of Lagrangian tracer particles at the ocean surface. Our results indicate that while over long times the pair separation process is barely affected by the ageostrophic component of the velocity field, the latter is responsible for the formation of temporary particle aggregates, and the intensity of this phenomenon increases with the Rossby number. We further show that Lagrangian tracers preferentially accumulate in cyclonic frontal regions, which is in agreement with observations and other more realistic modeling studies. These findings appear interesting to improve the understanding of the turbulent transport by ocean fine scales, and in light of upcoming, new high-resolution satellite data of surface velocity fields. Submesoscales, turbulence, surface quasi-geostrophy, Lagrangian dispersion, clustering ## I Introduction Ocean flows at scales comparable and smaller than the deformation radius, i.e. in the meso and submesoscale ranges, are characterized by quasi two-dimensional (2D) turbulent dynamics. In spite of this important common feature, remarkable differences distinguish submesoscales from mesoscales. Flow structures in the mesoscale range have horizontal sizes of several tens to few hundreds of kilometers and they extend over depths of \(O(1000)\) m. Such eddies contain most of the kinetic energy in the ocean. Their vertical velocities, however, are quite small, namely of \(O(1-10)\) m day\({}^{-1}\). On the other hand, submesoscales correspond to eddies and, importantly, filaments with smaller horizontal scales of \(O(1-10)\) km. These structures reach depths of only \(O(100)\) m, and evolve on faster timescales of \(O(1)\) day. Theoretical arguments and high-resolution numerical simulations indicate that their vertical velocities can be up to an order of magnitude larger than the mesoscale ones [1; 2]. They are then expected to provide a relevant contribution to vertical transport, and thus to play a key role for both marine ecology and the coupling between the ocean and the atmosphere [3]. In recent years, many evidences about submesoscales have emerged from Lagrangian drifter data. Based on the possibility to relate particle pair-dispersion statistics to the properties of the underlying turbulent flow (see, e.g., Ref. [4]), several authors focused on the determination of the laws controlling the spreading process of drifters deployed at the surface of the ocean. 
By taking this approach, and computing the scale-by-scale pair separation rate, regimes of enhanced relative dispersion at fine scales were detected in different regions, pointing to energetic submesoscales (see, e.g., Refs. [5; 6; 7; 8]). Another striking feature that was recently observed, first in the Gulf of Mexico [9] and later in other regions, is the occurrence of temporary drifter clustering. This means that while globally Lagrangian particles still spread in time, every now and then many of them are brought together in regions of very limited size. Such convergence events are associated with large vorticity (and divergence) values highlighting the departure from geostrophic balance - meaning that the Rossby number, roughly estimated by \(Ro=\zeta/f\) (with \(\zeta\) relative vorticity and \(f\) Coriolis frequency), is not negligibly small - and with the onset of important vertical velocities. Explaining this phenomenon is currently an open point, and requires going beyond the quasi-geostrophic (QG) approximation, obtained from a development of the basic equations of motion (primitive equations) at the lowest order in \(Ro\), in which the flow is strictly horizontal and non-divergent. To include the physics of both clustering and dispersion, a natural possibility is to improve the dynamics of an idealized QG model by adding higher-order corrections when developing (in \(Ro\)) the primitive equations. While, by construction, the resulting model does not include important sources of ageostrophy, such as high-frequency motions (internal gravity waves and tides), which are further off from geostrophic equilibrium, it properly accounts for ageostrophic motions associated with frontogenesis. Moreover, it allows separating the geostrophic and ageostrophic flow components in a straightforward manner. Understanding the role of ageostrophic turbulent dynamics on Lagrangian transport is relevant in view of future satellite measurements, such as those from the Surface Water and Ocean Topography (SWOT) mission. This satellite, launched at the end of 2022, has started measuring sea surface height (SSH) at a spatial resolution of \(\approx 15\) km, which represents an order of magnitude of improvement with respect to presently available data [10]. As a result, it should provide access to the fine mesoscale and submesoscale ranges at global scale. Determining to what extent small-scale processes, associated with non-negligible Rossby numbers, hinder the possibility to retrieve surface currents from SSH through geostrophic balance represents an important challenge for the exploitation and the theoretical interpretation of these new data. For this purpose, Lagrangian statistics based on drifter datasets appear promising; different from Eulerian ones, they reflect the temporal evolution of fluid parcels, and may thus enable a clear separation between fast (ageostrophic) processes, that could contaminate the satellite-derived velocity, and slower (geostrophic) ones. In this study, by means of numerical simulations, we investigate the spreading of Lagrangian tracer particles at the ocean surface in a model of upper-ocean turbulence derived as an extension of the QG approximation, and including ageostrophic effects. We particularly focus on the reproduction of Lagrangian convergence events, and on the quantification of the importance of the latter with increasing Rossby number. 
Furthermore, by comparing pair-dispersion statistics for particles advected by flows at different values of \(Ro\), we aim at assessing the relevance of ageostrophic motions on the relative dispersion process. This article is organized as follows. In Sec. II we introduce the flow model; the main features of its turbulent dynamics are discussed in Sec. III. The results of the analysis of Lagrangian particle statistics are reported in Sec. IV. There, we separately characterize the role of ageostrophic motions on relative dispersion (Sec. IV.1), and the clustering properties, as well as their relation with the flow structure (Sec. IV.2). Finally, discussions and conclusions are presented in Sec. V. ## II Model A convenient theoretical framework to address the dynamics of the upper ocean in the fine-scale range (scales comparable and, to some extent, smaller than the deformation radius) is offered by QG models. Indeed, these models allowed a relatively good understanding of the larger mesoscale [\(O(100)\) km] regime [1], and can be taken as the basis for model improvement when approaching the lower end [\(<O(10)\) km] of the fine-scale range. They are obtained from an expansion at lowest order in \(Ro\) of the momentum and buoyancy evolution equations, within the Boussinesq and hydrostatic approximations (see, e.g., Ref. [11]). The main dynamical equation, resulting from this approach, assumes constant stratification and states that in the interior of the considered fluid layer potential vorticity (PV) is conserved along the geostrophic flow. Surface quasi-geostrophy (SQG) [12; 13] is a special case of QG dynamics. Within this model the interior PV is assumed to be exactly equal to zero. The associated flow is then entirely driven by the evolution of surface buoyancy (or, equivalently, temperature). Previous studies highlighted the interest of this model for ocean submesoscale turbulence (see Ref. [13] for a review), for phytoplankton diversity [14] as well as Lagrangian dispersion [15; 16]. Indeed, SQG dynamics give rise to energetic small-scale flows, and are considered as one of the possible mechanisms of submesoscale generation via mesoscale straining processes. While other mechanisms can also be invoked, such as mixed-layer instabilities, which energize submesoscales also at depth and can be related to the seasonal cycle [17; 18], the SQG model presents the advantage of a simpler mathematical formulation. Observations, as well as realistic or primitive-equation-based simulations, however, revealed some important features, such as the asymmetry of vorticity statistics, with cyclones prevailing over anticyclones [19; 20; 21], and the occurrence of Lagrangian convergence events [22; 23; 9; 24], which cannot be explained by QG theory. In order to overcome the limitations of the QG framework, an interesting possibility is to extend it by including ageostrophic motions through the development of primitive equations to next order in \(Ro\). By doing so, one obtains the \(\text{QG}^{+1}\) system, which encompasses ageostrophic corrections [25; 26], potentially responsible of those phenomena. In the case of surface-driven dynamics, this approach leads to the so-called \(\text{SQG}^{+1}\) model. The latter was first introduced in an atmospheric context in Ref. [27], where it was shown through simulations of freely decaying turbulence that it gives rise to the expected cyclone-anticyclone asymmetry. 
Here we consider the \(\text{SQG}^{+1}\) system to investigate surface-ocean turbulence in the fine-scale range, a question that to our knowledge has not been addressed before. Our main aim is to provide a minimal model, based on the fundamental dynamical equations, accounting for the above-mentioned submesoscale features, and to use it to investigate the effect of the ageostrophic flow on the spatial distribution of tracer particles. Other models based on a Rossby-number development of primitive equations exist, such as the surface semi-geostrophic one [28], which reproduces both cyclone-anticyclone asymmetries and strong vertical velocities at fronts. Here we chose the \(\text{SQG}^{+1}\) model as several of its properties have been well documented. In the following we briefly introduce the mathematical formulation of the model, adapting the original derivation (see Ref. [27] for more details) to the present oceanic conditions. We assume that the vertical coordinate is \(-\infty<z\leq 0\), and that the dynamics are controlled by the lateral advection of temperature (buoyancy) at the surface (\(z=0\)). The main governing equation retains the same form as in the SQG system (corresponding to \(Ro=0\)), and it expresses the conservation of surface temperature along the surface flow. This reads: \[\partial_{t}\theta^{(s)}+\mathbf{u}^{(s)}\cdot\mathbf{\nabla}\theta^{(s)}=0, \tag{1}\] where \(\theta(\mathbf{x},t)\) is the temperature fluctuation field, the superscript (\(s\)) indicates quantities evaluated at \(z=0\), and the total velocity field is given by the sum of the geostrophic component \(\mathbf{u}_{g}\) (computed at the lowest order in \(Ro\)) and two (next order in \(Ro\)) ageostrophic terms \(\mathbf{u}_{\varphi}\) and \(\mathbf{u}_{a}\), \[\mathbf{u}=\mathbf{u}_{g}+Ro\left(\mathbf{u}_{\varphi}+\mathbf{u}_{a}\right). \tag{2}\] The geostrophic velocity can be expressed in terms of the streamfunction \(\phi\): \[\mathbf{u}_{g}=\left(-\partial_{y}\phi,\partial_{x}\phi\right), \tag{3}\] where \(x\) and \(y\) denote the horizontal coordinates. Note that here and in what follows we use nondimensional units. As in SQG, the streamfunction is related to surface temperature through \[\phi=\mathscr{F}^{-1}\left[\frac{\mathscr{F}(\theta^{(s)})}{k}e^{kz}\right], \tag{4}\] where \(\theta\) is here taken at lowest order, \(\mathscr{F}\) stands for the horizontal Fourier transform and \(k\) for the horizontal wavenumber modulus. The above relation is a direct consequence of the assumption of zero interior PV, \(\nabla_{H}^{2}\phi+\partial_{z}^{2}\phi=0\) (with \(\nabla^{2}\) the Laplacian operator and the subscript \(H\) indicating that only horizontal coordinates are considered), with the boundary conditions \(\theta^{(s)}=\partial_{z}\phi|_{z=0}\) and \(\partial_{z}\phi\to 0\) for \(z\to-\infty\). 
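For concreteness, the surface inversion of Eqs. (3)-(4) can be sketched in a few lines of NumPy. This is only an illustration: the grid size and the test temperature field below are arbitrary placeholders, not the configuration used in this study.

```
# Minimal NumPy sketch (illustration only) of the surface SQG inversion: given the surface
# temperature theta_s on a doubly periodic grid, Eq. (4) at z = 0 gives phi_hat = theta_hat / k,
# and Eq. (3) gives the geostrophic velocity u_g = (-d_y phi, d_x phi).
import numpy as np

N, L = 256, 2.0 * np.pi                            # assumed grid size and domain length
x = np.linspace(0.0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
theta_s = np.sin(3.0 * X) * np.cos(2.0 * Y)        # placeholder surface temperature field

k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular wavenumbers
KX, KY = np.meshgrid(k1d, k1d, indexing="ij")
K = np.sqrt(KX**2 + KY**2)
K[0, 0] = 1.0                                      # avoid dividing the (zero-mean) k = 0 mode

theta_hat = np.fft.fft2(theta_s)
phi_hat = theta_hat / K                            # Eq. (4) evaluated at the surface
phi_hat[0, 0] = 0.0

u_g = -np.real(np.fft.ifft2(1j * KY * phi_hat))    # -d_y phi
v_g = np.real(np.fft.ifft2(1j * KX * phi_hat))     #  d_x phi
```

The zero-wavenumber mode is excluded before dividing by \(k\), which is the standard way of making the inversion well defined for a zero-mean surface temperature.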
The ageostrophic velocity components, absent in SQG, can be expressed as \[\mathbf{u}_{\varphi}=\left(-\partial_{y}\varphi,\partial_{x}\varphi\right), \tag{5}\] \[\mathbf{u}_{a}=-\partial_{z}\mathbf{A}, \tag{6}\] where the functions \(\varphi\) and \(\mathbf{A}\) are related to surface and lower-order quantities by: \[\varphi=\frac{\theta^{2}}{2}-\mathscr{F}^{-1}\left\{\frac{\mathscr{F}\left[ \theta^{(s)}(\partial_{z}\theta)^{(s)}\right]}{k}e^{kz}\right\}, \tag{7}\] \[\mathbf{A}=-\theta\mathbf{u}_{g}+\mathscr{F}^{-1}\left[\mathscr{F}(\mathbf{\theta}^{(s)} \mathbf{u}_{g}^{(s)})e^{kz}\right], \tag{8}\] again with \(\theta\) taken at lowest order. Equation (7) follows from the requirement of having zero interior PV at all orders in \(Ro\), while Eq. (8) is a form of the omega equation obeyed by vertical velocities (see also Refs. [25; 13], [25], and [27]). The functions \(\varphi\) and \(\mathbf{A}\) are such that \(\partial_{z}\varphi=0\) and \(\mathbf{A}=\mathbf{0}\) at \(z=0\). Note that \(\mathbf{u}_{a}\) has both a rotational and a divergent component from (8) while \(\mathbf{u}_{\varphi}\) is nondivergent. Remark that the model specified by Eqs. (1)-(8), by construction, accounts for ageostrophic motions related to fronts, meaning those associated with next-order corrections to the balanced (i.e. geostrophic) flow. Other sources of ageostrophy are instead excluded. In particular, this applies to higher-frequency motions, such as internal gravity waves and tides, which are not close to geostrophic equilibrium. ## III Turbulent flow properties The model evolution equations (Sec. II) are numerically integrated by means of a pseudospectral method on a doubly periodic square domain of side \(L_{0}=2\pi\) at resolution \(N^{2}=1024^{2}\), starting from an initial condition corresponding to a streamfunction whose Fourier modes have random phases and small amplitudes. The code was adapted from an original one developed by Ref. [29] and previously used in Refs. [15; 18; 30]. We consider the forced and dissipated version of Eq. (1), which allows reaching a statistically stationary flow state. Specifically, we add on the right-hand side of the equation a random (\(\delta\)-correlated in time) forcing acting over a narrow range of wavenumbers \(4\leq k_{f}\leq 6\) (and whose intensity is \(F=0.02\)), as well as a hypofriction term \(-\alpha\mathbf{\nabla}_{H}^{-2}\mathbf{\theta}\) to remove energy from the largest scales, and a hyperdiffusion term \(-\nu\mathbf{\nabla}_{H}^{4}\mathbf{\theta}\) to assure small-scale dissipation and numerical stability. For the dissipative terms we set \(\alpha=0.5\) and we determine \(\nu\) according to the condition \(k_{max}l_{\nu}\gtrsim 6\), with \(l_{\nu}\) the dissipative scale (estimated for \(Ro=0\)). These choices correspond to quite large dissipations, and will limit the number of active scales; however, it turned out that they were necessary for controlling the numerical stability of the code at the largest \(Ro\) value explored. Indeed, the integration of the SQG\({}^{+1}\) system is delicate due to the effective compressibility of the horizontal flow introduced by the ageostrophic corrections, which creates strong gradients that are difficult to resolve. The surface-temperature evolution equation, Eq. (1) with forcing and dissipation terms, is advanced in time using a third-order Adams-Bashforth scheme. 
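As a schematic illustration of such a time-stepping strategy (and not the actual implementation used for the simulations), a third-order Adams-Bashforth update of the spectral surface-temperature field can be written as follows; the linear-damping tendency in the toy usage is an assumption standing in for the full dealiased advection, forcing, hyperdiffusion and hypofriction terms.

```
# Schematic third-order Adams-Bashforth stepper for a spectrally discretized tendency
# d theta_hat / dt = rhs(theta_hat); all names and the toy tendency below are placeholders.
import numpy as np

def ab3_step(theta_hat, rhs_hist, rhs_func, dt):
    """One explicit step; rhs_hist holds the tendencies of the two previous steps."""
    rhs_now = rhs_func(theta_hat)
    if len(rhs_hist) == 0:        # start-up step: forward Euler
        theta_new = theta_hat + dt * rhs_now
    elif len(rhs_hist) == 1:      # second step: second-order Adams-Bashforth
        theta_new = theta_hat + dt * (1.5 * rhs_now - 0.5 * rhs_hist[-1])
    else:                         # third-order Adams-Bashforth
        theta_new = theta_hat + dt / 12.0 * (23.0 * rhs_now
                                             - 16.0 * rhs_hist[-1]
                                             + 5.0 * rhs_hist[-2])
    rhs_hist.append(rhs_now)
    return theta_new, rhs_hist[-2:]

# toy usage with a simple linear-damping tendency (illustration only)
rhs = lambda th: -0.1 * th
theta_hat = np.ones((16, 16), dtype=complex)
history = []
for _ in range(10):
    theta_hat, history = ab3_step(theta_hat, history, rhs, dt=1e-2)
```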
We verified that the results are essentially unchanged when using a fourth-order Runge-Kutta algorithm, but the latter is computationally less efficient. The time step was set to the quite small value \(dt=10^{-4}\), which was verified to ensure temporally converged results for different values of the Rossby number. The latter being the main control parameter, we performed different simulations by increasing it from \(Ro=0\) to \(Ro=0.075\), which is the largest value we can safely reach. In the following we present the main characteristics of the turbulent flows, for both \(Ro=0\) (SQG) and \(Ro>0\) (SQG\({}^{+1}\)) that will be of interest for the dynamics of Lagrangian tracer particles. ### Kinetic energy spectra When the Rossby number is increased, starting from \(Ro=0\), the flow develops stronger and stronger gradients and the total kinetic energy grows monotonically with \(Ro\) (not shown). Its spatial structure is characterized by eddies of different sizes and, especially, by sharp fronts (see also Sec. IV). Kinetic energy spectra \(E(k)\) computed from the total velocity \(\mathbf{u}\), for the smallest (\(Ro=0\)) and the largest (\(Ro=0.075\)) Rossby number are shown in Fig. 1. They display a scaling close to \(k^{-2}\) (see inset of Fig. 1) over about a decade. They are flatter than in QG barotropic dynamics, where \(E(k)\sim k^{-3}\). However they are slightly steeper than the theoretical prediction \(k^{-5/3}\) for the direct cascade of buoyancy variance in the SQG system. This steepening effect is essentially independent of \(Ro\) and is more important at low wavenumbers, suggesting that its origin likely lies in the presence of large-scale persistent structures of size \(\approx 2\pi/k_{f}\), as also noted in previous studies of SQG and SQG\({}^{+1}\) turbulence [30; 31; 13; 27]. At high wavenumbers the scaling range is limited by the large values of the dissipation coefficients, which are needed to control the formation of very intense gradients. At low wavenumbers, we do not observe the \(k^{-1}\) scaling corresponding to an inverse cascade in SQG, as the forcing acts on large scales and hypofriction is strong enough to damp modes below \(k_{f}\). ### Vorticity statistics As mentioned earlier, an important feature of oceanic (and atmospheric) flows, which is not captured by QG theory, is the asymmetry of vorticity statistics. This was detected in data from both observations [19; 21] and primitive-equation simulations [20; 32]. The latter numerical works also highlighted the role of surface dynamics on the prevalence of cyclonic over anticyclonic flow regions. Different mechanisms can explain this asymmetry. A first one is related to nonlinear Ekman pumping. As the stress at the air-sea interface is proportional to the difference of winds and currents, it creates a surface drag causing the decay of ocean anticyclones [33; 34]. Another mechanism relies on the vortex-stretching term in the vorticity equation \(\partial_{t}\zeta\sim(f+\zeta)\partial_{z}w+...\) for finite Rossby numbers. Here \(w\) is the vertical velocity, \(f\) the Coriolis frequency and relative vorticity is defined as \(\zeta=\partial_{x}v-\partial_{y}u\) [where \(\mathbf{u}=(u,v)\) is the horizontal flow]. As discussed in previous works (see, e.g., Refs. [27; 35], at fronts, through the ageostrophic term \(\zeta\partial_{z}w\), vortex stretching amplifies more cyclonic vorticity (on the heavy side of the front) than anticyclonic vorticity (on the light side of the front). 
Note that within a purely QG framework vortex stretching would instead give a contribution to the vorticity growth rate (\(\partial_{t}\zeta\sim f\partial_{z}w\)) that is independent of the sign of \(\zeta\). Clear asymmetry in favor of stronger cyclones is also observed in QG\({}^{+1}\) and SQG\({}^{+1}\) models in which next-order corrections in \(Ro\) to QG equations are included [25; 27]. It was argued that the symmetry is broken because the divergence due to ageostrophic frontogenesis at small scales accelerates (slows down) the contraction of dense (light) filaments [27; 36], which gives rise to intense and localized cyclones, and weaker more broadly spread anticyclones. This is the case in our forced simulations of SQG\({}^{+1}\) turbulence as cyclones prevail over anticyclones whenever \(Ro>0\), and vorticity statistics are similar to those in decaying turbulence at fixed Rossby number [27]. The probability density function (pdf) of \(\zeta\), rescaled by its standard deviation \(s_{\zeta}\) and averaged over time, is shown in Fig. 2 for \(Ro=0\) and \(Ro=0.075\). As it can be seen in the figure, the right tail of the pdf (\(\zeta>0\)) is much higher than the left one (\(\zeta<0\)) when \(Ro=0.075\), while the two tails essentially overlap over a whole range of \(|\zeta|\) values for \(Ro=0\). The skewness of the vorticity distribution \(S_{\zeta}=\langle\zeta^{3}\rangle/\langle\zeta^{2}\rangle^{3/2}\) grows, approximately quadratically, with \(Ro\) (see inset of Fig. 2), indicating that the magnitude of the asymmetry increases with the intensity of the ageostrophic flow. Based on the results in this section, the SQG\({}^{+1}\) simulations considered here appear appealing to explore the transport and dispersion properties of Lagrangian tracers in turbulent flows, relevant for surface-ocean dynamics and possessing (weakly) ageostrophic components. Figure 2: Probability density function of vorticity \(\zeta\) (rescaled by its rms value \(s_{\zeta}\)), temporally averaged over several flow realizations in the statistically steady state, for \(Ro=0\) (empty black points) and \(Ro=0.075\) (filled red points), with different point types indicating \(\zeta>0\) and \(\zeta<0\). For reference, the standard Gaussian distribution is also shown (dashed gray curve). Inset: vorticity skewness \(S_{\zeta}\) as a function of the Rossby number; the solid green line corresponds to \(S_{\zeta}\sim Ro^{1.87}\). Figure 1: Kinetic energy spectra, temporally averaged over several flow realizations in the statistically steady state for \(Ro=0\) and \(Ro=0.075\). The dashed black line in the main panel corresponds to the expectation for SQG dynamics. Inset: the same spectra compensated by \(k^{-2}\) and rescaled with a coefficient such that, in both cases, the scaling range corresponds to the wavenumbers for which \(E(k)k^{2}\simeq 1\). Lagrangian dynamics We now consider the dynamics of Lagrangian tracer particles in the turbulent flows produced by the model of Sec. II, both at \(Ro=0\) and at \(Ro>0\). In order to qualitatively compare the main features of our results with those from ocean drifters, we restrict the motion to occur at the surface. Particles then move according to the following equation: \[\frac{d\mathbf{x}_{i}}{dt}=\mathbf{u}(\mathbf{x}_{i}(t),t), \tag{9}\] where \(\mathbf{x}_{i}=(x_{i},y_{i})\) is the horizontal position of particle \(i\) (with \(i=1,...,N_{p}\)) and \(\mathbf{u}(\mathbf{x}_{i},t)\) is the total velocity (i.e. 
including the ageostrophic component, for \(Ro\neq 0\)) at its position. Equation (9) is numerically integrated using a third-order Adams-Bashforth scheme and bicubic interpolation in space of the velocity field at particle positions [37]. Except where explicitly stated, we assume that the particle motion occurs in an infinite domain and use the spatial periodicity of the Eulerian flow to compute the Lagrangian velocities outside the computational box. The temporal accuracy of the resulting trajectories was verified by varying the time step, and also according to the Lagrangian acceleration criteria proposed in Ref. [38]. A total of \(N_{p}=49152\) particles are seeded in the turbulent flows once the latter are at a statistically steady state. Their initial positions correspond to a regular arrangement of \(M=128\times 128\) triplets over the entire domain. Each triplet forms an isosceles right triangle, with a particle pair along \(x\) and one along \(y\), both of which are characterized by an initial separation \(R(0)=\Delta x/2\) (with \(\Delta x\) the grid spacing). In the following, we introduce the distance between two particles \(R(t)=\sqrt{R_{x}(t)^{2}+R_{y}(t)^{2}}\) [where \(R_{x}(t)\) and \(R_{y}(t)\) are the separations along \(x\) and \(y\), respectively, at time \(t\)]. To compute dispersion statistics only original pairs were used, which in our case, amounts to \(32768\) pairs. It was verified that the pair separation statistics do not depend on the initial orientation (along \(x\) or \(y\) direction) of the pairs. Moreover, provided that enough pairs are chosen, the results are mostly insensitive to their number. An illustration of typical particle spatial distributions, at a given instant of time in the statistically steady state of the flow, is shown in Fig. 3 for both \(Ro=0\) and \(Ro=0.075\), together with the corresponding vorticity fields. Here, particles are placed back in the original doubly periodic domain to see the effect of accumulation in space (while we assume that they leave this domain when computing dispersion statistics). Independently of the value of \(Ro\), vorticity is characterized by quite a filamentary structure in addition to almost elliptical vortices of various sizes. For nonzero \(Ro\) cyclonic eddies (\(\zeta>0\)) are more coherent than anticyclonic ones (\(\zeta<0\)), and vorticity is globally more intense in root-mean-square (rms) value (not shown). Concerning particles, it is here apparent that at \(Ro=0.075\) they do not uniformly spread over the spatial domain (as is the case for \(Ro=0\)), which highlights the occurrence of clustering. In the following, we will separately address the characterization of their relative dispersion process, and of their aggregation properties in the flow, for varying Rossby number. ### Pair-dispersion statistics Here, we examine the effect of varying the Rossby number on particle pair dispersion, using both fixed-time and fixed-scale indicators. The latter typically better allow to disentangle contributions from different flow scales [6; 15; 39; 40]. We then mainly focus on the scale-by-scale dispersion rate, by computing the finite-size Lyapunov exponent (FSLE) [39; 40], defined as: \[\lambda(\delta)=\frac{\log r}{\langle\tau(\delta)\rangle}\,, \tag{10}\] where the average is over all pairs and \(\tau(\delta)\) is the time needed to observe the separation growing from \(\delta\) to a scale \(r\delta\) (with \(r>1\)). 
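A simple way to estimate \(\lambda(\delta)\) from stored pair-separation time series, following Eq. (10), is sketched below. The synthetic separation data, the sampling interval and the threshold scales are assumptions made only for the example; the amplification factor is set to \(r=1.2\), the value used in the analysis presented later.

```
# Illustrative FSLE estimator, Eq. (10): lambda(delta) = ln(r) / <tau(delta)>, where tau is the
# time for a pair separation to grow from delta to r*delta. R[pair, time] is sampled every dt.
import numpy as np

def fsle(R, dt, delta0, n_scales=15, r=1.2):
    deltas = delta0 * r ** np.arange(n_scales)
    lam = np.full(n_scales, np.nan)
    for i, d in enumerate(deltas):
        taus = []
        for sep in R:                               # loop over particle pairs
            if not np.any(sep >= d):
                continue
            i0 = np.argmax(sep >= d)                # first time the pair reaches scale d
            grown = sep[i0:] >= r * d
            if np.any(grown):
                taus.append(np.argmax(grown) * dt)  # time to grow from d to r*d
        if taus and np.mean(taus) > 0.0:
            lam[i] = np.log(r) / np.mean(taus)      # Eq. (10)
    return deltas, lam

# toy usage with synthetic, exponentially growing separations (assumption for the demo only)
rng = np.random.default_rng(0)
t = np.arange(4000) * 1e-2
R_demo = 1e-3 * np.exp(0.5 * t)[None, :] * rng.lognormal(0.0, 0.1, size=(64, 1))
print(fsle(R_demo, dt=1e-2, delta0=2e-3))
```

For exponentially separating pairs, as in the synthetic data above, the estimator returns a scale-independent plateau, which is the signature of nonlocal dispersion discussed below.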
In a nonlocal dispersion regime, for which the separation process is controlled by the largest flow features, and normally associated with a steep kinetic energy spectrum of the flow [\(E(k)\sim k^{-\beta}\), with \(\beta>3\)], the FSLE is expected to attain a scale-independent, constant value. This reflects in an exponential growth of the mean squared pair separation distance, i.e. relative dispersion: \[\langle R^{2}(t)\rangle=\langle|\mathbf{x}_{t}(t)-\mathbf{x}_{J}(t)|^{2}\rangle. \tag{11}\] Note that relative dispersion is a fixed-time metric, with the average computed at time \(t\), over all pairs \((i,j)\) such that at \(t=0\) (the release time) \(|\mathbf{x}_{i}(0)-\mathbf{x}_{j}(0)|=R(0)\). When the turbulent flow possesses energetic small scales [\(E(k)\sim k^{-\beta}\), with \(\beta<3\)], the separation process should be controlled by velocity increments at a length scale comparable to the distance between particles within a pair. The dispersion regime is therefore referred to as a local one, and both the FSLE and relative dispersion are expected to display power-law behaviors: \(\lambda(\delta)\sim\delta^{(\beta-3)/2}\) and \(\langle R^{2}(t)\rangle\sim t^{4/(3-\beta)}\), respectively. At separations larger than the largest flow scales, or at very large times, particles in a pair experience essentially uncorrelated velocities and their separation distance grows diffusively, implying that the FSLE scales as \(\lambda(\delta)\sim\delta^{-2}\) and relative dispersion as \(\langle R^{2}(t)\rangle\sim t\). Another indicator that may be used to discriminate between different dispersion regimes is the kurtosis of the separation distance: \[ku(t)=\frac{\langle R^{4}(t)\rangle}{\langle R^{2}(t)\rangle^{2}}. \tag{12}\] Under nonlocal dispersion, \(ku(t)\) should grow exponentially in time, while for local dispersion it should attain a constant value (equal to 5.6 for Richardson dispersion, expected for \(\beta=5/3\)) at intermediate times [15; 41]. At very large times, the kurtosis should in any case converge to \(ku=2\) corresponding to the diffusive limit of dispersion [15; 41]. The FSLE measured in our simulations for different values of the Rossby number is shown in Fig. 4. Independently of \(Ro\), the curves are remarkably flat at small separations, and approach the diffusive behavior at the largest ones [larger than the flow integral lengthscale \(2\pi\int_{0}^{\infty}k^{-1}E(k)dk/\int_{0}^{\infty}E(k)dk]\). The slight deviations from the expected \(\delta^{-2}\) scaling are here likely due to the limited inertial range of our turbulent flows. Indeed, previous studies reported similar observations in simulations with reduced inertial ranges, and proposed the use of an alternative, pdf-based indicator [42] to improve the agreement with the large-scale theoretical prediction. No clear evidence of a power-law scaling \(\lambda(\delta)\sim\delta^{-1/2}\) [following from a kinetic energy spectrum \(E(k)\sim k^{-2}\)] is detected, except perhaps on a narrow range of intermediate separations (see inset of Fig. 4). This result suggests that the dispersion process is essentially nonlocal. This is also confirmed by the temporal evolution of the kurtosis (Fig. 5), which displays a fast growth at short times, and approaches 2 at large times. At intermediate times, \(ku(t)\) never approaches a constant plateau, which would correspond to a local dispersion regime. This behavior, pointing to nonlocal dispersion while local dispersion would be expected, may appear quite surprising. 
Interestingly, it bears some resemblance to measurements of drifter separation in the Gulf of Mexico [43; 44], once inertial oscillations are removed. One possibility to explain it is related to the presence of large-scale coherent structures in the flow, which can provide a dominant contribution to the dispersion process [31]. To test this hypothesis, we rescale the FSLE with the flow integral timescale \(T_{I}=\ell_{I}/\sqrt{E}\), with \(E\) the total kinetic energy. As it can be seen in Fig. 4, for all \(Ro\), the plateau values of the rescaled FSLE range between 1.1 and 0.8, which are close to 1, supporting this explanation. The values of FSLE (not rescaled by \(T_{I}\)) at small \(\delta\) slightly increase with the Rossby number (inset of Fig. 4), consistently with the increase of velocity gradients with \(Ro\). A similar trend is observed from the short-time behavior of relative dispersion, which grows faster for larger \(Ro\) (inset of Fig. 5). At later times, \(\langle R^{2}(t)\rangle\) does not present a clear scaling, though Figure 3: Vorticity normalized by its rms value for \(Ro=0\) (a) and \(Ro=0.075\) (c) at a fixed instant of time in statistically stationary conditions. Panels (b) and (d) show a closeup view of the region in the black rectangle in the main panels (a) and (c), respectively, including the particle distribution at that time. on a limited time interval it may not be far from the \(t^{4}\) theoretical expectation. More interestingly, its growth slows down when the Rossby number is increased, which hints to temporary phases during which some particles aggregate and thus the efficiency of the global separation process is reduced. We conclude that the \(Ro\)-dependence of the different measures of pair separation is overall weak, indicating that ageostrophic motions do not substantially alter pair-dispersion statistics. This suggests that, in this system, when the Rossby number is increased, large eddies conserve their capacity to drive the dispersion process. ### Particle clustering and relation with the Eulerian flow structure While on average, over long times, Lagrangian tracers separate, their spatial distribution is not homogeneous and clusters can form in the course of time. To investigate this point, the first quantity we consider is the averaged divergence experienced by particles along their trajectories, also known as the dilation rate [22], a numerically efficient single-particle indicator of tracer accumulation. The divergence of the velocity field \(\langle\mathbf{\nabla}\cdot\mathbf{u}\rangle_{\mathbf{x}_{i},l}\), computed at particle positions \(\mathbf{x}_{i}\) and averaged over time and all particles, is shown as a function of \(Ro\) in Fig. 6. It is negative for nonzero Rossby numbers and grows roughly quadratically in \(Ro\) in absolute value, indicating that particles aggregate more when ageostrophic motions are more intense. Due to the compressibility they experience, particles are attracted to contracting flow regions and hence do not homogeneously sample the phase space. This fact has been shown to give rise to differences between Lagrangian and Eulerian statistics in other situations, such as that of time-correlated compressible flows [45; 46]. A qualitative understanding on what occurs in our experiments can be obtained by looking at the pdf of the Eulerian divergence, \(P(\mathbf{\nabla}\cdot\mathbf{u})\) (Fig. 7). 
When \(Ro\) is increased, the tails of this pdf rise, highlighting the more likely occurrence of very intense divergence events. Its shape is remarkably symmetric, though, meaning that positive and negative values of \(\mathbf{\nabla}\cdot\mathbf{u}\) are equally probable. The negative sign of the averaged Lagrangian divergence \(\langle\mathbf{\nabla}\cdot\mathbf{u}\rangle_{\mathbf{x}_{i},l}\) then results from particles getting trapped in convergence regions and spending a significant fraction of the time there, a phenomenon which increases in intensity with increasing Rossby number. The occurrence of clustering in our system is clearly demonstrated by the pdf of Voronoi normalized cell areas, a statistical tool that is often used to characterize the aggregation of inertial particles in (incompressible) turbulent Figure 4: FSLE (rescaled by the flow integral time scale) for different Rossby numbers. Inset: the same without rescaling the FSLE. The \(\delta^{-1/2}\) scaling law is the dimensional prediction for a kinetic energy spectrum \(E(k)\sim k^{-2}\). The scale amplification factor is \(r=1.2\), and it was verified that the results are robust with respect to the choice of this parameter value. Figure 5: Kurtosis of particle relative displacements (main panel) and relative dispersion (inset) as a function of time for different Rossby numbers. The \(t^{3}\) (Richardson dispersion) and \(t^{4}\) scaling laws in the inset are the expectations for a kinetic energy spectrum \(E(k)\sim k^{-5/3}\) and \(E(k)\sim k^{-2}\), respectively. Figure 6: Velocity divergence sampled by particles, averaged over time and over all particles, as a function of the Rossby number. Here the error bars correspond to the standard deviation of the temporal statistics. The black dashed line is proportional to \(-Ro^{\alpha}\), with \(\alpha\simeq 2.07\) from a best fit. flows [47; 48]. The cells are constructed by partitioning the spatial domain into regions containing one particle and all the points that are closer to that particle than to any other [47; 48; 49]. The nonhomogeneity of the particle distribution produces deviations of the pdf \(P\left(A/\langle A\rangle_{\mathbf{x}_{i}}\right)\) (the average being taken over all areas, containing each one particle) from the corresponding one computed for uniformly random distributed particles. As it can be seen in Fig. 8, for \(Ro=0\), \(P\left(A/\langle A\rangle_{\mathbf{x}_{i}}\right)\) agrees with the probability distribution expected for uniformly spread particles in a 2D domain [50], \(f_{2D}\left(A/\langle A\rangle_{\mathbf{x}_{i}}\right)=343/15\sqrt{7/(2\pi)}\left( A/\langle A\rangle_{\mathbf{x}_{i}}\right)^{5/2}\exp\left(-7/2A/\langle A \rangle_{\mathbf{x}_{i}}\right)\) (solid gray line in the figure). When the Rossby number increases, however, its left tail gets monotonically higher, indicating that the probability of finding particles at small distances, and hence to observe clustering, is larger. We can contrast the case of \(Ro=0.075\) with one where we advect particles by its geostrophic component only. As expected from particle transport in geostrophic turbulence [51], the pdf corresponding to uniformly distributed particles is recovered [case of \(\left(Ro=0.075\right)_{\mathbf{x}}\) in Fig. 8], which further proves that this phenomenon is entirely due to the ageostrophic flow component. Aiming to understand where particles accumulate, we first look at the fine-scale properties of clustering. 
The latter originate from the contraction of volumes in the phase space (here coinciding with the physical space) of the dissipative (\(\mathbf{\nabla}\cdot\mathbf{u}<0\)) dynamical system of Eq. (9). Consequently, after a transient, the Lagrangian dynamics take place on a fractal set. A common quantitative indicator of clustering is the correlation dimension [52], \(D_{2}\), of the dynamical attractor. A decrease to values \(D_{2}<d\), with \(d\) the dimension of the physical space (\(d=2\) in the present case), indicates an increased occurrence of small distances separating particle pairs. This fractal dimension is defined as: \[D_{2}=\lim_{r_{p}\to 0}\frac{\log[C(r_{p})]}{\log(r_{p})}, \tag{13}\] with the correlation sum \(C(r_{p})\) given by \[C(r_{p})=\lim_{N_{p}\rightarrow\infty}\frac{2}{N_{p}(N_{p}-1)}\sum_{i,j>t}^{N_ {p}}\Theta(r_{p}-\left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|),\] where \(\Theta\) is the Heaviside step function, \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) are the positions of particles belonging to pair \((i,j)\), and the distance \(\left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|\) is the shortest one, after taking into account the \(2\pi\)-periodicity of the computational box. Equation (13) then means that, for small \(r_{p}\), the probability to find particle pairs separated by a distance less than \(r_{p}\) scales as \(C(r_{p})\sim r_{p}^{D_{2}}\). Figure 9 shows the measurement of the correlation dimension as a function of the Rossby number. For \(Ro=0\), as expected, \(D_{2}=2\) within statistical accuracy, which confirms the spatially homogeneous distribution of particles in the SQG system. Here, the small deviation from the theoretical value 2 may be attributed to the finite number of particles. At nonzero values of \(Ro\), \(D_{2}\) decreases monotonically, highlighting that clustering now takes place and that its intensity grows with the Rossby number. Again, this is a direct consequence of the transport of Lagrangian tracers by the ageostrophic flow. Indeed, when advection is realized by the geostrophic velocity only in the SQG\({}^{+1}\) model, the nonhomogeneity of the particle distribution disappears and \(D_{2}\simeq 2\), as shown by the blue empty point in the figure for the highest value of \(Ro\) explored (but the same holds for all \(Ro\)). Overall, these results suggest that particles aggregate on flow structures with a dimensionality smaller than that of the physical space and progressively more unidimensional with increasing \(Ro\). We now discuss in what regions of the flow particles tend to cluster. The question is of primary importance in oceanography, e.g. to identify areas of pollutant accumulation in surface flows, or locations of intense vertical velocities relevant for nutrient upwelling and plankton dynamics. Figure 8: Probability density function of Voronoi cell areas, normalized by the averaged cell area, \(P(A/\langle A\rangle_{\mathbf{x}_{i}})\), at an instant of time in the statistically steady flow state, for different values of the Rossby number. The curve labeled by \(\left(Ro=0.075\right)_{\mathbf{x}}\) has been obtained from particles advected by the geostrophic flow only. The solid gray line is the theoretical prediction for uniformly distributed particles \(f_{2D}\left(A/\langle A\rangle_{\mathbf{x}_{i}}\right)\) (see text). 
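A minimal sketch of how \(D_{2}\) can be estimated from Eq. (13) is given below: pair distances are computed with the \(2\pi\)-periodicity of the domain, the correlation sum is accumulated over distinct pairs, and its logarithmic slope is fitted at small separations. The particle positions and distance bins in the example are placeholders, chosen so that uniformly distributed particles should return a value close to 2.

```
# Illustrative estimate of the correlation dimension D2 of Eq. (13) from particle positions.
import numpy as np

def correlation_dimension(pos, r_bins, box=2.0 * np.pi):
    """Fit the small-scale slope of the correlation sum C(r_p) ~ r_p**D2."""
    diff = pos[:, None, :] - pos[None, :, :]
    diff = (diff + 0.5 * box) % box - 0.5 * box            # shortest periodic distance
    dist = np.sqrt((diff ** 2).sum(-1))
    d = dist[np.triu_indices(len(pos), k=1)]               # distinct pairs only
    C = np.array([(d < r).mean() for r in r_bins])         # correlation sum C(r_p)
    mask = C > 0
    slope, _ = np.polyfit(np.log(r_bins[mask]), np.log(C[mask]), 1)
    return slope

# toy usage: uniformly distributed particles should give D2 close to 2
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 2.0 * np.pi, size=(1500, 2))        # placeholder positions
r_bins = np.logspace(-2.0, -0.5, 10)
print(correlation_dimension(pos, r_bins))
```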
Figure 7: Probability density function of the Eulerian flow divergence \(\mathbf{\nabla}\cdot\mathbf{u}\), temporally averaged over several flow realizations in the statistically steady state, for different values of \(Ro\). While inspection of Fig. 3d already suggests some tendency of particles to avoid negative-vorticity (anticyclonic) regions and to concentrate along filamentary structures, a more quantitative approach is needed. A classical tool to identify different (2D) flow regions, and to characterize their role in transport phenomena, is the Okubo-Weiss parameter [53; 54], \[Q=\sigma^{2}-\zeta^{2}, \tag{14}\] where \(\sigma=\sqrt{\sigma_{n}^{2}+\sigma_{s}^{2}}\) is the total strain (\(\sigma_{n}=\partial_{x}u-\partial_{y}v\) and \(\sigma_{s}=\partial_{x}v+\partial_{y}u\) being the normal and shear strain, respectively) and \(\zeta\) is vorticity. The parameter \(Q\) allows to discriminate between strain-dominated (\(Q>0\), i.e. \(\sigma>|\zeta|\)) and rotation-dominated (\(Q<0\), i.e. \(\sigma<|\zeta|\)) regions, and reveals useful, for instance, to explain the dynamics of tracer-field gradients [55; 56]. Note that a more refined criterion was further obtained in incompressible flows to take into account the rotation of the strain eigenvectors that can affect the straining properties [57]. These strain and rotation-dominated regions can be related to dispersion properties through the linearization \(d(\mathbf{x}_{i}-\mathbf{x}_{j})/dt=\mathbf{u}_{i}-\mathbf{u}_{j}\simeq(\mathbf{\nabla}\mathbf{u})(\bm {x}_{i}-\mathbf{x}_{j})\). It is then clear that velocity gradients will also determine the particle small-scale dispersion or aggregation properties. In order to determine the regions where particles preferentially cluster, we follow Ref. [58] and compute the flow divergence conditionally averaged over all grid points of the domain with given values of vorticity and strain, noted \(\overline{\Delta}^{\zeta}\sigma\). This is a robust statistical tool originally introduced to investigate the vertical fluxes of a passive scalar field in submesoscale turbulence [58]. Figure 10a shows its measurement in our SQG\({}^{+1}\) simulations for \(Ro=0.075\) at the same instant of time chosen for the visualization of Fig. 3d (but it was verified that its features do not significantly change when a time-average is also taken). It is here apparent that strong divergence (\(\overline{\Delta}^{\zeta}\sigma>0\)) and convergence (\(\overline{\Delta}^{\zeta}\sigma<0\)) predominantly occur in strain-dominated regions (\(\sigma>|\zeta|\)), extending along tails above the lines \(\sigma=|\zeta|\). The asymmetric shape of the tails is a direct consequence of the dominance of cyclonic vorticity (see Fig. 2), due to ageostrophic dynamics. Here, the association of convergence with \(\zeta>0\) values is arguably due to the same vortex-stretching effects that amplify cyclonic vorticity (Sec. III.2). Note, too, that in rotation-dominated regions (\(|\zeta|>\sigma\)), the divergence \(\overline{\Delta}^{\zeta}\sigma\) is more likely to take both positive and negative values that tend to cancel out more. The above features are generic, and also appear at smaller values of \(Ro\) (not shown), except that the tails associated with large positive and negative values of \(\overline{\Delta}^{\zeta}\sigma\) become more symmetric, and divergence is smaller in absolute value, when the Rossby number is decreased. To complete the picture, we also show in Fig. 
10b the divergence, in vorticity-strain space, computed at particle positions, \(\overline{\Delta}^{\zeta}_{\mathbf{x}_{i}}\sigma\). The Rossby number and the instant of time are the same as in Fig. 10a (and, again, we verified that averaging over time does not considerably modify the results). By comparing Fig. 10a and Fig. 10b, it is evident that the Lagrangian Figure 10: Mean divergence \(\overline{\Delta}^{\zeta}\sigma\) conditionally averaged over vorticity (\(\zeta\)) and strain (\(\sigma\)), from Eulerian (a) and Lagrangian (b) statistics, at a fixed instant of time in the statistically steady state of the flow, for \(Ro=0.075\). For the Lagrangian estimate, the subscript \(\mathbf{x}_{i}\) indicates that \(\Delta\), \(\zeta\) and \(\sigma\) are computed at particle positions. In both (a) and (b) the dashed lines correspond to \(\sigma=|\zeta|\). Figure 9: Correlation dimension \(D_{2}\) as a function of \(Ro\), obtained from data in several statistically steady flow realizations. Uncertainties are estimated from the standard deviations of best fits over the range of small distances \(r_{p}\) where \(C(r_{p})\sim r_{p}^{D_{2}}\). The empty blue point is for particles advected by the geostrophic flow component only at \(Ro=0.075\). The black dashed line corresponds to the second-order Taylor expansion \(D_{2}\simeq 2+aRo+bRo^{2}\), with \(a\simeq-2.9\) and \(b\simeq-50.2\) from a fit. and Eulerian estimates of divergence, conditionally averaged over the values taken by vorticity and strain, share the same general characteristics (similarly to what is found for vertical velocity in Ref. [59]). The partial attenuation of extreme events when using Lagrangian statistics is likely due to the smaller sample. Apart from this, it can be noted that the patterns from the Lagrangian estimate are sharper and characterized by a reduced frequency of \(\overline{\Delta}^{\zeta}\sigma>0\) events, in comparison with those from the Eulerian estimate. This is due to the tendency of particles to aggregate in flow-convergence regions, and hence to predominantly sample negative values of divergence. Overall, Fig. 10b confirms the preference of Lagrangian tracers to concentrate in regions of positive vorticity and large strain (\(\sigma>|\zeta|\)). This finding quite nicely matches the spatial organization of particles that is observed from a closeup view of a portion of the full domain at the same instant of time (Fig. 3d). Indeed, regions of negative vorticity (\(\zeta<0\)) tend to be relatively particle-free. On the contrary, particles are abundant in filamentary, positive vorticity regions (corresponding to \(\zeta>0\) and \(\sigma>\zeta\)) while it is less the case inside cyclonic eddies (corresponding to \(\zeta>0\) and \(\sigma<\zeta\)). The previous analysis indicates that particle clustering takes place in cyclonic strain-dominated regions. These correspond mostly to filaments and fronts outside coherent eddies. Indeed, a straight front along the \(y\) direction [with velocity \(\mathbf{u}=\mathbf{u}(x)\) independent of \(y\)] is characterized by negative divergence (\(\mathbf{\nabla}\cdot\mathbf{u}=\partial_{x}u<0\)) in its crosswise direction (which sustains the front) and by strain exceeding vorticity. The fact that \(\sigma>|\zeta|\) follows from the relation \(\sigma^{2}=(\mathbf{\nabla}\cdot\mathbf{u})^{2}+\zeta^{2}>\zeta^{2}\) holding for a velocity field that only depends on the cross-front coordinate \(x\). 
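The diagnostics discussed in this section can be illustrated on a single gridded velocity snapshot. The sketch below (with spectral derivatives on a \(2\pi\)-periodic grid and a random placeholder field instead of an actual model output) computes vorticity, strain, the Okubo-Weiss parameter of Eq. (14), and the divergence conditionally averaged in vorticity-strain bins, analogous to Fig. 10.

```
# Illustrative computation of zeta, sigma, Q = sigma^2 - zeta^2 (Eq. 14) and the divergence
# conditionally averaged over (zeta, sigma) bins, from a gridded velocity snapshot (u, v).
import numpy as np

N, L = 128, 2.0 * np.pi
k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k1d, k1d, indexing="ij")
ddx = lambda f: np.real(np.fft.ifft2(1j * KX * np.fft.fft2(f)))
ddy = lambda f: np.real(np.fft.ifft2(1j * KY * np.fft.fft2(f)))

rng = np.random.default_rng(2)
u, v = rng.standard_normal((2, N, N))            # placeholder velocity components

zeta = ddx(v) - ddy(u)                           # relative vorticity
sigma_n = ddx(u) - ddy(v)                        # normal strain
sigma_s = ddx(v) + ddy(u)                        # shear strain
sigma = np.sqrt(sigma_n**2 + sigma_s**2)
Q = sigma**2 - zeta**2                           # Okubo-Weiss parameter, Eq. (14)
div = ddx(u) + ddy(v)                            # horizontal divergence

# divergence conditionally averaged over (zeta, sigma) bins
zeta_bins = np.linspace(zeta.min(), zeta.max(), 41)
sigma_bins = np.linspace(0.0, sigma.max(), 41)
counts, _, _ = np.histogram2d(zeta.ravel(), sigma.ravel(), bins=[zeta_bins, sigma_bins])
sums, _, _ = np.histogram2d(zeta.ravel(), sigma.ravel(), bins=[zeta_bins, sigma_bins],
                            weights=div.ravel())
cond_div = np.divide(sums, counts, out=np.full_like(sums, np.nan), where=counts > 0)
```

The same binning, applied to quantities interpolated at particle positions, would give the Lagrangian counterpart shown in Fig. 10b.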
Our findings support those from a recent, more complex modeling study, which, taking an Eulerian point of view, reported on strong vertical velocities and flow convergence in cyclonic submesoscale fronts [58]. Furthermore, they provide clear evidence of Lagrangian-tracer clustering in cyclonic regions, also observed from real surface-drifter data [60; 9], and a possible explanation of the basic mechanisms controlling the phenomenon in the framework of a minimal model accounting for ageostrophic dynamics. ## V Conclusions We studied Lagrangian particle dynamics in an idealized model of surface-ocean turbulence that includes ageostrophic motions by means of numerical simulations. We particularly focused on the effect of ageostrophy on the spreading process of tracer particles, by examining both relative dispersion and clustering properties. The turbulent dynamics were assumed to be described by the SQG\({}^{+1}\) system, which accounts for frontogenetic ageostrophic motions and is obtained by extending the primitive equations to the next order in _Ro_ with respect to standard QG models. This approach, originally introduced in an atmospheric context [27], allowed us to reproduce the cyclone-anticyclone asymmetry, a phenomenon that is observed in both primitive-equation simulations [20] and data from observations [21; 19] of ocean turbulence at sufficiently fine scales, but is missed by QG models. The turbulent flows from our simulations for different Rossby numbers are characterized by energetic small scales, particularly in the form of filamentary structures associated with intense gradients. Kinetic energy spectra are not far from the theoretical expectation in the SQG system (recovered by setting \(Ro=0\) in the governing equations), although slightly steeper. Their scaling behavior is close to \(E(k)\sim k^{-2}\), as also found at submesoscales in more realistic simulations [61; 62; 63]. In the present case, the steepening of the spectrum is most likely due to the presence of large-scale coherent structures, a feature that was already observed in both the SQG [31; 33] and the SQG\({}^{+1}\) systems [27]. To explore how ageostrophic fluid motions impact the particle separation process, we compared the measurements from different indicators of pair dispersion as a function of _Ro_. Given that the total kinetic energy increases with increasing _Ro_, we mostly used dimensionless diagnostics that allow a fair comparison between the different simulations. We found that, irrespective of the Rossby number, dispersion is essentially nonlocal, except perhaps over a narrow range of separations, as highlighted by the extended region of scale-independent FSLE and by the fast initial growth in time of the kurtosis of relative displacements. As the FSLE, where it is constant, was found to be close to the inverse large-eddy turnover time of the flow, we could show that this apparently surprising result is due to the presence of large persistent flow structures, which dominate the dispersion process. Overall, the general picture emerging from different metrics of relative dispersion is that, in the present simulations, dispersion only weakly depends on the intensity of the ageostrophic flow dynamics (i.e. _Ro_).
Nevertheless, when _Ro_ is increased, the ageostrophic dynamics manifest themselves as a small but measurable increase of the separation rate at short times (and small distances), due to velocity gradients becoming stronger, and as a subsequent slowdown of relative dispersion at later times, possibly arising from the formation of temporary particle aggregations. The occurrence of clustering events was demonstrated by computing the averaged divergence experienced by particles (the dilation rate [22]) and the pdf of cell areas from a Voronoi tessellation. The decrease of the dilation rate to more and more negative values, and the rise of the left tail of the Voronoi cell-area pdf, indicate that particles are progressively more likely to be at small distances from one another as _Ro_ is increased. While this phenomenon is a direct consequence of the compressibility of the ageostrophic flow component, it is not straightforward to relate Eulerian and Lagrangian measures of clustering, as already noted in previous studies of Lagrangian tracer dynamics in compressible turbulence [45; 46]. Here, at a qualitative level, we argued that clustering arises from the increased probability of very large flow-divergence values at larger _Ro_, and hence from the larger fraction of time spent by particles in negative-divergence regions. Determining where convergence, and thus particle clustering, takes place in surface-ocean flows is of paramount importance, both to predict the accumulation of biogeochemical substances or pollutants, and to identify locations of large vertical velocities. To address this question, we first computed the correlation dimension of the sets over which particles concentrate, which is directly related to the probability of finding a pair of them within a given distance. With increasing \(Ro\), this was found to decrease from \(D_{2}=2\) (corresponding to uniformly distributed particles) to smaller values, indicative of clustering and pointing to aggregates of dimension less than 2 (possibly quasi-one-dimensional ones for large enough Rossby numbers). To further understand in what flow regions clusters can be found, we examined the divergence conditionally averaged over vorticity and strain. This quantity was recently introduced as a generalization of the Okubo-Weiss parameter to divergent flows, in order to partition 2D flows into regions with different stirring properties [58]. We found that divergence has an asymmetric distribution in vorticity-strain space that reflects the cyclone-anticyclone asymmetry. More interestingly, it is predominantly negative and large (in absolute value) where strain overcomes vorticity and the latter is positive, which indicates that clusters form in cyclonic frontal regions. Such a picture agrees with the results in more realistic simulations of submesoscale dynamics in the Antarctic Circumpolar Current, focused on the vertical fluxes of tracer fields [58]. It may also be useful to better understand observations of surface-drifter clustering in cyclonic regions in the Gulf of Mexico [9]. To conclude, the SQG\({}^{+1}\) system proved to be a useful minimal model for investigating some basic mechanisms, related to ageostrophy, that control the separation and clustering of Lagrangian tracer particles at the ocean surface. Ageostrophic effects only weakly affect the nonlocal relative dispersion, while they are responsible for non-negligible clustering in filamentary cyclonic regions.
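As an illustration of how the correlation dimension quoted above can be estimated in practice, the sketch below counts particle pairs closer than a distance \(r_{p}\) and fits the small-distance power law \(C(r_{p})\sim r_{p}^{D_{2}}\). The use of scipy's periodic KD-tree, the box size and the fitting range are illustrative assumptions, not the exact procedure used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def correlation_dimension(positions, box_size, r_min, r_max, n_r=20):
    """Estimate D2 from the correlation sum C(r) ~ r**D2.

    positions : (N, 2) array of particle coordinates in a periodic square box.
    """
    n = len(positions)
    tree = cKDTree(positions, boxsize=box_size)       # periodic distances
    radii = np.logspace(np.log10(r_min), np.log10(r_max), n_r)
    # count_neighbors of a tree against itself counts ordered pairs,
    # including the N self-pairs, which are removed before normalizing.
    pair_counts = tree.count_neighbors(tree, radii).astype(float)
    C = (pair_counts - n) / (n * (n - 1))
    mask = C > 0
    # Least-squares fit of log C versus log r over the chosen distance range.
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)
    return slope

# Toy check: uniformly distributed particles should give D2 close to 2.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 2 * np.pi, size=(20000, 2))
print(correlation_dimension(pts, box_size=2 * np.pi, r_min=1e-3, r_max=1e-1))
```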
This picture is remarkably similar to the observations from drifters in the Gulf of Mexico, which also indicated both nonlocal dispersion [43] and small-scale clustering [9]. Note that, in addition to ageostrophy, other processes play a role in the transport of particles in the surface layer of the real ocean, such as Ekman currents induced by the wind [64] or the Stokes drift due to ocean waves. The dispersion of floating material may also be affected by inertial effects [65] or by the drag exerted by the wind (the so-called windage). A natural perspective of this study is to extend the analysis to realistic simulations, in order to explore the effects of the ocean's fast variability, which cannot be accounted for by the modeling framework considered here. Finally, the present results also appear interesting in light of the high-spatial-resolution satellite data acquired by the SWOT space mission [10]. The weak dependence of pair-dispersion indicators on the Rossby number suggests that the geostrophically derived surface velocities may be accurate enough for relative-dispersion applications. On the other hand, to access finer details of the particle dynamics, such as clustering phenomena, further information on the ageostrophic flow components would clearly be required. ###### Acknowledgements. This work is a contribution to the joint CNES-NASA SWOT project DIEGO and is supported by the French CNES TOSCA program. ## Data Availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
2309.17280
STRONG -- Structure Controllable Legal Opinion Summary Generation
We propose an approach for the structure controllable summarization of long legal opinions that considers the argument structure of the document. Our approach involves using predicted argument role information to guide the model in generating coherent summaries that follow a provided structure pattern. We demonstrate the effectiveness of our approach on a dataset of legal opinions and show that it outperforms several strong baselines with respect to ROUGE, BERTScore, and structure similarity.
Yang Zhong, Diane Litman
2023-09-29T14:31:41Z
http://arxiv.org/abs/2309.17280v1
# STRONG - Structure Controllable Legal Opinion Summary Generation ###### Abstract We propose an approach for the structure controllable summarization of long legal opinions that considers the argument structure of the document. Our approach involves using predicted argument role information to guide the model in generating coherent summaries that follow a provided structure pattern. We demonstrate the effectiveness of our approach on a dataset of legal opinions and show that it outperforms several strong baselines with respect to ROUGE, BERTScore, and structure similarity. ## 1 Introduction Discourse structure plays an essential role in text generation in domains ranging from news Van Dijk (2013) to peer-reviewed articles Shen et al. (2022). In the legal domain, it's equally important to draft a summary that can follow a blueprint Xu et al. (2021). For instance, in Figure 1, given a long legal opinion with thousands of words as input, a legal expert organized the summary by making the argument clear in terms of the issues the decision addressed, the decision's conclusion, and the reasoning behind the decision. While progress has been made in controllable generation, limited research has controlled discourse structure. Recently, Spangher et al. (2022) and Shen et al. (2022) proposed approaches to generate sentences with discourse structure labels. However, no existing controllable generation work addresses the legal domain, where the argumentative structure is pivotal. While prior work in the legal field highlighted the significance of argumentative structure from the input Elaraby and Litman (2022), the potential for utilizing argument structure to guide text generation remains unexplored. Based on a corpus analysis showing that experts use common patterns to summarize legal opinions (the most frequent one is shown in Figure 1), we develop a novel structure-prompting approach called STRONG (**S**tructure as a sentence-by-sentence generation, which led to a longer inference time compared to token generation baselines. _We explore structure control in legal opinions, which is challenging due to long input texts and argumentative discourse structures._ In the legal domain, besides directly adopting the raw document-summary pairs into supervised training using abstractive summarization models such as BART Lewis et al. (2020) and Longformer Encoder Decoder (LED) Beltagy et al. (2020), Elaraby and Litman (2022) proposed highlighting the salient argumentative sentences in the inputs and training a model that is argument-aware. _We instead focus on improving argument structure adherence by exploiting the summaries' annotated discourse structures to create structure prompts rather than by manipulating the original articles._ ## 3 Dataset We leverage the CanLII dataset of legal case opinions and human-written abstractive summaries.12 It consists of 28,290 legal opinions and human-written summary pairs. For testing, we first leverage the annotated subset produced by Xu et al. (2021), including 1,049 pairs with manually annotated **IRC argument labels**: _Issues_ (the legal questions addressed in the case), _Conclusions_ (the court's decisions for the related issue), _Reasons_ (text snippets illustrating the reasons for the court's decision) and _Non_IRC_ (none of the above). We further split the remaining 27,241 unannotated pairs into 80/10/10 percent for model training, validation, and extra testing. Corpus statistics are in Table 1. 
Footnote 1: The data was obtained through an agreement with the Canadian Legal Information Institute (CanLII): [https://www.canlii.org/en/](https://www.canlii.org/en/)

\begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline **Split** & **Case/Summ pairs** & **Case len** & **Summ len** & **sents** \\ \hline \multicolumn{5}{c}{_No Manual Annotations_} \\ \hline Train & 21794 & 3979.4 & 276.2 & 10.9 \\ Valid & 2724 & 4067.4 & 279.8 & 11.0 \\ Test & 2723 & 3899.9 & 278.8 & 10.9 \\ \hline \multicolumn{5}{c}{_Manual IRC Annotations_} \\ \hline 1049-test & 1049 & 3741.1 & 245.4 & 11.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset statistics of CanLII. Case/Summary len is the text length in terms of the number of words, while sents is the sentence count per summary.

As introduced in Section 1 and Figure 1, legal experts devised different strategies to construct the summaries. We thus analyze the patterns of the IRC labels in the 1,049 annotated summaries. To comprehend the high-level structures better, we remove the Non_IRC tags and collapse adjacent text segments with the same tag into one. The most common "normalized" patterns are "Issue - Conclusion - Reason" (54%) and "Issue - Conclusion - Reason - Conclusion" (9%). Pie charts of the top normalized and original patterns are in Appendix A. ## 4 Method Figure 2 illustrates our proposed STRONG approach. We start by extending the small-scale annotations to the larger dataset. Since we only have the 1,049 test set manually annotated with oracle summary argument labels, different from Elaraby and Litman (2022) who used a classifier on input sentences, we propose to train a sentence classifier on summary sentences (Stage 1) and then utilize it to predict silver labels for all unannotated summaries in Stage 2.3 Our approach distinguishes itself from Shen et al. (2022), which relied solely on manually annotated structure sequences, resulting in a smaller training set than our larger dataset with silver labels. In the next step of Stage 2, we introduce special marker tokens to guide the model in generating summaries following specified structure patterns. Specifically, we extract the argumentative "IRC" labels from the summary sentences, concatenate them with split " | " tokens, prepend them to the original input text, and connect the two with a special marker "==>". This operationalizes an argument-based blueprint of the salient information, providing better guidance for the model in generating legal summaries. That is, Stage 2 utilizes the predicted structure labels to fine-tune the LED model. Once the model has been trained, we generate summaries using different sets of structure labels for the two test sets during Stage 3 of the inference process. Footnote 3: We include the model details in Appendix B.2.

Figure 2: Illustration of our structure prompting approach (STRONG).

## 5 Experimental Setup We compare STRONG to two baselines. **NoStructure** uses the Longformer-Encoder-Decoder (LED) base model for generating summaries. The second baseline re-implements **SentBS** (Shen et al., 2022) and is structure-aware. It uses a prompt-based backbone model to generate sentences, optimizing candidate selections based on the model likelihood and structure label probability. All implementation details are in Appendix B. All experiments are evaluated using ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-L (R-L) F1 (Lin, 2004), BERTScore (BS) (Zhang et al., 2020), and structure similarity (\(SS\)) (Shen et al., 2022). More details on the structure metric are in Appendix C.
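A minimal sketch of the two string-level operations described above: normalizing an IRC label sequence for the pattern analysis (dropping Non_IRC tags and collapsing adjacent repeats), and assembling the structure prompt that is prepended to the source document. The " | " split tokens and the "==>" marker follow the description in the text; the helper names and the example inputs are illustrative, not the released implementation.

```python
from typing import List

def normalize_pattern(labels: List[str]) -> List[str]:
    """Pattern analysis: drop Non_IRC tags and collapse adjacent equal tags."""
    collapsed: List[str] = []
    for lab in labels:
        if lab == "Non_IRC":
            continue
        if not collapsed or collapsed[-1] != lab:
            collapsed.append(lab)
    return collapsed

def build_structure_prompt(sentence_labels: List[str], document: str) -> str:
    """Prompting: join the per-sentence IRC labels with '|' split tokens and
    prepend them to the input text, connected by the '==>' marker."""
    return " | ".join(sentence_labels) + " ==> " + document

# Hypothetical example:
labels = ["Issue", "Non_IRC", "Conclusion", "Reason", "Reason"]
print(normalize_pattern(labels))        # ['Issue', 'Conclusion', 'Reason']
print(build_structure_prompt(labels, "The court considered whether ..."))
```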
## 6 Results and Analysis ### Automatic Result This section addresses two research questions: **RQ1**. Does STRONG improve summarization quality compared to baselines? **RQ2**. How do models compare in preserving structure? We then conduct analyses based on the observations and perform a small-scale human evaluation. **RQ1.** Using the left results section of Table 2, we first compare STRONG with the NoStructure baseline on traditional ROUGE and BERTScore summarization metrics. For the 1049 test set, when the maximum generation output length is limited to 256 tokens, we observe that STRONG obtains an average of 2.1, 0.7, 2.1, and 0.2 improvements across ROUGE-1, 2, L, and BERTScore (rows 3 vs. 2), which are **significant** based on 95% confidence intervals. STRONG also outperformed the re-implemented SentBS baseline (rows 3 vs. 1). We also explored the impact of increasing the maximum output length to 512 tokens, based on the observation that oracle summaries tended to be longer (Table 1). Similar trends were seen when the maximum output length was increased to 512 tokens (rows 5 vs. 4), as well as when all analyses are repeated using the 2,723 silver set (rows 6-8, 9-10). This illustrates that the target structure information helps STRONG generate higher-quality summaries. Appendices D and E present examples and analysis to demonstrate model output differences in content coverage. **RQ2.** In the 1049 test set, compared to the NoStructure model (row 2), the STRONG model (row 3) significantly improves the structure similarity scores by 0.03. While SentBS (row 1) outperforms both methods (rows 2/3), the tradeoff is increased inference time (last column). In contrast, with the extended 512 generation length, where we could not even run SentBS, STRONG obtained the best oracle test set performance in the table, with a margin of 0.1 compared to SentBS (rows 5 vs. 1). Albeit imperfect, on the silver test set where our IRC sentence classifier predicts the structure labels, STRONG also gains a 0.1 improvement over NoStructure (rows 7 vs. 8, and 9 vs. 10), and now even surpasses SentBS (row 6 vs. 8) on structure similarity while again reducing inference time. ### Length Control The second to last column of Table 2 shows that STRONG generates the longest summaries, which may have impacted the above assessments. We thus force NoStructure and STRONG to continue generating tokens until reaching the same specified limit of {64, 128, 256, and 512} tokens.4 Table 3 shows the results for the 256 token limit,5 and indicates that the Table 2 performance gap (repeated in the first two rows of Table 3) diminishes when the length is controlled (the last two rows). This suggests that the structural benefits of STRONG become less important when output length is fixed. However, controlled length can lead to incomplete generations (see an example in Appendix D.1), whereas STRONG, when allowed to stop generation on its own, can dynamically adjust and produce summaries of similar length to the oracle ones. Additionally, for both NoStructure and STRONG, we observe a drop in ROUGE performance for extremely long summaries (512 tokens) compared to smaller output lengths (see Appendix D.1), likely because 512 tokens deviate from the distribution of human summarization lengths. We additionally experimented with another setup to adjust the minimum generation length of each model and with higher length penalties.
These results are detailed in Table 9, located in Appendix D.2. We observed that our STRONG model outperformed the baseline and reinforced the notion that structural information plays a crucial role in guiding the model to produce summaries with the appropriate length and level of detail. \begin{table} \begin{tabular}{l l} \hline \hline **Model** & \multicolumn{2}{l}{**SummaC\({}_{\text{Conv}}\)**} \\ \hline \multicolumn{3}{l}{_Max output of 256 tokens_} \\ \hline SentBS & 0.660 \\ NoStructure & 0.663 \\ STRONG & 0.704* \\ \hline \multicolumn{3}{l}{_Max output of 512 tokens_} \\ \hline NoStructure & 0.658 \\ STRONG & 0.697* \\ \hline \hline \end{tabular} \end{table} Table 4: Results of the average factuality scores for models in Table 2 over the CanLII oracle test set. * means the result is significantly different from the previous row using paired t-test. \begin{table} \begin{tabular}{l|l|c c c|c|c c c} \hline \hline **ID** & **Model** & **R-1** & **R-2** & **R-L** & **BS** & **SS** & **Avg Length** & **Infer. Time** \\ \hline \multicolumn{8}{c}{1049 Oracles} \\ \hline \hline \multicolumn{8}{c}{_Max output of 256 tokens_} \\ \hline 1 & SenBS & 48.31 & 23.86 & 44.73 & 86.87 & 0.436 & 129.6 & 8.5 hours* \\ 2 & NoStructure* & 50.33 & 25.84 & 46.47 & 87.39 & 0.344 & 159.2 & 2.2 hours \\ 3 & STRONG* & 52.47 & 26.54 & 48.57 & 87.63 & 0.372 & 186.3 & 2.5 hours \\ \hline \multicolumn{8}{c}{_Max output of 512 tokens_} \\ \hline 4 & NoStructure & 51.61 & 26.72 & 47.76 & 87.49 & 0.383 & 198.1 & 4.2 hours \\ 5 & STRONG* & **55.90** & **28.61** & **51.97** & **87.78** & **0.535** & 263.0 & 4.3 hours \\ \hline \hline \multicolumn{8}{c}{2723 Silver Test Set} \\ \hline 6 & SenBS & 49.24 & 25.43 & 45.58 & 85.47 & 0.470 & 118.0 & 21.5 hours* \\ 7 & NoStructure* & 50.76 & 26.84 & 46.78 & 87.75 & 0.330 & 160.6 & 6.2 hours \\ 8 & STRONG* & 52.84 & 27.90 & 48.73 & 87.97 & 0.493 & 179.3 & 6.3 hours \\ \hline \multicolumn{8}{c}{_Max output of 512 tokens_} \\ \hline 9 & NoStructure & 52.22 & 27.57 & 48.18 & 87.69 & 0.440 & 196.9 & 13.0 hours \\ 10 & STRONG* & **57.17** & **29.87** & **52.93** & **88.10** & **0.543** & 255.9 & 13.1 hours \\ \hline \hline \end{tabular} \end{table} Table 2: Results of different models on the CanLII oracle and silver test sets. BS refers to BERTScore, SS means structure similarity, respectively. Models with * mean all results are statistically different from the previous row, based on 95% confidence intervals. All results are reported as an average of 3 runs initialized with random seeds. Best results are highlighted with **bold**, and best results under the 256 token settings are underlined. Rows 1 and 6 (with *) experiment with an RTX3090Ti card with larger memory, which will make the inference time faster than on the default cards, which are RTX5000s and used for all other experiments. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **Control Len.** & **R-1** & **R-2** & **R-L** & **BS** \\ \hline NoStructure & No & 50.33 & 25.84 & 46.47 & 87.39 \\ STRONG & No & 52.47 & 26.54 & 48.57 & 87.63 \\ \hline NoStructure & Yes & 50.74 & 25.91 & 47.07 & 87.17 \\ STRONG & Yes & 50.96 & 26.26 & 47.33 & 87.39 \\ \hline \hline \end{tabular} \end{table} Table 3: Results of models when summary has a maximum (top) versus controlled (bottom) length of 256 tokens. Although STRONG still outperforms the baseline, the delta is reduced when the length is controlled. 
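For the length-controlled setting explored above, a fixed output length can be enforced at decoding time by setting the minimum and maximum generation lengths to the same value. The sketch below shows one way to do this with the Hugging Face LED implementation; the checkpoint name, input truncation length and decoding options are illustrative assumptions rather than the exact configuration used in the experiments.

```python
from transformers import AutoTokenizer, LEDForConditionalGeneration

# Illustrative base checkpoint; the experiments fine-tune an LED model on CanLII.
checkpoint = "allenai/led-base-16384"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = LEDForConditionalGeneration.from_pretrained(checkpoint)

def summarize(document: str, target_len: int = 256, fixed_len: bool = False) -> str:
    inputs = tokenizer(document, return_tensors="pt",
                       truncation=True, max_length=4096)
    gen_kwargs = dict(max_length=target_len, num_beams=4)
    if fixed_len:
        # Length-controlled setting: keep the model from emitting the end token
        # before the limit is reached, which can truncate the final sentence.
        gen_kwargs["min_length"] = target_len
    # A larger length_penalty (e.g. 2.0) can instead be used to merely favor
    # longer outputs without forcing a fixed length.
    output_ids = model.generate(**inputs, **gen_kwargs)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```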
### Factuality To evaluate the factuality of generated text, we picked the \(\textsc{SummaC}_{\textsc{Conv}}\) score from Laban et al. (2022), which utilizes NLI models to detect summary inconsistencies and performs well on multiple factuality benchmarks (details in the original paper) compared to other metrics such as FactCC (Kryscinski et al., 2020) and DAE (Goyal and Durrett, 2020). As shown in Table 4, our STRONG model obtains the highest scores, indicating the highest consistency between the source documents and the generated summaries. ### Human Evaluation Human evaluation is under-explored for legal tasks, as it is labor-intensive due to long documents and summaries and requires evaluators with legal expertise (Jain et al., 2021). As a first step, we conducted a small-scale human evaluation using five legal decisions to assess the quality of summaries generated by all models in Table 2. Three legal experts were asked to evaluate the coherence of the generated texts and assess the coverage of argumentative components when compared to the oracle summaries crafted by the human CanLII experts.6 The evaluator feedback indicated that longer summaries could potentially introduce more factual errors, and there was inconsistency in terms of fluency and readability, with mixed performance observed (one annotator reported issues in two cases). On the other hand, the advantage of controllable structure generation was more evident when generating longer summaries. In two out of five cases, the summaries generated by STRONG were preferred in the 512-length setting, while under the 256-length setting, only one STRONG-generated summary was favored. Footnote 6: We provide the evaluation details in Appendix F. ## 7 Conclusion We proposed the STRONG approach for improving the summarization of long legal opinions by providing target-side structure information. STRONG accepts different types of prompts and generates summaries accordingly. Experiments demonstrated that the content coverage, summary length, structure adherence, and inference time are all improved with STRONG compared to prior structure-control and no-structure baselines. ## Limitations Our research results are constrained by our dependence on a single dataset for experimentation as well as by computing resource limitations. While prior work demonstrated that the SentBS approach incurs only a negligible drop in automatic metrics such as ROUGE and BERTScore compared to a fine-tuned structure-prompted baseline, our experiments are hindered by the extreme GPU-memory demand caused by the much longer legal inputs and the large parameter search space. We also observe that the slow inference of the compared approach becomes more severe when it is transferred to our task. Further experiments with more extensive setups of the prior baselines would be important future work to verify their conclusions. We recognize that our methodology relies on annotated data for structure labels, particularly when adapting to novel domains. In future research, we aim to investigate zero-shot learning techniques to enable structure classification without the need for annotations. While our paper uses standard summarization metrics and a similarity measure particularly related to our focus on structure controllability, we do not yet extensively investigate how STRONG impacts factuality besides the \(\textsc{SummaC}_{\textsc{Conv}}\) score (Laban et al., 2022).
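For context on the \(\textsc{SummaC}_{\textsc{Conv}}\) metric discussed above: SummaC-style scores are built on natural language inference (NLI), checking summary sentences for entailment against the source. The following is a deliberately simplified, illustrative sketch of this idea (not the actual SummaCConv implementation), using a generic publicly available MNLI checkpoint.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Any off-the-shelf MNLI checkpoint can be substituted here.
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that the premise entails the hypothesis under the NLI model."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    # Read the entailment label index from the model config instead of hard-coding it.
    ent_idx = next(i for i, lbl in model.config.id2label.items()
                   if lbl.upper().startswith("ENTAIL"))
    return probs[ent_idx].item()

def consistency_score(source_sents, summary_sents):
    """Average, over summary sentences, of the best-supporting source sentence."""
    best = [max(entailment_prob(src, hyp) for src in source_sents)
            for hyp in summary_sents]
    return sum(best) / len(best)
```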
A recent study (Wan et al., 2023) demonstrates that improvements in factuality-related metrics come at the cost of lower automatic metrics such as ROUGE and BERTScore, while Min et al. (2023) harness the power of LLMs to evaluate the factuality of long-form text generation. Deviating from prior work (Zhong and Litman, 2022) that studies the extractive summarization task, we focused on abstractive summarization, which has been shown to surpass the performance of extractive methods by a noticeable margin, although both strategies introduce unfaithfulness (Zhang et al., 2023). Another limitation is that we only exploited the IRC structure representations due to the availability of oracle summary annotations. Exploring the use of structures based on other methods such as Lu et al. (2018) is a promising area for future work. Also, the automatic evaluation metrics may fall short of human evaluation and thus may not faithfully reflect the quality of the generated summaries as judged by real legal experts. Moreover, in a real application, end users may request different outputs with self-designed structure prompts7, which remains an open-ended challenge and may require human validation in future work. Footnote 7: We provide an example of feeding different prompts to generate diverse summaries in Appendix E.1. ## Ethical Considerations Using generated abstractive summaries of legal opinions remains problematic, as abstractive summarization models have been found to produce hallucinated content that does not faithfully represent the source texts [14, 15]. The generation results of our models may carry certain levels of non-factual information and need to be used with extra care. Similarly, CanLII has taken measures (i.e., blocking search indexing) to limit the disclosure of defendants' identities, while abstractive approaches may cause potential user information leakage. ## Acknowledgements This work is supported by the National Science Foundation under Grant No. 2040490 and by Amazon. We want to thank the members of the Pitt AI Fairness and Law Project, the Pitt PETAL group, and anonymous reviewers for their valuable comments in improving this work.
2309.11829
Making Mathematical Research Data FAIR: A Technology Overview
The sharing and citation of research data is becoming increasingly recognized as an essential building block in scientific research across various fields and disciplines. Sharing research data allows other researchers to reproduce results, replicate findings, and build on them. Ultimately, this will foster faster cycles in knowledge generation. Some disciplines, such as astronomy or bioinformatics, already have a long history of sharing data; many others do not. The current landscape of so-called research data repositories is diverse. This review aims to perform a technology review on existing data repositories/portals with a focus on mathematical research data.
Tim Conrad, Eloi Ferrer, Daniel Mietchen, Larissa Pusch, Johannes Stegmuller, Moritz Schubotz
2023-09-21T07:06:30Z
http://arxiv.org/abs/2309.11829v1
# Making Mathematical Research Data FAIR: A Technology Overview ###### Abstract The sharing and citation of research data is becoming increasingly recognized as an essential building block in scientific research across various fields and disciplines. Sharing research data allows other researchers to reproduce results, replicate findings, and build on them. Ultimately, this will foster faster cycles in knowledge generation. Some disciplines, such as astronomy or bioinformatics, already have a long history of sharing data; many others do not. The current landscape of so-called research data repositories is diverse. This review aims to perform a technology review on existing data repositories/portals with a focus on mathematical research data. ## Introduction The importance of sharing and citing research data is steadily gaining recognition as a foundational element in scientific research across different fields and subjects. Sharing research data allows other researchers to reproduce results and replicate findings [1, 2]. Ultimately, this promotes the generation of knowledge at a faster pace. Some disciplines already have a long history of sharing data and are benefiting from it [3], but many others do not. Although the term _data sharing_ is not used unambiguously in the literature [4], technically, data sharing is mainly organized through Research Data Repositories (RDR). The current RDR landscape is diverse. However, it can be roughly classified into four categories: institutional, disciplinary, multidisciplinary, and project-specific [5]. In the field of mathematics, significant progress has already been made in terms of research data sharing. Typical data types include theorem libraries or number sequences (see Table 1 for more examples). As a highly structured and rigorous field, mathematics fits well with the development of shared data resources. Particularly, theorems and proofs can be conveniently disseminated and checked using available checking engines [6, 7]. Overall, the available sites and repositories provide mathematicians with a large assortment of mathematical objects that can be utilized to solve new problems, establish new theories, and increase knowledge - not only in mathematics. However, despite the advancements made in the mathematics community, data sharing is still a subject that requires continuous attention and improvement. Researchers, institutions, and funding agencies must prioritize the development of rules and infrastructure that facilitate the sharing and citation of research data to increase the prevalence of good data-sharing practices in mathematics - and other fields. By doing so, we may develop an environment for research that is more open and collaborative, thereby accelerating the rate of scientific discovery. In other fields as well, data sharing enables more transparent and repeatable scientific research, making it an increasingly important topic in recent years. By releasing data openly, researchers can increase the likelihood that their findings can be repeated and validated by others, thereby bolstering the credibility of the results. However, issues persist in ensuring that data is shared in an efficient and accountable manner. To ensure its integrity and usability, the shared data must be carefully curated and documented. Care must also be taken to address privacy and confidentiality issues in order to prevent misuse or misinterpretation of the data.
By establishing best practices for data sharing and citation and facilitating the creation of standardized metadata and data management standards, these challenges can be overcome. In conclusion, the sharing and citation of research data is a crucial part of scientific research that has the ability to boost collaboration and speed the production of new knowledge. The field of mathematics has made substantial progress in this area, but additional efforts are required to guarantee that data sharing becomes a generally accepted and well-supported practice in all fields. **Objectives & Outline** Through this technology review, we sought to answer the following research questions, with a particular focus on mathematical research data. 1. What is the current status of open data platforms in academia? 2. What are the main requirements for an open data platform? 3. What are the biggest challenges and obstacles that are preventing the successful implementation of widely used open data platforms? We have structured the paper as follows: We first give the necessary background and emphasize the significance of open data platforms in mathematical research and their role in promoting Open Science in the following section. In the following methods section, we describe our methodology for compiling and evaluating a comprehensive list of mathematics-related open data platforms. The open data platforms that made it to the final list are described in the results section, in which we provide a comprehensive analysis, assessing their features and conformance to the FAIR principles. The discussion section provides an analysis of the results, highlighting the challenges and limitations of the existing open data platforms. In the conclusion part of this paper, we provide guidelines for authors submitting to research data repositories. ### Open Science in Mathematics: The Role of Open Data and Data Platforms Not only has the digital revolution altered how we conduct research, but also how we share it [8]. In this regard, the Open Science movement, which advocates for the accessibility and reuse of scientific research, has been a game-changer. It has played a crucial role in promoting openness, effectiveness, and collaboration within the scientific community [9]. The main goal is to increase the use of research results by society, industry, and science itself, thereby making the scientific community more transparent and efficient. Open access to scientific publications, open-source software, open data, and free educational materials are essential components of Open Science. The concept of open data, which emphasizes making research data publicly accessible, is fundamental to Open Science. This practice facilitates not only the replication and validation of research findings, but also the exploration of new research questions and hypotheses. In the context of mathematics, open data takes on a significance of its own. A variety of data types, including symbolic formulae, numerical arrays, and observational data, characterize mathematical research [10]. Understanding these data types is indispensable for the efficient analysis and communication of mathematical research. This section delves deeper into the implications of open data in mathematical research and investigates how open data platforms can be utilized to make such data accessible to a larger audience. 
#### Open Data in Mathematics As a central component of Open Science, open data refers to the practice of making research data publicly accessible under open licenses. This practice facilitates the replication and validation of findings by allowing other researchers to verify and expand upon previous research. In addition, open data enables the investigation of new research questions and hypotheses, as well as the combination of data from multiple sources to uncover novel insights and patterns. As a result, open data is becoming the norm in an increasing number of scientific disciplines. The field of mathematical research provides an intriguing example in this regard. Numerous data types, including symbolic formulae and theorems, numerical arrays, and observational information, characterize mathematical research (see Table 1 for an overview). Understanding these various data types is essential for analyzing and communicating mathematical research effectively. With a focus on mathematical research data, we will investigate open data platforms and how they can be utilized to make such data accessible to the general public. By understanding the advantages and disadvantages of open data, researchers can make well-informed decisions regarding how to share their research and contribute to the expanding Open Science movement. #### Open Data Platforms In recent years, data sharing has become an essential component of scientific research, as it enables researchers to increase the impact of their work and promote transparency and collaboration. Open data platforms are digital environments in which scientists can store, exchange, and access datasets. Usually, these platforms include data management tools, metadata standardization, and version control. Zenodo and Figshare are prominent open data platforms, each with its own set of features and user communities. Nonetheless, the process of data sharing poses various challenges, such as ensuring accessibility and the need for effective metadata management and standardization. In this regard, an open data platform serves as a centralized repository for storing and sharing (research) data, thereby offering a solution to these challenges. Furthermore, an effective open data platform offers a range of features that facilitate data sharing and reuse, including easy accessibility, enhanced discoverability, simplified data submission mechanisms, functionalities for metadata management, and conformance with FAIR principles. The fundamental key features that an open platform should provide are: 1. Free use: The open data platform should be free to use for researchers, allowing them to share and access data without financial barriers. 2. Accessibility: Researchers should be able to access the data from any location and computational environment, via a standard web browser. Easy accessibility promotes transparency, facilitates reproducibility, and helps to avoid duplication of efforts. 3. Data submission mechanisms: Researchers should be able to submit data to the repository, making it available for future use and replication. This feature promotes the sharing of data and transparency of research results. Moreover, an effective open platform should aim to fulfill additional features such as: 1. FAIR principles compliance: The platform should adhere to the FAIR principles, which emphasize the importance of data being Findable, Accessible, Interoperable, and Reusable. 
These principles place particular importance on the ability to process data by machines and are listed in the Section on FAIR principles. 2. Data quality: The platform should ensure that the data is of high quality, reliable, and accurate. This feature is critical to ensure that research data is useful and impactful for future research. 3. Metadata management: The platform should provide tools for metadata management, including descriptions of the data, authors, and institutions. Metadata helps researchers to discover, access, and understand the data. 4. (Meta)data format and standardization: The data platform should support various data formats and adhere to standardization guidelines to ensure the data is easily findable, accessible, interoperable, and reusable. 5. Security and privacy: The platform should ensure the security and privacy of the data, protecting sensitive information from unauthorized access and misuse. 6. User-friendly interface: The platform should be easy to use and accessible, with low entry barriers, enabling scientists from diverse backgrounds to participate in the Open Science movement. This feature includes clear and concise documentation, as well as intuitive interfaces to upload and retrieve data.

\begin{table} \begin{tabular}{p{71.1pt} p{113.8pt} p{113.8pt}} \hline \hline Category & Types & Description \\ \hline Symbolic data & Formulae, Theorems, Proofs, Functions & Mathematical expressions, theorems, proofs, and functions represented using symbolic notation. \\ \hline Numeric data & (Integer) number sequences, Matrices, Tensors, Finite lattices & Numerical values, matrices, lattices used in mathematical modeling and analysis. \\ \hline Geometric data & Curves, Surfaces, High-dimensional objects, Polytopes & Objects and structures used in geometry and related fields, such as curves, surfaces, and high-dimensional objects, e.g. manifolds. \\ \hline Models & Math models, BioModels & Abstractions of real-world phenomena used to make predictions and test hypotheses. \\ \hline Observational data & Simulations, Experiments, Observations & Empirical data collected through experiments, simulations, and observations of natural phenomena. \\ \hline Text data & arXiv.org, EuDML, Encyclopedia of Math & Written and digital sources of mathematical research, including papers, books, and online resources. \\ \hline \hline \end{tabular} \end{table} Table 1: Types of mathematical research data

In the following, we will describe and review existing research data platforms along those features (see Results Section). Keep in mind that in this review, however, we will restrict our focus on platforms that contain significant mathematical research data. Before we dive into these details, we first introduce how the data was collected. ## Methods To assess the current status of open data platforms in the field of mathematics, the first challenging step consisted of obtaining a comprehensive list of all relevant platforms in the field that were operational at the time of writing this review. A systematic search based only on a review of publications is not a feasible approach in this case, since most open platforms exist without being explicitly documented in the technical literature. Instead, most of the current open platforms are only findable through search engines. Thus, our approach to obtaining an overview of the current ecosystem had to combine a literature review with direct results obtained from search engines.
We started our search by reviewing published articles found via Google Scholar by combining the terms "mathematics", "research data", "scientific data", "research metadata", "scientific metadata", "portal", "repository", "infrastructure", "platform", "metadata management" and "FAIR". By examining these publications as well as the URLs mentioned in them, we obtained a first tentative list of open platforms, not only in the field of mathematics. We also identified further publicly-accessible portals directly through search engines using the same keywords and through searches in aggregators of data repositories that included FAIRsharing [11], MathHub [12], OpenDOAR and re3data [13]. The list was finally completed based on the authors' knowledge. The initial search was performed during the period June 2022 - October 2022. A second round of searches took place in the period March 2023 - June 2023. As a restriction on all queries, we required the availability of English-language content. From the initial list of platforms, we excluded those that did not meet the essential requirements outlined in the section on Open Data Platforms. Specifically, we discarded platforms that were not free to use, not publicly accessible, or did not offer the option of data submission. Research data repositories with strict data-curation requirements often do not offer a direct and open mechanism for data submission. For this review, this forced us to exclude known resources in the mathematical community such as the Encyclopedia of Triangle Centers [14], the ISGCI (Information System on Graph Classes and their Inclusions) [15], the Graded Ring Database [16], the L-functions and modular forms database [17] as well as a long list of publicly-accessible institutional repositories that only accept submissions by their members, such as ATLAS of Finite Group Representations [https://brauer.maths.qmul.ac.uk/Atlas/v3/](https://brauer.maths.qmul.ac.uk/Atlas/v3/) or MIZAR [http://mizar.org/library/](http://mizar.org/library/). Platforms that just aggregate metadata and thus do not offer a mechanism to directly incorporate data and metadata submitted by users were also excluded from the review. These include aggregators such as re3data, DataCite [18] and Dimensions.ai [19]. We also excluded open platforms that did not include a significant amount of mathematical research data at the time of this review and were thus not relevant for our purposes. Among these platforms were B2share [20], Dryad [21], Fairdomhub [22], Mendeley Data [23] and Vivli [24]. Finally, within the mathematical ecosystem, there exists a distinct category of portals that mainly contain written articles that describe concepts within specific mathematical disciplines. These platforms, often built on the MediaWiki framework, encourage user contributions through a wiki-based approach. However, they are often predominantly populated only by a small group of active contributors within the field. While these platforms are free to use, accessible, and allow user contributions, they often display limited adherence to the FAIR principles. Notably, they frequently lack unique persistent identifiers for published articles, do not provide access to comprehensive metadata via an API, and often lack explicit license information.
Due to these limitations, we chose not to include them in this review, as their inclusion would have significantly expanded the list with numerous items that only marginally comply with the FAIR principles. Nevertheless, given their significance in the field of open-access mathematical research data, we have included a non-exhaustive list of such platforms in Appendix B. The final list of platforms is included in Table 3. Each platform has been evaluated based on Austian et al.[25] using publicly available information, in the following categories (see Table 2): Infrastructure, Preservation, Security / Privacy, Archiving, Submission, Access / Sharing, Policy, and whether they are compliant with the FAIR principles[26]. This allowed us to identify presently implemented standards and features related to the management and sharing of research data, with a focus on mathematics. An analysis of the FAIR compliance for each platform is included in Table 5. The following section provides also a brief description of each included platform, based on the evaluation criteria. Together, this information serves as the basis for the discussion on the presented research questions. \begin{table} \begin{tabular}{l l l|l} \hline \hline Category & Sub-category & Options & \\ \hline Infrastructure & Platform & Dataverse \(|\) Figshare \(|\) MediaWiki \(|\) Proprietary \(|\) Open-Source (other) \\ & Cost & Free \(|\) Free to access, but contribution needed for deposit \\ & Size & Size of repository (number of datasets) \\ \hline Preservation & Redundancy & None \(|\) Multiple redundant copies \(|\) Geographically distributed redundant copies \\ & Persistent identifiers & No ID \(|\) DOI \(|\) other persistent ID \(|\) non-persistent ID \\ & Persistent data deposit & None \(|\) Long term data preservation \\ \hline Security / & Security & None \(|\) Authentication mechanism \\ Privacy & Privacy & None \(|\) Distinction between public and private data \\ \hline Archiving & Author identifier & None \(|\) zbMATH Open Author ID \(|\) ORCID ID \(|\) SCOPUS ID \(|\) Other \\ & Publication identifier & None \(|\) zbMATH Open Document ID \(|\) Reference to paper through DOI \(|\) Other reference \\ & Time stamping & None \(|\) Timestamp upon upload \(|\) Timestamp for every version \\ \hline Submission & Data types & No restriction \(|\) Restrictions to specific types \\ & Data size & No restriction \(|\) Restricted to maximal size \\ & Metadata & No metadata necessary \(|\) Controlled Language \(|\) Readme file \\ & Review/Data Quality & None \(|\) Submissions are reviewed and approved for metadata and compliance \\ \hline Access / Sharing & Online access & Data available for free and open download \(|\) User registration needed \\ & API & None \(|\) API for search available \(|\) API for search and submission available \\ & License & CC0 \(|\) Creative Commons License \(|\) Other license (open) \(|\) Other license (restrictive) \\ \hline Policy & Mandate & No \(|\) Yes (Info about: Under what authority does the repository operate (e.g. government)?) \\ & Data Ownership & No \(|\) Yes (Info about: Who owns the ingested data?) \\ & Data Licensing & No \(|\) Yes (Info about: How are the data licensed?) \\ & Preservation & No \(|\) Yes (Info about: What is the practice for long-term preservation?) \\ & Succession plan & No \(|\) Yes (Info about: What actions will be taken if the repository is closed?) 
\\ \hline FAIR Principles & Findability & No \(|\) Yes (Means: The data can be discovered by both humans and machines, for instance by exposing metadata and keywords to search engines) \\ & Accessibility & No \(|\) Yes (Means: The (meta-)data is archived in long-term storage and can be made available using standard technical procedures) \\ & Interoperability & No \(|\) Yes (Means: The data can be exchanged and used across different applications and systems) \\ & Reusability & No \(|\) Yes (Means: The data is well documented and licensing information is provided) \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation criteria, based on Austin et al. [25] and Wilkinson et al. [26] ## Results This section examines the current state of Open Science and Open Data platforms by reviewing the available literature and the implementation details of existing platforms. The evaluation is based on the criteria presented in Table 2 and on the adherence to the FAIR principles. Apart from a short description of each platform, summarized in Table 3, a comparison in terms of the most relevant features for data sharing has been included in Table 4. ### Open Data Platforms For each platform listed, a brief description of its key features is provided, including information related to its creation, technological framework, objectives, and mathematical focus. If available, we include scientific papers and white-papers in which the systems are described. If no such paper is available, we refer directly to the website. **4TU Research Data.** 4TU.ResearchData is an online data repository for science, engineering and design, managed by the 4TU.ResearchData Consortium. It aims to facilitate the sharing of research datasets and guarantees their long-term access by adhering to FAIR principles. The repository has been online since 2010 [27], it is based on Figshare technology, and it is hosted and managed by the TU Delft Library. As of June 2023, it hosts slightly more than 8,000 items, which include 7,850 datasets and 174 software items. The vast majority of the items belong to the field of atmospheric sciences and climate studies. About 100 items are assigned to mathematical categories, including computation theory, numerical and computational mathematics, mathematical physics, applied and pure mathematics. Every uploaded dataset receives a DOI and can be assigned a license, the most popular being CC0 and CC BY 4.0. One of the most distinctive attributes of this repository is its advanced functionality for software preservation, including integration with GitHub and GitLab, dedicated licenses for software and a repository sandbox for testing. Currently, the FAIR principles regarding findability, accessibility and interoperability are fully fulfilled while those concerning reusability are only partially fulfilled. **Archive of Formal Proofs.** The Archive of Formal Proofs [28, 29] is a collection of 700 proofs from various areas of mathematics, including number theory, algebra, analysis, and geometry, among others. All included items have passed both classical peer review and have been verified by the theorem prover Isabelle [30]. The repository additionally contains proof libraries and examples for the Isabelle system. The site was launched in 2004 and is maintained by the Isabelle user community.
The content is organized in the style of a journal, with each article being a set of Isabelle theories and proofs accompanied by definitions, theorems and corollaries which are written in the dedicated input language Isar, and as such executable using Isabelle. Each entry is citable via a locally unique identifier string. The proofs are available under BSD and LGPL software licenses. Overall, the Archive of Formal Proofs fulfills just over half of the FAIR Principles. **arXiv.** arXiv is an open access repository for scholarly preprints and postprints in eight subject areas, including mathematics, physics and computer science. It was founded in 1991, and it is currently maintained by Cornell University. Articles can be submitted to arXiv at no cost and are subject to a moderation process, but not peer-reviewed. As of June 2023, it is hosting well over 2 million scholarly articles organized into 32 distinct categories, with over 500,000 of them being within the field of mathematics. The platform has a strong commitment to open access, ensuring that all of its content is freely available to the public. It fully complies with the FAIR principles for findability, accessibility and reusability while only partially fulfilling the principles for interoperability. Authors can choose from several license types under which an item is made available, including various CC-BY variants, CC0 and an arXiv specific license. \begin{table} \begin{tabular}{l l l l l l} \hline \hline Portal Name & Math focus & Citability & License & \# Items & URL \\ \hline 4TU Research Data & Multidisciplinary & DOI & CC0 (default), CC & \(>8300\) & data.4tu.nl \\ \hline Archive of Formal Proofs & Proofs & URL & BSD, LGPL & \(\sim 700\) & isa-afp.org \\ \hline arXiv & Multidisciplinary & DOI, arXiv ID & CC, arXiv perpetual & \(>2,2\)M & arxiv.org \\ \hline Biomodels & Mathematical models & Model ID & CC & \(>2500\) & ebi.ac.uk/biomodels \\ \hline Database of Ring Theory & Rings & Internal ID & CC BY 4.0 & \(\sim 300\) & ringtheory.herokuapp.com \\ \hline Encyclopedia of Graphs & Graphs & Graph ID & CC BY-NC-SA 3.0 & \(>11\)M & atlas.gregas.eu \\ \hline Figshare & Multidisciplinary & DOI & CC, MIT, GPL & \(>7,3\)M & figshare.com \\ \hline FindStat & Combinatorial statistics & Internal ID & CC BY 4.0 & \(\sim 2000\) & findstat.org \\ \hline HAL & Multidisciplinary & DOI, idHAL & CC, copyright & \(>3\)M & hal.archives-ouvertes.fr \\ \hline Harvard Dataverse & Multidisciplinary & DOI & CC0 (default), custom & \(>156000\) & dataverse.harvard.edu \\ \hline MathRepo & Supporting material & URL & diverse, undefined & \(\sim 70\) & mathrepo.mis.mpg.de \\ \hline Network Repository & Network datasets & URL & CC BY-SA & \(>6600\) & networkrepository.com \\ \hline OEIS & Integer sequences & Internal ID & CC BY-NC 4.0 & \(>35000\) & oeis.org \\ \hline Open Science Framework & Multidisciplinary & DOI, Internal ID & diverse & \(>7\)M & osf.io \\ \hline Open Science Library & Multidisciplinary & DOI & diverse & \(>100\) & codeocean.com \\ \hline Papers with Code & Multidisciplinary & URL & CC BY-SA & \(>4000\) & math.paperswithcode.com \\ \hline \(\pi\)-Base & Topological counterexamples & Internal ID & CC BY 4.0 & \(\sim 500\) & topology.pi-base.org \\ \hline polyDB & Discrete geometric objects & Internal ID & undefined & \(\sim 500\)M & polydb.org \\ \hline Science Data Bank & Multidisciplinary & DOI, Internal ID & diverse & \(>7\)M & scidb.cn \\ \hline SuiteSparse Matrix Collection & Sparse matrices & Internal ID & CC BY 4.0 & \(\sim 2800\) &
sparse.tamu.edu \\ \hline The House of Graphs & Graphs & Graph ID & Copyright & \(\sim 22000\) & houseofgraphs.org \\ \hline Wikidata & Multidisciplinary linked data & Wikidata ID & CC0 & \(>100\)M & wikidata.org \\ \hline Zenodo & Multidisciplinary & DOI, zenodo ID & diverse & \(>2,8\)M & zenodo.org \\ \hline \hline \end{tabular} \end{table} Table 3: List of included portals, sorted alphabetically. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Portal Name & Persistent ID & Author ID & Publication ID & Timestamping & API & Private data \\ \hline 4TU Research Data & ✓(DOI) & ✓(ORCID) & ✓(DOI) & ✓ & ✗ & ✓ \\ \hline Archive of Formal Proofs & ✗ & ✗ & ✓(DOI) & ✓ & ✗ & ✗ \\ \hline arXiv & ✓(DOI) & ✓(ORCID) & ✓(DOI) & ✓ & ✓ & ✗ \\ \hline Biomodels & ✓ & ✗ & ✓(DOI) & ✓ & ✓ & ✗ \\ \hline Database of Ring Theory & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline Encyclopedia of Graphs & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline Fishinger & ✓(DOI) & ✓(ORCID) & ✓(DOI) & ✓ ✓ & ✓ & ✓ \\ \hline FindStat & ✓ & ✗ & ✓ & ✓ ✓ & ✓ & ✗ \\ \hline HAL & ✓(DOI) & ✓(ORCID) & ✓(DOI) & ✓ & ✓ ✓ & ✗ \\ \hline Harvard Dataverse & ✓(DOI) & ✓(ORCID) & ✓(DOI) & ✓ & ✓ ✓ & ✓ \\ \hline MathRepo & ✗ & ✗ & ✓(DOI) & ✓ & ✗ & ✗ \\ \hline Network Repository & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline OEIS & ✓ & ✓ & ✗ & ✗ & ✓ & ✗ \\ \hline Open Science Framework & ✓(DOI) & ✓(ORCID) & ✓(DOI) & ✓ & ✓ ✓ & ✓ \\ \hline Open Science Library & ✓(DOI) & ✗ & ✓(DOI) & ✓ & ✗ & ✓ \\ \hline Papers with Code & ✓ & ✓(ORCID) & ✓(DOI) & ✗ & ✓ ✓ & ✗ \\ \hline \(\pi\)-Base & ✓ & ✗ & ✓(DOI) & ✗ & ✗ & ✗ \\ \hline polyDB & ✓ & ✗ & ✓(DOI) & ✗ & ✓ & ✗ \\ \hline Science Data Bank & ✓(DOI) & ✓(ORCID) & ✓(DOI) & ✓ ✓ & ✓ ✓ & ✗ \\ \hline SuiteSparse Matrix Collection & ✓ & ✗ & ✗ & ✗ & ✓ & ✗ \\ \hline The House of Graphs & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline Wikidata & ✓ & ✓ & ✓(DOI) & ✓ ✓ & ✓ ✓ & ✗ \\ \hline Zenodo & ✓(DOI) & ✓(ORCID) & ✓(DOI) & ✓ ✓ & ✓ ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 4: Main features of included portals. 
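Table 4 records which portals expose a public API for machine-actionable access to their metadata. As a concrete illustration, the sketch below retrieves metadata for a few mathematics preprints through arXiv's public query interface (an Atom feed); the endpoint follows arXiv's published API documentation, while the query string and the fields extracted are illustrative choices.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# arXiv exposes an Atom feed at this documented endpoint.
BASE_URL = "http://export.arxiv.org/api/query"

def fetch_arxiv_metadata(query: str = "cat:math.AG", max_results: int = 5):
    params = urllib.parse.urlencode({
        "search_query": query,
        "start": 0,
        "max_results": max_results,
    })
    with urllib.request.urlopen(f"{BASE_URL}?{params}") as response:
        feed = response.read()
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    root = ET.fromstring(feed)
    entries = []
    for entry in root.findall("atom:entry", ns):
        entries.append({
            "id": entry.findtext("atom:id", namespaces=ns),
            "title": " ".join(entry.findtext("atom:title", namespaces=ns).split()),
            "published": entry.findtext("atom:published", namespaces=ns),
        })
    return entries

for record in fetch_arxiv_metadata():
    print(record["published"], record["title"])
```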
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c} & \multicolumn{4}{c|}{Findable} & \multicolumn{4}{c|}{Accessible} & \multicolumn{4}{c|}{Interoperable} & \multicolumn{4}{c}{Reusable} \\ \hline Portal Name & F1 & F2 & F3 & F4 & A1 & A1.1 & A1.2 & A2 & I1 & I2 & I3 & R1 & R1.1 & R1.2 & R1.3 \\ \hline 4TU Research Data & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & � & � \\ \hline Archive of Formal Proofs & � & ✓ & � & ✓ & ✓ & ✓ & ✓ & ✓ & � & ✓ & � & � & � & � & ✓ & ✓ & ✓ \\ \hline arXiv & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & � & ✓ & ✓ & � & ✓ \\ \hline Biomedicals & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Database of Ring Theory & � & ✓ & � & ✓ & ✓ & ✓ & ✓ & ✓ & � & � & � & � & � & � & � & � & � & ✓ \\ \hline Encyclopedia of Graphs & ✓ & ✓ & � & ✓ & ✓ & ✓ & ✓ & ✓ & � & � & � & � & � & � & ✓ & � & ✓ \\ \hline Figshare & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & � & � & � \\ \hline FindStat & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & � & � & ✓ & � & ✓ \\ \hline HAL & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & � & � \\ \hline Harvard Dataverse & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & � & � \\ \hline MathRepo & � & � & � & � & ✓ & ✓ & ✓ & ✓ & ✓ & � & � & � & � & � & � & � & � \\ \hline Network Repository & ✓ & ✓ & � & ✓ & ✓ & ✓ & ✓ & ✓ & � & � & � & � & � & � & � & � \\ \hline OEIS & ✓ & ✓ & � & ✓ & ✓ & ✓ & ✓ & ✓ & � & ✓ & � & ✓ & ✓ & � & ✓ & ✓ \\ \hline Open Science Framework & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & � & � \\ \hline Open Science Library & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & � \\ \hline Papers with Code & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & � \\ \hline \(\pi\)-Base & ✓ & ✓ & � & ✓ & ✓ & ✓ & ✓ & ✓ & � & � & � & � & � & � & � \\ \hline polyDB & ✓ & ✓ & � & ✓ & ✓ & ✓ & ✓ & ✓ & � & ✓ & � & � & � & � & � \\ \hline Science Data Bank & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & � & � \\ \hline SuiteSparse Matrix Collection & ✓ & ✓ & � & ✓ & ✓ & ✓ & ✓ & ✓ & � & � & � & � & � & � & � \\ \hline The House of Graphs & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & � & � & � & � & � & � \\ \hline Wikidata & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Zenodo & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & � \\ \hline \end{tabular} \end{table} Table 5. FAIR compliance of included portals, sorted alphabetically. BiomodelsThe European Bioinformatics Institute (EBI) hosts the BioModels [31] platform, an open data resource that provides access to more than 1000 curated computational models in systems biology. These models are derived from descriptions of biological phenomena found in the scientific literature, ranging from molecular and cellular processes to more complex views of whole organisms. Each model is assigned a unique and permanent identifier that can be used to cite the models. The platform supports interoperability by permitting models to be downloaded in various formats, such as SBML (Systems Biology Markup Language) or as an ODE system in the Octave syntax. BioModels provides a comprehensive and easily accessible compilation of mathematical models that describe biological systems, thereby promoting research, replication, and collaboration. All models are shared through the CC0 license. Database of Ring TheoryThe Database of Ring Theory is a comprehensive collection of rings and modules. It was created in 2013 by Ryan C. 
Schwiebert and currently holds a total of 162 rings, which can be explored through a list of 175 properties. It also stores data on 11 modules, classified according to 51 distinct properties and 63 theorems that are classified into 8 categories. Each object within the database can be accessed through its URL, which contains an internal ID. While the data within this portal is also published in a repository on GitHub [32], it is important to note that the license information, specifically the CC BY 4.0 license, is not explicitly mentioned on the website. However, this database encourages user participation by enabling data submissions through pull requests on the associated GitHub repository. The current implementation of the database has limited compliance with the FAIR principles. Encyclopedia of GraphsThe Encyclopedia of Graphs is an online repository of graph collections established in 2012 as part of the GreGAS project, funded by the European Science Foundation. As of 2023, it holds 46 collections that encompass not only graphs but also graph-like structures such as maps, configurations, and networks. Users can filter the objects in each collection using a list of over 30 distinct properties that vary depending on the specific collection. Each graph is assigned a unique Universal Graph Identifier, enabling direct access to its properties. The graph data within the repository is provided in the canonical sparse6 format and is released under the CC BY-NC-SA 3.0 license. The platform partially complies with the FAIR principles, but lacks interoperability due to the absence of machine-readable metadata, accessible through an API. FigshareFigshare is a general purpose scientific repository operated by commercial UK-based company Digital Science & Research Solutions Ltd. It was established in 2011 and supports researchers from all disciplines [33]. As of June 2023, it contains more than 7 million records, including around 350,000 entries for mathematics. The mathematical records consist mainly of figures, datasets, and journal articles. Figshare offers generation of DOI's for hosted content, and various licenses are available, such as the various Creative Commons licenses, GPL variants, Apache and MIT licenses. Furthermore, Figshare offers a public REST-based API, OAI-PMH endpoints, and the possibility to integrate GitHub, GitLab, Overleaf and other applications. Figshare fulfills the FAIR principles for findability, accessibility and interoperability and partially fulfills the principles for reusability. For interoperability, Figshare supports OAI-PMH which enables the inclusion of qualified references in (meta-)data. The domains which define the topics of records use controlled vocabularies. A license can be selected for a record for reusability, but the selection is not required. Figshare supports a data citation metadata schema which can be customized by the users. FindStatFindStat is an online database dedicated to combinatorial statistics and their relations [34]. Inspired by the OEIS, the project was initiated in 2011 by Chris Berg and Christian Stump at the Universite du Quebec. Within the database, users can explore nearly 2000 combinatorial statistics, organized into 24 distinct categories, along with 296 maps and 24 collections. The platform is continuously updated with new entries that can be submitted through an online form. Each object receives a unique identifier for easy access to its properties page, explicitly mentioned in the metadata. 
Released under a CC BY 4.0 license, the data can be accessed in plain text or JSON format and integrates seamlessly with SageMath. FindStat fully complies with the FAIR principle of findability and partially complies with the remaining principles. HAL open scienceHAL open science (Hyper Article en Ligne) is an open access data repository for all academic fields. It is operated by the French data center _Centre pour la communication scientifique directe_ (CCSD), which is part of the _French National Centre for Scientific Research_ (CNRS). HAL was launched in 2001 [35] and stores around three million records. Since HAL is a major french research data infrastructure, many publications are written in French. Out of this, more than 130,000 entries are connected to mathematics as of June 2023. The majority of the mathematical entries are journal articles, conference papers or preprints. HAL stores approximately 3000 entries of non-written mathematical research data such as videos, software and photos. HAL offers generation of DOIs for hosted content, and various Creative Commons licenses are available. The HAL platform fully fulfills the FAIR principles except for some reusability sub-principles. Harvard DataverseThe Harvard Dataverse is a cross-disciplinary institutional repository open for submissions from around the world. It runs on the open-source software Dataverse which has been in operation since 2006 [36] and is maintained by the Institute for Quantitative Social Science at Harvard University. The Harvard Repository is one of about 100 installations of the Dataverse software, and as of June 2023, it hosts about 170,000 records (6,000 Dataverse collections and 164,000 datasets). A Dataverse collection is a customizable collection of datasets (or a virtual repository) for organizing, managing, and showcasing datasets, with features allowing custom metadata and searchable metadata facet selection. Overall, this repository contains about 1,7 million files. About 500 datasets and 120 Dataverse collections were tagged under "Mathematical Sciences". Multiple tags are possible for a given record, and "Computer and Information Science" as well as "Social Sciences" are most frequently associated with mathematical content. The data can be put under various licenses, including Creative Commons licenses and the Creative Commons Zero (CC0) waiver. The Harvard Dataverse meets most FAIR criteria. MathRepoThe MathRepo [37], or Mathematical Research Data Repository, is a repository for mathematical research data of and by the Max Planck Institute for Mathematics in Sciences. It has been operational since 2017 and as of 2023 contains more than 70 records. About half of the FAIR principles are not fulfilled, however, they state their plan of restructuring the website to meet the FAIR criteria and follow MaRDI recommendations. The site is built on GitHub with Read the Docs and does not provide an API, but can be queried using URL strings. There is no general license for all records; instead, contributors can choose their preferred license. Network RepositoryThe Network Repository is a cross-disciplinary repository for network graph data [38]. Established in 2012, it has as of June 2023 about 6,600 networks classified in more than 30 domains, which are all available under the terms of a Creative Commons Attribution Share-Alike License (version not specified). The platform assigns a unique string identifier to each network and enables comparisons between different networks based on a given list of properties. 
A key feature of the site is that it offers interactive visualizations to explore the data. Users are invited to upload suitable graph data. The metadata related to each network is not available in a machine-readable format and thus the platform does not fulfill the interoperability FAIR principles. The rest of the principles are partially fulfilled. On-Line Encyclopedia of Integer SequencesThe On-Line Encyclopedia of Integer Sequences (OEIS) [39] was founded in 1964 and contains integer sequences and further information about the individual items. In 1996, the corresponding website was launched. Target groups are professional as well as amateur mathematicians. One of the key features is its ability to search and compare sequences. Each sequence is identified by a serial number, which makes it unequivocally identifiable. The information provided by the encyclopedia includes the sequence itself, paper references, links to material concerning a sequence, the formula used to generate it, keywords, as well as code in several programming languages and visualizations. It presently contains more than 300,000 sequences. The data contained in the Encyclopedia are made available under the CC BY-NC 4.0 license. The OEIS fulfills most of the FAIR criteria. Open Science FrameworkThe Open Science Framework (OSF) [40] is an open-source platform designed to support the entire research (-project) lifecycles. It offers functionalities to design studies, collect and analyze data as well as to publish reports and archive results. The platform is developed and maintained by the Center of Open Science (COS), a non-profit technology organization founded in 2013 that supports scientific research communities. Initially conceived for the field of psychology research, it has since its foundation become multidisciplinary. As of June 2023, it contains more than 4,500 files, 1,800 preprints and 1,300 projects in the field of mathematics. These represent only a minority among the over a million indexed preprints and over six million files hosted on the platform. For each project created using the platform, a DOI can be generated, and a license can be chosen, including Creative Commons, MIT, Apache, and GNU General Public licenses. The platform also offers file storage, version control and integration to citation management and storage tools, including Mendeley, Zotero, Figshare and GitHub. The platform adheres only partially to the FAIR principles, as it lacks persistent metadata, and does not include qualified references or detailed data provenance. Open Science LibraryThe Open Science Library is part of the Code Ocean platform, which provides cloud-based computational environments. This allows computational researchers to share their data and the necessary code to enable others to reproduce the published analysis. It is run by the commercial company Code Ocean Inc. since 2016. The key feature of this platform is that all-needed components, i.e., data, source codes, and the runtime-environment are packaged together as a container ("compute capsule"). These containers are hosted on the platform and can be run from a web browser or locally without the need to install libraries or runtime-special environments. The platform contains more than 3000 capsules categorized into multiple research disciplines, including mathematics. A capsule is assigned a DOI, is built on FAIR principles, and allows easy migration across operating systems and platforms. Licenses vary and can be chosen by the authors (e.g. MIT, CC0, etc.). 
Papers with CodePapers with Code is an online resource that connects scientific papers with code implementations, datasets, methods and evaluation tables. The platform offers additional valuable resources such as benchmarks that facilitate the comparison between state-of-the-art models. The entries in the platform can be explored through six specialized portals, which include the fields of machine learning, computer science, physics, astronomy, mathematics and statistics. The specific portal for mathematics contains as of June 2023 more than 4000 datasets. Open to contributions from all users, the website operates under a CC BY-SA license. Each paper is assigned a string ID based on its title. This ID can be used to access the paper metadata through a REST API. The API also allows access to metadata regarding authors, conferences, datasets, evaluations, methods, models and repositories. The platform fully adheres to the FAIR principles. \(\pi\)-base\(\pi\)-base is a community database that focuses on topological counterexamples. Launched in 2014 by James Dabbs, the project has grown to include 79 spaces, each offering information through 146 properties and 344 theorems. To facilitate easy access, each object within the database is assigned a unique ID, allowing users to retrieve specific objects via their corresponding URLs. While the data is also available in a GitHub repository[41], it lacks direct accessibility in a machine-readable format. The GitHub repository operates under a CC BY 4.0 license, but this licensing information is not explicitly stated on the website. Additionally, the repository features a guide outlining contribution conventions, offering users guidance on how to contribute effectively. polyDBpolyDB is a database of discrete geometric objects which was launched in 2013 by Andreas Paffenholz and Silke Horn as an extension of the software package polymake[42]. As of June 2023 the database contains 21 collections that are classified into four groups: Manifolds, Matroids, Polytopes and Tropical Objects. In total, these collections contain more than 500 million documents. The data for each document and collection is stored as plain JSON and can be accessed through a REST API. To this purpose, both the collections and the documents receive a unique ID that can be used to access the data. Instructions to submit new collections are provided but there is no explicit information on the license under which the data is released. The platform partially fulfills all FAIR principles except the ones related to reusability. Science Data BankThe Science Data Bank (ScienceDB)[43] is a public multidisciplinary research data repository for eScience which was launched in 2015. ScienceDB aims to become a long-term data sharing and data publishing repository in China which covers the entire spectrum of scientific fields. As of June 2023, it has close to 6 million open datasets, with over 25,000 being related to mathematics. These mathematics-related records consist of journal publications as well as datasets, slides, code data and other multimedia content and cover a wide range of mathematical topics. Users have the option to select from a range of licenses, including CC-licenses, for licensing their published data. The uploaded data undergoes a review process by the curators and can be assigned a DOI. To facilitate access and utilization, ScienceDB provides an open REST-based API that allows users to retrieve metadata, conduct entry searches, and obtain dataset metrics. 
While ScienceDB aligns with the majority of the FAIR principles, it does not include qualified references in the metadata. SuiteSparse Matrix Collection[44] is a curated set of sparse matrices that arise in real applications from a wide spectrum of domains, such as thermodynamics, material science and optimization. The target group is the numerical linear algebra community, which is provided with curated data allowing for robust and repeatable experiments or for benchmarking purposes. Matrices are identifiable by ID and related metadata, such as the matrix norm or the structural rank. The matrices can be accessed over several interfaces for Java, Matlab, Julia and Python and are made available under a CC BY 4.0 License. The SuiteSparse Matrix Collection fulfills less than half of the FAIR criteria. The House of GraphsThe House of Graphs (HoG) provides a searchable database of graphs and network structures. It was created in 2013 and includes a growing collection of graphs with nearly 22,000 entries [45] that are classified based on various characteristics, such as size, degree distribution, and connectivity. Registered users can add new graphs to the database and existing graphs can be downloaded in various formats, along with their corresponding metadata. The HoG also provides tools for graph visualization, enabling researchers to gain insights into the structures of the graphs in the database. No information about the used licenses is given. The current implementation of the platform adheres to most of the FAIR principles for findability and accessibility but does not comply the principles for interoperability and reusability. WikidataWikidata is an open, cross-disciplinary and multilingual collaborative knowledgebase [46] that has taken Wikipedia's "anyone can edit" approach to the Linked Open Data world. It is built on MediaWiki, with a set of extensions for handling of mathematical expressions [47, 48] and structured data, collectively known as Wikibase. Launched in 2012, Wikidata currently contains about 1.5 billion statements about 100 million items and 1 million lexemes, expressed via about 10,000 properties. The over 20,000 monthly contributors have made a total of about 2 billion edits so far, mostly via semi-automated tools. The data is licensed CC0 and accessible as dumps, via APIs, via a SPARQL endpoint and via a range of tools for browsing or editing. Wikidata meets all FAIR criteria [49]. Roughly 1% of the content is math-related, including math publications, mathematicians, mathematical research organizations, societies, databases, conferences, software packages, algorithms, theorems, proofs, numbers, number series and more, albeit usually with incomplete coverage [50, 51]. Wikidata's 2022 growth rate was approximately 4% for items, 12% for properties, 46% for lexemes and 6% for statements. ZenodoZenodo[52] is an open science data repository maintained by CERN based on the open-source Invenio framework. It was created in 2015 to provide a solution for scientists to store, share, and publish their research data and digital artifacts, such as research papers, software, or data sets. The Zenodo system provides users with a range of services, including long-term data preservation, versioning, data citations, and DOIs. The platform has a simple and user-friendly interface, making it easy to upload and manage research data. 
Zenodo is integrated with a range of other platforms and services, including Github, CERN's Open Data Portal, and the European Open Science Cloud, among others. The Zenodo platform contains almost 3 million records, with the majority of them being freely accessible according to the FAIR criteria. ## Discussion The previous section has introduced several prominent repositories for sharing mathematical research data, highlighting their distinctive characteristics. By examining the collected data for each repository, a comparison can be made in terms of their focus, size, and available features. An important aspect to take into account is the repository's capability to assign persistent identifiers to its resources. This includes both the items stored within the repository and the references to other resources, such as authors and publications. This ability directly impacts the indexing and citation process of resources within the repository, and it is also closely tied to the FAIR principles, particularly in terms of facilitating findability. Platforms can also be compared based on their metadata management features. In this context, it is crucial to assess whether a clear distinction is made between the data itself and the accompanying metadata. Additionally, it is important to determine whether the metadata includes relevant fields that describe the resource, such as qualified references to other resources in the form of persistent IDs. Another important feature to consider is the inclusion of timestamping data within the metadata, which can indicate when the resource was created or updated. Also relevant to the management of metadata is the platform's provision of API endpoints for retrieving metadata, as well as search and submission capabilities. These features are directly related to the FAIR principles of interoperability. Lastly, we have also analyzed the status of the current platforms with regard to the reusability of their data. This assessment is based on their adherence to the FAIR principles on reusability as well as on the availability of license information for the published resources. The diverse range of features and services provided by repositories plays a crucial role in promoting open science practices and fostering collaborations among researchers. Given the importance of online data repositories in scientific research and their variations in focus, functionality and FAIR compliance, researchers should carefully select the most suitable repository based on their research needs and the nature of the data they intend to share. Based on all the evaluated factors we present a thorough analysis of the current status of open platforms in the field of mathematics, which can support this selection process. This is accompanied by a specific analysis of the current status of FAIR compliance among the reviewed platforms. ### The Status of Open Data Platforms The growing number of open data platforms in science and academia indicates a significant shift toward the democratization of knowledge and the promotion of open science. These platforms provide the necessary infrastructure for storing, sharing, and reusing data, which promotes collaboration, transparency, and the advancement of research. Tables 3 and 4 summarize our evaluation of the current status of these platforms and their usage, especially within mathematical research. The focus of a platform plays a fundamental role in determining the features it offers. 
This is particularly important in relation to the discipline or type of data that the platform stores. As can be observed from the second column in Table 3, platforms can be categorized into two distinct types: multidisciplinary repositories that contain a significant amount of math data and specialized platforms that exclusively store data within specific mathematical disciplines, such as abstract algebra, statistics or topology. The platform's focus determines the type of data it accepts and directly impacts decisions regarding the technology used, the need for curation and review processes, the implementation of a metadata scheme, and the policies for persistent data storage. Within the multidisciplinary platforms, there is a subgroup that consists of general-purpose data repositories such as 4TU research data, Figshare, Harvard Dataverse, the Science Data Bank, and Zenodo. These platforms aim to facilitate the sharing of research data and are therefore targeted at a similar audience. Consequently, it is not surprising that some of these platforms are partially built on the same software, as is the case with 4TU Research data, which is built on Figshare. As a result, these platforms offer a similar range of features, including the assignment of DOIs to resources, ensuring long-term data preservation, providing options for both public and private repositories, and timestamp metadata for each uploaded version. Another group of multidisciplinary platforms focuses on enhancing collaboration, study design, and data analysis among researchers. One example is the Open Science Framework, which serves as a general-purpose data repository but also offers functionalities to manage the entire research project lifecycle. Similarly, the Open Science Library falls into this category. Although the OSL is limited to storing data and source code packaged as computational capsules, it provides researchers with tools for study design, data collection and analysis, report dissemination, collaboration, and integration with other services. Lastly, there is a third group of multidisciplinary platforms that exclusively house textual data in the form of scientific publications or metadata associated with them. Examples of such platforms include arXiv, HAL, and Papers with code. arXiv and HAL primarily focus on storing scientific articles, while Papers with code goes a step further by linking these articles to code repositories and available benchmarks. On the opposite end of the spectrum, we encounter a group of platforms that specialize in storing specific types of data. Examples of these platforms include Archive of Formal Proofs, the Database of Ring Theory, the House of Graphs, and the On-Line Encyclopedia of Integer Sequences, which focus on storing proofs, ring data, graph data, and integer sequences, respectively. Typically, these platforms are built using customized code based on available frameworks. Many of these platforms also choose to open-source their source code and data, often making them available as repositories on platforms like GitHub or GitLab. This approach offers several advantages for platforms with a relatively small number of items, such as \(\pi\)-base, the Database of Ring Theory, and MathRepo. It provides redundancy for data storage, facilitates authentication mechanisms, and enables users to submit new content through pull requests. Another key aspect that varies among platforms is how they manage persistent identifiers. 
This includes the assignment of identifiers to stored resources, as well as the inclusion of qualified references to other resources like authors and publications. While the majority of the reviewed platforms assign persistent identifiers to their resources, there are some exceptions. Platforms like the Archive of Formal Proofs, MathRepo, the Network Repository, and Papers with Code can only refer to resources using a URL, without guaranteeing persistence. In contrast, the remaining platforms all provide internal persistent IDs for their resources. Many general-purpose platforms, in addition to their internal IDs such as arXiv ID, idHAL, or Zenodo ID, also assign a DOI. The allocation of a DOI is particularly important as it simplifies citation practices and helps monitoring the use of data, as well as giving credit to data providers. None of the reviewed platforms enforces an assignment of a persistent identifier to the resource creator. However, some platforms offer optional fields to include identifiers, such as the ORCID ID. Notably, this functionality is primarily supported in multidisciplinary platforms and not in any of the math-specific portals. Additionally, some of the general-purpose platforms generate internal author IDs on an optional basis, as seen in arXiv with the arXiv author ID. While some platforms only store author names as plain text, others like OSL or polyDB also include affiliation information. Certain platforms enable the creation of profile pages for individual authors, allowing them to voluntarily add identifiers. This feature is present in platforms based on MediaWiki, including Wikidata and OEIS. In contrast, when it comes to referencing publications, the support for persistent identifiers is more prevalent compared to authors, as Table 4 confirms. Nearly all platforms employ some form of identifier to reference other publications. The most common method is through the use of DOIs. However, there is an interesting exception in the case of FindStat, which uses MathSciNet IDs to reference the mentioned publications. Furthermore, cross-referencing based on internal identifiers from other platforms is also present. For example, both FindStat and the Encyclopedia of Graphs include references to integer sequences using the OEIS ID. Another distinguishing factor among the platforms is their approach to data review and curation. Specialized repositories focused on specific domains prioritize data curation to ensure the publication of high-quality data. Due to their specialization, these platforms store limited types of data, making the data curation process more manageable. In contrast, general-purpose repositories that cover a wide range of disciplines and types of data often do not perform data curation or review. These platforms typically conduct some form of monitoring for uploaded datasets to ensure compliance with site policies. In some cases, they only review the uploaded metadata, as seen in the case of 4TU Research Data. Other repositories, such as Harvard Dataverse, review all deposits to ensure reusable data are included, offer free consultation services to help users set up their collections, ensure proper metadata, and offer data curation as a paid service. In cases where only basic data control is implemented, it is also common practice to include timestamp information whenever the uploaded data is updated or modified. The availability of API endpoints for (meta)data retrieval varies among the platforms. 
Notably, more than a third of them do not offer any API functionality, limiting access to their data solely through a web browser. This absence often leads to a lack of clear differentiation between data and metadata, significantly hindering data reusability. However, among the platforms that provide API endpoints for resource exploration and search, around half of them also support the submission of new resources through the API. It is worth noting that these capabilities are predominantly found in general-purpose repositories. The licensing terms for the reviewed platforms vary considerably. Numerous platforms provide versatile licensing options, such as numerous Creative Commons (CC) licenses, BSD, LGPL, MIT, GPL, Apache, and copyright licenses. In some cases, platforms adopt a default license that can be modified by the author. For example, both 4TU Research Data and Harvard Dataverse default to CC0 for data release. Conversely, platforms like Biomodels and Wikidata strictly require the use of CC0. Judging by the quantity of items (i.e. datasets, papers, proofs, models, sequences, etc.) available on these platforms, arXiv, Figshare, HAL, Zenodo, and the Open Science Framework are most popular within the academic community among the multidisciplinary platforms. Other platforms specializing in very specific types of data store an even larger number of items, such as Wikidata, the Encyclopedia of Graphs or PolyDB, each of which contains millions of items. In conclusion, the analysis of the current landscape of open data platforms shows that they have acquired significant traction in science and academia, enabling the standard, accessible sharing and reuse of data. Despite their advantages, there are still obstacles to overcome, including enhancing citability through use of persistent identifiers, standardizing licensing practices and improving interoperability through APIs. As the data landscape continues to evolve, these platforms must adapt to meet the changing needs of researchers and ensure the continued advancement of open science. In the next section, we will specifically examine the status of FAIR compliance among the reviewed platforms. #### FAIR Compliance Table 5 provides an assessment of the platforms included in this review with regard to their adherence to the FAIR principles. The data gathered in this table reveals two main points. Firstly, it highlights the shortcomings regarding FAIR compliance within the current open platform ecosystem, specifically within the domain of mathematics. Secondly, it brings to attention the challenges encountered in adhering to certain FAIR principles, which are often not only technical but arise also from ambiguous interpretations of the principles and due to the absence of relevant standards within certain disciplines. In relation to the findability principles, almost all reviewed platforms assign unique and persistent identifiers to their resources, thereby satisfying principle F1. Moreover, almost all of the reviewed platforms provide rich metadata on each object, thereby also fulfilling principle F2. It is worth noting, however, that while most platforms do establish a clear distinction between data and metadata, half of them do not explicitly reference the persistent identifier within the metadata, resulting in non-compliance with principle F3. This principle is also immediately not fulfilled by those platforms that do not assign persistent identifiers in the first place. 
Finally, the adherence to principle F4 concerning the indexing of the resources does not pose a technical challenge and is fulfilled by all platforms. Given that this review focuses on open platforms, all the platforms under examination adhere to principle A1 regarding accessibility, as well as its corresponding sub-principles. However, it is worth mentioning that only half of these platforms comply with principle A2, which guarantees the availability of metadata even if the original data resource is deleted. The non-adherence to this principle can often be attributed to the absence of a clear distinction between data and metadata, coupled in some platforms with the lack of persistent identifiers. Adhering to the interoperability principles requires the publication of metadata in a format that can be readily interpreted by machines. In this regard, approximately two thirds of the platforms grant access to metadata via an API, but only less than half of them employ controlled vocabularies for metadata description. Furthermore, qualified references to other metadata, such as DOIs for publications or ORCID IDs for referenced authors, are included in only half of the platforms, as can also be seen in Table 4. With regard to the principles concerning reusability, principle R1 requires enriching metadata to assess the usefulness of data within a given context. While a clear definition of rich metadata is lacking, our evaluation has determined that only half of the platforms offer some form of contextual metadata. Moreover, six of the reviewed platforms do not disclose metadata with explicit license information, thereby failing to meet the requirements for principle R1.1. Adhering to principle R1.2 presents a technical challenge due to the difficulties involved in describing data provenance and workflows by means of controlled vocabularies and machine-readable formats. This principle is easily fulfilled by platforms that exclusively store specific data types, such as descriptions of particular mathematical objects, but poses a challenge in multidisciplinary platforms capable of storing diverse data resources. For the same reason, compliance with principle R1.3, which mandates conformity to domain-relevant community standards, proves challenging for multidisciplinary platforms. The adherence to this principle relies on the existence of community standards, which may not always be present, particularly in niche disciplines, consequently impeding its compliance. A general overview of these results show that, with the exception of principles F3 and A2, most of the platforms fulfill the FAIR principles for findability and accessibility. However, compliance rates are comparatively lower in the areas of interoperability and reusability. In particular: 1. Findability principles do not present a technical challenge and are mostly fulfilled by all platforms. The main shortcoming in the mathematical ecosystem is the absence of explicit references to persistent identifiers (F1) in the metadata, thereby preventing the fulfillment of principle F3. 2. Accessibility principles are entirely fulfilled except for principle A2. This last principle requires a clear distinction between data and metadata, along with ensuring the availability of metadata even if the original resource is deleted. 3. The compliance of the interoperability principles is coupled with the technical challenges of publishing rich metadata in a machine-readable format. 
Even among the platforms that fulfill principle I1, there is still a lack of compliance regarding the use of controlled vocabularies with persistent identifiers and the inclusion of qualified references to other metadata. 4. The reusability principles are only partially fulfilled among the covered platforms. The compliance with these principles could be improved by adding contextual metadata, explicit license information and metadata on data provenance. The adherence to principle R1.3 requires not only a technical implementation by the platforms but also the definition and acceptance of well-defined community standards within each discipline, and in particular, in the different mathematical disciplines. Requirements for Open Data PlatformsUseful open data platforms should meet a number of essential criteria in order to effectively serve their users. These requirements can be broadly categorized into technical requirements, user-centric requirements, and legal and ethical requirements. Technical Requirements: 1. Scalability and Reliability: As a platform's user base expands, it must be able to scale in order to accommodate growing data volumes and user requests. Additionally, it must be trustworthy, with a high availability and robust backup and recovery processes to prevent data loss. 2. Interoperability: The platform must utilize standard data formats and protocols to ensure that the data can be readily accessed and utilized by multiple systems and applications. It should also facilitate the integration and linking of data from various sources. 3. Findability: Effective data indexing and search capabilities are essential for assisting users in locating the required data. This entails offering metadata, unique identifiers, and a robust search engine. 4. Security: The platform must provide robust security measures to prevent unauthorized data access or modification. This includes encryption, access control, and auditing functions. User-centric Requirements: 1. Usability: The platform's interface should be simple and intuitive. Users should be able to upload, retrieve, and manipulate data without difficulty. 2. Accessibility: All data should be accessible to users with varying requirements, including those with disabilities. Also, APIs can be provided for developers to access and use the data programmatically. 3. Support: Users should have access to documentation, tutorials, and user support in order to understand how to use the platform and its data. Legal and Ethical Requirements: 1. Licensing and Citability: The platform should provide explicit licensing information for each dataset, indicating how it may be utilized and whether or not it should be cited. 2. Compliance with FAIR principles: The data must be discoverable, accessible, interoperable, and reusable (FAIR). This entails making data readily discoverable, accessible under open licenses, compatible with other data sets, and well-documented so that it can be reused in a variety of contexts. 3. Privacy and Ethics: If the platform hosts personal data, it must comply with privacy regulations such as the General Data Protection Regulation (GDPR). Establishing and adhering to ethical guidelines for the collection, use, and sharing of data is necessary. 4. Transparency and Accountability: The platform should have transparent data collection, utilization, and governance policies and be responsible for their enforcement. 
#### Challenges and obstacles for Open Data Platforms Implementing open data platforms that are widely utilized is a challenging task complicated with numerous challenges and obstacles. While the need for these platforms is widely acknowledged, a number of significant obstacles remain. 1. Data Privacy and Security: Ensuring data privacy and security is one of the primary challenges. It is a delicate task to balance the openness of data with the need to safeguard sensitive information. In Europe, laws such as GDPR have imposed stringent data management requirements, making it even more difficult for open data platforms to comply. 2. Data Standardization and Interoperability: The absence of data standardization is a significant technical obstacle. It can be challenging to integrate disparate data sources onto a single platform due to the diverse formats of the data. This lack of interoperability can reduce the platform's utility, as users may have difficulty locating, comparing, and utilizing data from various sources. 3. Data integrity and Curation: Another challenge is ensuring the integrity of data on these platforms. Inaccurate data can lead to erroneous conclusions, so open data platforms must have robust data validation and curation procedures. This is a resource-intensive endeavor that frequently necessitates subject-matter specialists and substantial computational resources. 4. Funding and Sustainability: Open data platforms necessitate substantial resources for development, maintenance, and enhancement. This includes technical infrastructure, personnel, and data curation on an ongoing basis. Obtaining stable, long-term funding for these activities is typically difficult, especially for platforms that provide unrestricted access to their resources. 5. User Engagement and Training: Lastly, promoting the adoption and proper utilization of open data platforms is a challenge in and of itself. Many potential users may lack the technical expertise required to utilize these platforms effectively. This requires investments in user training and ongoing efforts to enhance the usability and accessibility of the platform. In conclusion, despite the fact that the implementation of open data platforms can present significant obstacles, the potential benefits to the scientific community and the general public make it worthwhile to overcome these obstacles. To overcome these obstacles and actualize the full potential of open data, a coordinated effort involving policymakers, funding bodies, technical experts, and end users is required. #### Lessons Learned The sharing of research data is essential for scientific research in all domains and disciplines. Open data platforms promote transparency, reproducibility, and the acceleration of scientific discovery by adhering to Open Science principles. In mathematics, open data platforms are becoming increasingly important for researchers, as they facilitate the sharing and reuse of research data. In this article, we gave an overview and an evaluation of the current status of open data platforms, with a focus on the field of mathematics. We included their primary requirements and the obstacles preventing their successful implementation. To be successful, an open data platform must satisfy numerous requirements that we addressed in this article. However, also several challenges and obstacles were identified, including: 1. 
Ensure that all researchers, regardless of their location, resources, or expertise, have easy access to research data in order to promote data sharing. However, some open data platforms may be difficult to access, posing obstacles for researchers who wish to use the platform. 2. For encouraging researchers to share and utilize data, user-friendly platforms are essential. Researchers may be less inclined to use a platform if it is too complicated or difficult to navigate. 3. For researchers who want to share their data on an open platform, efficient data submission protocols are essential. Processes that are cumbersome or time-consuming may discourage researchers from submitting their data, thereby diminishing the efficacy of the platform. 4. Open data platforms must adhere to the FAIR principles to guarantee that the data is discoverable, accessible, interoperable, and reusable. Platforms that do not adhere to these recommendations may reduce the utility of the data shared on their platform. Understanding these challenges and obstacles can help researchers and developers to create - or improve - more effective platforms for sharing mathematical research data. This way, open data platforms can become an even more valuable resource for mathematicians, fostering collaboration, the sharing of data, and the acceleration of scientific discovery. ## Appendix A: FAIR principles The adherence to the FAIR principles is one of the criteria that we have used to evaluate the current status of open data platforms in the field of mathematics. These guiding principles prioritize machine-actionability and emphasize the findability, accessibility, interoperability, and reusability of data. The primary objective of these principles is to facilitate the sustainable reuse of research data by establishing guidelines that research data platforms and infrastructures should adhere to as part of their service offerings. These principles were initially introduced in a publication by Wilkinson et al. in 2016[26]. For a thorough discussion on their interpretation and additional implementation considerations, refer to the work of Jacobsen et al.[53] For completeness, we present below the list of FAIR principles as outlined by Wilkinson et al. [26] * F1: (Meta)data are assigned globally unique and persistent identifiers. * F2: Data are described with rich metadata. * F3: Metadata clearly and explicitly include the identifier of the data it describes. * F4: (Meta)data are registered or indexed in a searchable resource. * A1: (Meta)data are retrievable by their identifier using a standardized communication protocol. * A1.1: The protocol is open, free and universally implementable. * A1.2: The protocol allows for an authentication and authorisation procedure, where necessary. * A2: Metadata are accessible, even when the data are no longer available * I1: (Meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation. * I2: (Meta)data use vocabularies that follow the FAIR principles. * I3: (Meta)data include qualified references to other (meta)data. * R1: (Meta)data are richly described with a plurality of accurate and relevant attributes. * R1.1: (Meta)data are released with a clear and accessible data usage license. * R1.2: (Meta)data are associated with detailed provenance. * R1.3: (Meta)data meet domain-relevant community standards. 
In summary, the findability principles emphasize the indexing of metadata using persistent and unique identifiers, enabling the discovery and unique identification of data resources. The accessibility principles refer to the capability to access data via open protocols after it has been located, which may involve authentication and authorization. The interoperability principles specifically address machine-actionability, so that data can be readily integrated into existing workflows. The reusability principles describe the criteria for enriching metadata, enabling users to identify the specific context and conditions under which the data can be re-used. ## APPENDIX B: MediaWiki Math Platforms Mathematical research data is frequently published online through wiki-based platforms. These platforms allow for the rapid creation and iterative editing of entries in a collaborative environment. Due to the convenience of this approach, there are several platforms dedicated to specific mathematical disciplines. Table 6 in this appendix presents a selection of exemplary platforms that fall into this category. The table includes a collection of platforms, all of which operate on MediaWiki, except for nLab, which is built on Instiki, a wiki software based on Ruby on Rails. Among the platforms listed, there are a few noteworthy examples worth highlighting: the Encyclopedia of Math, Complexity Zoo, and nLab. The Encyclopedia of Math is an online wiki initially created by the Springer Verlag and managed in cooperation with the European Mathematical Society. It hosts over 8,000 articles covering advanced mathematical topics, which can be updated by users and undergo editorial board review for accuracy. The Complexity Zoo, initiated by Scott Aaronson in 2002, aims to catalog all classes of computational complexity and currently documents over 500 complexity classes. Lastly, nLab is a collaborative platform that includes more than 18,000 pages spanning mathematics, physics, and philosophy, with a strong emphasis on type theory, category theory, and homotopy theory. While this approach facilitates the sharing of research data, the existing implementations currently show limited adherence to the FAIR principles. Merely relying on the default environment offered by a MediaWiki instance does not inherently ensure satisfactory compliance with FAIR principles. Achieving proper FAIR adherence requires a deliberate effort and the addition of supplementary measures. Most of the platforms featured in this table do not assign persistent identifiers to their resources. Instead, the resources are solely accessible through URLs, without any guarantee of persistence. Despite the availability of MediaWiki's API for accessing metadata on existing MediaWiki sites, the listed platforms do not use this mechanism to provide comprehensive metadata describing their stored resources. Qualified references to other resources are rarely provided, and explicit license information is seldom included. Only in cases where user pages are available for each user, some individuals may voluntarily add identifiers, such as an ORCID ID, which can serve as qualified references. However, this information is not directly included in the retrieved metadata. In summary, using MediaWiki as the foundation for a research data platform can serve as a valuable initial step in establishing a collaborative environment for resource sharing among users. 
However, to ensure adherence to the FAIR principles, additional features must be implemented beyond the basic configuration. These include assigning persistent identifiers to resources, making these identifiers readily available along with comprehensive and contextual metadata through the provided API, implementing controlled vocabularies, including qualified references \begin{table} \begin{tabular}{l l l} \hline \hline Platforms & Math Focus & URL \\ \hline Boolean Zoo & Boolean analysis & booleanzoo.weizmann.ac.il \\ \hline Complexity Zoo & Complexity classes & complexityzoo.net \\ \hline Encyclopedia of Math & Articles in the field of mathematics & encyclopediaofmath.org \\ \hline GroupProps & Group properties & groupprops.subwiki.org \\ \hline MOR Wiki & Model benchmarks & morwiki.mpi-magdeburg.mpg.de \\ \hline nLab & Category theory & ncatlab.org \\ \hline The Knot Atlas & Knots & katlas.org \\ \hline The Manifold Atlas & Manifolds & map.mpim-bonn.mpg.de \\ \hline \hline \end{tabular} \end{table} Table 6: Mathematical research data platforms based on MediaWiki to other resources, and clearly publishing license information.
2307.16432
Prometheus: An Open-Source Neutrino Telescope Simulation
The soon-to-be-realized, global network of neutrino telescopes will allow new opportunities for collaboration between detectors. While each detector is distinct, they share the same underlying physical processes and detection principles. The full simulation chain for these telescopes is typically proprietary which limits the opportunity for joint studies. This means there is no consistent framework for simulating multiple detectors. To overcome these challenges, we introduce Prometheus, an open-source simulation tool for neutrino telescopes. Prometheus simulates neutrino injection and final state and photon propagation in both ice and water. It also supports user-supplied injection and detector specifications. In this contribution, we will introduce the software; show its runtime performance; and highlight successes in reproducing simulation results from multiple ice- and water-based observatories.
David Kim
2023-07-31T06:41:23Z
http://arxiv.org/abs/2307.16432v1
# Prometheus: An Open-Source Neutrino Telescope Simulation ###### Abstract: The soon-to-be-realized, global network of neutrino telescopes will allow new opportunities for collaboration between detectors. While each detector is distinct, they share the same underlying physical processes and detection principles. The full simulation chain for these telescopes is typically proprietary which limits the opportunity for joint studies. This means there is no consistent framework for simulating multiple detectors. To overcome these challenges, we introduce Prometheus, an open-source simulation tool for neutrino telescopes. Prometheus simulates neutrino injection and final state and photon propagation in both ice and water. It also supports user-supplied injection and detector specifications. In this contribution, we will introduce the software; show its runtime performance; and highlight successes in reproducing simulation results from multiple ice- and water-based observatories. ## 1 Introduction The global network of neutrino telescopes, defined as gigaton-scale neutrino detectors, has allowed us observe the Universe in new ways. The subset of this network of telescope deployed on ice or water includes the IceCube Neutrino Observatory [1] near the South Pole, proposed detectors ORCA and ARCA [2] in the Mediterranean Sea (KM3NeT collaboration), and Baikal-GVD in Lake Baikal, Russia [3] (BDUNT collaboration). Additionally, new experiments like P-ONE [4] off the coast of Vancouver and TRIDENT [5] in the South China Sea are underway, as well as an expansion for the IceCube Observatory [6, 7]. These telescopes all share many technological features. Each of these detectors operate by detecting Cherenkov photons emitted by neutrino interaction byproducts, and as such follow the same general simulation chain illustrated in Fig. 1. The only proprietary step is the final detector response, which occurs after an optical module (OM) has detected a photon. Yet for many years we have lacked a simulation framework that takes advantage of this similarity. Existing packages individually cover one or two of these common steps, but until now there has been no easy way to simulate a particle from injection to photon propagation. Prometheus [8] looks to correct this by providing an integrated framework to simulate these common steps for arbitrary detectors in ice and water, using a combination of publicly available packages and those newly developed for this work. Neutrino injection is handled by LeptonInjector[9], an event generation recently developed by the IceCube Collaboration. Taus and muons are then propagated by PROPOSAL[10]. Light yield simulation and photon propagation in ice relies on PPC[11], while in water these steps are covered by Fennel[12] and Hyperion, respectively. Prometheus's flexibility allows one to optimize detector configurations for specific physics goals, while the common format allows one to develop reconstruction techniques that may be Figure 1: _Schematic showing the physical processes Prometheus models._ (1), Prometheus selects an interaction vertex within _simulation volume_, depicted here by the lighter-colored region. (2), the final states of this interaction are then propagated, accounting for energy losses and any daughter particles which may be produced. (3), these losses are then converted to a number of photons. (4), finally, these photons are then propagated until they either are absorbed or reach an optical module. applied across different experiments. 
With the recent explosion in machine-learning research, it is now more important than ever that we are able to rapidly implement and test new ideas without relying on tools and data that may be proprietary to their experiments. The rest of this article is organized as follows. In Sec. 2 we outline the format of Prometheus's output and validate it against published results. In Sec. 3 we present community work that employs Prometheus. In Sec. 4 we provide a short example running Prometheus code. Finally, in Sec. 5 we conclude and offer our future outlook.

## 2 Prometheus: Output and Validation

Prometheus outputs to Parquet[13] files that include two primary fields--photons and mc_truth. photons contains information on photons that produce hits in user-defined detection regions. This includes photon arrival time, OM identification numbers, OM position, and the final-state particle that produced the photon. Fig. 2 shows event displays for various detectors generated using the information in photons.
Figure 2: _Event views for various detector geometries._ This shows the events created by either \(\nu_{\mu}\) charged-current or \(\nu_{e}\) charged-current interactions in a variety of geometries of current and proposed neutrino telescopes. Each black dot is an OM, while each colored dot indicates the average time at which photons arrived at the OM; black indicates an earlier arrival, orange indicates a later arrival, and purple an arrival in between. Furthermore, the size of the colored spheres is proportional to the number of photons that arrived at the OM. Detectors which appear against lighter blue backgrounds (the top row) are ice-based, while those against the darker blue backgrounds are water-based.
mc_truth includes information on the injection such as the interaction vertex; the initial neutrino type, energy, and direction; and the final-state types, energies, directions, and parent particles. Users may also save the configuration file (see Sec. 4) as a json file. This allows the user to resimulate events using the same parameters, which is useful for comparing the same event across multiple detector geometries. The information stored in mc_truth allows us to compute effective areas when combined with the weights from LeptonWeighter. Since the effective area mainly depends on the physics implemented in Prometheus--such as neutrino-nucleon interactions, lepton range, and photon propagation--it serves as a reliable indicator of our code's performance when all simulation steps are properly integrated. We can then validate our simulation by comparing these effective areas to published data. Fig. 3 compares different experiments' published effective areas to our estimation using Prometheus simulations. It is worth noting that the calculations for effective area rely on detector-specific cuts and OM response, to which we have limited or no access. We can therefore expect differences of \(\mathcal{O}(10\%)\) from these missing detector details.
Figure 3: _Effective area computed using Prometheus with comparisons to published results._ We compare the \(\nu_{\mu}\) effective areas computed with Prometheus for IceCube, P-ONE3, ORCA, and ARCA for three different hit requirements, denoted by different line styles, to published effective areas. The IceCube effective area, taken from Ref. [14], is for \(\nu_{\mu}+\nu_{\tau}\) events which pass the SMT-8 trigger and agrees with our calculation to within uncertainties. The ARCA [2] and ORCA [15] effective areas are constructed with more complicated hit requirements. Still, the scale and shape of the ORCA and ARCA effective areas and the Prometheus effective areas agree within uncertainties despite the simplified selection criterion. As of the publication of this proceeding, there is no published effective area for P-ONE3.
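The following minimal sketch illustrates how such an output file can be loaded and inspected. It is an assumption-laden example rather than documented Prometheus usage: the file name is a placeholder, the file is assumed to be readable directly with pandas/pyarrow, and the exact sub-fields stored inside photons and mc_truth may differ between Prometheus versions.
```
import pandas as pd

# Placeholder file name for a Prometheus Parquet output file (see Sec. 2).
events = pd.read_parquet("prometheus_output.parquet")

# Top-level columns: the text above describes at least 'photons' and 'mc_truth'.
print(events.columns.tolist())

# Injection truth for the first event (neutrino type, energy, direction, ...).
first_event = events.iloc[0]
print(first_event["mc_truth"])

# Photon-level hit information for the same event (arrival times, OM ids, ...).
print(first_event["photons"])
```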
## 3 Community Contributions

Developing a published reconstruction is often a long-term effort internal to an experiment, meaning methods are slow to implement and difficult to compare. Prometheus remedies this by allowing users to readily generate the large data sets necessary to test the feasibility and relative performance of different reconstruction methods. In this section we highlight two submissions at ICRC 2023 that utilize Prometheus in such a way. The first of these works is a software-focused effort that looks to improve the speed of first reconstructions at detectors, and the second is a hardware-focused effort concerning GPU alternatives for low-power computing. Ref. [20] proposes sparse submanifold convolutional neural networks (SSCNNs) as an alternative to the convolutional neural network (CNN) and traditional trigger-level event reconstructions currently used in neutrino telescopes. Their direction and energy reconstruction algorithm is trained on a data set of 412892 events and tested on a further 50000 events, cut from a set of 3 million generated by Prometheus. As seen in Fig. 4, the SSCNN is capable of running at speeds comparable to the neutrino telescope trigger rates. In Fig. 5, it achieves a median angular resolution below \(4^{\circ}\) for the highest-energy trigger-level events, which matches or outperforms currently employed reconstructions. Their work could be applied to improve on-site reconstructions and filtering for notable events at any detector.
Figure 4: _Event rates of triggers in different neutrino telescopes [16, 17, 18] compared to the run-times of various reconstruction methods._ Notably, sparse submanifold CNNs can process events well above standard trigger rates in both ice- and water-based experiments. The CNN and maximum likelihood method run-times are taken from [19]. Reproduced with permission of the authors from Ref. [20].
Figure 5: _Angular reconstruction performance as a function of the true neutrino energy._ The angular resolution results are binned by the true neutrino energy, with the median taken from each bin to form the lines shown [20].
Ref. [21] is another Prometheus user submission, one that explores hardware accelerators to improve event reconstruction. Specifically, they look at how a Tensor Processing Unit (TPU) compatible algorithm could lower energy consumption while running comparable operations to the current GPU-based ones. Their model reaches a median angular resolution around \(5^{\circ}\) above \(10^{3}\) GeV. Meanwhile, approximate peak total power consumption drops from 100W on the best performing GPU (Apple M1 Pro chip) down to 3W on the Google Edge TPU. The portion of the power consumption directed to the ML accelerator component drops from 15W to 2W. This submission illustrates Prometheus's utility in testing more speculative ideas and providing proof-of-concepts to encourage further work, particularly in the direction of machine learning. The findings of both works are generic to any ice or water neutrino telescope. Their models are trained and tested on example detector geometries rather than the geometry of any existing or proposed detector.
Since Prometheus is fully open-source, we hope it will facilitate not only easier iteration on these techniques but also greater collaboration in sharing and adapting them.

## 4 Examples

In this section we will walk through producing a simulation of \(\nu_{\mu}\) charged-current interactions in an ice-based detector. This example will use mostly default values for injection parameters to show the essential steps in running a simulation, after which we will show how the user can set these parameters. In this example, we set the number of events to simulate, the detector geometry file, and--using the final state particles--the event type. With just this, we can simulate events with Prometheus!
```
import prometheus
from prometheus import config, Prometheus

config["run"]["nevents"] = 100
# resource_dir points to the resources directory shipped with Prometheus
geofile = f"{resource_dir}/geofiles/demo_ice.geo"
config["detector"]["geofile"] = geofile
# injection_config refers to the injection settings within config; the full
# configuration layout is documented in Ref. [8] and abridged here
injection_config["simulation"]["finalstate1"] = "MuMinus"
injection_config["simulation"]["finalstate2"] = "Hadrons"

p = Prometheus(config)
p.sim()
```
As briefly shown here, the config dictionary is our primary interface for configuring Prometheus. Key parameters not shown here include the output directory; random state seed; and injection parameters like injection angle and energy. Information on a detector's medium, either ice or water, is stored in its geometry file. The geofiles directory in the GitHub repository has geometry files for all of the detectors in Fig. 2. Again, this only scratches the surface of Prometheus's capabilities. For a more thorough description of all the options and features available see Ref. [8], which has more in-depth examples for \(\nu_{\mu}\) charged-current events in ice and \(\bar{\nu}_{e}\) neutral-current events in water as well as directions on constructing a detector, weighting events, and getting event rates.

## 5 Conclusion

In this submission we have introduced Prometheus as an open-source software package for simulating neutrino telescopes. We have provided a brief example for simulating events in an ice-based detector and highlighted two current works that employ Prometheus. Prometheus's flexibility of input for detector geometry and injection parameters allows it to handle simulation for the full range of existing and proposed telescopes in both water and ice. As we have demonstrated in the highlighted community contributions, Prometheus facilitates the implementation of new ideas without the need for proprietary data, or for data on a scale not yet available. Via its particular application to machine-learning models, Prometheus can be a key piece in accelerating the development of faster, more efficient reconstructions for all detectors. Finally, it is our hope that Prometheus opens the door for greater collaboration within the community. By encouraging the sharing of methods and simulated data sets, we hope work done by any one effort more quickly and easily becomes progress for every group in the global neutrino telescope network.

## 6 Acknowledgements

We would like to thank all users who tested early versions of this software, including--in no particular order--Miaochen Jin, Eliot Genton, Tong Zhu, Rasmus Orsoe, Savanna Coffel, and Felix Yu. The authors that developed Prometheus were supported by Faculty of Arts and Sciences of Harvard University, the Alfred P.
Sloan Foundation, NSF under grants PLR-1600823, PHY-1607644, Wisconsin Research Council with funds granted by the Wisconsin Alumni Research Foundation, Australian Government through the Australian Research Council's Discovery Projects funding scheme (project DP220101727), and Lynne Sacks and Paul Kim.
2309.05848
C=Anything and the switchback effect in Schwarzschild-de Sitter space
We investigate observables within the framework of the codimension-one C=Anything (CAny) proposal for Schwarzschild-de Sitter (SdS) space under the influence of shockwave sources. Within the proposal, there is a set of time-reversal invariant observables that display the same rate of growth at early and late-times for a background with or without shockwave sources. Once we introduce shockwaves in the weak gravitational coupling regime, there is a decrease in the late-time complexity growth due to cancellations with early-time perturbations, known as the switchback effect. The result shows that some CAny observables in SdS may reproduce the same type of behavior found in anti-de Sitter black holes. We comment on how our results might guide us to new explorations in the putative quantum mechanical theory.
Sergio E. Aguilar-Gutierrez
2023-09-11T22:20:52Z
http://arxiv.org/abs/2309.05848v3
# C=Anything and the switchback effect in Schwarzschild-de Sitter space ###### Abstract We investigate observables within the framework of the codimension-one C=Anything (CAny) proposal for Schwarzschild-de Sitter (SdS) space under the influence of shockwave sources. Within the proposal, there is a set of time-reversal invariant observables that display the same rate of growth at early and late times for a background with or without shockwave sources. Once we introduce shockwaves in the weak gravitational coupling regime, there is a decrease in the late-time complexity growth due to cancellations with early-time perturbations, known as the switchback effect. The result shows that some CAny observables in SdS may reproduce the same type of behavior found in anti-de Sitter black holes. We comment on how our results might guide us to new explorations in the putative quantum mechanical theory. ###### Contents * 1 Introduction * 2 C=Anything in SdS spacetimes * 2.1 SdS spacetimes * 2.2 C=Anything: CMC slices * 3 The switchback effect * 4 Discussion and outlook * A Details on the CAny evaluation ## 1 Introduction Recently, a lot of interest in quantum information theoretic notions has surfaced in an effort to characterize semiclassical gravitational observables. Particularly, holographic complexity has been used to study bulk gravitational dynamics that can probe regions inaccessible to entanglement entropy [1; 2; 3], and whose variation can also reproduce gravitational equations of motion [4; 5; 6; 7; 8; 9]. This notion is also expected to reproduce the properties for the computational complexity of a quantum circuit model1 of the conformal field theory (CFT) dual to asymptotically Anti-de Sitter (AdS) spacetimes [2]. This translates to robust features captured by a large family of proposals, starting with the Complexity=Volume (CV) [3], Complexity=Action (CA) [11; 12], Complexity=Spacetime Volume (CV2.0) [13], and a recent generalization known as the Complexity=Anything (CAny) proposal [14; 15]. There are two defining features for the CAny observables in AdS black holes: (i) a late boundary time linear growth, and (ii) the switchback effect. The latter is a characteristic decrease in the late-time linear growth once perturbations in the geometry are introduced due to energy pulses. Footnote 1: See [10] for a recent review. Perhaps, one of the most exciting aspects about this family of spacetime probes is to characterize the properties of cosmological backgrounds, and in particular for de Sitter (dS) space. The nature of the microscopic degrees of freedom encoded inside the cosmological horizon remains mysterious (see [16] for a recent review). A recent proposal in dS holography, known as stretched horizon holography [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27], has sparked many developments in dS space complexity. The stretched horizon is a region in the static patch of dS space where the dual theory is conjectured to be located [17]. One may therefore perform gravitational dressings with respect to the stretched horizon to probe the dual degrees of freedom. Although the precise location is not explicit in this approach, it is expected to be close to the cosmological horizon. This approach has been explored to study holographic complexity proposals in asymptotically dS space [28; 29; 30; 31; 32; 33; 34]. 
One of the striking features originally found in [28] for the CV, CA, and CV2.0 proposals was that the rate of growth of holographic complexity might diverge at a finite time relative to the stretched horizon, a behavior denoted hyperfast growth. The result was associated with the conjecture that the double-scaled Sachdev-Ye-Kitaev (SYK) model is the quantum mechanical dual to dS\({}_{2}\) space [17]. One may introduce a cutoff surface to perform an analytic continuation allowing for late-time evolution [28]. However, it was recently found that hyperfast growth is not a universal phenomenon in the space of holographic complexity proposals [33]. Instead, a set of the CAny observables may evolve to arbitrarily late (or early) static patch times without introducing regulator surfaces. Notice, however, that it is still unclear what kind of interpretation the different types of CAny observables might have for dS space. To properly define holographic complexity proposals one needs a good understanding of the basic properties that complexity on the dual field theory side should satisfy. Such understanding of the microscopics associated with dS space is still lacking. It is important to learn about the different signals one should look for in a candidate for the dS space holographic model. Alternatively, one might find clearer interpretations of holographic complexity when dS space is embedded in a higher dimensional bulk AdS spacetime. This perspective was recently approached by [34]. They considered a particular type of CAny proposal in a braneworld model consisting of a higher dimensional AdS bulk geometry capped off by a pair of dS space end-of-the-world branes. In this setting, the resulting complexity is associated with the field theory dual living on a brane near the asymptotic boundary of the AdS space. In this case, the CAny proposals with the expected late time growth in the AdS bulk are those that obey the late time growth in the dS braneworld. On the other hand, one of the defining properties of holographic complexity proposals in the AdS context, the switchback effect, has been recently explored in asymptotically dS spacetime [31; 32] for the CV, CA, CV2.0 proposals. In this case, there are energy pulses associated with perturbations in the evolution of the putative dual theory residing on the stretched horizon. This can be an important diagnostic of whether the CAny observables can indeed be associated with complexity consistent with Nielsen's geometric approach in the dual theory [33]. However, the switchback effect has not been studied in the class of CAny observables for which late-time growth in dS space is allowed. This is the main goal of our work, to learn new lessons for stretched horizon holography. To study the switchback effect, we describe the shockwave geometry in asymptotically dS spacetimes. We specialize to Schwarzschild-de Sitter (SdS) space, which describes spherically symmetric vacuum solutions to the Einstein equations with a positive cosmological constant. We work in the perturbative weak gravity regime to treat the shockwave geometry, based on previous findings in [35] and recently discussed in the context of SdS black holes by [36]. As for the observables, we will work with codimension-one CAny proposals evaluated in constant mean curvature (CMC) slices, originally introduced in [15]. The mean curvature is one of the main factors distinguishing the proposals that display late time growth from those with hyperfast growth [33].
Our work is focused on alternating early and late-time perturbations in the geometry and the resulting growth of the CAny observables. Our findings indicate that the set of proposals with late-time growth will display the switchback effect. The rate of growth will be determined by the definition of the CAny proposal. In general, however, the manifestation of the switchback effect in the CAny proposals only occurs once we perform a time-reversal symmetric extension, originally hinted in [37]. We select CMC slices that minimize the CAny proposal and find that the late-time and early-time contributions to complexity growth will partially cancel out. Under these conditions, the analysis of the switchback effect shows great similarity with respect to that of AdS black hole backgrounds. Interestingly, the late time growth during the switchback phase is unaffected by the particular location of the stretched horizon. The structure of the manuscript goes as follows. In Sec. 2 we review generalities about the shockwave geometries in SdS spacetimes, as well as the CAny proposals that display the early and late-time growth in SdS spacetime. We introduce some new results regarding the generalizing the set of proposals that reproduce the late time growth of complexity in SdS space of arbitrary mass (below the extremal one). In Sec. 3, we present the results on the switchback effect in SdS spacetime by performing a series of alternating shockwave insertions in the background geometry in the weak gravity regime. Finally, Sec. 4 includes a summary of our findings in this setting and some interesting directions for future research. For the convenience of the reader, we provide an App. A containing some of the details about the evaluation of the late-time growth of the CAny proposals. ## 2 C=Anything in SdS spacetimes In this section, we briefly review basic notions about \(\mathrm{SdS}_{d+1}\) black holes, the modification in the geometry due to shockwave insertions, and the CAny proposals that we investigate in this work. New results include extending the set of CAny proposals studied in [33] for SdS space, to more general observables than volumes of CMC slices. This allows for different types of late-time growth behaviors in \(d=2\), \(d=3\). We also show that late-time growth persists for arbitrary mass black holes. ### SdS spacetimes The configuration of interest is \(\mathrm{SdS}_{d+1}\) space, described by the line element \[\mathrm{d}s^{2} =-f(r)\mathrm{d}t_{L/R}^{2}+\frac{\mathrm{d}r^{2}}{f(r)}+r^{2} \mathrm{d}\Omega_{d-1}^{2}\,, \tag{1}\] \[f(r) =1-\frac{r^{2}}{\ell^{2}}-\frac{2\mu}{r^{d-2}}\,\quad\mu\equiv \frac{16\pi G_{N}M}{(d-1)\Omega_{d-1}r^{d-2}}\, \tag{2}\] with \(\ell^{2}=d(d-1)/(2\Lambda)\);2\(\Lambda>0\) is the cosmological constant; \(M\) parametrizes the mass of the black hole; \(G_{N}\) is Newton's constant; \(\Omega_{d-1}=2\pi^{d/2}/\Gamma(d/2)\) is the volume of a unit \((d-1)\)-sphere; \(\mu\in[0,\,\mu_{N}]\). The case where \(\mu=\mu_{N}\), with Footnote 2: Through the rest of the work, we use rescaled coordinates where \(\ell=1\). \[\mu_{N}=\tfrac{1}{d}\big{(}\tfrac{d-2}{d}\big{)}^{\frac{d-2}{2}}\, \tag{3}\] describes the most massive black hole supported in dS space, which we refer to as the extremal SdS limit. Meanwhile, \(\mu=0\), reproduces \(\mathrm{dS}_{d+1}\) space. We will be interested in describing shockwaves in the geometry. 
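As a small numerical illustration of the horizon structure implied by (1)-(3), the sketch below locates the black hole and cosmological horizons \(r_{b}<r_{c}\) of a sub-extremal \(\mathrm{SdS}_{4}\) (\(d=3\)) geometry by finding the positive roots of \(f(r)=0\) with \(\ell=1\). The mass value is an arbitrary illustrative choice, not one used later in the text.
```
import numpy as np

d = 3
mu_N = (1 / d) * ((d - 2) / d) ** ((d - 2) / 2)   # extremal mass parameter, Eq. (3)
mu = 0.5 * mu_N                                   # arbitrary sub-extremal choice

# f(r) = 1 - r^2 - 2 mu / r  for d = 3, l = 1;  f(r) = 0  <=>  r^3 - r + 2 mu = 0
roots = np.roots([1.0, 0.0, -1.0, 2.0 * mu])
r_b, r_c = sorted(r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0)
print(f"black hole horizon r_b = {r_b:.4f}, cosmological horizon r_c = {r_c:.4f}")
```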
It's convenient to use Kruskal coordinates, defined by \[\begin{split} U_{\mathrm{b,c}}&=\mathrm{e}^{\frac{ f^{\prime}(r_{\mathrm{b,c}})}{2}(r_{*}(r)-t)}\,\\ V_{\mathrm{b,c}}&=-\mathrm{e}^{\frac{f^{\prime}(r_{ \mathrm{b,c}})}{2}(r_{*}(r)+t)}\,.\end{split} \tag{4}\] where the subindices "b, c" denote coordinates on the black hole and inflating patches. These patches are centered at the horizons \(r_{\mathrm{b,c}}\), and cover the range \(0\leq r<r_{\mathcal{O}}\) and \(r_{\mathcal{O}}\leq r<\infty\) respectively, with \(r_{\mathcal{O}}\) a reference point, which we take as the location of the static patch observer. In the present work, we will focus on the inflating region, as we aim to probe it with CAny observables. We replace \(U_{c}\), \(V_{c}\to U\), \(V\) in what follows. We can then express (1) as \[\mathrm{d}s^{2}=-\frac{4f(r)}{f^{\prime}(r_{c})}\mathrm{e}^{-f^{\prime}(r_{c}) r_{*}(r)}\mathrm{d}U\,\mathrm{d}V+r^{2}\mathrm{d}\Omega_{d-1}^{2} \tag{5}\] where \(r_{\mathcal{O}}\leq r<\infty\), and \(r_{*}=\int\frac{\mathrm{d}r}{f(r)}\) is the tortoise coordinate. Once we add shockwave perturbations, the geometry gets distorted, as well as the location of the stretched horizon, see Fig. 1.3 Although stretched horizon holography fixes the location of the dual theory to be located at a constant \(r\) surface in the static patch, it remains unknown how the theory should behave under shockwave perturbations. We will be considering the case where the shockwaves are sent through \(U=0\) and the stretched horizon remains fixed at a constant \(r=r_{\rm st}\) coordinate4, for which time evolution along the stretched horizon is continuous. Footnote 3: We represent the Penrose diagram with the coordinate \(r\) being continuous along the shockwave geometry, while the coordinates \(U\) or \(V\) can be discontinuous. Footnote 4: Similar considerations have been carried out in [30; 31], as well as alternative proposals. We are mainly interested in the metric under shockwave perturbations in the weak gravitational coupling regime to define the notions of energy below. We consider an SdS black hole with mass \(M\) absorbs a shell of matter with mass \(E\ll M\) along the surface \[U=U_{0}=\mathrm{e}^{\frac{f^{\prime}(r_{\rm st})}{2}(r_{*}(r_{\mathcal{O}})-t_ {0})}\, \tag{6}\] with \(t_{0}\) the static time shockwave insertion with respect to \(r_{\mathcal{O}}\). The SdS black hole after the shockwave has mass \(M-E\) in (1) for matter obeying the null energy condition (NEC) [31; 36]. We glue the coordinates along a shell \(U,\,V\) to the past of the shell with those to the future, denoted by \(\tilde{U},\,\tilde{V}\). The resulting cosmological line element for SdS black holes [35; 36]: \[\mathrm{d}s^{2}=-\frac{4\tilde{f}(r)}{\tilde{f}^{\prime}(r_{c})}\mathrm{e}^{- \tilde{f}^{\prime}(\bar{r}_{c})\bar{r}_{*}(r)}\mathrm{d}\tilde{U}\mathrm{d} \tilde{V}+r^{2}\mathrm{d}\Omega_{d-1}^{2} \tag{7}\] where tilded quantities are given by the replacement of \(M\to M-E\) in the untilded ones. In the inflating patch, the shift along the \(V\) coordinate can be described by a shift in the coordinate \[\tilde{V}=V-\alpha. \tag{8}\] Figure 1: Shockwave perturbations (orange wavy line) along \(U_{c}=0\) in: \(\mathrm{SdS}_{3}\) space (_left_); and \(\mathrm{SdS}_{d+1\geq 4}\) space (_right_), illustrated in the extremal SdS limit. The stretched horizon (in green) is shown at a fixed location \(r=r_{\rm st}\). The cosmological and black hole horizons are shown with the dashed lines. 
The Kruskal coordinates \(U\), \(V\) are displayed with black arrows. The NEC also imposes that \(\alpha\geq 0\)[38]; while \(E\ll M\) guarantees we work in the \(\alpha\ll 1\) limit5. Notice that the shift is exactly the opposite sign than in a crunching geometry [39]. We will express the shift parameter as Footnote 5: See [36] for remarks on the approximation for the extremal SdS limit. \[\alpha=2{\rm e}^{-\frac{f^{\prime}(r_{c})}{2}(t_{c}^{(*)}\pm t_{0})}. \tag{9}\] Here the \(\pm\) sign depends on whether the shockwave is left or right moving, and the cosmological scrambling time \(t_{c}^{(*)}\) will be defined through this relation. We will take the shockwave close to the cosmological horizon and set \(U_{0}=0\) in the following. (7) transforms into \[{\rm d}s^{2}=-2a(U[V-\alpha\Theta(U)]){\rm d}U{\rm d}V+b(U[V- \alpha\Theta(U)]){\rm d}\Omega_{d-1}^{2}\, \tag{10}\] \[a(UV)=-\frac{2}{UV}\frac{f(r)}{f^{\prime}(r_{c})^{2}}\,\quad b(UV)=r^{2}. \tag{11}\] ### C=Anything: CMC slices We are mainly interested in codimension-one observables within the class of the C=Anything proposal, introduced in [14; 15], \[\mathcal{C}^{\epsilon}\equiv\frac{1}{G_{N}}\int_{\Sigma_{\epsilon}}{\rm d}^{ d}\sigma\,\sqrt{h}\ F[g_{\mu\nu},\,\mathcal{R}_{\mu\nu\rho\sigma},\,\nabla_{\mu}]\, \tag{12}\] where \(F[g_{\mu\nu},\,\mathcal{R}_{\mu\nu\rho\sigma},\,\nabla_{\mu}]\) is an arbitrary scalar functional of \(d+1\)-dimensional bulk curvature invariants, \(\Sigma_{\epsilon}\) is a \(d\)-dimensional spatial slice labeled by \(\epsilon(=+,\,-)\), which is anchored on the stretched horizon, \(h\) is the determinant of the induced metric, \(h_{\mu\nu}\), on \(\Sigma_{\epsilon}\). We will let \(F[\dots]\) be a general functional throughout the work. The reader is referred to footnote 9 to verify that these proposals display a switchback effect in AdS planar black holes in the time-reversal symmetrization in (37). To define the region of evaluation, we employ a combination of codimension-one and codimension-zero volumes with different weights, given by \[\mathcal{C}_{\rm CMC}=\tfrac{1}{G_{N}}\bigg{[}\alpha_{+}\int_{\Sigma_{+}}{ \rm d}^{d}\sigma\,\sqrt{h}+\alpha_{-}\int_{\Sigma_{-}}{\rm d}^{d}\sigma\, \sqrt{h}+\alpha_{B}\int_{\mathcal{M}}{\rm d}^{d+1}x\sqrt{-g}\bigg{]} \tag{13}\] where \(\mathcal{M}\) is the bulk region; \(\alpha_{\pm}\), \(\alpha_{B}\) are constants; and \(\Sigma_{+}\), \(\Sigma_{-}\) are the future and past boundary slices in \(\partial\mathcal{M}=\Sigma_{+}\cup\Sigma_{-}\), see Fig 2. The extremization of \(\mathcal{C}_{\rm CMC}\) reveals that \(\Sigma_{\pm}\) are CMC slices, whose mean curvature is given by: \[K_{\epsilon}\equiv\left.K\right|_{\Sigma_{\epsilon}}=-\epsilon\frac{\alpha_{B }}{\alpha_{\epsilon}}\, \tag{14}\] where \(K^{\mu\nu}=h^{\mu\alpha}\nabla_{\alpha}n^{\nu}\) is the extrinsic curvature, and we consider \(n^{\mu}\) to be a future pointing normal vector for both \(\Sigma_{\epsilon}\). To simplify the evaluation of (12), we employ time-symmetric evolution on each of the static patches, so that we set \(t_{L}=t_{R}\) in (1). Moreover, we introduce Eddington-Finkelstein coordinates in (1), \[{\rm d}s^{2}=-f(r){\rm d}v^{2}+2{\rm d}v{\rm d}r+r^{2}{\rm d}\Omega_{d-1}^{2}\, \tag{15}\] which are related to the Kruskal coordinates (4) by \[U={\rm e}^{-\frac{f^{\prime}(r_{c})}{2}u}\,\quad V=-{\rm e}^{\frac{f^{ \prime}(r_{c})}{2}v}. 
\tag{16}\] Evaluating (12, 13) with (1), one finds \[{\cal C}^{\epsilon} =\tfrac{\Omega_{d-1}}{G_{N}}\int_{\Sigma_{\epsilon}}{\rm d}\sigma\, r^{d-1}\sqrt{-f(r)\dot{v}^{2}+2\dot{v}\dot{r}}\,a(r)\,, \tag{17}\] \[{\cal C}_{\rm CMC} =\tfrac{\Omega_{d-1}}{G_{N}}\sum_{\epsilon}\alpha_{\epsilon}\int_ {\Sigma_{\epsilon}}{\rm d}\sigma\,{\cal L}_{\epsilon}\, \tag{18}\] where \(a(r)\) is a scalar functional corresponding to the evaluation of \(F[g_{\mu\nu},\,{\cal R}_{\mu\nu\rho\sigma},\,\nabla_{\mu}]\); \(\sigma\) is a general parametrization of the coordinates \(v(\sigma)\), \(r(\sigma)\) on the slice \(\Sigma_{\epsilon}\); and \[{\cal L}_{\epsilon}\equiv r^{d-1}\sqrt{-f(r)\dot{v}^{2}+2\dot{v}\dot{r}}- \epsilon\tfrac{K_{\epsilon}}{d}\dot{v}r^{d}. \tag{19}\] The details of the evaluation are shown in App. A. The late-time evolution of complexity results in: \[\lim_{t\to\infty}\frac{{\rm d}}{{\rm d}t}{\cal C}^{\epsilon}\simeq\frac{ \Omega_{d-1}}{G_{N}}\sqrt{-f(r_{f})r_{f}^{2(d-1)}}\,a(r_{f})\ \ \mbox{with}\ r_{f}\equiv\lim_{t\to\infty}r_{t}. \tag{20}\] Here \(r_{f}\) is a local maximum of the effective potential at late times: \[{\cal U}\bigg{|}_{r_{f}}=0,\quad\partial_{r}{\cal U}\bigg{|}_{r_{f}}=0,\quad \partial_{r}^{2}{\cal U}\bigg{|}_{r_{f}}\leq 0. \tag{21}\] These conditions lead to the following relation \[H(r_{f},\,K_{\epsilon})\equiv 4r_{f}f\left(r_{f}\right)\left((d-1)f^{\prime} \left(r_{f}\right)+K_{\epsilon}^{2}r_{f}\right)+4(d-1)^{2}f\left(r_{f}\right) {}^{2}+r_{f}^{2}f^{\prime}\left(r_{f}\right){}^{2}=0. \tag{22}\] The roots \(r_{f}\) of the function \(H(r_{f},\,K_{\epsilon})\) can be found explicitly for pure dS and extremal SdS black hole limits, as originally derived in [33], \[\left(r_{f}^{\rm(dS)}\right)^{2} =\frac{{K_{\epsilon}}^{2}-2d(d-1)\pm|K_{\epsilon}|\sqrt{{K_{ \epsilon}}^{2}-4(d-1)}}{2({K_{\epsilon}}^{2}-d^{2})}\,\quad|K_{\epsilon}|\geq 2\sqrt{d-1}\ ; \tag{23}\] \[r_{f}^{\rm(N)} =\sqrt{\tfrac{d-2}{d}}\,\quad|K_{\epsilon}|\geq\sqrt{d}. \tag{24}\] Figure 2: Implementation of the codimension-one CAny proposals with CMC slices as evaluation regions in the unperturbed (S)dS space, following the notation in Fig. 1. Let us now show that for a generic SdS black hole spacetimes (\(d\geq 3\)), there will always be a real root to (22). One can evaluate \(H(r_{f},\,K_{\epsilon})\) in (22) with the roots in (23, 24) while keeping the mass of the black hole arbitrary. We will denote \(m\equiv\mu/\mu_{N}\in[0,\,1]\), such that we may express: \[H\Big{(}r_{f}^{\rm(dS)},\,2\sqrt{d-1}\Big{)}=\frac{4\mu\left((d-2 )^{d}d^{2}\mu-4(d-2)^{2}((d-1)d)^{d/2}\right)}{(d-2)^{4-d}(d-1)^{d-2}d^{d}}\, \tag{25}\] \[H\Big{(}r_{f}^{\rm(N)},\,K_{\epsilon}\Big{)}=\frac{4(1-m)\left(2 K_{\epsilon}^{2}(d-2)+d^{2}(1-m)\right)}{d^{2}}. \tag{26}\] Notice that (25) is clearly negative for all \(d\geq 3\) and \(m\in(0,\,1)\), while (26) is positive. Moreover, as we increase \(|K_{\epsilon}|>2\sqrt{d-1}\), \(H\Big{(}r_{f}^{\rm(dS)},\,K_{\epsilon}\Big{)}\) becomes more negative in (22). Then, according to the _intermediate value theorem_, there will exist at least a real root \(r_{f}\in\Big{[}r_{f}^{\rm(dS)},\,r_{f}^{\rm(N)}\Big{]}\) for general \({\rm SdS}_{d+1}\) space. On the other hand, since we have allowed \(a(r)\) to be an arbitrary function in (20), we see that when \(r_{f}\to\infty\)6 there would be arbitrary types of late-time growth for \(C^{\epsilon}\) depending on the particular choice of \(a(r)\). 
For instance, the case \(a(r)=1\) leads to late-time exponential behavior when \(r_{f}\to\infty\); meanwhile, having a different degree of divergence in (20) would lead to enhancement or decrease in the late-time growth. However, for this to be a valid CAny proposal, we require also a modification, as explained below. Footnote 6: This condition is satisfied in (23) when \(K_{\epsilon}=d\) in \({\rm dS}_{2}\) and (S)dS\({}_{3}\) space. When one evaluates the early time evolution in (20) \(t\to-\infty\), there is a sign flip in \(K_{\epsilon}\to-K_{\epsilon}\). As a result, the rate of growth of the CAny observables at early and late times does not coincide for a given CMC slice. The future or past growth would be given by (20), while the other generates hyperfast growth.7 Importantly for us, the switchback effect is not respected in this case, as on requires a cancellation between early and late-time contributions to the complexity growth. We will make this more explicit below. Footnote 7: See [33] for comments about possible interpretations in terms of circuit complexity. ## 3 The switchback effect We will study the set of observables (12, 13) in the shockwave geometry (10). We begin performing a sequence of an even number of shockwaves, \(n\), in the inflating patch (i.e. \(r\in[r_{\mathcal{O}},\,\infty]\)). Let us denote \(t_{1},\,t_{2},\,\dots,\,t_{n}\) as the insertion static patch times with respect to the stretched horizon in alternating insertion order, i.e. \(t_{2k+1}>t_{2k}\), and \(t_{2k}<t_{2k-1}\), restricted to \(|t_{i+1}-t_{i}|\gg t_{*}\). Accounting for the sign of shift in the backreacted metric (10), the functional (12) has an additive property under these insertions in the strong shockwave limit [14; 15], \[\begin{split}\mathcal{C}^{\epsilon}(t_{L},\,t_{R})=& \mathcal{C}^{\epsilon}(t_{R},\,V_{1})+\mathcal{C}^{\epsilon}(V_{1}- \alpha_{1},\,V_{2})+\dots\\ &+\mathcal{C}^{\epsilon}(U_{n-1}+\alpha_{n-1},\,V_{n})+\mathcal{ C}^{\epsilon}(V_{n}-\alpha_{n},\,t_{L})\end{split} \tag{27}\] where \(\mathcal{C}^{\epsilon}(\cdot,\,\cdot)\) denotes the contributions from \(\Sigma_{\epsilon}\) with two fixed endpoints and all endpoints are located either on the left/right horizon or on the stretched horizon \(r_{\rm st}\). For the evaluation of \(\mathcal{C}^{\epsilon}\), we search for the locations \(u_{R,\,L},\,v_{R,\,L}\) where \(\Sigma_{\epsilon}\) intersect with the left/right horizon \(r_{c}\), \[v_{R}-v_{t}=\int_{r_{c}}^{r_{t}}{\rm d}r\tfrac{\dot{v}}{r}=\int_{r_{c}}^{r_{t} }\tfrac{{\rm d}r}{f(r)}\Bigg{(}1-\frac{P_{v}^{\epsilon}+\frac{\epsilon K_{ \epsilon}}{d}r^{d}}{\sqrt{-\mathcal{U}(P_{v}\epsilon,\,r)}}\Bigg{)}\, \tag{28}\] and \(v_{t}=v_{R}(r_{t})\). We perform the expansion around the final slice, such that (2.21) allows us to approximate \[\lim_{r\to r_{f}}\mathcal{U}(P^{\epsilon}_{v},\,r)\simeq\tfrac{1}{2}(r-r_{f})^{2 }\mathcal{U}^{\prime\prime}(P^{\epsilon}_{v},\,r)+\mathcal{O}(|r-r_{f}|^{3}). \tag{3.3}\] We can proceed as outlined in App. 
A (see (A.7)) to express (2.17) as \[\mathcal{C}^{\epsilon}= -\tfrac{2\Omega_{d-1}}{G_{N}}a(r_{t})\sqrt{-f(r_{t})r_{t}^{2(d-1) }}\int_{r_{e}}^{r_{t}}\tfrac{(P^{\epsilon}_{v}+\tfrac{iK_{\epsilon}}{d}r^{ \epsilon})\mathrm{d}r}{f(r)\sqrt{-\mathcal{U}(P^{\epsilon}_{v},r)}} \tag{3.4}\] \[+\tfrac{2\Omega_{d-1}}{G_{N}}\int_{r_{e}}^{r_{t}}\tfrac{a(r)f(r) \,r^{2(d-1)}+a(r_{t})\sqrt{-f(r_{t})r_{t}^{2(d-1)}}\big{(}P^{\epsilon}_{v}+ \tfrac{iK_{\epsilon}}{d}r^{\epsilon}\big{)}}{f(r)\sqrt{-\mathcal{U}(P^{ \epsilon}_{v},r)}}\.\] Using (3.3) to approximate the potential near the final turning point \(r_{f}\), we find that \[\mathcal{C}^{\epsilon}=\tfrac{\Omega_{d-1}}{G_{N}}P^{\epsilon}_{\infty}v\,\quad P^{ \epsilon}_{\infty}=a(r_{f})\sqrt{-f(r_{f})r_{f}^{2(d-1)}} \tag{3.5}\] where \(r_{f}\) is determined with (2.22). The results above (3.5, 2.20) can be used to express the contributions in (3.1) in Kruskal coordinates (2.16) as: \[\mathcal{C}^{\epsilon}(t_{L},\,V_{R}) =\frac{\Omega_{d-1}}{G_{N}}P^{\epsilon}_{\infty}\log\mathrm{e}^{ t_{L}}V_{R}\, \tag{3.6}\] \[\mathcal{C}^{\epsilon}(U_{L},\,V_{R}) =\frac{\Omega_{d-1}}{G_{N}}P^{\epsilon}_{\infty}\log U_{L}V_{R}\,\] (3.7) \[\mathcal{C}^{\epsilon}(U_{L},\,t_{R}) =\frac{\Omega_{d-1}}{G_{N}}P^{\epsilon}_{\infty}\log U_{L}\mathrm{ e}^{t_{R}}. \tag{3.8}\] However, there is also an early time contribution in the shockwave geometry8, given by the term Footnote 8: This was also noticed in [33, 40] for the AdS black hole background. \[\mathcal{C}^{\epsilon}(V_{L},\,U_{R})=\frac{\Omega_{d-1}}{G_{N}}P^{\epsilon}_{- \infty}\log V_{L}U_{R}\, \tag{3.9}\] where \[P^{\epsilon}_{-\infty}=a(r_{I})\sqrt{-f(r_{I})r_{I}^{2(d-1)}}\,\ \ \text{with}\ \ r_{I}=\lim_{t\to-\infty}r_{t}. \tag{3.10}\] As mentioned above in Sec. 2.2, for \(t\to-\infty\), there is a sign flip in \(K_{\epsilon}\to-K_{\epsilon}\). In that case, \(r_{I}\) is a solution to (2.21, 2.22) with the appropriate modification of \(K_{\epsilon}\). However, as we also pointed out, the CMC slices that display late time growth in the far past/future display hyperfast growth in the future/past respectively. Instead, consider a protocol where we evaluate (2.17) over different CMC slices in the past and future, such that there are always solutions \(r_{f}\) and \(r_{I}\) with respect to the stretch horizon evolution. (3.1) then transforms into \[\mathcal{C}^{\epsilon}(t_{L},\,t_{R})\simeq\tfrac{\Omega_{d-1}}{G_{N}} \Big{[}P^{\epsilon}_{\infty}\log\!\left(U_{1}\mathrm{e}^{t_{R}} \right)+P^{\epsilon}_{-\infty}\log(U_{1}+\alpha_{1})V_{2}+P^{\epsilon}_{\infty }\log(V_{2}-\alpha_{2})U_{3}+\ldots \tag{3.11}\] \[\qquad+P^{\epsilon}_{\infty}\log\!\left((V_{n}-\alpha_{n}) \mathrm{e}^{t_{L}}\right)\Big{]}\.\] We can then extremize (3.11) with respect to an arbitrary interception point (\(V_{i}\), \(U_{i}\)) in the multiple shockwave geometry, \[\frac{\mathrm{d}\mathcal{C}^{\epsilon}(t_{L},\,t_{R})}{\mathrm{d}V_{i}}=0\,\quad \frac{\mathrm{d}\mathcal{C}^{\epsilon}(t_{L},\,t_{R})}{\mathrm{d}U_{i}}=0\, \tag{3.12}\] which allows us to locate \[V_{i}^{\epsilon}=\frac{P_{-\infty}^{\epsilon}\alpha_{i}}{P_{\infty}^{\epsilon}+ P_{-\infty}^{\epsilon}}\,\quad U_{i}^{\epsilon}=-\frac{P_{\infty}^{\epsilon}\alpha_{i}}{P_{\infty}^{ \epsilon}+P_{-\infty}^{\epsilon}}. 
\tag{3.13}\] Replacing the interception points into (3.11) generates: \[\mathcal{C}^{\epsilon}\simeq\frac{\Omega_{d-1}}{G_{N}}\Bigg{(}P_{+\infty}^{ \epsilon}(t_{R}+t_{L})+(P_{+\infty}^{\epsilon}+P_{-\infty}^{\epsilon})\Bigg{(} \sum_{k=1}^{n}t_{k}-nt_{*}^{(c)}\Bigg{)}\Bigg{)}\, \tag{3.14}\] up to constant terms in terms of \(P_{+\infty}^{\epsilon}\), \(P_{-\infty}^{\epsilon}\). Importantly, it was noticed in [33] that the CAny proposals with a generic functional \(F[\dots]\) in (2.18) for an AdS planar black hole background only satisfy the switchback effect when the rate of growth in the past and future are the same. This means that for (2.18) to obey the definition of holographic complexity in [14; 15], we require9 Footnote 9: The derivation of this requirement for the AdS planar black hole case follows the same steps that we have presented for the SdS case, although some replacements need to be made. This includes inverting the integration limits in (3.2, 3.2); setting \(r_{c}\to r_{b}\); \(K_{\epsilon}\to-K_{\epsilon}\); and \(\alpha_{i}\to-\alpha_{i}\). \[P_{+\infty}^{\epsilon}=P_{-\infty}^{\epsilon}. \tag{3.15}\] In that case, the evaluation of (3.14) reduces to \[\mathcal{C}^{\epsilon}\propto|t_{R}+t_{1}|+|t_{2}-t_{1}|+\dots+|t_{n}-t_{L}|- 2nt_{*}^{(c)}\, \tag{3.16}\] where the term \(-2nt_{*}^{(c)}\) appears due to cancellation in the complexity growth due to early and late time perturbations. Notice that a possible way to satisfy (3.15) in SdS space can be obtained by setting \(K_{-}=-K_{+}\) and selecting a complexity proposal \(\mathcal{C}\) as10 Footnote 10: However, instead of minimization, one might as well perform a maximization over the CMC slices, or an averaging, as either of those would satisfy (3.15) in AdS planar black holes; although that would reproduce the hyperfast growth in SdS space. \[\mathcal{C}=\min_{t}\big{(}\mathcal{C}^{+}(t),\,\mathcal{C}^{-}(t)\big{)}. \tag{3.17}\] See Fig. 3 for an illustration of the evolution of the CMC slices in the shockwave geometry. We close the section with a few remarks. First, the result (3.16) reproduces the same type of behavior as the switchback effect for AdS planar black holes, at least for the CAny proposals with early and late-time linear growth in SdS space. Second, as we mentioned in Sec. 2.2 there are fine-tuned situations where \(\mathcal{C}^{\epsilon}\) can have any type of early and late-time growth behavior for SdS\({}_{3}\). It might be interesting to study the modifications in the switchback in those cases. Lastly, the switchback effect has also been recovered in a different and more explicit analysis for particular asymptotically dS backgrounds [30; 31], hinting at the possibility that this is a rather universal phenomenon in shockwave geometries. ## 4 Discussion and outlook In summary, we studied the appearance of the switchback effect in asymptotically dS spacetimes by studying the late (and early time) evolution of the codimension-one CAny observables under shockwave nsertions. We picked a set of observables that are evaluated in CMC slices of different curvature in the past and future boundaries. We proved that under a weakly gravitating regime, the CAny observables show a reduction in the complexity growth due to cancellations of the energy perturbations. We also explicitly verified one of the predictions in [33], namely that a time-reversal symmetric protocol would be necessary for the switchback effect to occur11. 
Moreover, our findings show a great similarity with respect to the behavior of CAny proposals for AdS black holes under the switchback effect. However, we reiterate that the CAny observables in our study do not necessarily represent holographic complexity in asymptotically dS space. To have a clear notion of holographic complexity, we might require a quantum circuit interpretation for the observables, which would also require a reliable quantum circuit model for dS space. Some toy models allowing much progress in this direction have been studied in [44; 45; 46; 47]. Footnote 11: On the field theory side, the notion of Nielsen's geometric approach to complexity [41; 42; 43] suggests that this must indeed be respected.

We comment on some interesting future directions. Our work focused on the alternating shockwave insertion on the inflating patch of the SdS spacetime. However, we can also extend the analysis when the SdS spacetime has multiple patches, to enquire about the black hole interior as well. These types of geometries have been used as toy models for multiverses in [37]. One of the striking features previously found was that the information available to the past light cone of an observer in one of the multiverses would encode the information of the other universes, in the semiclassical regime. It would be very interesting to see whether the notions of general codimension-zero CAny observables may also encode such information. We might be able to learn whether some of these observables have an interpretation from the point of view of quantum cosmology. It might also be fruitful to study how the introduction of perturbations in the geometry affects the coarse-graining of information found in [37].

An important aspect in the search for the holographic description of dS space would be obtaining a dual interpretation of the observables that we studied in this work. It would be interesting to analyze quantum circuit observables that display similar behaviors to the ones studied within our work, in order to see whether the effect of perturbations on the stretched horizon might also have the interpretation of an epidemic type of growth given the insertion of operators. Moreover, it would be interesting to see what type of signals can be found in a UV complete description of the stretched horizon, motivated by the DSSYK model [17; 48].

Applications of CAny to dS space braneworld models were recently carried out in [34] to gain more information about dS holography. In these models [49; 50], one includes an end-of-the-world brane, whose tension determines the cosmological constant in the effective gravitational theory on the brane. It would be interesting to incorporate the switchback effect to characterize perturbations in a double holographic setting, with a clearer field theory dual. This effective theory might be further modified by adding intrinsic gravity theories on the brane, leading to a more intricate holographic complexity evolution [51; 52; 53]. Moreover, the fluctuations associated with the brane location lead to an effective description as dS JT gravity on one of the branes [34]. It would be interesting to study the switchback effect of this effective theory.
Figure 3: Representative CMC slices evolving in a single shockwave geometry in SdS space.
This implies that the CAny proposals in the article do not prove the whole cosmological patch of SdS spacetime. However, we would expect that any notion of static patch holography should also encode the degrees of freedom of the inflating region, similar to investigations in asymptotically AdS space [54].12 Nevertheless, one can probe more of the geometry outside the cosmological horizon using the alternating shockwave geometry. It seems that adding perturbations in the stretched horizon reveals more information, even when its explicit localization is irrelevant. Footnote 12: We thank Eivind Jorstad for correspondence about this point. Finally, much progress in the dS holography has been made possible through the study of interpolating geometries in two-dimensional dilaton-gravity theories [55; 56; 57; 58; 59; 60]. While the CV proposal has been extensively studied in [61] for certain interpolating geometries; new members in this set have recently appeared [58], and the CAny proposals have not been treated yet. This might allow for a clearer interpretation of the properties studied in this work in the context of dS\({}_{2}\) space, appearing from near extremal limits near horizon limits of black hole geometries [62; 63]. ## Acknowledgements It's my pleasure to thank Stefano Baiguera, Alex Belin, Rotem Berman, Michal P. Heller, Eivind Jorstad, Edward K. Morvan-Benhaim, Juan F. Pedraza, Andrew Svesko, Silke Van der Schueren, and Nicolo Zenoni for valuable discussions. I also thank the University of Amsterdam, and the Delta Institute for Theoretical Physics for their hospitality and support during various phases of this project; and to the organizers of the Modave summer school, where part of the project was completed. The work of SEAG is partially supported by the FWO Research Project G0H9318N and the inter-university project iBOF/21/084. ## Appendix A Details on the CAny evaluation For the choice \[\sqrt{-f(r)\dot{v}^{2}+2\dot{v}\dot{r}}=r^{d-1}\,\] (A.1) the Euler-Lagrange equations corresponding to (2.18) can be expressed as \[\dot{r}^{2}+\mathcal{U}(P_{v}^{\epsilon},\,r)=0\,\] (A.2) where \[P_{v}^{\epsilon} \equiv\frac{\partial\mathcal{L}_{\epsilon}}{\partial\dot{v}}= \dot{r}-\dot{v}\,f(r)-\epsilon\frac{K_{\epsilon}}{d}\,r^{d}\ ;\] (A.3) \[\mathcal{U}(P_{v}^{\epsilon},\,r) \equiv-f(r)r^{2(d-1)}-\left(P_{v}^{\epsilon}+\epsilon\frac{K_{ \epsilon}}{d}r^{d}\right)^{2}\.\] (A.4) Then, we can express (2.17) as \[\mathcal{C}^{\epsilon}=\tfrac{2\Omega_{d-1}}{G_{N}}\int_{r_{\text{st}}}^{r_{t}} \tfrac{r^{2(d-1)}a(r)}{\sqrt{-\mathcal{U}(P_{v}^{\epsilon},\,r)}}\mathrm{d}r\.\] (A.5) In a similar way, the parameter \(t\) can be expressed as \[\begin{split} t&=\int_{\Sigma_{\epsilon}}\mathrm{d}r \tfrac{i}{r}=\int_{\Sigma_{\epsilon}}\mathrm{d}r\tfrac{\psi-\dot{r}/f(r)}{ \sqrt{-\mathcal{U}(P_{v}^{\epsilon},\,r)}}\\ &=-2\int_{r_{\text{st}}}^{r_{t}}\tfrac{\mathrm{d}r}{f(r)\sqrt{- \mathcal{U}(P_{v}^{\epsilon},\,r)}}\left(P_{v}^{\epsilon}+\tfrac{eK_{\epsilon }}{d}r^{d}\right)\.\end{split}\] (A.6) We proceed to evaluate (A.5) with (A.6) carefully. Since \(U(P_{v}^{\epsilon},\,r_{t})=0\) by definition, we need to take care of the denominator in (A.5, A.6) at each of the turning points. 
We do so by adding a subtracting a term: \[\mathcal{C}^{\epsilon}= -\tfrac{2\Omega_{d-1}}{G_{N}}a(r_{t})\sqrt{-f(r_{t})r_{t}^{2(d-1 )}}\int_{r_{\text{st}}}^{r_{t}}\tfrac{\left(P_{v}^{\epsilon}+\tfrac{eK_{ \epsilon}}{d}r^{d}\right)\mathrm{d}r}{f(r)\sqrt{-\mathcal{U}(P_{v}^{\epsilon},\,r)}}\] (A.7) \[+\tfrac{2\Omega_{d-1}}{G_{N}}\int_{r_{\text{st}}}^{r_{t}}\tfrac{a( r)f(r)\,r^{2(d-1)}+a(r_{t})\sqrt{-f(r_{t})r_{t}^{2(d-1)}}\left(P_{v}^{\epsilon}+ \tfrac{eK_{\epsilon}}{d}r^{d}\right)}{f(r)\sqrt{-\mathcal{U}(P_{v}^{\epsilon },\,r)}}\.\] Then, we can identify the relationship between time in (A.6) and complexity in (A.7) \[\begin{split}\mathcal{C}^{\epsilon}=&\tfrac{\Omega_{ d-1}}{G_{N}}a(r_{t})\sqrt{-f(r_{t})r_{t}^{2(d-1)}}t\\ &+\tfrac{2\Omega_{d-1}}{G_{N}}\int_{r_{\text{st}}}^{r_{t}}\tfrac {a(r)f(r)\,r^{2(d-1)}+a(r_{t})\sqrt{-f(r_{t})r_{t}^{2(d-1)}}\left(P_{v}^{ \epsilon}+\tfrac{eK_{\epsilon}}{d}r^{d}\right)}{f(r)\sqrt{-\mathcal{U}(P_{v}^ {\epsilon},\,r)}}\.\end{split}\] (A.8) We can then straightforwardly take the time derivative, \[\frac{\mathrm{d}\mathcal{C}^{\epsilon}}{\mathrm{d}t}= \tfrac{\Omega_{d-1}}{G_{N}}a(r_{t})\sqrt{-f(r_{t})r_{t}^{2(d-1)}}\] (A.9) \[+\tfrac{2\Omega_{d-1}}{G_{N}}\frac{\mathrm{d}P_{v}^{\epsilon}}{ \mathrm{d}t}\int_{r_{\text{st}}}^{r_{t}}\mathrm{d}r\frac{r^{2(d-1)}\left(a(r_ {t})\sqrt{-f(r_{t})r_{t}^{2(d-1)}}-a(r)\left(P_{v}^{\epsilon}+\tfrac{eK_{ \epsilon}}{d}r^{d}\right)\right)}{\left(-\mathcal{U}(P_{v}^{\epsilon},\,r) \right)^{3/2}}\.\] Thus, in the \(t\to\infty\) regime we then recover (2.20). Moreover, the \(\frac{\mathrm{d}P_{v}^{\epsilon}}{\mathrm{d}t}\) term vanishes (2.20) provided that the effective potential reaches a maximum [54], shown in (2.21).
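As a small numerical cross-check of the late-time behavior in (20)-(22), the sketch below solves \(H(r_{f},K_{\epsilon})=0\) for a sub-extremal \(\mathrm{SdS}_{4}\) example, bracketing the root between the extremal and pure dS values of (23)-(24) as in the intermediate value theorem argument of Sec. 2.2, and evaluates the corresponding late-time growth rate with \(a(r)=1\) and the overall \(\Omega_{d-1}/G_{N}\) prefactor dropped. The mass and \(K_{\epsilon}\) values are arbitrary illustrative choices.
```
import numpy as np
from scipy.optimize import brentq

d = 3
mu_N = (1 / d) * ((d - 2) / d) ** ((d - 2) / 2)
mu = 0.5 * mu_N                  # arbitrary sub-extremal SdS mass
K = 3.5                          # |K_eps| > 2 sqrt(d-1) ~ 2.83, kept away from K = d

f = lambda r: 1.0 - r**2 - 2.0 * mu / r**(d - 2)
fp = lambda r: -2.0 * r + 2.0 * (d - 2) * mu / r**(d - 1)

def H(r):
    # Eq. (22): the condition fixing the late-time turning point r_f
    return (4.0 * r * f(r) * ((d - 1) * fp(r) + K**2 * r)
            + 4.0 * (d - 1)**2 * f(r)**2 + r**2 * fp(r)**2)

# Bracket the root between the extremal (Nariai) and pure dS values, Eqs. (23)-(24)
r_N = np.sqrt((d - 2) / d)
r_dS = np.sqrt((K**2 - 2*d*(d - 1) + K*np.sqrt(K**2 - 4*(d - 1))) / (2*(K**2 - d**2)))

r_f = brentq(H, r_N, r_dS)                     # H changes sign on [r_N, r_dS]
rate = np.sqrt(-f(r_f) * r_f**(2 * (d - 1)))   # Eq. (20) with a(r_f) = 1, prefactor dropped
print(f"r_f = {r_f:.4f}, late-time growth rate = {rate:.4f}")
```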
2302.14389
Information-Restricted Neural Language Models Reveal Different Brain Regions' Sensitivity to Semantics, Syntax and Context
A fundamental question in neurolinguistics concerns the brain regions involved in syntactic and semantic processing during speech comprehension, both at the lexical (word processing) and supra-lexical levels (sentence and discourse processing). To what extent are these regions separated or intertwined? To address this question, we trained a lexical language model, Glove, and a supra-lexical language model, GPT-2, on a text corpus from which we selectively removed either syntactic or semantic information. We then assessed to what extent these information-restricted models were able to predict the time-courses of fMRI signal of humans listening to naturalistic text. We also manipulated the size of contextual information provided to GPT-2 in order to determine the windows of integration of brain regions involved in supra-lexical processing. Our analyses show that, while most brain regions involved in language are sensitive to both syntactic and semantic variables, the relative magnitudes of these effects vary a lot across these regions. Furthermore, we found an asymmetry between the left and right hemispheres, with semantic and syntactic processing being more dissociated in the left hemisphere than in the right, and the left and right hemispheres showing respectively greater sensitivity to short and long contexts. The use of information-restricted NLP models thus shed new light on the spatial organization of syntactic processing, semantic processing and compositionality.
Alexandre Pasquiou, Yair Lakretz, Bertrand Thirion, Christophe Pallier
2023-02-28T08:16:18Z
http://arxiv.org/abs/2302.14389v1
# Information-Restricted Neural Language Models Reveal Different Brain Regions' Sensitivity to Semantics, Syntax and Context ###### Abstract A fundamental question in neurolinguistics concerns the brain regions involved in syntactic and semantic processing during speech comprehension, both at the lexical (word processing) and supra-lexical levels (sentence and discourse processing). To what extent are these regions separated or intertwined? To address this question, we trained a lexical language model, Glove, and a supra-lexical language model, GPT-2, on a text corpus from which we selectively removed either syntactic or semantic information. We then assessed to what extent these information-restricted models were able to predict the time-courses of fMRI signal of humans listening to naturalistic text. We also manipulated the size of contextual information provided to GPT-2 in order to determine the windows of integration of brain regions involved in supra-lexical processing. Our analyses show that, while most brain regions involved in language are sensitive to both syntactic and semantic variables, the relative magnitudes of these effects vary a lot across these regions. Furthermore, we found an asymmetry between the left and right hemispheres, with semantic and syntactic processing being more dissociated in the left hemisphere than in the right, and the left and right hemispheres showing respectively greater sensitivity to short and long contexts. The use of information-restricted NLP models thus sheds new light on the spatial organization of syntactic processing, semantic processing and compositionality.

## Introduction

Understanding the neural bases of language processing has been one of the main research efforts in the neuroimaging community for the past decades (see, e.g., Friederici, 2011; Binder et al., 2009, for reviews). However, the complex nature of language makes it difficult to discern how the various processes underlying language processing are topographically and dynamically organized in the human brain, and therefore many questions remain open to this day. One central open question is whether semantic and syntactic information are encoded and processed jointly or separately in the human brain. Language comprehension requires access to word meanings (lexical semantics), but also the composition of these meanings to construct the meaning of entire sentences. In languages such as English, semantic composition strongly depends on word order in the sentence - for example, 'The boy kissed the girl' has a different meaning than 'The girl kissed the boy', although both sentences contain the exact same words. The brain constructs these different meanings conditionally on word order, which is the backbone of sentence processing, indicating how to combine the lexical meanings of a sentence's sub-parts. Importantly, meaning construction of new sentences would be done in roughly the same way as long as the structure of the sentences remains the same (The X kissed the Y), independently of the lexical meanings of the single nouns in the sentences ('boy' and 'girl'). This combinatorial property of language allows us to construct meanings of sentences that we have never heard before, and suggests that it might be computationally advantageous for the brain to have developed neural mechanisms for composition that are separate from those dedicated to the processing of lexico-semantic content.
Such neural mechanisms for composition would be sensitive to only the abstract structure of sentences and would implement the syntactic rules according to which sentence parts should be composed. Following related considerations, the dominant view over the past decades claimed that syntactic information is represented and processed in specialized brain regions, akin to the classic modular view [13, 14]. Neuronal modularity of language processing gained support from early lesion studies suggesting that syntactic processing takes place in localized and specialized brain regions such as Broca's area, showing double dissociations between syntactic and semantic processing [15, 16]. Neuroimaging studies [12, 13, 14, 15, 17, 18, 19, 20, 21, 22] have provided further support to this view since then. However, in parallel, an opposing view has argued that semantics and syntax are processed in a common distributed language processing system [1, 13, 15, 16, 17, 18, 20]. Recent work in support of this view has raised concerns regarding the replicability of some of the early results from the modular view [14] and provided evidence that semantic and syntactic processing in the language network might not be so easily dissociated from one another [12, 13]. Neuroimaging studies, cited to defend one or the other view, have mainly relied on one of two methodological approaches: on the one hand, controlled experimental paradigms, which manipulate the words or sentences [1, 15, 16, 17, 18, 19] and, on the other hand, naturalistic paradigms that make use of stimuli closer to what one could find in a daily environment. The former approach probes linguistic dimensions in one of the following ways: varying the presence or absence of syntactic or semantic information [13, 14] or varying the syntactic structure difficulty or the semantic interpretation difficulty [12, 13, 14, 15, 16, 17, 18, 19]. However, the conclusions from such studies may be bounded to the peculiarity of the task and setup used in the experiment [13]. To overcome these shortcomings, over the last years, researchers have become increasingly interested in data using "Ecological Paradigms", in which participants are engaged in more natural tasks, such as conversation or story listening [12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. This avoids any task-induced bias and takes into consideration both lexical and supra-lexical levels of syntax and semantic processing. Integrating supra-lexical level information is essential for understanding language processing in the brain, because the lexical-semantic information of a word and the resulting semantic compositions depend on its context. More recently, following advances in natural language processing, neural language models have been increasingly employed in the analysis of data collected from ecological paradigms. Neural language models are models based on neural networks, which are trained to capture joint probability distributions of words in sentences using next-word, or masked-word prediction tasks [12, 13, 14, 15]. By doing so, the models have to learn semantic and syntactic relations among word tokens in the language. To study brain data collected from ecological paradigms, neural language models are presented with the same sentence stimuli, then, their activations (aka, embeddings) are extracted and used to fit and predict the brain data (_Wehbe et al._, 2014; Huth et al._, 2016; Pasquiou et al._, 2022; Caucheteux and King_, 2022). 
This approach has led to several discoveries, such as wide networks associated with semantic processing uncovered by _Huth et al._ (2016) using word embeddings (see also _Pereira et al._, 2018a), or context-sensitivity maps discovered by _Jain and Huth_ (2018) and _Toneva and Wehbe_ (2019). Despite these advances and extensive neuroscientific and cognitive explorations, the neural bases of semantics, syntax and the integration of contextual information still remain debated. In particular, a central puzzle remains in the field: on the one hand, studies investigating syntax and semantics found vastly distributed networks when using naturalistic stimuli (_Fedorenko et al._, 2020; Caucheteux et al._, 2021) and others found more localized activations for syntax, typically in inferior frontal and posterior temporal regions, when using constraint experimental paradigms e.g., (_Pallier et al._, 2011; Matchin et al._, 2017). Thus, whether there is a hierarchy of brain regions integrating contextual information or the extent to which syntactic information is independently processed from semantic information, in at least some brain regions, remains largely debated to date. So far, insights from neural language models about this central puzzle were also rather limited. This is mostly due to the complexity of the models in terms of size, training and architecture. This complexity makes it difficult to identify how and what information is encoded in their latent representations, and how to use their embeddings to study brain function. _Caucheteux et al._ (2021) used a neural language model, GPT-2, in an novel way to separate semantic and syntactic processing in the brain. Specifically, using a pre-trained GPT-2 model, they built syntactic predictors by averaging the embeddings of words from sentences that shared syntactic but no semantic properties, and used them to identify syntactic-sensitive brain regions. They defined as semantic-sensitive brain regions, the regions that were better predicted by the GPT-2's embeddings computed on the original text, compared to the syntactic predictors. They observed that syntax and semantics, defined in this way, rely on a common set of distributed brain areas. _Jain and Huth_ (2018) used pre-trained LSTM models to study context integration. They varied the amount of context used to generate word embeddings, and obtained a map indicating brain regions' sensitivity to different sizes of context. Here, we propose a new approach to tackle the questions of syntactic vs. semantic processing and contextual integration, by fitting brain activity with word embeddings derived from _information-restricted_ models. By this, we mean that the models are trained on text corpora from which specific types of information (syntactic, semantic, or contextual) were removed. We then assess the ability of these information-restricted models to fit brain activations, and compared it to the predictive performance of a neural model trained on the original dataset. More precisely, we created a text corpus of novels from the Gutenberg Project ([http://www.gutenberg.org](http://www.gutenberg.org)) and used it to define three different sets of features: (i) _Integral features_, the full text from the corpus (ii) _Semantic features_, the content words from the corpus; (iii) _Syntactic features_, where each word and punctuation sign from the corpus is replaced by syntactic characteristics. 
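To make these three feature spaces concrete, the snippet below shows how a single sentence might be represented in each of them (a purely illustrative sketch: the actual tag inventory and identifier format are defined in the Methods and Appendix 1, and may differ from what is shown here).

```python
# Purely illustrative example of the three feature spaces for one sentence.
# The actual identifiers used to build the datasets (POS, morphological features,
# number of closing nodes) are described in Methods and Appendix 1.
sentence = "The boy kissed the girl."

features = {
    # (i) Integral features: all tokens, words and punctuation alike.
    "integral": ["The", "boy", "kissed", "the", "girl", "."],
    # (ii) Semantic features: content words only (function words and punctuation removed).
    "semantic": ["boy", "kissed", "girl"],
    # (iii) Syntactic features: each token replaced by a (POS, Morph, NCN) identifier;
    # the triplets below are hypothetical placeholders, not the real tag set.
    "syntactic": ["DET||0", "NOUN|Sing|1", "VERB|Past|0", "DET||0", "NOUN|Sing|2", "PUNCT||1"],
}
```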
We then trained two types of models on each feature space: a non-contextual model, Glove (_Pennington et al._, 2014), and a contextual model, GPT-2 (_Radford et al._, 2019) (See Fig. 1A). The text transcription of the audio-book, to which participants listened in the scanner, was then presented to the neural language models from which we derived embedding vectors. After fitting these embedded representations to fMRI brain data with linear encoding models, we computed the cross-validated correlations between the encoding models' predicted time courses and the observed time-series. In a first set of analyses, this allowed us to quantify the sensitivity to syntactic and semantic information in each voxel (Fig. 1B). In a second set of analyses, we identified brain regions integrating information beyond the lexical level. We first compared the contextual model (GPT-2) and the non contextual model (Glove), before investigating the brain regions processing short (5 words), medium (15 words) and long (45 words) contexts, using a non-contextualized Glove model as a 0-context baseline (See Fig. 1C.). ## Results ### Dissociation of syntactic and semantic information in embeddings We first assessed the amount of syntactic and semantic information contained in the embedding vectors derived from GloVe and GPT-2 trained on the different sets of features. In order to do so, we trained logistic classifiers to decode either the semantic category or the syntactic category from the embeddings generated from the text of _The Little Prince_. The decoding performances of the logistic classifiers are displayed in Fig.2. The models trained directly on the integral features, that is, the intact texts, have relatively high performance on the two tasks (75% in average for both GloVe and GPT-2). The models trained on the syntactic features performed well on the syntax decoding task (decoding accuracy >95%), but are near chance-level on the semantic decoding task (decoding accuracy around 25% with a chance-level at 16%). Similarly, the models trained on the semantic features display good performance on the semantic decoding task (decoding accuracy greater than 80%), but a relatively poorer decoding accuracy on the syntax decoding task (45%, chance level: 16%). These results validate the experimental manipulation by showing that syntactic embeddings essentially encode syntactic information and semantic embeddings essentially encode semantic information. ### Correlations of fMRI data with syntactic and semantic embeddings Our objective was to evaluate how well the embeddings computed from GloVe and GPT-2 on the syntactic and semantic features fit the fMRI signal in various parts of the brain. For each model/features combination, we computed the increase in R score when the resulting embeddings were appended to a baseline model that comprised low-level variables (acoustic energy, word onsets and lexical frequency). This was done separately for each voxel. The resulting maps are displayed in Fig.3A. The maps reveal that semantic and syntactic feature-derived embeddings from GloVe or GPT-2 significantly explain the signal in a set of bilateral brain regions including frontal and temporal regions, as well as the Temporo-parietal junction, the Precuneus and Dorso-Medial Prefrontal Cortex (dMPC). The classical left-lateralized language network, which includes the Inferior Frontal Gyrus (IFG) and the Superior Temporal Sulcus (STS), is entirely covered. 
Overall, a vast network of regions is modulated by both semantic and syntactic information. Nevertheless, detailed inspection of the maps shows different R score distribution profiles (see Appendix 1, R Scores Distribution for GloVe and GPT-2 Trained on Semantic or Syntactic Features, Appendix 1-Fig.4). For example, syntactic embeddings yield the highest fits in the Superior Temporal Lobe, extending from the Temporal Pole (TP) to the Temporo-Parietal Junction (TPJ), as well as the Inferior Frontal Gyrus (IFG, BA-44 and 47), the Superior Frontal Gyrus (SFG), the Dorso-Medial Prefrontal Cortex (dMPC) and the posterior Cingulate cortex (pCC). Semantic embeddings, on the other hand, show peaks in the posterior Middle Temporal Gyrus (pMTG), the Angular Gyrus (AG), the Inferior Frontal Sulcus (IFS), the dMPC and the Precuneus/pCC.

### Regions best fitted by semantic or syntactic embeddings

As noticed above, despite the fact that the regions fitted by semantic and syntactic embeddings essentially overlap (Fig.3A), the areas where each model has the highest R scores differ. To better visualize the maxima of these maps, we selected, for each of them, the 10% of voxels having the highest R scores. Thresholding at the 90th percentile of the distributions (threshold values displayed in Appendix 1-Fig.4) produces the maps presented in Fig.3B. A first observation is that the number of supra-threshold voxels is quite similar in the left (19%) and right (21%) hemispheres, whether GPT-2 or GloVe is considered, showing that during the processing of natural speech, both syntactic and semantic features modulate activations in both hemispheres to a similar extent. The regions involved include, bilaterally, the TP, the STS, the IFG and IFS, the dMPC, the pMTG, the TPJ, the Precuneus and pCC. One noticeable difference between the two hemispheres, apparent in Fig.3B, concerns the _overlap_ between the semantic and syntactic peak regions: it is stronger in the right than in the left hemisphere. To assess this overlap, we computed the Jaccard indices (see Jaccard index) between voxels modulated by syntax and voxels modulated by semantics. The Jaccard indices were much larger in the right hemisphere (\(J^{right}_{GloVe}=0.52\) and \(J^{right}_{GPT-2}=0.60\)) than in the left (\(J^{left}_{GloVe}=0.14\) and \(J^{left}_{GPT-2}=0.20\)). The left hemisphere displayed distinct peak regions for semantics and syntax, with syntax involving the STS, the pSTG, the anterior TP, the IFG (BA-44/45/47) and the MFG, while semantics involves the pMTG, the AG, the TPJ and the IFS. We only observe overlap in the upper IFG (BA-44), the AG and the posterior STS. On the medial faces, semantics and syntax share peak regions in the Precuneus, the pCC and the dMPC. In the right hemisphere, syntax and semantics share the STS, the pMTG and most frontal regions, with syntax-specific peak regions only in the TP and SFG, and semantics-specific peak regions only in the TPJ. Overall, this shows that the neural correlates of syntactic and semantic features appear more separable in the left than in the right hemisphere.

Figure 1: **Experimental setup.** **A)** A corpus of novels was used to create a dataset from which we extracted three different sets of features: (i) _Integral features_, comprising all tokens (words + punctuation); (ii) _Semantic features_, comprising only the content words; (iii) _Syntactic features_, comprising syntactic characteristics (Part-of-speech, Morphological syntactic characteristics, Number of Closing Nodes) of all tokens. GloVe and GPT-2 models were trained on each feature space. **B)** fMRI scans of human participants listening to an audio-book were obtained. The associated text transcription was input to the neural models, yielding embeddings that were convolved with a haemodynamic kernel and fitted to brain activity using Ridge regression. Brain maps of cross-validated correlations between the encoding models' predictions and the fMRI time-series were computed. **C)** To study sensitivity to context, a GPT-2 model was trained and tested on input sequences of bounded context length (5, 15 and 45). The resulting representations were then used to predict fMRI activity.

Figure 2: **Decoding syntactic and semantic information from word embeddings.** For each dataset and model type (GloVe and GPT-2), logistic classifiers were set up to decode either the syntactic or the semantic categories of the words from the text of _The Little Prince_. Chance-level was assessed using dummy classifiers and is indicated by black vertical lines.

Figure 3: **Comparison of the ability of GloVe and GPT-2 to fit brain data when trained on either the semantic or the syntactic features.** **A)** Significant increase in R scores relative to the baseline model for GloVe (a non-contextual model) and GPT-2 (a contextual model), trained either on the Syntactic features or on the Semantic features (voxel-wise thresholded group analyses; N=51 subjects; corrected for multiple comparisons with an FDR approach, \(p<0.005\); for each figure \(z_{FDR}\) indicates the significance threshold on the Z-scores). **B)** Bilateral spatial organisation of the highest R scores for syntax and semantics. Voxels whose R scores belong to the 10% highest R scores (in green for models trained on the semantic features, and in red for models trained on the syntactic features) are projected onto brain surface maps for GloVe and GPT-2 (overlap in yellow and other voxels in grey). Jaccard scores are computed for each hemisphere, i.e. the ratio between the size of the intersection and the size of the union of the semantic and syntactic peak regions; the proportions of voxels in each category are displayed for each hemisphere and model.

### Gradient of sensitivity to syntax or semantics

The analyses presented above revealed a large distributed network of brain regions sensitive to both syntax and semantics, but with varying local sensitivity to both conditions. We further investigated these differences by defining a _specificity index_ that reflects, for each voxel, the logarithm of the ratio between the R scores derived from the semantic and the syntactic embeddings (see Specificity index). A score of \(x\) indicates that the voxel is \(10^{x}\)-times more sensitive to semantics compared to syntax if \(x>0\) (green), and conversely, that the voxel is \(10^{-x}\)-times more sensitive to syntax compared to semantics if \(x<0\) (red). Voxels with specificity indexes close to 0 are colored in yellow and show equal sensitivity to both conditions. 
Specificity indexes are plotted on surface maps in Fig.4. The top row shows the specificity index of voxels where there was a significant effect for syntactic or for semantic embeddings in Fig.3A, while the bottom row shows group specificity indexes corrected for multiple comparisons using an FDR correction of 0.005 (N=51). The top row of Fig.4 shows that voxels that are more sensitive to Syntax include, bilaterally, the anterior Temporal Lobes (aTL), the STG, the Supplementary Motor Area (SMA), the MFG and sub-parts of the IFG. Voxels more sensitive to Semantics are located in the pMTG, the TPJ/AG, the IFS, the SFS and the Precuneus. Voxels sensitive to both types of features are located in the posterior STG, the STS, the dMPC, the CC, the MFG and the IFG. More specifically, in Fig.4 bottom, one can observe significantly low ratios (in favor of the syntactic embeddings) in the STG, aTL and pre-SMA, and significantly large ratios (in favor of the semantic embeddings) in the pMTG, the AG and the IFS. Specificity index maps are consistent with the maps of R score differences between semantic and syntactic embeddings for GloVe and GPT-2 (see Appendix 1-Fig.5), but provide more insight into the relative sensitivity to syntax and semantics. These maps highlight that some brain regions show stronger responses to the semantic or to the syntactic condition even when they show sensitivity to both.

Figure 4: **Voxels' sensitivity to syntactic and semantic embeddings.** Voxels' specificity indexes are projected onto brain surface maps, reflecting how much semantic information helps to better fit the time-courses of a voxel compared to syntactic information; the greener, the more the voxel is categorized as a semantic voxel, and the redder, the more the voxel is categorized as a syntactic voxel. Yellow regions are brain areas where semantic and syntactic information lead to similar R score increases. The top row displays specificity indexes in voxels where there was a significant effect for semantic or syntactic embeddings in Fig.3A. The bottom row shows the voxel-wise thresholded group analysis; N=51 subjects; corrected for multiple comparisons with \(FDR<0.005\) (for each figure \(z_{FDR}\) indicates the significance threshold on the Z-scores).

### Unique contributions of syntax and semantics

The previous analyses allowed us to quantify the amounts of brain signal explained by the information encoded in the various embeddings. Yet, when two embeddings explain the same amount of signal, that is, have similar R scores, it remains to be clarified whether they hinge on information represented redundantly in the embeddings or on information specific to each embedding. To address this issue, we analyzed the additional information brought by each embedding on top of the other one. To this end, we evaluated correlations that are uniquely explained by the semantic embeddings compared to the syntactic embeddings, and conversely. To quantify the unique contribution of each feature space to the prediction of the fMRI signal, we first estimated the Pearson correlation explained by the embeddings learned from each individual feature space - e.g., using only syntactic embeddings or only semantic embeddings. We then assessed the correlation explained by the concatenation of embeddings derived from different feature spaces - e.g., concatenating syntactic and semantic embedding vectors (_de Heer et al._, 2017). 
Because it can identify single voxels whose responses can be partly explained by different feature spaces, this approach provides more information than simple subtractive analyses that estimate the R score difference per voxel (see Appendix 1-Fig.5). Syntactic embeddings (Fig.5A) uniquely explained brain data in localized brain regions: the STG, the TP, the pre-SMA and the IFG, with R score increases of about 5%. Semantic embeddings (Fig.5B) uniquely explained signal bilaterally in the same wide network of brain regions as the one highlighted in Fig.3A, including frontal and temporo-parietal regions bilaterally as well as the Precuneus and pCC medially, with similar R score increases of around 5%. This suggests that even if most of the brain is sensitive to both syntactic and semantic conditions, syntax is preferentially processed in more localized regions than semantics, which is widely distributed.

### Synergy between syntax and semantics

To probe regions where the joint effect of syntax and semantics is greater than the sum of the contributions of these features, we compared the R scores of the embeddings derived from the integral features with the R scores of the encoding models concatenating the semantic and syntactic embeddings (see Fig.5C). For the embeddings obtained with GloVe, this analysis did not reveal any significant effect. For the embeddings obtained with GPT-2, significant effects were observed in most of the brain, but with higher effects in the semantic peak regions: the pMTG, TPJ, AG and frontal regions.

Figure 5: **Correlations uniquely explained by each embedding.** **A)** Increase in R scores relative to the semantic embeddings when concatenating semantic and syntactic embeddings in the encoding model. **B)** Increase in R scores relative to the syntactic embeddings when concatenating semantic and syntactic embeddings in the encoding model. **C)** Increase in R scores relative to the concatenated semantic and syntactic embeddings for the integral embeddings. These maps are voxel-wise thresholded group analyses; N=51 subjects; corrected for multiple comparisons with an FDR approach, \(p<0.005\); for each figure \(z_{FDR}\) indicates the significance threshold on the Z-scores.

### Integration of contextual information

To further examine the effect of context, we compared GPT-2, the supra-lexical model which takes context into account, to GloVe, a purely lexical model. The differences in R scores between the two models, trained on each of the three datasets, are presented in Fig.6. GPT-2 embeddings elicit stronger R scores than GloVe. The difference spreads over wider regions when the models were trained on syntax compared to semantics (see Fig.6, top left and right). The comparison for syntax led to significant differences bilaterally in the STS/STG, from the Temporal Pole to the TPJ, in superior, middle and inferior frontal regions, and medially in the pCC and dMPC. For semantics, the comparison only led to significant differences in the Precuneus, the right STS and the posterior STG. Fig.6 (bottom left) shows the comparison between GPT-2 and GloVe when trained on the Integral features. Given that both semantic and syntactic contextual information were available to GPT-2, these maps reflect the regions that benefit from context during story listening. Showing that context has an effect is one thing, but different brain regions are likely to have different integration window sizes. 
To address this question, we developed a fixed-context window training protocol to control for the amount of contextual information used by GPT-2 (Fig.1C). We trained models with short (5 tokens), medium (15 tokens) and long (45 tokens) range window sizes. This ensured that GPT-2 was not sampling outside the learnt distribution at inference, and was not using more context than was available in the context window. Comparing GPT-2 with 5 tokens of context to GloVe (0-size context) highlighted a large network of frontal and temporo-parietal regions. Medially, it included the Precuneus, the pCC and the dMPC (Fig.7, Short). Short-context sensitivity showed peak effects in the Supramarginal gyri, the pMTG and, medially, in the Precuneus and pCC. Counting the number of voxels showing significant short-context effects highlighted an asymmetry between the left and right hemispheres, with 1.6 times more significant voxels in the left hemisphere compared to the right. Contrasting a GPT-2 model using 15 tokens of context (the average size of a sentence in _The Little Prince_) versus a GPT-2 model using only 5 tokens yielded localized significant differences in the SFG/SFS, the TP, the MFG and the STG near Heschl's gyri, and medially in the Precuneus and pCC (Fig.7, Medium). The biggest medium-context effects included the left MFG, the right SFG and dMPC, and bilaterally the Precuneus and pCC. Finally, contrasting models using respectively 45 and 15 tokens of context revealed 2.8 times as many significant differences in the right hemisphere as in the left. Significant effects were highest bilaterally and medially in the pCC, followed, in the right hemisphere, by the Precuneus, the dMPC, MFG, SFG, STS and TP (see Fig.7, bottom). Taken together, our results show 1) that syntax dominantly determines the integration of contextual information, 2) that a bilateral network of frontal and temporo-parietal regions is modulated by short context, 3) that short-range context integration is preferentially located in the left hemisphere, 4) that the right hemisphere is involved in the processing of longer context sizes, and finally 5) that medial regions (Precuneus and pCC) are core regions of context integration, showing context effects at all scales.

Figure 6: **Comparison of lexical and supra-lexical processing levels.** Brain regions that are significantly better predicted by GPT-2 (in red) compared to GloVe, when trained on syntactic features (top left), semantic features (top right) and integral features (bottom left). Maps are voxel-wise thresholded group analyses; N=51 subjects; corrected for multiple comparisons with an FDR approach, \(p<0.005\); for each figure \(z_{FDR}\) indicates the significance threshold on the Z-scores.

Figure 7: **Integration of context at different levels of language processing.** **A)** Per-hemisphere histograms of significant context effects after group analyses (N=51 subjects); thresholded at p\(<\)0.005 voxel-wise, corrected for multiple comparisons with the FDR approach. **B)** Uncorrected group-averaged surface brain maps representing R score increases when fitting brain data with models leveraging increasing sizes of contextual information. 
**C)** Corrected group-averaged surface brain maps representing R score increases when fitting brain data with models leveraging increasing sizes of contextual information; thresholded at p\(<\)0.005 voxel-wise, corrected for multiple comparisons with the FDR approach (for each figure \(z_{FDR}\) indicates the significance threshold on the Z-scores). **(top row)** Comparison of the model trained with 5 tokens of context (GPT-2\({}_{Context-5}\)) with the non-contextualized GloVe. **(middle row)** Comparison of the models respectively trained with 15 (GPT-2\({}_{Context-15}\)) and 5 (GPT-2\({}_{Context-5}\)) tokens of context. **(bottom row)** Comparison of the models respectively trained with 45 (GPT-2\({}_{Context-45}\)) and 15 (GPT-2\({}_{Context-15}\)) tokens of context.

## Discussion

Language comprehension in humans is a complex process, which involves several interacting sub-components (word recognition, processing of syntactic and semantic information to construct sentence meaning, pragmatic and discourse inference, ...) (e.g. _Jackendoff, 2002_). Discovering how the brain implements these processes is one of the major goals of neurolinguistics. A lot of attention has been devoted, in particular, to the syntactic and semantic components (_Friederici, 2017; Binder and Desai, 2011_, for reviews), and the extent to which they are implemented in (practically) distinct or identical regions is still debated (e.g. Fedorenko et al., 2020). In Fig.8, we present the outcome of a meta-analysis of the literature based on a search for the keywords 'syntactic' and 'semantic' in the Neurosynth database (see Meta-Analysis based on Neurosynth). This analysis, albeit somewhat simplistic, reveals the brain regions most often associated with syntax and semantics. It must be noted that a fair proportion of the studies included in the meta-analysis relied on controlled experimental paradigms with single words or sentences, based on the manipulation of complexity or violations of expectations. To study language processing in a more natural way, several recent studies have presented naturalistic texts to participants, and have analyzed their brain activations using Artificial Neural Language Models (e.g. Pereira et al., 2018; Huth et al., 2016; Schrimpf et al., 2020; Pasquiou et al., 2022). These models are known to encode some aspects of semantics and syntax (e.g. Pennington et al., 2014; Hewitt and Manning, 2019; Lakretz et al., 2019). In the current work, to further dissect brain activations into separate linguistic processes, we trained NLP models on a corpus from which we selectively removed syntactic, semantic or contextual information and examined how well these information-restricted models could explain fMRI signal recorded from participants who had listened to an audiobook. The rationale was to highlight brain regions representing syntactic and semantic information, at the lexical and supra-lexical levels (comparing a lexical model, GloVe, and a contextual one, GPT-2). Additionally, by varying the amount of context provided to the supra-lexical model, we sought to identify the brain regions sensitive to different context sizes (see Jain and Huth (2018) for a similar analysis). 
Whether models were trained on syntactic features or on semantic features, they fit fMRI activations in a wide bilateral network which goes beyond the classic language network comprising the IFG and temporal regions: it also includes most of the dorsolateral and medial prefrontal cortex, the inferior parietal cortex, and, on the internal face, the precuneus and posterior cingulate cortex (see Fig.3). Nevertheless, the regions _best_ predicted by syntactic features on the one hand, and semantic features on the other hand, are not exactly the same. While they overlap quite a lot in the right hemisphere, they are more dissociated in the left hemisphere (Fig.3, panel B). In addition, the relative sensitivity to syntax and semantics varies from region to region, with syntax predominating in the temporal lobe (see Fig.4). Elimination of the shared variance between syntactic and semantic features confirmed that pure syntactic effects are restricted to the STG/STS, bilaterally, the IFG, and the pre-SMA, while pure semantic effects occur throughout the network (Fig.5 A-B). The comparison between the supra-lexical model (GPT-2) and the lexical one (GloVe) revealed brain regions involved in compositionality (Fig.6) and a synergy between syntax and semantics that arises only at the supra-lexical level (Fig.5C). Finally, analyses of the influence of the size of the context provided to GPT-2 when computing word embeddings show that (1) a bilateral network of fronto-temporo-parietal regions is sensitive to short context, that (2) there is a dissociation between the left and right hemispheres, respectively associated with short-range and long-range context integration, and finally that (3) the medial Precuneus and posterior Cingulate gyri show the highest effects at every scale, hinting at an important role in large context integration (Fig.7).

Figure 8: **Association maps for the terms "semantic" and "syntactic" in a meta-analysis using Neurosynth ([http://neurosynth.org](http://neurosynth.org)).** The association test map for syntactic (resp. semantic) displays voxels that are reported more often in articles that include the term syntactic (resp. semantic) in their abstracts than in articles that do not (FDR correction of 0.01).

Models trained on semantic and syntactic features fit brain activity in a widely distributed network, but with varying relative degrees. When trained on the integral corpus, that is, on the integral features, both the lexical (GloVe) and contextual (GPT-2) models captured brain activity in a large _extended language network_ (Appendix 1-Fig.3). This large extended language network goes beyond the _core_ language network, that is, the left IFG and temporal regions, encompassing homologous areas in the right hemisphere, the dorsal prefrontal regions, both on the lateral and medial surfaces, as well as the inferior parietal cortex, the Precuneus and the posterior Cingulate. These results are consistent with those from previous studies that have looked at brain responses to naturalistic text, whether analysed with NLP models (e.g. _Huth et al._, 2016; _Pereira et al._, 2018; _Jain and Huth_, 2018; _Caucheteux et al._, 2021) or not (_Lerner et al._, 2011; _Chang et al._, 2022). The Precuneus/pCC, inferior parietal and dorsomedial prefrontal cortex are part of the Default Mode Network (DMN) (_Raichle_, 2015). The same areas are actually also relevant in language and high-level cognition. 
For example, early studies examining the role of coherence during text comprehension had pointed out the same regions (_Ferstl and von Cramon_, 2001; Xu et al._, 2005): coherent discourses elicit stronger activations than incoherent ones. Recent work by (_Chang et al._, 2022) has revealed that the DMN is the last stage in a temporal hierarchy of processing naturalistic text, integrating information on the scale of paragraphs and narrative events, see also (_Simony et al._, 2016; Baldassano et al._, 2017). These regions are not language-specific though, as they have been shown to be activated during various theory of mind tasks, relying on language or not, and have thus also been dubbed the "Mentalizing network" (_Mar_, 2011; Baetens et al._, 2014). Models trained on the information-restricted semantic and syntactic features fit signal in this widely distributed network (Fig.3A). This is in agreement with _Caucheteux et al._ (2021) and _Fedorenko et al._ (2020) who, using very different approaches, found that syntactic predictors modulated activity throughout the language network. _Caucheteux et al._ (2021) first constructed new texts that matched, as well as possible, the text presented to participants in terms of their syntactic properties. The lexical items being different, the semantics of the new texts bear little relation with the original text. Then, using a pre-trained version of GPT-2, the authors obtained embeddings from these new texts and averaged them to create syntactic predictors. They found that these syntactic embeddings fitted a network of regions (ibid. Fig5D) similar to the one we observed (Fig.3A). Further, defining the effect of semantics as the difference between the scores obtained from the embeddings from the original text, and the scores from the syntactic embeddings, _Caucheteux et al._ (2021) observed that semantics had a significant effect throughout the same network (ibid. Fig5G). Should one conclude that syntax and semantics equally modulate the entire language network? Our results reveal a more complex picture. Figure 4 presents a semantics vs syntax specificity index map, showing higher sensitivity to syntax in the STG and anterior temporal lobe, whereas the parietal regions are more sensitive to semantics, consistent with _Binder et al._ (2009). Another point to take in consideration is that syntactic and semantic features are not perfectly orthogonal. Indeed, the logistic decoder trained on the embeddings from the semantic dataset was better than chance at recovering syntactic features (Fig.2), and vice versa. This might be due, for example, to the fact that some features like gender or number are present in both datasets, explicitly in the syntactic dataset and implicitly in the semantic dataset. To focus on the unique contributions of syntax and semantic, we remove the shared variance from the syntactic and semantic models using model comparisons (Fig.5). ### "Pure" semantic but not "pure" syntactic features modulate activity in a wide set of brain regions. The unique effect of semantics, when its shared component with syntax was removed, remains widespread (Fig.5B). This is consistent with the notion that semantic information is widely distributed over the cortex, an idea popularized by embodiment theories (_Hauk et al., 2004; Pulvermuller, 2013_), but which was already supported by the neuropsychological observations revealing domain-specific semantic deficits in patients (_Damasio et al., 2004_). 
On the other hand, the "pure" effect of syntax shrank to the STG and aTL (bilaterally), the IFG (on the left) and the pre-SMA (Fig.5A). The left IFG and STG/STS have previously been implicated in syntactic processing (e.g. _Friederici, 2017, 2011_), and this is confirmed by the new approach employed here. Note that we are not claiming that these regions are specialized for syntactic processing only. Indeed, they also appear to be sensitive to the "pure" semantic component (Fig.5B).

### The contributions of the right hemisphere.

A striking feature of our results is the strong involvement of the right hemisphere. The notion that the right hemisphere has some linguistic abilities is supported by studies on split-brains (_Sperry, 1961_) and by the patterns of recovery of aphasic patients after lesions in the left hemisphere (_Dronkers et al., 2017_). Moreover, a number of brain imaging studies have confirmed the right hemisphere's involvement in higher-level language tasks, such as comprehending metaphors or jokes, generating the best endings to sentences, mentally repairing grammatical errors, and detecting story inconsistencies (see _Jung-Beeman (2005); Beeman and Chiarello (2013)_). All in all, this suggests that the right hemisphere is adept at recognizing distant relations between words. This conclusion is further reinforced by our observation of long-range (paragraph-level) context effects in the right hemisphere (Fig.7, Long). The effects we observed in the right hemisphere are not simply the mirror image of those in the left hemisphere. Spatially, syntax and semantics dissociate more in the left than in the right hemisphere (see Fig.3, Panel B). Moreover, the regions of overlap correspond to the regions integrating long context (Fig.7C, bottom row), suggesting that the left hemisphere is relatively more involved in the processing of local semantic or syntactic information, whereas the right hemisphere integrates both types of information at a larger, supra-sentential time-scale.

### Syntax drives the integration of contextual information.

The comparison between the predictions of the integral model trained on the intact texts and the predictions of the combined syntactic and semantic embeddings from the information-restricted models (Fig.5C) highlights a striking contrast between GloVe and GPT-2. While the former, a purely lexical model, does not benefit from being trained on the integral text, GPT-2 shows clear synergetic effects of syntactic and semantic information. GPT-2's embeddings fit brain activation better when syntactic and semantic information can contribute together. The fact that the regions that benefit most from this synergetic effect are high-level integrative regions, at the end of the temporal processing hierarchy described by _Chang et al._ (2022), suggests that the availability of syntactic information drives the semantic interpretation at the sentence level. These regions are quite similar to the semantic peak regions highlighted in Fig.3A, and overlap with the regions showing context effects (Fig.7). This replicates, and extends, the results from _Jain and Huth_ (2018) who, varying the amount of context fed to LSTM models from 0 to 19 words, found shorter context effects in temporal regions (ibid. Fig 4).

### Limitations of our study

Two limitations of our study must be acknowledged. First, the dissociation between syntax and semantics is not perfect. The way we created the semantic dataset, by removing function words, clearly impacts supra-lexical semantics. 
For example, removing instances of _and_ and _or_ prevents the NLP model from distinguishing between the meaning of "A or B" and "A and B". In other words, the logical form of sentences can be perturbed. This may partly explain the synergetic effect of syntax and semantics described above. Removing pronouns is also problematic as this removed the arguments of some verbs. Ideally, one would like to find transformations of the sentences that keep the semantic information associated to the function words like conjunctions or pronouns, but it is not clear how to do that. A second limitation concerns potential confounding effects of prosody. One cannot exclude that the embeddings of the models captured some prosodic variables correlated with syntax (_Bennett and Elfner_, _2019_). For example, certain categories of words (e.g. determiners or pronouns) are shorter and less accented than others. Also, although the models are purely trained on written text, they acquire the capacity to predict the end of sentences, which are more likely to be followed by pauses in the acoustic signal. We included acoustic energy and the words' offsets in the baseline models to try and diminish the impact of such factors, but such controls cannot be perfect. One way to address this issue would be to have participants _read_ the text, presented at a fixed presentation rate. This would effectively remove all low-level effects of prosody. ### Conclusion State-of-the-art Natural Language Processing models, like transformers, trained with large enough corpora, can generate essentially flawless grammatical text, showing that they can acquire the grammar of the language. Using them to fit brain data has become a common endeavour, even if their architecture rules them out of plausible models of the brain. Yet, despite their low biological plausibility, their ability to build rich distributed representations can be exploited to study language processing in the brain. In this paper, we have demonstrated that restricting information provided to the model during training can be used to show which brain areas encode this information. Information-restricted models are powerful and flexible tools to probe the brain as they can be used to investigate whatever representational space chosen, such as semantics, syntax or context. Moreover, once they are trained, these models can be used directly on any dataset in order to generate information-restricted features for model-brain alignment. This approach is highly beneficial, both in term of richness of the features, and scalability, compared to classical approaches that use manually crafted features or focus on specific contrasts. In future experiments, more fine grained control of both the information given to the models as well as model's representations will permit more precise characterisation of the role of the various regions involved in language comprehension. ### Methods and Materials ### Creation of datasets; Semantic, Syntactic and Integral features We selected a collection of English novels from Project Gutenberg (www.gutenberg.org; data retrieved on February 21, 2016). This _original dataset_ comprised 4.4GB of text for training purposes and 1.1GB for validation. From it, we created two information-restricted datasets: the _semantic dataset_ and the _syntactic dataset_. In the _semantic dataset_, only content words were kept, while all grammatical, function words and punctuation signs were filtered out. 
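As a rough sketch of this filtering step (the exact part-of-speech categories retained and the tokenization used to build the published datasets may differ), content words can be isolated with a part-of-speech tagger such as spaCy:

```python
# Sketch of the content-word filtering used to build the semantic dataset.
# The set of part-of-speech tags kept here is an assumption; the published
# scripts may retain a slightly different set of categories.
import spacy

nlp = spacy.load("en_core_web_sm")
CONTENT_POS = {"NOUN", "PROPN", "VERB", "ADJ", "ADV"}

def semantic_features(text):
    """Keep content words only, dropping function words and punctuation."""
    return [tok.text for tok in nlp(text) if tok.pos_ in CONTENT_POS]

print(semantic_features("The little prince sat down on the planet and looked at the stars."))
```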
In the _syntactic dataset_, each token (word or punctuation sign) was replaced by an identifier encoding a triplet (POS, Morph, NCN), where POS is the Part-of-speech computed using spaCy (_Honnibal and Montani, 2017_), Morph corresponds to the morphological features obtained from spaCy, and NCN stands for the Number of Closing Nodes in the parse tree at the current token, computed using the Berkeley Neural Parser (_Kitaev and Klein, 2018_) available with spaCy. In this paper, we refer to the content of the original dataset as _integral features_, the content of the semantic dataset as _semantic features_, and the content of the syntactic dataset as _syntactic features_. Examples of integral, semantic and syntactic features are given in Appendix 1-Models training.

### GloVe Training

GloVe (Global Vectors for Word Representation) relies on the co-occurrence matrix of words in a given corpus to generate fixed embedding vectors that capture the distributional properties of the words (_Pennington et al., 2014_). Using the open-source code provided by Pennington et al. ([https://nlp.stanford.edu/projects/glove/](https://nlp.stanford.edu/projects/glove/)), we trained GloVe on the three datasets (integral, semantic and syntactic), setting the context window size to 15 words, the embedding vectors' size to 768, and the number of training epochs to 23.

### GPT-2 Training

GPT-2 (Generative Pretrained Transformer 2) is a deep learning transformer-based language model. We trained the open-source implementation GPT2LMHeadModel, provided by HuggingFace (_Wolf et al., 2020_), on the three datasets (integral, semantic and syntactic). The GPT2LMHeadModel architecture is trained on a next-token prediction task using a CrossEntropyLoss and the PyTorch Python package (_Paszke et al., 2019_). The training procedure can easily be extended to any feature type by adapting both the vocabulary size and the tokenizer to each vocabulary. Indeed, the inputs given to GPT2LMHeadModel are ids encoding vocabulary items. All the analyses reported in this paper were performed with 4-layer models having 768 units per layer and 12 attention heads. As shown in _Pasquiou et al._ (2022), these 4-layer models fit brain data nearly as well as the usual 12-layer models. We presented the models with input sequences of 512 tokens, and let the training run for 5 epochs; convergence assessments are provided in Appendix 1-Convergence of the language models during training (Appendix 1-Fig.1). For the GPT-2 trained on the semantic features, small modifications had to be made to the model architecture in order to remove all residual syntax. By default, GPT-2 encodes the absolute positions of tokens in sentences. When training GPT-2 on the semantic features, as word ordering might contain syntactic information, we had to make sure that position information could not be leveraged by means of its positional embeddings, while keeping information about word proximity, as it influences semantics. We modified the implementation so that the GPT-2 trained on semantic features follows these specifications (see Appendix 1-Removing absolute position information in GPT-2 trained on semantic features).

### Stimulus: The Little Prince story

The stimulus used to obtain activations from humans and from NLP models was _The Little Prince_ novella. Humans listened to an audio-book version, split into 9 tracks that lasted approximately 11 minutes each (see _Li et al., 2022_). 
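Returning briefly to the GPT-2 training setup described above, a minimal configuration sketch is given below (hyper-parameters as reported in this subsection; the tokenizer, data pipeline and training loop are omitted, and the vocabulary size is a placeholder to be adapted to each feature space):

```python
# Sketch of the 4-layer GPT-2 configuration described above (768 units per layer,
# 12 attention heads, 512-token input sequences). The vocabulary size must be
# adapted to each feature space (integral, semantic or syntactic).
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    vocab_size=50_000,   # placeholder: set to the size of the feature-space vocabulary
    n_positions=512,     # maximum input sequence length
    n_embd=768,          # hidden size, matching the dimension of the GloVe vectors
    n_layer=4,           # 4-layer models fit brain data nearly as well as 12-layer ones
    n_head=12,
)
model = GPT2LMHeadModel(config)

# GPT2LMHeadModel computes the next-token cross-entropy loss internally when
# `labels` are provided, e.g. inside a standard PyTorch training loop:
#     outputs = model(input_ids=batch, labels=batch)
#     outputs.loss.backward()
```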
In parallel, NLP models were provided with an exact transcription of this audio-book, enriched with punctuation signs from the written version of the Little Prince. The text comprised 15,426 words and 4,482 punctuation signs. The acoustic onsets and offsets of the spoken words were marked to align the audio recording with the text of _The Little Prince_.

### Computing Embeddings from the Little Prince text

The tokenized versions of the Little Prince (one for each feature type) were run through GloVe and GPT-2 in order to generate embeddings that could be compared with fMRI data. For GloVe, we simply retrieved the fixed embedding vector learnt during training for each token. For GPT-2, we retrieved the contextualized third-layer hidden-state (aka embedding) vector for each token, so that the dimension is comparable to the dimension of GloVe's embeddings (768 units). Layer 3 (out of 4) was selected because it has been demonstrated that late middle layers of recurrent language models are best able to predict brain activity (_Toneva and Wehbe, 2019; Jain and Huth, 2018_). The embedding built by GPT-2 for a given token relies on the past tokens (aka past context). The bigger the past context, the more reliable the token embedding will be. We designed the following procedure to ensure that the embedding of each token used a similar past context size: the input sequence was limited to a maximum of 512 tokens. The text was scanned with a sliding window of size \(N=512\) tokens, and a step of 1 token. The embedding vector of the next-to-last token (in the sliding window) was then retrieved. For the context-constrained versions of GPT-2 (denoted GPT-2\({}_{Context-k}\)), the input text was formatted as the training data (see Fig.1C) in batches of input sequences of length (\(k+S\)) tokens (see Appendix 1-Context-limited models for examples), and only the embedding vector of the current token was retrieved. Embedding matrices were built by concatenating word embeddings. More precisely, calling \(d\) the dimension of the embeddings retrieved from a neural model, corresponding to the number of units in one layer in our case, and \(w\) the total number of tokens in the text, we obtained an embedding matrix \(\mathbf{X}\in\mathbb{R}^{w\times d}\) after the presentation of the entire text to the model.

### Decoding of syntax and semantics categories from embeddings

We designed two decoding tasks: a syntax decoding task in which we tried to predict the triplet (Part-of-speech, morphological information and number of closing nodes) of each word from its embedding vector (355 categories), and a semantic decoding task in which we tried to predict each word's semantic category (from _Wordnet_, [https://wordnet.princeton.edu/](https://wordnet.princeton.edu/)) from its embedding vector (837 categories). We used Logistic Classifiers and the text of _The Little Prince_ as train and test data, which was split using a 9-fold cross-validation on runs, training on 8 runs and evaluating on the remaining one for each split. Dummy classifiers were fitted and used as estimations of chance-level for each task and model. All classifier implementations were taken from Scikit-Learn (_Pedregosa et al., 2011_).

### MRI data

We used the functional Magnetic Resonance Imaging (fMRI) data of 51 English-speaking participants who listened to an entire audio-book of The Little Prince for about an hour and a half. 
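A minimal sketch of the decoding procedure described in the previous subsection is given below (variable names and solver settings are illustrative assumptions; the actual analysis used separate logistic classifiers for the 355 syntactic and 837 semantic categories and a 9-fold split over runs):

```python
# Sketch of the decoding analysis: predict the syntactic or semantic category of
# each word of The Little Prince from its embedding vector, with a dummy classifier
# as chance-level estimate. `embeddings`, `labels` and `runs` are illustrative names.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

cv = LeaveOneGroupOut()  # train on 8 runs, evaluate on the held-out run (9 folds)

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, embeddings, labels, groups=runs, cv=cv)

dummy = DummyClassifier(strategy="most_frequent")
chance = cross_val_score(dummy, embeddings, labels, groups=runs, cv=cv)

print(f"decoding accuracy: {accuracy.mean():.2f} (chance: {chance.mean():.2f})")
```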
These data, available at [https://openneuro.org/datasets/ds003643/versions/1.0.2](https://openneuro.org/datasets/ds003643/versions/1.0.2), are described in detail by _Li et al. (2022)_. In short, the acquisition used echo-planar imaging (TR=2s; resolution=3.75x3.75x3.75mm) with a multi-echo (3 echoes) sequence to optimize signal-to-noise (_Kundu et al., 2018_). Preprocessing comprised multi-echo independent components analysis (ME-ICA) to denoise the data for motion, physiology and scanner artifacts, correction for slice-timing differences, and nonlinear alignment to the MNI template brain. For each participant, there were 9 runs of fMRI acquisition representing about 10 minutes of brain activations each. We re-sampled the preprocessed individual scans at 4x4x4 mm (to reduce computation load) and applied linear detrending and standardization (mean removal and scaling to unit variance) to each voxel's time-series. Finally, we computed a global brain mask to only keep voxels containing useful signal (using nilearn's _compute_epi_mask_ function, which finds the least dense point of the total image histogram) across all runs for at least 50% of all participants. This global mask contained 26,164 voxels at 4x4x4mm resolution. All analyses reported in this paper were performed within this global mask.

### Correlations between embeddings and individual fMRI data

The embeddings (X) derived from the neural language models were mapped to each subject's fMRI activations (\(\textbf{Y}_{s},s=1..S\)) following the pipeline outlined in Fig.1B. The process, using the standard model-based encoding approach to modelling fMRI signals (_Huth et al., 2016; Naselaris et al., 2011; Pasquiou et al., 2022_), is detailed in Appendix 1-Mapping NLM activations to brain data. In brief, each column of X was first aligned with the words' offsets in the audio stream and convolved with the default _SPM_ haemodynamic kernel (using Nilearn's _compute_regressor_ function from the _nilearn.glm.first_level_ module). The resulting time-course was sub-sampled to match the sampling frequency of the scans \(\textbf{Y}_{s}\) (giving \(\hat{\textbf{X}}\)). Next, in each individual voxel, the time-course of brain activation was regressed on \(\hat{\textbf{X}}\) using Ridge regression. The Ridge regression regularization was estimated using a nested cross-validation scheme (see Appendix 1-Mapping NLM activations to brain data for more details). Finally, the cross-validated Pearson correlation \(R\) between the encoding model's prediction and the fMRI signal for subject \(s\) in voxel \(v\) was computed. The output of this process is a map of correlations between the encoding models' predictions and the observed time series for a given participant.

### Baseline fMRI model

To obtain a more accurate evaluation of the specific impact of the embeddings on brain scores, we removed the contribution of three confounding variables from all the maps presented in this paper. The three confounding variables were: a) _the acoustic energy_ (root mean square of the audio signal sampled every 10ms), b) _the word-rate_ (one event at each word offset), and c) _the log of the unigram lexical frequency_ of each word (modulator of the word events). An fMRI Ridge linear model that only included these three regressors was used to compute a map of cross-validated correlations for each participant. 
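For concreteness, the voxel-wise encoding step described above can be sketched as follows (a simplified version: the run-wise nested cross-validation used to select the Ridge penalty is collapsed into a single RidgeCV fit on a temporal split, and variable names such as X, offsets, frame_times and Y are illustrative):

```python
# Sketch of the voxel-wise encoding model: convolve word-aligned embedding features
# with the SPM haemodynamic response function, fit a Ridge regression, and correlate
# held-out predictions with the observed fMRI time-series (one R value per voxel).
import numpy as np
from nilearn.glm.first_level import compute_regressor
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV

def hrf_design_matrix(X, offsets, frame_times):
    """Treat each embedding dimension as an amplitude-modulated event at the word
    offsets and convolve it with the default SPM haemodynamic kernel."""
    durations = np.zeros_like(offsets)
    columns = []
    for j in range(X.shape[1]):
        exp_condition = np.vstack([offsets, durations, X[:, j]])
        signal, _ = compute_regressor(exp_condition, "spm", frame_times)
        columns.append(signal.ravel())
    return np.column_stack(columns)               # shape: (n_scans, n_features)

X_hrf = hrf_design_matrix(X, offsets, frame_times)
split = int(0.8 * len(X_hrf))                     # simple temporal split for illustration
ridge = RidgeCV(alphas=np.logspace(-3, 5, 9)).fit(X_hrf[:split], Y[:split])
pred = ridge.predict(X_hrf[split:])
R = np.array([pearsonr(pred[:, v], Y[split:, v])[0] for v in range(Y.shape[1])])
```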
The \(R\)-maps presented in Fig.3 of this paper are corrected for the contribution of these variables, that is, they display \(\Delta R\), the increase in \(R\) when adding a model to the baseline model, compared to the baseline model by itself. Appendix 1-Fig.2 displays the significant correlations in the group-level \(R\) maps associated with the Baseline Model, corrected for multiple comparisons using an FDR correction (\(p<0.005\)).

### Group-level Maps

The brain maps presented in this document display group-average increases in \(R\) scores obtained from individual correlation maps (relative to the baseline model or to another model). Only voxels showing a statistically significant increase in \(R\) score are shown. Significance was assessed through one-sample t-tests applied to the spatially smoothed correlation maps, using an isotropic Gaussian kernel with a FWHM of 6mm. In each voxel, the test assessed whether the distribution of \(R_{test}\) values across participants was significantly larger than zero. To control for multiple comparisons, all maps were corrected using a False Discovery Rate (FDR) correction with \(p<0.005\) (Benjamini and Hochberg, 1995). On each corrected figure, the FDR threshold on the z-scores, named \(z_{FDR}\), is indicated at the bottom; that is, values reported on these maps (e.g. \(R\) scores) are shown only for voxels whose z-score survived this threshold (\(z_{test}>z_{FDR}\)). While all analyses were done on volume data, all brain maps were projected onto the brain surface for visualization purposes, using the _'fsaverage5'_ mesh (from Nilearn's _datasets.fetch_surf_fsaverage_) and the _'vol_to_surf'_ function (from Nilearn's _surface_ module).

### Syntax and Semantics peak regions

We decided to also report brain maps' _peak regions_, i.e. the 10% of voxels having the highest \(R\) score in a brain map. The motivation is that two different language processes might elicit many brain regions in common, while the regions that are better fitted by the representations derived from each process might differ. The peak regions of the neural correlates of semantic and syntactic representations are displayed on surface brain maps. The proportions of voxels belonging to each peak region, as well as the Jaccard score between syntax and semantics, are displayed for each model and hemisphere.

### Jaccard index

The Jaccard index (computed using scikit-learn's _jaccard_score_ function from the _metrics_ module) for two sets \(X\) and \(Y\) is defined in the following manner: \(J(X,Y)=|X\cap Y|/|X\cup Y|\). It behaves as a similarity coefficient: when the two sets completely overlap, J=1; when their intersection is empty, J=0.

### Specificity index

To quantify how much each voxel \(v\) is influenced by semantic and syntactic embeddings, we defined a _specificity index_ in the following manner:

\[x(v)=\log_{10}\left(\frac{r_{semantics}(v)}{r_{syntax}(v)}\right)\]

where \(r_{syntax}(v)\) is the \(R\) score increase relative to the baseline model for the syntactic embeddings, and \(r_{semantics}(v)\) is the \(R\) score increase relative to the baseline model for the semantic embeddings. In Fig.4, the higher and greener \(x\) is, the more sensitive the voxel is to semantic embeddings compared to syntactic embeddings; the lower and redder \(x\) is, the more sensitive the voxel is to syntactic embeddings compared to semantic embeddings. An \(x\) close to 0 indicates equal sensitivity to syntactic and semantic embeddings. 
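A minimal sketch of the peak-region, Jaccard and specificity-index computations described above is given below (the arrays r_semantics and r_syntax hold, for every voxel, the R score increase over the baseline model; the names are assumptions):

```python
# Sketch of the peak regions (top 10% of voxels), Jaccard overlap and specificity
# index described above. r_semantics and r_syntax are per-voxel R score increases
# over the baseline model (illustrative names).
import numpy as np
from sklearn.metrics import jaccard_score

# Peak regions: voxels above the 90th percentile of each map.
sem_peaks = r_semantics >= np.percentile(r_semantics, 90)
syn_peaks = r_syntax >= np.percentile(r_syntax, 90)

# Jaccard index between the two sets of peak voxels: |X ∩ Y| / |X ∪ Y|.
J = jaccard_score(sem_peaks, syn_peaks)

# Specificity index: > 0 (greener) means more sensitive to semantics,
# < 0 (redder) means more sensitive to syntax.
specificity = np.log10(r_semantics / r_syntax)
```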
Group average specificity index maps were computed from each subject's map and significance was assessed through one-sample t-tests applied to the spatially smoothed specificity maps, with an isotropic Gaussian kernel with FWHM of 6mm. A FDR correction (\(p<0.005\)) was used to correct for multiple comparisons. ### Meta-Analysis based on Neurosynth We used the _Neurosynth_ database ([https://github.com/neurosynth/neurosynth](https://github.com/neurosynth/neurosynth)) to perform a meta-analysis of brain regions that appeared in fMRI articles containing the words'syntactic' or'semantic' in their abstract. Using a frequency threshold of 0.05, the keyword _semantic_ yielded 626 articles, while _syntactic_ yielded 128 articles. The _meta_MetaAnalysis_ function from the neurosynth package was then used to create association test maps for syntax and semantics. These maps display voxels that are reported more often in articles that mention the keyword than articles that do not. Such association test maps indicate whether or not there's a non-zero association between activation of the voxel in question and the use of a particular term in a study. We fused the maps associated to _syntactic_ and _semantic_, thresholded with a False Discovery Rate set to 0.01, to produce Fig.8. ### Data Availability The Integral Dataset (train, test and dev) is available at: [https://osf.io/jzcvu/](https://osf.io/jzcvu/). The semantic and syntactic datasets can be derived from the Integral Dataset using the scripts provided in [https://github.com/AlexandrePsq/Information-Restrived-NLMs](https://github.com/AlexandrePsq/Information-Restrived-NLMs). All analyses, as well as model training, features extraction and the fitting of encoding models were performed using Python 3.7.6 and can be replicated using the code provided in the same Github repository ([https://github.com/AlexandrePsq/Information-Restrived-NLMs](https://github.com/AlexandrePsq/Information-Restrived-NLMs)). The required packages are listed there. A non-exhaustive list includes numpy (_Harris et al._, _2020_), scipy (_Virtanen et al._, _2020_), scikit-learn (_Pedregosa et al._, _2011_), matplotlib (_Hunter_, _2007_), pandas (_McKinney et al._, _2010_) and nilearn ([https://nilearn.github.io/stable/index.html](https://nilearn.github.io/stable/index.html)). The fMRI dataset is publicly available at [https://openeuro.org/datasets/ds003643/versions/1.0.2](https://openeuro.org/datasets/ds003643/versions/1.0.2), and all details regarding the dataset are described in details by _Li et al._(_2022_). ## Acknowledgments This project/research has received funding from the American National Science Foundation under Grant Number 1607441 (USA), the French National Research Agency (ANR) under grant ANR-14-CERA-0001, the European Union's Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3), and the KARAIB AI chair (ANR-20-CHIA-0025-01). ## References * Baetens et al. (2014) Baetens K, Ma N, Steen J, Van Overwalle F. Involvement of the mentalizing network in social and non-social high construval. Social Cognitive and Affective Neuroscience. 2014 Jun; 9(6):817-824. [https://doi.org/10.1093/scan/nst048](https://doi.org/10.1093/scan/nst048), doi: 10.1093/scan/nst048. * Baldassano et al. (2017) Baldassano C, Chen J, Zadbood A, Pillow JW, Hasson U, Norman KA. Discovering Event Structure in Continuous Narrative Perception and Memory. Neuron. 2017; 95(3):709-721.e5. 
[https://www.sciencedirect.com/science/article/pii/S0896627317305937](https://www.sciencedirect.com/science/article/pii/S0896627317305937), doi: [https://doi.org/10.1016/j.neuron.2017.06.041](https://doi.org/10.1016/j.neuron.2017.06.041). * Bates et al. (2002) Bates E, Dick F. Language, gesture, and the developing brain. Developmental Psychobiology: The Journal of the International Society for Developmental Psychobiology. 2002; 40(3):293-310. [https://pubmed.ncbi.nlm.nih.gov/11891640/](https://pubmed.ncbi.nlm.nih.gov/11891640/), publisher: Wiley Online Library. * Bates et al. (1989) Bates E, MacWhinney B. Functionalism and the Competition Model. In: MacWhinney B, Bates E, editors. _The Crosslinguistic Study of Sentence Processing_ Cambridge University Press; 1989.p. 3-73. [https://www.researchgate.net/publication/230875840_Functionalism_andthe_Competition_Model/link/545a97170c72c16febbb1cd5/download](https://www.researchgate.net/publication/230875840_Functionalism_andthe_Competition_Model/link/545a97170c72c16febbb1cd5/download). * Beeman and Chiarello (2013) Beeman MJ, Chiarello C. Right hemisphere language comprehension: Perspectives from cognitive neuroscience. Psychology Press; 2013. [https://www.taylorfrancis.com/books/mono/10.4324/9780203763544/right-hemisphere-language-comprehension-mark-jung-beeman-christine-chiarello](https://www.taylorfrancis.com/books/mono/10.4324/9780203763544/right-hemisphere-language-comprehension-mark-jung-beeman-christine-chiarello). * Benjamini and Hochberg (1995) Benjamini Y, Hochberg Y. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society Series B (Methodological). 1995; 57(1):289-300. [http://www.jstor.org/stable/2346101](http://www.jstor.org/stable/2346101). * Bennett and Elfner (2019) Bennett R, Elfner E. The Syntax-Prosody Interface. Annual Review of Linguistics. 2019 Jan; 5(1):151-171. [https://www.annualreviews.org/doi/10.1146/annurev-lingistics-011718-012503](https://www.annualreviews.org/doi/10.1146/annurev-lingistics-011718-012503), doi: 10.1146/annurev-lingistics-011718-012503. * Binder and Desai (2011) Binder JR, Desai RH. The neurobiology of semantic memory. Trends in Cognitive Sciences. 2011 Nov; 15(11):527-536. [http://linkinghub.elsevier.com/retrieve/pii/S1364661311002142](http://linkinghub.elsevier.com/retrieve/pii/S1364661311002142), doi: 10.1016/j.tics.2011.10.001. * Binder et al. (2009) Binder JR, Desai RH, Graves WW, Conant LL. Where is the Semantic System? A Critical Review and Meta-Analysis of 120 Functional Neuroimaging Studies. Cerebral Cortex. 2009 03; 19(12):2767-2796. [https://doi.org/10.1093/ceror/bhp055](https://doi.org/10.1093/ceror/bhp055), doi: 10.1093/ceror/bhp055. * Bottini et al. (1995) Bottini G, Corcoran R, Sterzi R, Paulesu E, Seenone P, Scarpa P, Frackowiak R, Frith C. The role of the right hemisphere in the interpretation of figurative aspects of language. A positron emission tomography activation study. Brain : a journal of neurology. 1995 01; 117 ( Pt 6):1241-53. [https://www.researchgate.net/publication/15377772_The_role_of_the_right_hemisphere_in_the_interpretation_of_figurative_aspects_of_language_A_positron_emission_tomography_activation_study](https://www.researchgate.net/publication/15377772_The_role_of_the_right_hemisphere_in_the_interpretation_of_figurative_aspects_of_language_A_positron_emission_tomography_activation_study), doi: 10.1093/brain/117.6.1241. * Caplan et al. (1998) Caplan D, Alpert N, Waters G. 
Effects of Syntactic Structure and Propositional Number on Patterns of Regional Cerebral Blood Flow. Journal of Cognitive Neuroscience. 1998 Jul; 10(4):541-552. [https://doi.org/10.1162/089892998562843](https://doi.org/10.1162/089892998562843), doi: 10.1162/089892998562843, eprint: [https://direct.mit.edu/jocn/article-pdf/10/4/541/1931814/089892998562843.pdf](https://direct.mit.edu/jocn/article-pdf/10/4/541/1931814/089892998562843.pdf). * Caramazza and Zurif (1976) Caramazza A, Zurif EB. Dissociation of algorithmic and heuristic processes in language comprehension: Evidence from aphasia. Brain and language. 1976; 3(4):572-582. [https://pubmed.ncbi.nlm.nih.gov/974731/](https://pubmed.ncbi.nlm.nih.gov/974731/). * 38th International Conference on Machine Learning_ Online conference, France; 2021. p. 13. [https://hal.archives-ouvertes.fr/hal-03361421](https://hal.archives-ouvertes.fr/hal-03361421). * Cattanek et al. (2015) Caucheteux C, King JR. Brains and algorithms partially converge in natural language processing. Communications Biology. 2022; [https://pubmed.ncbi.nlm.nih.gov/35173264/](https://pubmed.ncbi.nlm.nih.gov/35173264/), doi: 10.1038/s42003-022-03036-1. * Chang et al. (2022) Chang CHC, Nastase SA, Hasson U. Information flow across the cortical timescale hierarchy during narrative construction. Proceedings of the National Academy of Sciences. 2022 Dec; 119(51):e2209307119. [http://www.pnas.org/doi/full/10.1073/pnas.2209307119](http://www.pnas.org/doi/full/10.1073/pnas.2209307119), doi: 10.1073/pnas.2209307119, publisher: Proceedings of the National Academy of Sciences. * Chomsky (1984) Chomsky N. Modular Approaches to the Study of the Mind, vol. 1. San Diego State University Press San Diego; 1984. [https://archive.org/details/modularapproache00noam/page/ng/mode/2up](https://archive.org/details/modularapproache00noam/page/ng/mode/2up). * Cooke et al. (2001) Cooke A, Zurif EB, DeVita C, Alsop D, Koenig P, Detre J, Gee J, PinNEng M, Balogh J, Grossman M. Neural basis for sentence comprehension: Grammatical and short term memory components. Human Brain Mapping. 2001 Nov; 15(2):80-94. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6872024/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6872024/), doi: 10.1002/hbm.10006. * Damasio et al. (2004) Damasio H, Tranel D, Grabowski T, Adolphs R, Damasio A. Neural systems behind word and concept retrieval. Cognition. 2004; 92(1-2):179-229. doi: 10.1016/j.cognition.2002.07.001. * Devlin et al. (2019) Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:181004805 [cs]. 2019 May; [http://arxiv.org/abs/1810.04805](http://arxiv.org/abs/1810.04805), arXiv: 1810.04805. * Dick et al. (2001) Dick F, Bates E, Wulfeck B, Uttman JA, Dronkers N, Gernsbacher MA. Language deficits, localization, and grammar: evidence for a distributive model of language breakdown in aphasic patients and neurologically intact individuals. Psychological review. 2001; 108(4):759. [https://psynet.apa.org/record/2001-18918-004](https://psynet.apa.org/record/2001-18918-004), publisher: American Psychological Association. * Dronkers et al. (2017) Dronkers NF, Ivanova MV, Baldo JV. What Do Language Disorders Reveal about Brain-Language Relationships? From Classic Models to Network Approaches. Journal of the International Neuropsychological Society : JINS. 2017 Oct; 23(9-10):741-754. 
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6606454/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6606454/), doi: 10.1017/S1355617710701126. * Elman (1991) Elman J. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning. 1991; 7:195-225. [https://link.springer.com/article/10.1007/BF00114844](https://link.springer.com/article/10.1007/BF00114844). * Embick (2000) Embick D. Features, Syntax, and Categories in the Latin Perfect. Linguistic Inquiry. 2000; 31(2):185-230. [http://www.jstor.org/stable/4179104](http://www.jstor.org/stable/4179104). * Fedorenko et al. (2020) Fedorenko E, Blank I, Siegelman M, Mineroff Z. Lack of selectivity for syntax relative to word meanings throughout the language network. bioRxiv. 2020; p. 477851. [https://www.sciencedirect.com/science/article/pii/S0010027720301670](https://www.sciencedirect.com/science/article/pii/S0010027720301670), publisher: Cold Spring Harbor Laboratory. * Ferstl et al. (2001) Ferstl EC, von Cramon DY. The role of coherence and cohesion in text comprehension: an event-related fMRI study. Cognitive Brain Research. 2001 Jun; 11(3):325-340. [http://www.sciencedirect.com/science/article/pii/S092664100100076](http://www.sciencedirect.com/science/article/pii/S092664100100076), doi: 10.1016/S0926-6410(01)00007-6. * Fodor (1983) Fodor J. The modularity of mind. MIT press; 1983. [https://mitpress.mit.edu/9780262560252/the-modularity-of-mind/](https://mitpress.mit.edu/9780262560252/the-modularity-of-mind/). * Friederici (2011) Friederici AD. The Brain Basis of Language Processing: From Structure to Function. Physiol Rev. 2011; 91:36. [https://pubmed.ncbi.nlm.nih.gov/22013214/](https://pubmed.ncbi.nlm.nih.gov/22013214/). * Friederici et al. (2017) Friederici AD, Chomsky N, Berwick RC, Moro A, Bolhuis JJ. Language, mind and brain. Nature human behaviour. 2017; 1(10):713-722. * Friederici et al. (2006) Friederici AD, Fiebach CJ, Schlesewsky M, Bornkessel ID, von Cramon DY. Processing Linguistic Complexity and Grammaticality in the Left Frontal Cortex. Cerebral Cortex. 2006 01; 16(12):1709-1717. [https://doi.org/10.1093/cercor/bij106](https://doi.org/10.1093/cercor/bij106). * Friederici et al. (2009) Friederici AD, Kotz SA, Scott SK, Dlieser J. Disentangling syntax and intelligibility in auditory language comprehension. Human Brain Mapping. 2009 Aug; 31(3):448-457. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6870868/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6870868/), doi: 10.1002/hbm.20878. * Friederici et al. (2009) Friederici AD, Makuuchi M, Bahlmann J. The role of the posterior superior temporal cortex in sentence comprehension. NeuroReport. 2009 Apr; 20(6):563-568. [https://journals.lww.com/neuroreport/Fultext/2009/04220/The_role_of_the_posterior_superior_temporal_cortex.6.aspx](https://journals.lww.com/neuroreport/Fultext/2009/04220/The_role_of_the_posterior_superior_temporal_cortex.6.aspx), doi: 10.1097/WNR.0b013e3283297dee. Friederici AD, Rauschemeyer SA, Hahne A, Fiebach GJ. The Role of Left Inferior Frontal and Superior Temporal Cortex in Sentence Comprehension: Localizing Syntactic and Semantic Processes. Cerebral Cortex. 2003 02; 13(2):170-177. [https://doi.org/10.1093/cercor/13.2.170](https://doi.org/10.1093/cercor/13.2.170), doi:10.1093/cercor/13.2.170. * Friederici (2017) Friederici AD. Neurobiology of Syntax as the Core of Human Language. BIOLINGUISTICS. 2017; 11. 
[https://bioling.psychopen.eu/index.php/bioling/article/view/9093](https://bioling.psychopen.eu/index.php/bioling/article/view/9093). * Garrard et al. (2004) Garrard P, Carroll E, Vinson D, Vigliocco G. Dissociation of Lexical Syntax and Semantics: Evidence from Focal Cortical Degeneration. Neuroscience. 2004; 10(5):353-362. [https://doi.org/10.1080/13554790490892248](https://doi.org/10.1080/13554790490892248), doi:10.1080/13554790490892248, pMID: 15788273. * Goodfellow (1993) Goodfellow H. Understanding aphasia. Academic Press; 1993. [https://www.jstor.org/stable/416147](https://www.jstor.org/stable/416147). * Grodzinsky and Santi (2008) Grodzinsky Y, Santi A. The battle for Broca's region. Trends in Cognitive Sciences. 2008; 12(12):474-480. [https://www.sciencedirect.com/science/article/pii/S1364661308002222](https://www.sciencedirect.com/science/article/pii/S1364661308002222), doi:[https://doi.org/10.1016/j.tics.2008.09.001](https://doi.org/10.1016/j.tics.2008.09.001). * Hagoort (2014) Hagoort P. Nodes and networks in the neural architecture for language: Broca's region and beyond. Current opinion in Neurobiology. 2014; 28:136-141. [https://pubmed.ncbi.nlm.nih.gov/2506247/](https://pubmed.ncbi.nlm.nih.gov/2506247/), publisher: Elsevier. * Harris et al. (2020) Harris CR, Millman KJ, van der Walt SJ, Gommers R, Virtanen P, Cournapeau D, Wieser E, Taylor J, Berg S, Smith NJ, Kern R, Picus M, Hoyer S, van Kerkwijk MH, Brett M, Haldane A, del Rio JF, Wiebe M, Peterson P, Gerard-Marchant P, et al. Array programming with NumPy. Nature. 2020 Sep; 585(7825):357-362. [https://doi.org/10.1038/s41586-020-2649-2](https://doi.org/10.1038/s41586-020-2649-2), doi:10.1038/s41586-020-2649-2. * Hashimoto and Sakai (2002) Hashimoto R, Sakai KL. Specialization in the Left Prefrontal Cortex for Sentence Comprehension. Neuron. 2002; 35(3):589-597. [https://www.sciencedirect.com/science/article/pii/S0896627302007882](https://www.sciencedirect.com/science/article/pii/S0896627302007882), doi:[https://doi.org/10.1016/S0896-62730](https://doi.org/10.1016/S0896-62730)(20)00788-2. * Hauk et al. (2004) Hauk O, Johnsrude I, Pulvermuller F. Somatotopic representation of action words in human motor and premotor cortex. Neuron. 2004; 41(2):301-307. [http://www.sciencedirect.com/science/article/pii/S0896627303008389](http://www.sciencedirect.com/science/article/pii/S0896627303008389). * de Heer et al. (2017) de Heer WA, Huth AG, Griffiths TL, Gallant JL, Theunissen FE. The Hierarchical Cortical Organization of Human Speech Processing. The Journal of Neuroscience. 2017 Jul; 37(27):6539-6557. [http://www.jneurosci.org/lookup/doi/10.1523/JNEUROSCI.3267-16.2017](http://www.jneurosci.org/lookup/doi/10.1523/JNEUROSCI.3267-16.2017), doi:10.1523/JNEUROSCI.3267-16.2017. * Hewitt and Manning (2019) Hewitt J, Manning CD. A Structural Probe for Finding Syntax in Word Representations. In: _North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)_; 2019. p. 10. [https://acanthology.org/N19-1419/](https://acanthology.org/N19-1419/). * Honnibal and Montani (2017) Honnibal M, Montani L. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing; 2017, [https://spacy.io/usage](https://spacy.io/usage), to appear. * Hunter (2007) Hunter JD. Matplotlib: A 2D Graphics Environment. Computing in Science & Engineering. 2007; 9(3):90-95. 
[https://ieeexplore.ieee.org/document/4160265](https://ieeexplore.ieee.org/document/4160265), doi:10.1109/MCSE.2007.55. * Huth et al. (2016) Huth AG, de Heer WA, Griffiths TL, Theunissen FE, Gallant JL. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature. 2016 Apr; 532(7600):453-458. [http://www.nature.com/articles/nature17637](http://www.nature.com/articles/nature17637), doi:10.1038/nature17637. * Jackendoff (2002) Jackendoff R. Foundations of Language: Brain, Meaning, Grammar, Evolution. Oxford University Press UK; 2002. [https://academic.oup.com/book/32834](https://academic.oup.com/book/32834). * Jain et al. (2018) Jain S, Huth A. Incorporating Context into Language Encoding Models for fMRI. In: Bengio S, Wallach H, Larochelle H, Grauman K, Cesa-Bianchi N, Garnett R, editors. _Advances in Neural Information Processing Systems_, vol. 31 Curran Associates, Inc.; 2018. p. 10. [https://proceedings.neurips.cc/paper/2018/file/64f1223d1a61d5b5a7dc45c9d01df19-Paper.pdf](https://proceedings.neurips.cc/paper/2018/file/64f1223d1a61d5b5a7dc45c9d01df19-Paper.pdf). * Jung-Beeman (2005) Jung-Beeman M. Bilateral brain processes for comprehending natural language. Trends in Cognitive Sciences. 2005 Nov; 9(11):512-518. [http://linkinghub.elsevier.com/retrieve/pii/S1364661305002718](http://linkinghub.elsevier.com/retrieve/pii/S1364661305002718), doi:10.1016/j.tics.2005.09.009. * Kinno et al. (2007) Kinno R, Kawamura M, Shioda S, Sakai KL. Neural correlates of noncanonical syntactic processing revealed by a pictured sentence matching task. Human Brain Mapping. 2007 Oct; 29(9):1015-1027. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6871174/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6871174/), doi:10.1002/hbm.20441. * Kundu et al. (2018) Kitaev N, Klein D. Constituency Parsing with a Self-Attentive Encoder. In: _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ Melbourne, Australia: Association for Computational Linguistics; 2018. p. 2676-2686. [https://www.aclweb.org/anthology/P18-1249](https://www.aclweb.org/anthology/P18-1249), doi:10.18653/v1/P18-1249. * Kundu et al. (2018) Kundu P, Voon V, Balchandani P, Lombardo MV, Poser BA, Bandettini PA. Multi-echo fMRI: A review of applications in fMRI denoising and analysis of BOLD signals. NeuroImage. 2018; 154. [http://linkinghub.elsevier.com/retrieve/pii/S1053811917302410](http://linkinghub.elsevier.com/retrieve/pii/S1053811917302410), doi:10.1016/j.neuroimage.2017.03.033. * Lakretz et al. (2020) Lakretz Y, Hupkes D, Vergallito A, Marelli M, Baroni M, Dehaene S. Mechanisms for handling nested dependencies in neural-network language models and humans. Cognition. 2021; 213:104699. [https://arxiv.org/abs/2006.11098](https://arxiv.org/abs/2006.11098). * Lakretz et al. (2019) Lakretz Y, Kruszewski G, Desbordes T, Hupkes D, Dehaene S, Baroni M. The emergence of number and syntax units in LSTM language models. In: _NAACL-HLT (I)_; 2019. p. 11-20. [https://arxiv.org/abs/1903.07435](https://arxiv.org/abs/1903.07435). * LeBel et al. (2022) LeBel A, Wagner L, Jain S, Adhikari-Desai A, Gupta B, Morgenthal A, Tang J, Xu L, Huth AG. A natural language fmri dataset for Voxelwise Encoding models. Biorxiv. 2022; [https://www.biorxiv.org/content/10.1101/2022.09.22.509104v1](https://www.biorxiv.org/content/10.1101/2022.09.22.509104v1), doi:10.1101/2022.09.22.509104. * Lerner et al. (2011) Lerner Y, Honey CJ, Silbert Lj, Hasson U. 
Topographic Mapping of a Hierarchy of Temporal Receptive Windows Using a Narrated Story. Journal of Neuroscience. 2011 Feb; 31(8):2906-2915. [http://www.jneurosci.org/cgi/doi/10.1523/JNEUROSCI.3684-10.2011](http://www.jneurosci.org/cgi/doi/10.1523/JNEUROSCI.3684-10.2011), doi:10.1523/JNEUROSCI.3684-10.2011. * Lij et al. (2022) Lij, Bhattasali S, Zhang S, Franzliebbers B, Luh WM, Spreng N, Brennan JR, Yang Y, Pallier C, Hale J. Le Petit Prince Multilingual Naturalistic FMRI Corpus. Scientific Data. 2022; 9. [https://doi.org/10.1038/s41597-022-01625-7](https://doi.org/10.1038/s41597-022-01625-7). * Mar (2011) Mar RA. The neural bases of social cognition and story comprehension. Annual review of psychology. 2011; 62:103-134. [https://pubmed.ncbi.nlm.nih.gov/21126178/](https://pubmed.ncbi.nlm.nih.gov/21126178/). * Matchin et al. (2017) Matchin W, Hammery C, Lau E. The role of the IFG and pSTS in syntactic prediction: Evidence from a parametric study of hierarchical structure in fMRI. cortex. 2017; 88:106-123. Publisher: Elsevier. * Matchin and Hickok (2020) Matchin W, Hickok G. The Cortical Organization of Syntax. Cerebral Cortex. 2020 Mar; 30(3):1481-1498. [https://pubmed.ncbi.nlm.nih.gov/28088041/](https://pubmed.ncbi.nlm.nih.gov/28088041/), doi:10.1093/cercor/f0hz180. * Mazoyer et al. (1993) Mazoyer BM, Tzourio N, Frak V, Syrota A, Murayama N, Levrier O, Salamon G, Dehaene S, Cohen L, Mehler J. The Cortical Representation of Speech. Journal of Cognitive Neuroscience. 1993 Oct; 5(4):467-479. [https://doi.org/10.1162/jocn.1993.5.4.467](https://doi.org/10.1162/jocn.1993.5.4.467), doi:10.1162/jocn.1993.5.4.467, _eprint: [https://direct.mit.edu/jocn/article-pdf/5/4/467/1932303f0cn.1993.5.4.467.pdf](https://direct.mit.edu/jocn/article-pdf/5/4/467/1932303f0cn.1993.5.4.467.pdf). * McKinney et al. (2010) McKinney W, et al. Data structures for statistical computing in python. In: _Proceedings of the 9th Python in Science Conference_, vol. 445 Austin, TX; 2010. p. 51-56. [https://conference.scipy.org/proceedings/scipy2010/pdfs/mckinney.pdf](https://conference.scipy.org/proceedings/scipy2010/pdfs/mckinney.pdf). * Mollica et al. (2018) Mollica F, Siegelman M, Diachek E, Piantdosi ST, Mineroff Z, Futrell R, Fedorenko E. High local mutual information drives the response in the human language network. bioRxiv. 2018; p. 436204. [https://www.biorxiv.org/content/10.1101/436204v1.full](https://www.biorxiv.org/content/10.1101/436204v1.full). * Naselaris et al. (2011) Naselaris T, Kay KN, Nishimoto S, Gallant JL. Encoding and decoding in fMRI. NeuroImage. 2011 May; 56(2):400-410. [http://linkinghub.elsevier.com/retrieve/pii/S1053811910010657](http://linkinghub.elsevier.com/retrieve/pii/S1053811910010657), doi:10.1016/j.neuroimage.2010.07.073. * Nastase et al. (2020) Nastase SA, Goldstein A, Hasson U. Keep it real: rethinking the primacy of experimental control in cognitive neuroscience. NeuroImage. 2020; 222. [https://www.nature.com/articles/s41597-021-01033-3](https://www.nature.com/articles/s41597-021-01033-3), doi:10.1016/j.neuroimage.2020.117254, publisher: NeuroImage. * Nastase et al. (2021) Nastase SA, Liu VF, Hillman H, Zdobood A, Hasenfratz L, Keshavarzian N, Chen J, Honey CJ, Yeshurn Y, Regev M, et al. The "narratives" fmri dataset for evaluating models of naturalistic language comprehension. Scientific Data. 2021; 8(1). doi:10.1038/s41597-021-01033-3. * Newman et al. (2010) Newman SD, Ikuta T, Burns T. The effect of semantic relatedness on syntactic analysis: an fMRI study. Brain and language. 
2010 May; 113(2):51-58. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2854177/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2854177/), doi:10.1016/j.bandl.2010.02.001. * O'Reilly and Frank (2006) O'Reilly RC, Frank MJ. Making working memory work: a computational model of learning in the prefrontal cortex and basal ganglia. Neural computation. 2006; 18(2):283-328. [https://pubmed.ncbi.nlm.nih.gov/16378516/](https://pubmed.ncbi.nlm.nih.gov/16378516/), doi:10.1016/j.nev.2006.001. * O'Reilly et al. (2019) Pallier C, Devauchelle AD, Dehaene S. Cortical representation of the constituent structure of sentences. Proceedings of the National Academy of Sciences. 2011; 108(6):2522-2527. [https://www.pnas.org/doi/10.1073/pnas.1018711108](https://www.pnas.org/doi/10.1073/pnas.1018711108), publisher: National Acad Sciences. * Pasquiou et al. (2022) Pasquiou A, Lakretz Y, Hale JT, Thirion B, Pallier C. Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps. In: Proceedings of the 39th International Conference on Machine Learning (ICML), Vol. 162; 2022. p. 17499-17516. [https://arxiv.org/abs/2207.03380](https://arxiv.org/abs/2207.03380). * Paszke et al. (2019) Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, Desmaison A, Kopf A, Yang E, DeVito Z, Raison M, Tejani A, Chilamkurthy S, Steiner B, Fang L, Bai J, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In: Advances in Neural Information Processing Systems 32 Curran Associates, Inc.; 2019.p. 8024-8035. [http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf](http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf). * Pedregosa et al. (2011) Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay E. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research. 2011; 12:2825-2830. [https://www.jmlr.org/papers/volume12/pedregosa11/pedregosa11.pdf](https://www.jmlr.org/papers/volume12/pedregosa11/pedregosa11.pdf). * Pennington et al. (2014) Pennington J, Socher R, Manning C. Glove: Global Vectors for Word Representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) Doha, Qatar: Association for Computational Linguistics; 2014. p. 1532-1543. [http://aclweb.org/anthology/D14-1162](http://aclweb.org/anthology/D14-1162), doi: 10.3115/v1/D14-1162. * Pereira et al. (2018) Pereira F, Lou B, Pritchett B, Ritter S, Gershman SJ, Kanwisher N, Botvinick M, Fedorenko E. Toward a universal decoder of linguistic meaning from brain activation. Nature Communications. 2018 Mar; 9(1):963. [https://www.nature.com/articles/s41467-018-03068-4](https://www.nature.com/articles/s41467-018-03068-4), doi: 10.1038/s41467-018-03068-4, number: 1 Publisher: Nature Publishing Group. * Pereira et al. (2018) Pereira F, Lou B, Pritchett B, Ritter S, Gershman SJ, Kanwisher N, Botvinick M, Fedorenko E. Toward a universal decoder of linguistic meaning from brain activation. Nature Communications. 2018 Mar; 9(1):963. 
[http://www.nature.com/articles/s41467-018-03068-4](http://www.nature.com/articles/s41467-018-03068-4), doi: 10.1038/s41467-018-03068-4, bandiera_abtest: a Cc_license_type: cc_by Cg_type: Nature Research Journals Number: 1 Primary_atype: Research Publisher: Nature Publishing Group Subject_term: Computational science;Neural decoding Subject_term:id. computational-science;neural-decoding. * Pulvermuller (2013) Pulvermuller F. Semantic embodiment, disembodiment or misembodiment? In search of meaning in modules and neuron circuits. Brain and Language. 2013 Oct; 127(1):86-103. doi: 10.1016/j.bandl.2013.05.015. * Radford et al. (2019) Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I, others. Language models are unsupervised multitask learners. OpenAI blog. 2019; 1(8):9. [https://d4mucfphysw.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf](https://d4mucfphysw.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). * Raichle (2015) Raichle ME. The brain's default mode network. Annual review of neuroscience. 2015; 38:433-447. [https://pubmed.ncbi.nlm.nih.gov/25938726/](https://pubmed.ncbi.nlm.nih.gov/25938726/), publisher: Annual Reviews. * Regev et al. (2013) Regev M, Honey CJ, Simony E, Hasson U. Selective and Invariant Neural Responses to Spoken and Written Narrattives. Journal of Neuroscience. 2013 Oct; 33(40):15978-15988. [http://www.jneurosci.org/cgi/doi/10.1523/JNEUROSCI.1580-13.2013](http://www.jneurosci.org/cgi/doi/10.1523/JNEUROSCI.1580-13.2013), doi: 10.1523/JNEUROSCI.1580-13.2013. * Russin et al. (2019) Russin J, JoJ, OrReilly RC, Bengio Y. Compositional generalization in a deep seq2seq model by separating syntax and semantics; 2019, [https://arxiv.org/abs/1904.09708](https://arxiv.org/abs/1904.09708), doi: 10.48550/ARQIV.1904.09708, aRXV.1904.09708. * Santi A et al. (2010) Santi A, Grodzinsky Y. fMRI adaptation dissociates syntactic complexity dimensions. NeuroImage. 2010; 51(4):1285-1293. [https://www.sciencedirect.com/science/article/pii/S1053811910003216](https://www.sciencedirect.com/science/article/pii/S1053811910003216), doi: [https://doi.org/10.1016/j.neuroimage.2010.03.034](https://doi.org/10.1016/j.neuroimage.2010.03.034). * Schrimpf et al. (2020) Schrimpf M, Blank I, Tuckute G, Kauf C, Hosseini EA, Kanwisher N, Tenenbaum J, Fedorenko E. Artificial Neural Networks Accurately Predict Language Processing in the Brain. MIT; 2020. * Shetreet & Friedmann (2014) Shetreet E, Friedmann N. The processing of different syntactic structures: fMRI investigation of the linguistic distinction between wh-movement and verb movement. Journal of Neurolinguistics. 2014; 27(1):1-17. [https://www.sciencedirect.com/science/article/abs/pii/S0911604413000468](https://www.sciencedirect.com/science/article/abs/pii/S0911604413000468). * Siegelman et al. (2019) Siegelman M, Blank IA, Mineroff Z, Fedorenko E. An attempt to conceptually replicate the dissociation between syntax and semantics during sentence comprehension. Neuroscience. 2019; 413:219-229. [https://www.sciencedirect.com/science/article/pii/S0306452219304026](https://www.sciencedirect.com/science/article/pii/S0306452219304026), publisher: Elsevier. * S. S. Simony E, Honey Qi, Chenj L, Losticky Q, Yeshurun Y, Wiesel A, Hasson U. Dynamic reconfiguration of the default mode network during narrative comprehension. Nature Communications. 2016 Jul; 7(1):12141. 
[http://www.nature.com/articles/ncomms12141](http://www.nature.com/articles/ncomms12141), doi: 10.1038/ncomms12141, number: 1 Publisher: Nature Publishing Group. * Sperry (1961) Sperry RW. Cerebral Organization and Behavior: The split brain behaves in many respects like two separate brains, providing new research possibilities. Science. 1961; 133(3466):1749-1757. [https://pubmed.ncbi.nlm.nih.gov/17829720/publisher](https://pubmed.ncbi.nlm.nih.gov/17829720/publisher): American Association for the Advancement of Science. * Stromswold et al. (1996) Stromswold K, Caplan D, Alpert N, Rauch S. Localization of Syntactic Comprehension by Positron Emission Tomography. Brain and Language. 1996; 52(3):452-473. [https://www.sciencedirect.com/science/article/pii/50093934V306900243](https://www.sciencedirect.com/science/article/pii/50093934V306900243), doi: [https://doi.org/10.1006/brn.1996.0024](https://doi.org/10.1006/brn.1996.0024). * Toneva and Wehbe (2019) Toneva M, Wehbe L. Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). In: Wallach H, Larochelle H, Beygelzimer A, Alche-Buc Fd, Fox E, Garnett R, editors. _Advances in Neural Information Processing Systems 32_ Curran Associates, Inc; 2019,p. 14954-14964. [http://papers.nips.cc/paper/9633-interpreting-and-improving-natural-language-processing-in-machines-with-natural-language-processing-in-the-brain.pdf](http://papers.nips.cc/paper/9633-interpreting-and-improving-natural-language-processing-in-machines-with-natural-language-processing-in-the-brain.pdf). * Ullman (2004) Ullman MT. Contributions of memory circuits to language: the declarative/procedural model. Cognition. 2004 May; 92(1):231-270. [https://www.sciencedirect.com/science/article/pii/50010027703002324](https://www.sciencedirect.com/science/article/pii/50010027703002324). * Vigliocco (2000) Vigliocco G. Language processing: The anatomy of meaning and syntax. Current Biology. 2000; 10(2):R78-R80. [https://www.sciencedirect.com/science/article/pii/509609820002827](https://www.sciencedirect.com/science/article/pii/509609820002827), doi: [https://doi.org/10.1016/S0960-9822](https://doi.org/10.1016/S0960-9822)(00)00282-7. * Virtanen et al. (2020) Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, Burovski E, Peterson P, Weckesser W, Bright J, van der Walt SJ, Brett M, Wilson J, Millman KJ, Mayorov N, Nelson ARJ, Jones E, Kern R, Larson E, Carey QJ, et al. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods. 2020 Mar; 17(3):261-272. [https://doi.org/10.1038/s41592-019-0686-2](https://doi.org/10.1038/s41592-019-0686-2), doi: 10.1038/s41592-019-0686-2. * Wehbe et al. (2014) Wehbe L, Murphy B, Talukdar P, Fyshe A, Ramdas A, Mitchell T. Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses. PloS one. 2014; 9(11). [https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0112575](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0112575), publisher: Public Library of Science. * Wolf (2020) Wolf T. Huggingface; 2020, [https://huggingface.co/](https://huggingface.co/), to appear. * Xu et al. (2005) Xu J, Kemeny S, Park G, Frattali C, Braun A. Language in context: emergent features of word, sentence, and narrative comprehension. NeuroImage. 2005 Apr; 25(3):1002-1015. 
[http://www.sciencedirect.com/science/article/pii/S1053811904007748](http://www.sciencedirect.com/science/article/pii/S1053811904007748), doi: 10.1016/j.neuroimage.2004.12.013. ## Appendix 1 Models training We trained GloVe and GPT-2 on syntactic or semantic features by adapting both vocabulary size and the associated tokenizer. Table 1 provides examples of the features extracted from a short passage. After feature extraction, a vocabulary listing all possible feature instances is created for each feature type. A unique id is then associated to each element of the vocabulary. The tokenizer converts each feature to its unique id. Finally, the model is fed sequences of ids and learns to perform its task. The Morphology field contains a list of morphological features, with vertical bar (\(|\)) as list separator and with underscore to represent the empty list. All features represent attribute-value pairs, with an equals sign (\(=\)) separating the attribute from the value. In addition, features are selected from the universal feature inventory ([https://universaldependencies.org/u/feat/index.html](https://universaldependencies.org/u/feat/index.html)) and are sorted alphabetically by attribute names. It is possible that a feature has two or more values for a given word: Case=Acc,Dat. In this case, the values are sorted alphabetically. Note: for display purposes, the morphology attribute values were removed for 'was', it was originally equal to 'Mood=Ind \(|\) Number=Sing \(|\) Person=3\(|\)Tense=Past \(|\)VerbForm=Fin'. ### Context-limited models Using the same original collection of English novels from Project Gutenberg, we trained three GPT-2 models to probe context integration. More precisely, we restricted the preceding context (size \(k=5,15\) or \(45\) tokens) given to the GPT-2 models during training on the "_Integral dataset_". When training GPT-2 with a limited amount of contextual information, each input sequence contained \(k+5\) tokens: a special token at the beginning, \(k\) context tokens, the current token for which we retrieve the activations in order to fit fMRI brain data, the token that is predicted by the current token and the 2 special tokens at the end (the last special end-of-sentence token is always preceded by a token encoding a blank space, we omitted it in the following table). ## Appendix A Appendix ## References Removing absolute position information in GPT-2 trained on semantic features For the GTP-2 model trained on the semantic features, small modifications had to be made to the model architecture in order to remove all residual syntax. By default, GPT-2 encodes the absolute positions of tokens in sentences. As word ordering might contain syntactic information, we had to make sure that it could not be leveraged by GPT-2 by means of its positional embeddings, yet keeping information about word proximity as it influences semantics. We achieved it by slightly modifying the architecture of GPT-2: we first removed the default positional embeddings, and added to the attention scores embeddings encoding relative positions between input tokens. Indeed, just removing positional embeddings would have led to a bag-of-words model. By adding these embeddings encoding relative position to the attention scores a token will weight the attention granted to another token depending on their distance. By doing so, information about absolute and relative positions is removed from tokens' embeddings as it is not directly added to the tokens' hidden states. 
The following explains how this operation was performed. Let \(\mathbf{c}_{\mathbf{w}}=\left(c_{w_{1}},\dots,c_{w_{m}}\right)\) be a sequence of \(m\) tokenized content words. \(\mathbf{c}_{\mathbf{w}}\) is then fed to a transformer with \(n_{layers}\) layers and \(n_{heads}\) attention heads of dimension \(d_{heads}\), which first builds an embedding representation \(\mathbf{E}_{i},i=1..m\) (of size \(d=d_{heads}*n_{heads}\)), to which it adds (by default) a position embedding \(\mathbf{p}_{i},i=1..m\) (of size \(d\)) for each token. To remove all syntactic content, the first step is to discard the previously mentioned positional embeddings \(\mathbf{p}_{i},i=1..m\). However, stopping here would only lead to a bag-of-words model where a given token might be influenced similarly by an adjacent token or one far away. As a consequence, we had to weight the attention score granted to a token depending on its relative distance. The attention operation can be described as mapping a query (Q) and a set of key-value (K,V) pairs to an output, where the query, keys, values, and output are all vectors (generally packed into matrices). The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. We thus modify the classical attention operation: \[\text{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{Softmax}((\mathbf{Q}\mathbf{K}^{T})/\sqrt{d_{k}})\mathbf{V}\] by adding the previously described relative positional embedding \(\mathbf{W}\) in the attention mechanism: \[\text{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{Softmax}((\mathbf{Q}\mathbf{K}^{T}+\mathbf{W})/\sqrt{d_{k}})\mathbf{V}\] To build \(\mathbf{W}\), we first defined the matrix \(\mathbf{D}=(n-1+j-i)_{i,j=1..m}\in\mathbb{R}^{m\times m}\) (encoding the number of tokens separating two tokens in the input sequence, shifted by \(n-1\)) for each input sequence \(\mathbf{c}_{\mathbf{w}}\), where \(n\) is the maximal input size.
\(\mathbf{D}\) is then embedded using a lookup table that stores an embedding of size \(d_{heads}\) for each possible value of \(\mathbf{D}\), giving \(\mathbf{U}\in\mathbb{R}^{m\times m\times d_{heads}}\); these relative-position embeddings supply the term \(\mathbf{W}\) that is added to the attention scores.
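The reduction of the looked-up embeddings \(\mathbf{U}\) to the score offset \(\mathbf{W}\) is not fully recoverable from the text above, so the sketch below should be read as one plausible implementation rather than the exact variant used here: it assumes a single head, a lookup table `rel_table` of size \(2n-1\), and a Shaw-style contraction \(W_{ij}=\mathbf{q}_i\cdot\mathbf{U}_{ij}\); the function name `relative_position_attention` and all shapes are illustrative.

```python
import numpy as np

def relative_position_attention(Q, K, V, rel_table, n_max):
    """Single-head attention with a relative-position term added to the scores.

    Q, K, V:   (m, d_head) query / key / value matrices for one sequence.
    rel_table: (2 * n_max - 1, d_head) lookup table of relative-position embeddings.
    The contraction W[i, j] = Q[i] . U[i, j] is an assumed (Shaw-style) choice.
    """
    m, d_head = Q.shape
    idx = np.arange(m)
    D = n_max - 1 + idx[None, :] - idx[:, None]      # D[i, j] = n_max - 1 + j - i
    U = rel_table[D]                                 # (m, m, d_head) looked-up embeddings
    W = np.einsum("id,ijd->ij", Q, U)                # (m, m) relative-position score offset
    scores = (Q @ K.T + W) / np.sqrt(d_head)         # Softmax((QK^T + W) / sqrt(d_k)) V
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ V

# Toy usage: no information about absolute positions enters the token embeddings.
rng = np.random.default_rng(0)
m, d_head, n_max = 6, 8, 32
Q, K, V = (rng.normal(size=(m, d_head)) for _ in range(3))
rel_table = rng.normal(size=(2 * n_max - 1, d_head))
out = relative_position_attention(Q, K, V, rel_table, n_max)
print(out.shape)  # (6, 8)
```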
## Appendix A Convergence of the language models during training

## Mapping NLM activations to brain data

Given two non-linear transformations \(\varphi_{1}\) (the neural language model that takes the sentence as input and from which we extract latent representations) and \(\varphi_{2}\) (the brain, which takes the sentence as input and from which we extract voxels' activations) and an input sequence \(\mathbf{w}=(\mathbf{w}_{1},\ldots,\mathbf{w}_{M})\), we define \(\mathbf{Y}_{s}=\varphi_{2}(\mathbf{w})\in\mathbb{R}^{N\times V}\) and \(\mathbf{X}=\varphi_{1}(\mathbf{w})\in\mathbb{R}^{M\times d}\), and we aim at finding a linear transformation from \(\mathbf{X}\) to \(\mathbf{Y}_{s}\), where \(d\) is the dimension of the model, \(V\) is the number of brain voxels, and \(N\) the number of fMRI scans acquired. One issue is that \(\mathbf{X}\) and \(\mathbf{Y}_{s}\) do not have the same sampling frequency: \(\mathbf{X}\) is defined at the word level while \(\mathbf{Y}_{s}\) is sampled at the fMRI acquisition frequency, every 2 seconds. To map \(\mathbf{X}\) to \(\mathbf{Y}_{s}\) we first need to temporally align them, taking the dynamics of the fMRI BOLD signal into account, and then determine a linear spatial mapping between the convolved and re-sampled \(\mathbf{X}\) and \(\mathbf{Y}_{s}\). Using the standard model-based encoding approach to modelling fMRI signals (_Naselaris et al., 2011; Huth et al., 2016; Pasquiou et al., 2022_), we first convolve each column of \(\mathbf{X}\) with the _SPM_ haemodynamic kernel (\(\mathbf{K}\)), which corresponds to the profile of the fMRI BOLD response following a Dirac stimulation, and then sub-sample the signal to match the sampling frequency of \(\mathbf{Y}_{s}\), giving \(\tilde{\mathbf{X}}=\mathrm{Sub}(\mathbf{K}*\mathbf{X})\), with \(\mathrm{Sub}\) the sub-sampling operator. Finally, we learn the linear spatial mapping between \(\tilde{\mathbf{X}}\) and \(\mathbf{Y}_{s}\) using a nested cross-validated L2-regularized (aka Ridge) univariate linear encoding model. More precisely, for each voxel \(v\) with time-course \(\mathbf{y}_{s}^{v}\), we learn a linear projection \(\hat{\beta}_{s}^{v}\) from \(\tilde{\mathbf{X}}\) to \(\mathbf{y}_{s}^{v}\), whose general solution is given by: \[\hat{\beta}_{s}^{v}=\arg\min_{\beta}\|\mathbf{y}_{s}^{v}-\tilde{\mathbf{X}}\beta\|^{2}+\lambda\|\beta\|_{2}^{2},\text{ i.e. }\hat{\beta}_{s}^{v}=\text{Ridge}(\tilde{\mathbf{X}},\mathbf{y}_{s}^{v}).\] The latter stage (convolution and sub-sampling) resulted, for each model and each run, in a design matrix of size \(N\times d\). Given a neural language model, we gave the associated nine design matrices to a nested cross-validated L2-regularized univariate linear encoding model to fit the fMRI brain data (of size \(N\times V\)). To evaluate model performance and the optimal regularization parameter \(\lambda^{*}\), we used a nested cross-validation procedure: we split each participant's dataset into training, validation and test sets, such that the training set included 7 out of the 9 experiment runs, and the validation and test sets each contained one of the two remaining runs. We evaluated model performance using the Pearson correlation coefficient \(R\), which measures the linear correlation between the encoding model's predicted time-courses and the actual time-courses.
For each subject and each voxel, we first determined \(\lambda^{*}\) by comparing \(R_{valid}\) for 10 different values of \(\lambda\), evenly spaced on a log scale between \(10^{-3}\) and \(10^{4}\). We then calculated \(R_{test}\) for \(\lambda^{*}\). Finally, we repeated this procedure 9 times, using cross-validation. This resulted in 9 \(R_{test}\) values that we then averaged to produce a single \(R_{test}\) map for the participant. We evaluated the quality of the mapping for subject \(s\) in voxel \(v\) using the Pearson correlation: \[R(X)_{s}^{v}=\text{Corr}(\mathbf{y}_{s}^{v},\tilde{\mathbf{X}}\hat{\beta}_{s}^{v})\]
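A minimal sketch of this encoding pipeline is given below, assuming word-level features, word onset times and scan times as inputs. The double-gamma HRF stands in for the SPM kernel \(\mathbf{K}\), a single train/test split replaces the full 9-run nested cross-validation, and the helper names (`hrf`, `build_design_matrix`, `fit_encoding_model`) are illustrative rather than taken from the released code.

```python
import numpy as np
from scipy.stats import gamma
from sklearn.linear_model import RidgeCV

def hrf(tr, duration=32.0):
    """Canonical double-gamma haemodynamic response function, sampled every `tr` s."""
    t = np.arange(0, duration, tr)
    h = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)   # peak minus late undershoot
    return h / h.sum()

def build_design_matrix(word_features, word_times, scan_times, tr=2.0):
    """Resample word-level features to scan times, then convolve with the HRF
    (a rough stand-in for X_tilde = Sub(K * X))."""
    n_scans, d = len(scan_times), word_features.shape[1]
    X = np.zeros((n_scans, d))
    idx = np.clip(np.searchsorted(scan_times, word_times), 0, n_scans - 1)
    np.add.at(X, idx, word_features)                 # sum the features of words per scan
    k = hrf(tr)
    return np.apply_along_axis(lambda c: np.convolve(c, k)[:n_scans], 0, X)

def fit_encoding_model(X_tilde, Y):
    """Ridge encoding model (all voxels at once); returns Pearson R on held-out data."""
    alphas = np.logspace(-3, 4, 10)                  # 10 log-spaced regularization values
    n_train = int(0.8 * len(X_tilde))
    model = RidgeCV(alphas=alphas).fit(X_tilde[:n_train], Y[:n_train])
    Y_pred, Y_test = model.predict(X_tilde[n_train:]), Y[n_train:]
    num = ((Y_pred - Y_pred.mean(0)) * (Y_test - Y_test.mean(0))).sum(0)
    den = np.sqrt(Y_pred.var(0) * Y_test.var(0)) * len(Y_test)
    return num / den                                 # per-voxel correlation R
```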
2309.16834
Energy Optimal Control of a Harmonic Oscillator with a State Inequality Constraint
In this article, the optimal control problem for a harmonic oscillator with an inequality constraint is considered. The applied energy of the oscillator during a fixed final time period is used as the performance criterion. The analytical solution with both small and large terminal time is found for a special case when the undriven oscillator system is initially at rest. For other initial states of the Harmonic oscillator, the optimal solution is found to have three modes: wait-move, move-wait, and move-wait-move given a longer terminal time.
Mi Zhou, Erik I Verriest, Chaouki Abdallah
2023-09-28T20:25:39Z
http://arxiv.org/abs/2309.16834v1
# Energy Optimal Control of a Harmonic Oscillator with a State Inequality Constraint ###### Abstract In this article, the optimal control problem for a harmonic oscillator with an inequality constraint is considered. The applied energy of the oscillator during a fixed final time period is used as the performance criterion. The analytical solution with both small and large terminal time is found for a special case when the undriven oscillator system is initially at rest. For other initial states of the Harmonic oscillator, the optimal solution is found to have three modes: wait-move, move-wait, and move-wait-move given a longer terminal time. ## I Introduction _"Now, here, you see, it takes all the running you can do, to keep in the same place." from Alice in Wonderland, Lewis Carroll._ This sentence is used to describe Alice constantly running but remaining in place. We found the same phenomenon in the optimal control of harmonic oscillators where the optimal behavior may involve "remaining in place" for some time. The details will be shown in Section III. The problem of minimum time optimal control of a pendulum is a classical problem [1]. When the control magnitude is bound, it is described by a second-order differential equation \(\ddot{x}+x=u\) where \(x\) is the position with \(|u|\leq 1\). The resulting optimal control \(u\) is known as bang-bang control, i.e., \(u=-\text{sign}(\dot{x})\). Separating the domain \(u=-1\) of the phase plane from the domain \(u=+1\) are unit semicircle centered at points of the form \((2k+1,0)\) (\(k\in\mathbb{Z}\)) [2]. In [3], the authors solved the optimal control problem for the harmonic oscillator with a fixed initial and final state under bounded control actions, and for two types of performance criteria. Reference [4] presented a solution to the minimum time control problem for a classical harmonic oscillator to reach a target energy from a given initial state by controlling its frequency. Finding the analytical solutions to optimal control problems may not be possible in general. However, there are many numerical methods to solve optimal control problems with/without constraints. An introduction and reviews of numerical methods can be found in [5, 6, 7]. In this article, we model the optimal control problem of harmonic oscillators with state inequality and solve it using the indirect optimal control method based on Pontryagin's minimum (maximum) principle [8]. The solutions are found to have three modes: wait-move (WM), move-wait (MW), and move-wait-move (MWM), which may be justified using human experience and physics. To the best of our knowledge, this is the first work that such behavior has been found and explained. This article is organized as follows: In Section II, we formulate our problem and provide some preliminaries of optimal control with global inequalities. In Section III, we present the analytical solution to this formulated optimal control problem. Section IV presents the simulation results to illustrate our analytical solutions. Finally, we conclude our article in Section V. ## II Problem description A general undamped harmonic oscillator has the form \(m\ddot{x}+kx=u\). Figure 1 is a harmonic oscillator with mass \(m\), stiffness \(k\), and external excitation \(u\). Without loss of generality, we let \(m=1\), \(k=1\) in our problem. 
We consider the following linear state space realization for the undamped harmonic oscillator with fixed oscillation frequency: \[\dot{x}_{1} =x_{2}\] \[\dot{x}_{2} =-x_{1}+u, \tag{1}\] where \(x_{1}\in\mathbb{R}^{1}\) is the position, \(x_{2}\in\mathbb{R}^{1}\) is its velocity, and \(u\in\mathbb{R}^{1}\) is the control input. The performance criterion is the applied energy \[J=\frac{1}{2}\int_{0}^{T}u^{2}\mathrm{d}t, \tag{2}\] with \(T\) the fixed terminal time. We consider the state inequality constraint \[x_{2}\geq 0, \tag{3}\] which means only forward motion is allowed. We would like to find the optimal control law \(u^{*}(t)\) such that the criterion (2) is minimized and the dynamics (1) are satisfied with the inequality constraint (3). ### _Optimal control with inequality constraints_ For a general optimal control problem \[\dot{x}=f(x,u,t)\] with global inequality constraint \(w(x,u,t)\leq 0\), we can construct the Hamiltonian function \[H=L(x,u)+\lambda^{\top}f(x,u,t)+\mu^{\top}w(x,u,t),\] where \(\mu=[\mu_{1},\mu_{2},\cdots,\mu_{r}]\). **Theorem 1** ([9]): _The multiplier \(\mu_{i}\), \(i=1,2,\cdots,r\) must satisfy the complementary slackness condition:_ \[\mu_{i}\begin{cases}\geq 0,&w_{i}(x,u,t)=0\\ =0,&w_{i}(x,u,t)<0\end{cases}.\] The necessary conditions for \(u^{*}\) (with corresponding \(x^{*}\)) to be an optimal solution are that there exist \(\lambda\), \(\mu\) which satisfy the following conditions: 1. \(\dot{x}^{*}=f(x^{*},u^{*},t),\ x^{*}(0)=x_{0},\ x^{*}(T)=x_{f}\). 2. Pontryagin minimum condition: \(H[x^{*}(t),u^{*}(t),\lambda(t),t]\leq H[x^{*}(t),u(t),\lambda(t),t]\). 3. The Euler-Lagrange equation: \[\dot{\lambda}=\left(-\frac{\partial L}{\partial x}-\lambda^{\top}\frac{ \partial f}{\partial x}+\mu^{\top}\frac{\partial w}{\partial x}\right)|_{x=x^{ *}}.\] The solution of an optimal control problem with global inequality constraints is continuous but may not be a differentiable function. In the following section, we will use Theorem 1 to analyze the solution of the optimal control problem defined by Eqns. (1)-(3). ## III Solution of the problem: Theoretical analysis The proposed system (1) models a mechanical spring system. Intuitively, when there is no input, and if the initial displacement is greater than zero, the spring is initially stretched. The system will then tend toward the equilibrium state \((0,0)\), which requires the velocity to be negative at the beginning, and then behave as a harmonic oscillator. If the spring is initially compressed, the initial velocity will be positive, and the system will again tend to oscillate harmonically. To solve the optimal control problem for such a system, we first construct the following Hamiltonian \[H=\frac{1}{2}u^{2}+\lambda_{1}x_{2}+\lambda_{2}(-x_{1}+u)-\mu x_{2}.\] Using the optimality conditions derived above, we have \[u =-\lambda_{2}\] \[\dot{\lambda}_{1} =\lambda_{2}\] \[\dot{\lambda}_{2} =-\lambda_{1}+\mu \tag{4}\] where \(\mu=0\) if the constraint is not activated, otherwise it must satisfy \(\mu\geq 0\). Solving (1) and (4) with the constraint not activated (i.e., \(\mu=0\)), we have \[x_{1}(t) =C_{1}\cos t+C_{2}\sin t+C_{3}t\cos t+C_{4}t\sin t \tag{5}\] \[x_{2}(t) =(C_{4}-C_{1})\sin t+(C_{2}+C_{3})\cos t-C_{3}t\sin t+C_{4}t\cos t\] \[u(t) =-2C_{3}\sin t+2C_{4}\cos t\] where \(C_{1},C_{2},C_{3},C_{4}\) are constants. Denote the initial state as \((s,0)\) and the terminal state as \((x_{f},0)\). We then analyze the solution to the above problem in four different scenarios: 1. \(s=0\) and \(x_{f}>0\); 2.
\(0<s<x_{f}\); 3. \(s<0<x_{f}\); 4. \(s<x_{f}<0\). The scenario that \(x_{f}<s\) can not happen because the inequality constraint implies that the spring can not move backward. The terminal time is denoted as \(T\). ### _Initial state \((0,0)\) and \(x_{f}>0\)_ In the first scenario, we consider the initial state \((0,0)\) and the terminal position \(x_{f}>s=0\). The initial state \((0,0)\) means that the harmonic oscillator is in the natural state at the beginning, neither stretching nor compressing, with no velocity either. Substituting the initial state \((x_{1}(0),x_{2}(0))=(0,0)\) into Eqn. (5), we obtain \[C_{1} =0\] \[C_{2}+C_{3} =0.\] Henceforth, \[x_{1}(t)=C_{2}\sin t-C_{2}t\cos t+C_{4}t\sin t \tag{6}\] and \[x_{2}(t)=C_{4}\sin t+C_{2}t\sin t+C_{4}t\cos t. \tag{7}\] Using the terminal state \((x_{1}(T),x_{2}(T))=(x_{f},0)\), we can formulate the following equality with respect to the unknown constants \(C_{2}\) and \(C_{4}\), \[\underbrace{\begin{bmatrix}\sin T-T\cos T&T\sin T\\ T\sin T&T\cos T+\sin T\end{bmatrix}}_{M}\begin{bmatrix}C_{2}\\ C_{4}\end{bmatrix}=\begin{bmatrix}x_{f}\\ 0\end{bmatrix}.\] It is obvious that \(\det(M)=\sin^{2}T-T^{2}<0\) for all \(T>0\). Thus solving the above equations, we can obtain \[C_{2} =-\frac{x_{f}(\sin T+T\cos T)}{T^{2}-\sin^{2}T} \tag{8}\] \[C_{4} =\frac{x_{f}T\sin T}{T^{2}-\sin^{2}T}. \tag{9}\] Substituting the value of \(C_{2}\) and \(C_{4}\) into Eqn. (7), we have \[x_{2}(t)=\beta Tt\sin(T-t)+\beta(T-t)\sin T\sin(t) \tag{10}\] and \[u(t)=-2\beta\sin T\sin t+2\beta T\sin(T-t) \tag{11}\] Fig. 1: Mechanical harmonic oscillator. where \(\beta=\frac{x_{f}}{T^{2}-\sin^{2}T}>0\). Analyzing the Eqn. (10), we find that there exists a \(T^{*}\) when \(T\leq T^{*}\), \(x_{2}(t)\geq 0\). **Theorem 2**: _The boundary of the terminal time \(T\) for the state becoming nonsmooth is_ \[T^{*}=\pi,\] First, we re-organize \(x_{2}(t)\) as follows: \[x_{2}(t) =\beta Tt(T-t)\left[\frac{\sin(T-t)}{T-t}+\frac{\sin T}{T}\frac{ \sin t}{t}\right]\] \[=\underbrace{\beta Tt(T-t)}_{\geq 0}[\operatorname{sinc}(T-t)+ \operatorname{sinc}(T)\operatorname{sinc}(t)]\] _where we define \(\operatorname{sinc}(x)=\frac{\sin x}{x}\). When \(T\in(0,\pi]\), \(\operatorname{sinc}(T)\geq 0\), \(\operatorname{sinc}(t)\geq 0\), \(\operatorname{sinc}(T-t)\geq 0\). Thus, \(x_{2}(t)\geq 0\). Suppose \(T=\pi+\varepsilon\) where the small perturbation \(\varepsilon>0\) is the new bound. Substitute it into_ \[\gamma(t) =\operatorname{sinc}(T-t)+\operatorname{sinc}(T)\operatorname{sinc}t\] \[=\frac{-\cos(\varepsilon-t)}{\pi+\varepsilon-t}-\frac{\cos \varepsilon}{\pi+\varepsilon}\frac{\sin t}{t}.\] _We then find that \(\gamma(0)=-\frac{\cos\varepsilon}{\pi+\varepsilon}-\frac{\cos\varepsilon}{ \pi+\varepsilon}<0\) since \(\lim_{t\to 0}\frac{\sin t}{t}=1\). Thus \(T^{*}\) must be \(\pi\)._ If \(T\) is chosen to satisfy that \(x_{2}(t)\geq 0\) for \(t\in[0,T]\), then Eqn. (10) is the analytical solution. When the terminal time \(T>\pi\), the above analysis fails. Looking at the system dynamics, we find that there is no cost to keep the state stagnating at the initial state. That being said, the optimal solution in this case is "WM": \[x_{1}(t)=\begin{cases}0,&t\in[0,\tau]\\ x_{1}^{c}(t),&t\in[\tau,T]\end{cases} \tag{12}\] and \[x_{2}(t)=\begin{cases}0,&t\in[0,\tau]\\ x_{2}^{c}(t),&t\in[\tau,T]\end{cases} \tag{13}\] where \(\tau=T-\pi\), \(x_{1}^{c}(t)\) and \(x_{2}^{c}(t)\) are continuously differentiable functions dependent on time. 
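As a quick numerical illustration of why the wait phase appears (a minimal sketch; the helper name `x2_opt` is ours, not from the paper), one can evaluate the unconstrained velocity profile of Eq. (10) and check its sign: it stays non-negative for \(T\leq\pi\) but dips below zero once \(T>\pi\), which is exactly when the constrained "WM" form above takes over.

```python
import numpy as np

def x2_opt(t, T, x_f):
    """Velocity of the unconstrained optimal trajectory, Eq. (10),
    for initial state (0, 0) and terminal state (x_f, 0)."""
    beta = x_f / (T**2 - np.sin(T)**2)
    return beta * T * t * np.sin(T - t) + beta * (T - t) * np.sin(T) * np.sin(t)

x_f = 2.0
for T in (1.0, np.pi, np.pi + 0.3):
    t = np.linspace(0.0, T, 2001)
    print(f"T = {T:5.3f}:  min_t x2(t) = {x2_opt(t, T, x_f).min():+.4e}")
# Expected: the minimum is >= 0 for T <= pi (Theorem 2), but becomes negative
# for T > pi, so the constraint x2 >= 0 activates and the wait-move solution applies.
```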
The corresponding optimal controller has the form: \[u(t)=\begin{cases}0,&t\in[0,\tau]\\ u^{c}(t),&t\in[\tau,T]\end{cases}. \tag{14}\] Substituting the boundary conditions \((x_{1}(\tau),x_{2}(\tau))=(0,0)\) and \((x_{1}(T),x_{2}(T))=(x_{f},0)\), we thus have \[C_{1}\cos\tau+C_{2}\sin\tau+C_{3}\tau\cos\tau+C_{4}\tau\sin\tau =0\] \[(C_{4}-C_{1})\sin\tau+(C_{2}+C_{3})\cos\tau-C_{3}\tau\sin\tau+C_{4 }\tau\cos\tau =0\] \[C_{1}\cos T+C_{2}\sin T+C_{3}T\cos T+C_{4}T\sin T =x_{f}\] \[(C_{4}-C_{1})\sin T+(C_{2}+C_{3})\cos T-C_{3}T\sin T+C_{4}T\cos T =0.\] Substituting \(\tau=T-\pi\) and simplifying the above equations, we have \[\begin{bmatrix}-\cos T&-\sin T&-(T-\pi)\cos T&-(T-\pi)\sin T\\ \sin T&-\cos T&(T-\pi)\sin T-\cos T&-\sin T-(T-\pi)\cos T\\ \cos T&\sin T&T\cos T&T\sin T\\ -\sin T&\cos T&\cos T-T\sin T&\sin T+T\cos T\end{bmatrix}\begin{bmatrix}C_{1}\\ C_{2}\\ C_{3}\\ C_{4}\end{bmatrix}=\begin{bmatrix}0\\ 0\\ x_{f}\\ 0\end{bmatrix}.\] By solving the above linear equations, we obtain \[C_{1} =\frac{x_{f}\sin T+\pi x_{f}\cos T-Tx_{f}\cos T}{\pi}\] \[C_{2} =-\frac{x_{f}(\cos T-\pi\sin T+T\sin T)}{\pi}\] \[C_{3} =\frac{x_{f}\cos T}{\pi}\] \[C_{4} =\frac{x_{f}\sin T}{\pi}.\] Thus the optimal controller for the time interval \([\tau,T]\) is \[u^{c}(t)=\frac{2x_{f}}{\pi}\sin(T-t)\] which makes the optimal cost \[J^{*}=\frac{x_{f}^{2}}{\pi},\] which implies that the optimal cost is only a function of \(x_{f}\). ### _Initial state \((s,0)\)_ Here Eqn. (5) still holds. The key point here is to determine the constants \(C_{1}\sim C_{4}\) using the known boundary conditions. We then study the solution for an arbitrary initial state on the constraint boundary. When \(T\) is small, we substitute the initial state \((s,0)\) into Eqn. (5) and obtain \[M\begin{bmatrix}C_{2}\\ C_{4}\end{bmatrix}=\begin{bmatrix}x_{f}-s\cos T\\ s\sin T\end{bmatrix}\] which gives \[C_{1} =s\] \[C_{2} =\frac{s(T+\cos T\sin T)-x_{f}(\sin T+T\cos T)}{T^{2}-\sin^{2}T}\] \[C_{3} =-C_{2}\] \[C_{4} =\frac{x_{f}T\sin T-s\sin^{2}T}{T^{2}-\sin^{2}T}.\] Thus, \[x_{1}(t) =s\cos t+C_{2}(\sin t-t\cos t)+C_{4}t\sin t\] \[x_{2}(t) =C_{4}(\sin t+t\cos t)-s\sin t+C_{2}t\sin t. \tag{15}\] When \(T\) is large, the constraint will be activated on some time interval. Here, we divide it into the following three cases. #### Iii-B1 Case 1: \(0<s<x_{f}\) When \(T\) is large, the stay will be at the initial state (i.e., "WM"). Similar to the case when \(s=0\), we denote by \(\tau\) the staying time. Thus, the solutions of \(x_{1}\) and \(x_{2}\) have the same form as Eqn. (12) and Eqn. (13), respectively. The controller during \([0,\tau]\) is the value such that \[\dot{x}_{2}=-x_{1}+u=0\Rightarrow-s+u=0\Rightarrow u=s\] which implies that during this time interval, with the constant control input \(u=s\), we can keep the state at the initial state. We thus have five unknowns, i.e., \(C_{1}\sim C_{4}\) and \(\tau\). Using the fact that \(u(\tau)=s\) and the boundary conditions on the states, we have the following five constraints: \[C_{1}\cos\tau+C_{2}\sin\tau+C_{3}\tau\cos\tau+C_{4}\tau\sin\tau=s\]
The cost function is \[J =\int_{0}^{\tau}\frac{1}{2}s^{2}\mathrm{d}t+\int_{\tau}^{T}\frac{1}{2}u^{2}\mathrm{d}t\] \[=\frac{1}{2}s^{2}\tau+\frac{1}{2}\int_{\tau}^{T}(-2C_{3}\sin t+2C_{4}\cos t)^{2}\mathrm{d}t.\] #### Iii-B2 Case 2: \(s<0<x_{f}\) When \(T\) is large, the staying phase occurs in the middle of the trajectory, because less energy is needed to hold the state there (i.e., "MWM"). We denote the staying time interval as \([\tau_{1},\tau_{2}]\). During \([\tau_{1},\tau_{2}]\), \[\dot{x}_{1} =x_{2}=0\] \[\dot{x}_{2} =-x_{1}+u=0,\] which gives \[x_{1s} =x_{1}(t)=u, \forall t\in[\tau_{1},\tau_{2}] \tag{16}\] \[x_{2s} =x_{2}(t)=0, \forall t\in[\tau_{1},\tau_{2}] \tag{17}\] where \((x_{1s},x_{2s})\) is a constant staying state. Thus the solution has the following form: \[u(t)=\left\{\begin{array}{ll}u^{c}(t),&\forall t\in[0,\tau_{1}]\\ x_{1s},&\forall t\in[\tau_{1},\tau_{2}]\\ u^{p}(t),&\forall t\in[\tau_{2},T]\end{array}\right.\] where \(u^{c}(t)\), \(u^{p}(t)\) have the same form as in (5), and \[x_{1}(t)=\left\{\begin{array}{ll}x_{1}^{c}(t),&t\in[0,\tau_{1}]\\ x_{1s},&t\in[\tau_{1},\tau_{2}]\\ x_{1}^{p}(t),&t\in[\tau_{2},T]\end{array}\right. \tag{18}\] and \[x_{2}(t)=\left\{\begin{array}{ll}x_{2}^{c}(t),&t\in[0,\tau_{1}]\\ x_{2s},&t\in[\tau_{1},\tau_{2}]\\ x_{2}^{p}(t),&t\in[\tau_{2},T]\end{array}\right. \tag{19}\] where \(x_{1}^{c}(t)\), \(x_{1}^{p}(t)\), \(x_{2}^{c}(t)\), \(x_{2}^{p}(t)\), satisfying the form of Eqn. (5), are continuously differentiable functions of time \(t\). There are 11 unknowns to be solved, i.e., \(C_{1}\sim C_{4}\) (parameters for \(x_{1,2}^{c}(t)\)), \(C_{1}^{c}\sim C_{4}^{c}\) (parameters for \(x_{1,2}^{p}(t)\)), \(\tau_{1}\), \(\tau_{2}\), and \(x_{1s}\), while the known boundary conditions are \[\left\{\begin{array}{ll}x_{1}^{c}(0)&=s\\ x_{2}^{c}(0)&=0\\ x_{1}^{c}(\tau_{1})&=x_{1s}\\ x_{2}^{c}(\tau_{1})&=0\\ x_{1}^{p}(\tau_{2})&=x_{1s}\\ x_{2}^{p}(\tau_{2})&=0\\ x_{1}^{p}(T)&=x_{f}\\ x_{2}^{p}(T)&=0\\ u^{c}(\tau_{1})&=x_{1s}\\ u^{p}(\tau_{2})&=x_{1s}.\end{array}\right.\] Combined with optimizing the cost function, \[J=\int_{0}^{\tau_{1}}\frac{1}{2}u^{2}\mathrm{d}t+\frac{1}{2}x_{1s}^{2}(\tau_{2}-\tau_{1})+\int_{\tau_{2}}^{T}\frac{1}{2}u^{2}\mathrm{d}t,\] we can finally obtain the solutions. #### Iii-B3 Case 3: \(s<x_{f}<0\) When \(T\) is large, the staying phase occurs at the terminal state (i.e., "MW"). Thus the solution has the following form: \[x_{1}(t)=\left\{\begin{array}{ll}x_{1}^{c}(t),&t\in[0,\tau]\\ x_{f},&t\in[\tau,T]\end{array}\right. \tag{20}\] and \[x_{2}(t)=\left\{\begin{array}{ll}x_{2}^{c}(t),&t\in[0,\tau]\\ 0,&t\in[\tau,T]\end{array}\right.. \tag{21}\] This means that when \(t\in[\tau,T]\), \[\dot{x}_{1}(t) =x_{2}(t)=0\] \[\dot{x}_{2}(t) =-x_{1}(t)+u=0\Rightarrow u=x_{1}(\tau)=x_{1}(T)=x_{f}.\] The constraints \(x_{1}(0)=s\), \(x_{2}(0)=0\), \(x_{1}(\tau)=x_{f}\), \(x_{2}(\tau)=0\), \(u(\tau)=x_{f}\) give us \[\left\{\begin{array}{ll}C_{1}=s\\ C_{2}+C_{3}=0\\ C_{1}\cos\tau+C_{2}\sin\tau+C_{3}\tau\cos\tau+C_{4}\tau\sin\tau=x_{f}\\ (C_{4}-C_{1})\sin\tau+(C_{2}+C_{3})\cos\tau-C_{3}\tau\sin\tau+C_{4}\tau\cos\tau= 0\\ -2C_{3}\sin\tau+2C_{4}\cos\tau=x_{f},\end{array}\right.\] solving which we obtain the solution of \(x_{1}^{c}(t)\) and \(x_{2}^{c}(t)\). ## IV Simulation In this section, we compare our solution with the numerical solution given by [10]. The following simulations are all done on a personal computer with MATLAB R2020b.
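As a quick preview of Experiment 1 below, the closed-form results of subsection III-A can also be checked with a few lines of NumPy. The sketch below is an independent illustration of ours and not the MATLAB/toolbox implementation used for the reported figures.

```python
import numpy as np

def unconstrained_x1(t, T, xf):
    # Eqn (6) with the constants of Eqns (8)-(9); x1(T) = xf holds by construction.
    C2 = -xf * (np.sin(T) + T * np.cos(T)) / (T**2 - np.sin(T)**2)
    C4 = xf * T * np.sin(T) / (T**2 - np.sin(T)**2)
    return C2 * np.sin(t) - C2 * t * np.cos(t) + C4 * t * np.sin(t)

def unconstrained_x2(t, T, xf):
    # Eqn (10): velocity of the unconstrained optimum for the initial state (0, 0).
    beta = xf / (T**2 - np.sin(T)**2)
    return beta * T * t * np.sin(T - t) + beta * (T - t) * np.sin(T) * np.sin(t)

def wm_cost(T, xf, n=200_000):
    # "WM" policy for T > pi: u = 0 on [0, T - pi], then u_c(t) = (2 xf / pi) sin(T - t).
    t = np.linspace(T - np.pi, T, n)
    u = 2.0 * xf / np.pi * np.sin(T - t)
    return np.sum(0.5 * u**2) * (t[1] - t[0])

xf = 2.0
for T in (1.0, 3.0, 3.5, 5.0):
    t = np.linspace(0.0, T, 2001)
    print(f"T = {T:4.2f}:  x1(T) = {unconstrained_x1(T, T, xf):.4f},"
          f"  min x2 = {unconstrained_x2(t, T, xf).min():+.4f}")
# min x2 >= 0 for T <= pi but dips below zero for T > pi, in line with Theorem 2.

print("WM cost for T = 5     :", wm_cost(5.0, xf))   # ~ 1.2732, cf. Experiment 1
print("analytical  xf^2 / pi :", xf**2 / np.pi)
```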
### _Experiment 1: Initial state \((s,0)\) and \(0=s<x_{f}\)_ To illustrate the results we obtained in subsection III-A, we divide our experiments into two parts: \(T\) is small and \(T\) is large. We fix \(x_{f}=2\). #### Iv-A1 \(T\) is small When the terminal time \(T\) is small, the results agree with our analytical solution and with the physical intuition. Let \(T=1\). Fig. 2 shows the numerical solution (NS) obtained using the toolbox [10] and the analytical solution (AS) we derived in Eqn. (10). The numerical solution, the analytical solution, and the physical analysis coincide with each other.
Fig. 2: Evolution of state: terminal time \(T=1\) (legend "AS" denotes analytical solution, "NS" denotes numerical solution).
#### Iv-A2 \(T\) is large When the terminal time \(T\) is large, say \(T=5\), Fig. 3 shows that our analytical solution coincides with the numerical solution. Moreover, the optimal cost given by the numerical solution is \(J=1.2732\), which is indeed \(\frac{x_{f}^{2}}{\pi}\).
Fig. 3: Evolution of state ("WM"): terminal time \(T=5\).
Consider the harmonic oscillator in Fig. 1 to be moved from its initial natural state to its final state. Physically, we would pull it to the right. If the total time is short, we pull it directly to the end position. If the terminal time is long, we first stay at the initial position, because less energy is needed to hold the oscillator there. So both the simulation and the analytical analysis agree with our physical observations. ### _Experiment 2: initial state \((s,0)\) and \(s>0\)_ When \(T\) is large, it is again optimal to stay at the initial position first, because the more the oscillator is stretched, the more energy is needed to hold it in place. Let \(s=1\), \(x_{f}=2\), \(T=5\). Fig. 4 shows that our solution is identical to the numerical solution obtained using the toolbox [10]. The optimal cost is \(J\approx 3.918\), with \(\tau\approx 2.568\).
Fig. 4: Large terminal time ("WM"): \(s=1\), \(x_{f}=2\), \(T=5\).
### _Experiment 3: initial state \((s,0)\) and \(s<0\) and \(x_{f}>0\)_ Let \(s=-2\), \(x_{f}=1\). In this experiment, we find that as \(T\rightarrow\infty\), the staying point approaches \((0,0)^{\top}\). The optimal cost using our method is \(J\approx 1.5109\), with \(\tau_{1}=3.3333\), \(\tau_{2}=5.1890\), \(x_{1s}=0.2279\), as shown in Fig. 5. If \(|s|>|x_{f}|\), the staying point satisfies \(x_{1s}>0\); otherwise, \(x_{1s}<0\). Physically, this scenario means that, at the beginning, the oscillator is compressed, and we want to pull it to a stretched state (\(x_{f}>0\)). It will stay at some position in the middle for the same reason as before. ### _Experiment 4: initial state \((s,0)\) and \(s<x_{f}<0\)_ Let \(x_{f}=-1\), \(s=-2\). This scenario is symmetric to Experiment 2. We found that \(\tau\approx 2.432\) and \(J\approx 3.918\) using both our method and the toolbox. Interestingly, it has the same energy consumption as its symmetric case. As we can see from Fig. 6, our solution and the numerical solution given by the toolbox are the same. These phenomena are consistent with the physical intuition for the optimal control of the harmonic oscillator. ## V Conclusion In this paper, we examined and solved the optimal control problem of moving a harmonic oscillator forward with minimum energy, given a fixed terminal time and terminal position. More specifically, we found the bound on the terminal time beyond which the solutions become non-smooth, and we derived the explicit analytical solution of the optimal controller when the initial state is the equilibrium of the autonomous system. We also analyzed the optimal solution when the initial state is in a state of stretching or compression. Simulation results verified our analysis, and we provided physical justification of our theoretical results. These results shed some light on the optimal swimming policy in a vortex. We expect this work will also give some insight into other linear time-invariant systems with complex eigenvalues. Future work will extend to the unsolved optimal control [11] of similar systems with state-dependent switched dynamics or stage cost, in which multiple switching phenomena appear at the switching interface.
2309.09914
Quantum algorithm for imaginary-time Green's functions
Green's function methods lead to ab initio, systematically improvable simulations of molecules and materials while providing access to multiple experimentally observable properties such as the density of states and the spectral function. The calculation of the exact one-particle Green's function remains a significant challenge for classical computers and was attempted only on very small systems. Here, we present a hybrid quantum-classical algorithm to calculate the imaginary-time one-particle Green's function. The proposed algorithm combines variational quantum eigensolver and quantum subspace expansion to calculate Green's function in Lehmann's representation. We demonstrate the validity of this algorithm by simulating H$_2$ and H$_4$ on quantum simulators and on IBM's quantum devices.
Diksha Dhawan, Dominika Zgid, Mario Motta
2023-09-18T16:28:11Z
http://arxiv.org/abs/2309.09914v1
# Quantum algorithm for imaginary-time Green's functions ###### Abstract Green's function methods lead to ab initio, systematically improvable simulations of molecules and materials while providing access to multiple experimentally observable properties such as the density of states and the spectral function. The calculation of the exact one-particle Green's function remains a significant challenge for classical computers and was attempted only on very small systems. Here, we present a hybrid quantum-classical algorithm to calculate the imaginary-time one-particle Green's function. The proposed algorithm combines variational quantum eigensolver and quantum subspace expansion to calculate Green's function in Lehmann's representation. We demonstrate the validity of this algorithm by simulating H\({}_{2}\) and H\({}_{4}\) on quantum simulators and on IBM's quantum devices. ## I Introduction The solution of the time-independent Schrodinger equation is one of the central challenges of computational many-body quantum mechanics. For the system of interest, such a solution provides access to multiple properties, such as excitation energies, ionization potentials, electron affinities, multipole moments, and optimized geometries. While traditionally in quantum chemistry, the wavefunction formalism is employed to obtain solutions of the Schrodinger equation, the Green's function formalism provides an equally powerful theoretical and computational framework. In quantum mechanics, Green's functions are defined as correlation functions, from which most commonly one extracts information about the system, such as the density of states, quasiparticle properties, and response functions. Moreover, the Green's function formalism also provides direct access to thermodynamic quantities such as Gibbs energy, entropy, or heat capacity while explicitly including the temperature dependence. Consequently, due to its direct access to spectra and thermodynamic quantities, the Green's function formalism is frequently employed [1; 2; 3] in the study of solids. Over the years, many approximate methods to compute Green's functions have been proposed, such as GW [4; 5; 6; 7; 8; 9], the second-order Green's function method (GF2) [10; 11; 12; 13; 14], and Green's function coupled cluster (GFCC) [15; 16; 17; 18; 19; 20; 21], and they have been applied to numerous molecular and condensed-matter problems. GF2, GW, and other methods based on diagrammatic expansions provide accurate results in the weakly and moderately correlated regimes. However, many strongly correlated systems fall outside the regime of validity of these approximations. Embedding Green's function methods such as dynamical mean-field theory (DMFT) [22; 23; 24] and self-energy embedding theory(SEET) [25; 26; 27; 28; 29; 30], have been proposed to overcome this challenge. In these methods, a subset of strongly correlated orbitals is treated with a highly accurate method (referred to as a solver). Such a solver is required to describe electronic correlation more accurately than other computationally less expensive, approximate methods such as GW or GF2. The most common solvers are based on the full configuration interaction (FCI) [31; 32] and its truncations [33; 34]. The exponential scaling of FCI severely limits the number of impurity orbitals that can be treated by them. The use of truncated methods, on the other hand, compromises with the accuracy of these solvers. Recent developments in quantum computing have shown promise in overcoming these limitations. 
Several algorithms have been proposed for obtaining the Green's function using quantum machines. The majority of these algorithms focus on calculating the real-time Green's function. Real-time Green's functions provide us access to various experimental properties, including spectra, but they require the time evolution of a state. Early works in this field include algorithms based on phase estimation [35; 36; 37] and quantum Lanczos recursion [38]. Despite their accuracy and scalability, these fault-tolerant algorithms require longer coherence times, often exceeding the capabilities of contemporary quantum devices. In contrast, algorithms based on variational ansatz-based simulation of time evolution are noise-resilient [39; 40; 41], and require fewer qubits and shallower circuits. While promising, variational approaches face challenges, especially in the choice of an ansatz, which can compromise the accuracy of the results [42]. Additionally, we should consider the scalability of nonlinear parameter optimizations, which can possibly suffer from barren plateaus [43; 44]. In Ref [41; 45], the authors proposed the use of variational quantum simulation (VQS) to time-evolve the system. In an alternative approach, presented in Ref [45], a Lehmann's representation [46] of Green's function is used, and the excited states are obtained through the subspace-search variational quantum eigensolver (SSVQE). In other works, SSVQE is replaced by the quantum-equation of motion (q-EOM) method [47; 48]. Jamet et al. [49] calculated the continued-fraction representation of the Green's function in the Krylov
2301.00684
Astrophysical black holes embedded in organized magnetic fields: Case of a nonvanishing electric charge
Large scale magnetic fields pervade the cosmic environment where the astrophysical black holes are often embedded and influenced by the mutual interaction. In this contribution we outline the appropriate mathematical framework to describe magnetized black holes within General Relativity and we show several examples how these can be employed in the astrophysical context. In particular, we examine the magnetized black hole metric in terms of an exact solution of electro-vacuum Einstein-Maxwell equations under the influence of a non-vanishing electric charge. New effects emerge: the expulsion of the magnetic flux out of the black-hole horizon depends on the intensity of the imposed magnetic field.
Vladimir Karas
2022-12-27T13:55:21Z
http://arxiv.org/abs/2301.00684v2
# Astrophysical black holes embedded in organized magnetic fields ###### Abstract Large scale magnetic fields pervade the cosmic environment where the astrophysical black holes are often embedded and influenced by the mutual interaction. In this lecture, we outline the appropriate mathematical framework to describe magnetized black holes within General Relativity and we show several examples how these can be employed in the astrophysical context. In particular, we examine the magnetized black hole metric in terms of an exact solution of electro-vacuum Einstein-Maxwell equations under the influence of a non-vanishing electric charge. New effects emerge: the expulsion of the magnetic flux out of the black-hole horizon depends on the intensity of the imposed magnetic field. Black holes - Electromagnetic fields - General relativity 1 ## 1 Introduction Astrophysical black holes are cosmic objects that can be mathematically described by a set of Einstein-Maxwell equations (e.g. Romero & Vila 2014 [10]). Various formulations of the Uniqueness Theorems express in a rigorous way the conditions under which the black hole solutions exist and they constrain the parameter space that is necessary to specify different cases (Wald 1984 [13]). It turns out that classical black holes are described by a small number of such parameters, in particular, the mass, electric (or magnetic) charge, and angular momentum (spin). Black holes do not support their own magnetic field except the gravito-magnetically induced components in the rotating, charged Kerr-Newman metric. However, astrophysical black holes are embedded in a magnetic field of external origin, which then interacts with the internal properties of the black hole (Ruffini & Wilson [11]). In the case of very strong magnetic intensity, the magnetic field even contributes to the spacetime metric. In the present contribution we examine interesting properties of such an electrically charged, magnetized, rotating black hole. To this end we employ the solution originally derived in 1970s by means of Ernst magnetization techniques (Ernst & Wild 1976 [3]) and demonstrate its interesting features in terms of magnetic flux threading different regions of the black hole horizon or an entire hemisphere (see Bicak & Hejda 2015 [1], and further references cited therein). We limit our discussion to axially symmetric and stationary solutions. These are vacuum, asymptotically non-flat solutions, where the influence of plasma is ignored but the effects of strong gravity are taken into account. ## 2 Magnetized black holes with spin and charge We can write the system of mutually coupled, Einstein-Maxwell partial differential equations (Chandrasekhar 1983 [2]), \[R_{\mu\nu}-\tfrac{1}{2}Rg_{\mu\nu}=8\pi T_{\mu\nu}, \tag{1}\] where the source term \(T_{\mu\nu}\) is of purely electromagnetic origin, \[T^{\alpha\beta}\equiv T^{\alpha\beta}_{\rm EMG}=\frac{1}{4\pi}\left(F^{\alpha \mu}F^{\beta}_{\mu}-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}g^{\alpha\beta}\right), \tag{2}\] \[T^{\mu\nu}{}_{;\nu}=-F^{\mu\alpha}j_{\alpha},\qquad F^{\mu\nu}{}_{;\nu}=4\pi j ^{\mu},\qquad{}^{\star}F^{\mu\nu}{}_{;\nu}=4\pi{\cal M}^{\mu}, \tag{3}\] and \({}^{\star}F_{\mu\nu}\equiv\frac{1}{2}\varepsilon_{\mu\nu}{}^{\rho\sigma}F_{ \rho\sigma}\). 
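A useful consistency property of the source term (2) is that it is trace-free, so that taking the trace of Eq. (1) yields a vanishing Ricci scalar, \(R=0\), for electro-vacuum spacetimes. The short numerical check below is an illustrative sketch added here (not part of the original text); it builds \(T^{\alpha\beta}\) from Eq. (2) for a random antisymmetric \(F_{\mu\nu}\) and verifies its symmetry and tracelessness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Any nondegenerate metric works for this algebraic check; we use Minkowski, signature (-,+,+,+).
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)

# A random antisymmetric field tensor F_{mu nu} (covariant components).
A = rng.normal(size=(4, 4))
F_dn = A - A.T

F_up = g_inv @ F_dn @ g_inv               # F^{mu nu}
F_mix = F_up @ g                          # F^{alpha}_{ mu}
F2 = np.einsum('ab,ab->', F_up, F_dn)     # the invariant F^{mu nu} F_{mu nu}

# Eq. (2): T^{ab} = (1/4pi) ( F^{a mu} F^{b}_{ mu} - (1/4) F^2 g^{ab} )
T_up = (np.einsum('am,bm->ab', F_up, F_mix) - 0.25 * F2 * g_inv) / (4.0 * np.pi)

print("T symmetric      :", np.allclose(T_up, T_up.T))
print("trace g_ab T^ab  :", np.einsum('ab,ab->', g, T_up))   # ~ 0 (machine precision)
```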
We will consider the spacetime solutions for the metric that satisfies electro-vacuum case with a regular event horizon under the constraints of axial symmetry and stationarity, \[{\rm d}s^{2}=f^{-1}\left[e^{2\gamma}\left(\,{\rm d}z^{2}+\,{\rm d}\rho^{2} \right)+\rho^{2}\,{\rm d}\phi^{2}\right]-f\left(\,{\rm d}t-\omega\,{\rm d} \phi\right)^{2}, \tag{4}\] with \(f\), \(\omega\), and \(\gamma\) being functions of \(z\) and \(\rho\) only. In the weak electromagnetic field approximation, the electromagnetic (test) field is supposed to reside in the background of a rotating black hole, e.g., Kerr metric or a weakly charged Kerr metric (e.g. Wald 1984 [13], Gal'tsov 1986 [4]). As an example, in an asymptotically flat spacetime, the axial Killing vector \(\partial_{\phi}\) generates a uniform magnetic field, whereas the field vanishes asymptotically for the time-like Killing vector \(\partial_{t}\). These two solutions are known as the Wald's field (Wald 1974 [12]): \[F=\tfrac{1}{2}B_{0}\left(\,{\rm d}\tilde{\xi}+\tfrac{2J}{M}\,{\rm d}\xi\right). \tag{5}\] Magnetic flux surfaces are defined, \[4\pi\Phi_{\cal M}=\int_{\cal S}\mathbf{F}\;=\;{\rm const}, \tag{6}\] Magnetic and electric (Lorentz) forces are then given by \[m\mathbf{\dot{u}}=q_{\rm m}{}^{\star}\mathbf{F}.\mathbf{u},\qquad m\mathbf{\dot{u}}=q_{\rm e}\mathbf{F}. \mathbf{u}, \tag{7}\] and the magnetic field lines (in the axisymmetric case) are determined by \[\frac{{\rm d}r}{{\rm d}\theta}=\frac{B_{r}}{B_{\theta}}, \tag{8}\] in a perfect analogy with classical electromagnetism. We will employ the above-given quantities in our discussion further below. Magnetic (electric) lines of force are defined by the direction of Lorentz force that acts on electric (magnetic) charges, \[\frac{{\rm d}u^{\mu}}{{\rm d}\tau}\propto\,^{\star}F_{\nu}^{\mu}\,u^{\nu}, \qquad\frac{{\rm d}u^{\mu}}{{\rm d}\tau}\propto F_{\nu}^{\mu}\,u^{\nu}. \tag{9}\] In an axially symmetric system, the equation for magnetic lines takes a lucid form, \[\frac{{\rm d}r}{{\rm d}\theta}=-\frac{F_{\theta\phi}}{F_{r\phi}},\qquad\frac{{ \rm d}r}{{\rm d}\phi}=\frac{F_{\theta\phi}}{F_{r\theta}}, \tag{10}\] that is again in correspondence with eq. (8). Let us now turn our attention to the case of strong magnetic field, where we cannot ignore its influence on the spacetime metric. The latter is not necessarilly flat in the asymptotical spatial region far from the black hole (Ernst & Wild 1976 [3]; Karas & Vokrouhlicky 1990 [9]). Magnetized Kerr-Newman black hole metric can be expressed in the form (Garcia Diaz 1985 [5]) \[ds^{2} = |\Lambda|^{2}\Sigma\left(\Delta^{-1}\,{\rm d}r^{2}+\,{\rm d} \theta^{2}-\Delta A^{-1}\,{\rm d}t^{2}\right) \tag{11}\] \[+|\Lambda|^{-2}\Sigma^{-1}A\sin^{2}\theta\,(\,{\rm d}\phi-\omega \,{\rm d}t)^{2}\,,\] \(\Sigma=r^{2}+a^{2}\cos^{2}\theta\), \(\Delta=r^{2}-2Mr+a^{2}+e^{2}\), \(A=(r^{2}+a^{2})^{2}-\Delta a^{2}\sin^{2}\theta\) are functions from the Kerr-Newman metric. The outer horizon is located at radius \(r{\equiv}r_{+}=1+(1-a^{2}-e^{2})^{1/2}\), like in an unmagnetized case, and the horizon existence is restricted to the range of parameters \(a^{2}+e^{2}\leq 1\). Let us emphasise that, in the magnetized case, the traditional Kerr-Newman parameters \(a\) and \(e\) are _not identical_ with the black hole total spin and electric charge, as we will see further below. 
Moreover, because of asymptotically non-flat nature of the spacetime, the Komar-type angular momentum and electric charge (as well as the black hole mass) have to be defined by integration over the horizon sphere rather than at radial infinity. The magnetization function \(\Lambda=1+\beta\Phi-\frac{1}{4}\beta^{2}{\cal E}\) reads, in terms of the Ernst potentials \(\Phi(r,\theta)\) and \({\cal E}(r,\theta)\), \[\Sigma\Phi = ear\sin^{2}\theta-\Im e\left(r^{2}+a^{2}\right)\cos\theta, \tag{12}\] \[\Sigma{\cal E} = -A\sin^{2}\theta-e^{2}\left(a^{2}+r^{2}\cos^{2}\theta\right)\] (13) \[+2\Im a\left[\Sigma\left(3-\cos^{2}\theta\right)+a^{2}\sin^{4} \theta-re^{2}\sin^{2}\theta\right]\cos\theta.\] The corresponding components of the electromagnetic field can be written conveniently with respect to orthonormal LNRF components, \[H_{(r)}+{\rm i}E_{(r)} = A^{-1/2}\sin^{-1}\!\theta\,\Phi^{\prime}_{,\theta}, \tag{14}\] \[H_{(\theta)}+{\rm i}E_{(\theta)} = -\left(\Delta/A\right)^{1/2}\sin^{-1}\!\theta\,\Phi^{\prime}_{,r}, \tag{15}\] where \(\Phi^{\prime}(r,\theta)=\Lambda^{-1}\left(\Phi-\frac{1}{2}\beta{\cal E}\right)\). The total electric charge \(Q_{\rm H}\) is \[Q_{\rm H}=-|\Lambda_{0}|^{2}\Im{\rm m}\,\Phi^{\prime}\left(r_{+},0\right), \tag{16}\] and the magnetic flux \(\Phi_{\rm m}(\theta)\) across a cap placed in an axisymmetric position on the horizon is \[\Phi_{\rm m}=2\pi|\Lambda_{0}|^{2}\,\Re{\rm e}\,\Phi^{\prime}\left(r_{+},\bar{ \theta}\right)\Bigl{|}_{\bar{\theta}=0}^{\theta}, \tag{17}\] where \(\Lambda_{0}=\Lambda(\theta=0)\). Let us note that the span of the azimuthal coordinate in the magnetized solution must be rescaled by the multiplication factor \(\Lambda_{0}\) in order to avoid a conical singularity on the symmetry axis (Hiscock 1981 [6]): \[\Lambda_{0}=\left[1+\frac{3}{2}\beta^{2}e^{2}+2\beta^{3}ae+\beta^{4}\left( \frac{1}{16}e^{4}+a^{2}\right)\right]^{1/2}. \tag{18}\] This rescaling procedure effectively leads to the increase of the horizon surface area, and thereby also magnetic flux across the horizon (Karas 1988 [7]). Figure 1: The “butterfly diagram” shows the magnetic flux of magnetized Kerr-Newman black hole with \(a^{2}+e^{2}=1\) as a function of the total electric charge \(Q\). Solid curves correspond to a constant value of the dimensionless magnetization parameter \(\beta=BM\) (\(\beta=0\) is the case of an unmagnetized Kerr-Newman black hole). The area of the plot with ultra-strong magnetization is bounded by \(\beta=1\) (red curve) and emphasized by yellow colour in the plot. The lines of constant ratio of \(a/e\) and varying \(\beta\) are also plotted (dashed; the cases of \(a/e=\pm 0.85\) and \(0\) are shown); some distinctive combinations of the parameters \(a\), \(e\) are emphasized by colour points. In Figure 1, the magnetic flux across the entire black hole hemisphere in Kerr-Newman strongly magnetized black hole solution, \(F=\Phi_{\rm m}(\theta=\pi/2)\), is shown as a function of electric charge on the horizon, \(Q=Q_{\rm H}\) (additional details in Karas & Budinova 2000 [8]). Let us note that cases of intersection of the \(\beta={\rm const}\) curves with \(F=0\) and non-zero total charge, \(Q\neq 0\) correspond to the vanishing total angular momentum of the black hole, \(J=0\). This property is rather different from the behaviour of weakly magnetized black holes with only test magnetic field imposed on them. 
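The quantities shown in Fig. 1 follow directly from Eqns (12)-(18). As an illustration (a minimal Python sketch of ours, in geometrized units with \(M=1\); the function and variable names are not from the original text), the snippet below evaluates the Ernst potentials on the horizon and returns the horizon charge \(Q_{\rm H}\) of Eq. (16) and the hemisphere flux \(F=\Phi_{\rm m}(\pi/2)\) of Eq. (17) for a given \((a,e,\beta)\):

```python
import numpy as np

def horizon_quantities(a, e, beta):
    """Magnetized Kerr-Newman horizon quantities in geometrized units with M = 1 (our sketch)."""
    rp = 1.0 + np.sqrt(1.0 - a**2 - e**2)      # outer horizon radius (requires a^2 + e^2 <= 1)

    def phi_prime(theta):
        # Ernst potentials of Eqns (12)-(13) evaluated at r = r_+, where Delta = 0.
        Sigma = rp**2 + a**2 * np.cos(theta)**2
        A = (rp**2 + a**2)**2                  # A = (r^2+a^2)^2 - Delta a^2 sin^2(theta), Delta = 0
        Phi = (e*a*rp*np.sin(theta)**2 - 1j*e*(rp**2 + a**2)*np.cos(theta)) / Sigma
        E = (-A*np.sin(theta)**2 - e**2*(a**2 + rp**2*np.cos(theta)**2)
             + 2j*a*np.cos(theta)*(Sigma*(3.0 - np.cos(theta)**2)
                                   + a**2*np.sin(theta)**4
                                   - rp*e**2*np.sin(theta)**2)) / Sigma
        Lam = 1.0 + beta*Phi - 0.25*beta**2*E
        return (Phi - 0.5*beta*E) / Lam        # Phi'(r_+, theta)

    Lam0_sq = 1.0 + 1.5*beta**2*e**2 + 2.0*beta**3*a*e + beta**4*(e**4/16.0 + a**2)  # Eq. (18)
    Q_H = -Lam0_sq * phi_prime(0.0).imag                                             # Eq. (16)
    F_hem = 2.0*np.pi*Lam0_sq * (phi_prime(0.5*np.pi) - phi_prime(0.0)).real         # Eq. (17)
    return Q_H, F_hem

# Sample sub-extremal configuration; beta = B M is the dimensionless magnetization parameter.
for beta in (0.0, 0.5, 1.0):
    Q, F = horizon_quantities(a=0.6, e=0.6, beta=beta)
    print(f"beta = {beta:3.1f}:  Q_H = {Q:+.4f},  F = {F:+.4f}")
# For beta = 0 this reduces to the unmagnetized Kerr-Newman case, where Q_H = e.
```

Scanning the ratio \(a/e\) at fixed \(\beta\) in this way traces out curves of the kind collected in Fig. 1 (there for the extremal family \(a^{2}+e^{2}=1\)).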
On the other hand, this exact solution does not allow us to study the effects of mis-alignment of the magnetic field with respect to the rotation axis, which is so far possible only in the test-field approximation or by numerical techniques. ## Acknowledgements The author acknowledges continued support from the Czech Science Foundation EXPRO grant titled "Accreting black holes in the new era of X-ray polarimetry missions", No. 21-06825X.